The modern internet is a battlefield for our attention, and adverts have become the primary ammunition. This is felt particularly acutely on smartphones, where intrusive banners and pop-up windows can effectively discourage you from browsing content. However, there is an effective and comprehensive solution that allows you to create your own protective shield, not only on your home network but on any device, wherever you are.
The Problem: Digital Clutter and Loss of Privacy
Anyone who has tried to read an article on a smartphone is familiar with this scenario: the content is regularly interrupted by adverts that take up a significant portion of the screen, slow down the page’s loading time, and consume precious mobile data. While this problem is irritating on desktop computers, on smaller screens it becomes a serious barrier to accessing information.
Traditional browser plug-ins solve the problem only partially and on a single device. They don’t protect us in mobile apps, on Smart TVs, or on games consoles. What’s worse, ubiquitous tracking scripts collect data about our activity, creating detailed marketing profiles.
The Solution: Centralised Management with AdGuard Home
The answer is AdGuard Home—software that acts as a DNS server, filtering traffic at a network-wide level. By installing it on a home server, such as the popular TrueNAS, we gain a central point of control over all devices connected to our network.
Installation and configuration of AdGuard Home on TrueNAS are straightforward thanks to its Apps system. A key step during installation is to tick the “Host Network” option. This allows AdGuard Home to see the real IP addresses of the devices on your network, enabling precise monitoring and management of clients in the admin panel. Without this option, all queries would appear to originate from the server’s single IP address.
After installation, the crucial step is to direct DNS queries from all devices to the address of our AdGuard server. This can be achieved in several ways, but thanks to Tailscale, the process becomes incredibly simple.
Traditional Methods vs. The Tailscale Approach
In a conventional approach, to direct traffic to AdGuard Home, we would need to change the DNS addresses in our router’s settings. When this isn’t possible (which is often the case with equipment from an internet service provider), the alternative is to configure AdGuard Home as a DHCP server, which will automatically assign the correct DNS address to devices (this requires disabling the DHCP server on the router). The last resort is to change the DNS manually on every device in the house. It must be stressed, however, that all these methods work only within the local network and are completely ineffective for mobile devices using cellular data away from home.
However, if we plan to use Tailscale for protection outside the home, we can also use it to configure the local network. This is an incredibly elegant solution: if we install the Tailscale client on all our devices (computers, phones) and set our AdGuard server’s DNS address in its admin panel, enabling the “Override local DNS” option, we don’t need to make any changes to the router or manually on individual devices. Tailscale will automatically force every device in our virtual network to use AdGuard, regardless of which physical network it is connected to.
AdGuard Home Features: Much More Than Ad Blocking
Protection against Malware: Automatically blocks access to sites known for phishing, malware, and scams.
Parental Controls: Allows you to block sites with adult content, an invaluable feature in homes with children.
Filter Customisation: We can use ready-made, regularly updated filter lists or add our own rules.
Detailed Statistics: The panel shows which queries are being blocked, which devices are most active, and which domains are generating the most traffic.
For advanced users, the ability to manage clients is particularly useful. Each device on the network can be given a friendly name (e.g., “Anna-Laptop,” “Tom-Phone”) and assigned individual filtering rules. In my case, for VPS servers that do not require ad blocking, I have set default DNS servers (e.g., 1.1.1.1 and 8.8.8.8), so their traffic is ignored by the AdGuard filters.
The Challenge: Blocking Adverts Beyond the Home Network
While protection on the local network is already a powerful tool, true freedom from adverts comes when we can use it away from home. By default, when a smartphone connects to a mobile network, it loses contact with the home AdGuard server. Attempting to expose a DNS server to the public internet by forwarding ports on your router is not only dangerous but also ineffective. Most mobile operating systems, like Android and iOS, do not allow changing the DNS server for mobile connections, making such a solution impossible. This is where Tailscale comes to the rescue.
Tailscale: Your Private Network, Anywhere
Tailscale is a service based on the WireGuard protocol that creates a secure, virtual private network (a “Tailnet”) between your devices. Regardless of where they are, computers, servers, and phones can communicate with each other as if they were on the same local network.
Installing Tailscale on TrueNAS and on mobile devices is swift and straightforward. After logging in with the same account, all devices see each other in the Tailscale admin panel. To combine the power of both tools, you need to follow these key steps:
In the Tailscale admin panel, under the DNS tab, enable the Override local DNS option.
As the global DNS server, enter the IP address of our TrueNAS server within the Tailnet (e.g., 100.x.x.x).
With this configuration, all DNS traffic from our phone, even when it’s using a 5G network on the other side of the country, is sent through a secure tunnel to the Tailscale server on TrueNAS and then processed by AdGuard Home. The result? Adverts, trackers, and malicious sites are blocked on your phone, anytime and anywhere.
Advanced Tailscale Features: Subnet Routes and Exit Node
Tailscale offers two powerful features that further extend the capabilities of our network:
Subnet routes: This allows you to share your entire home LAN (e.g., 192.168.1.0/24) with devices on your Tailnet. After configuring your TrueNAS server as a “subnet router,” your phone, while away from home, can access not only the server itself but also your printer, IP camera, or other devices on the local network, just as if you were at home.
Exit node: This feature turns your home server into a fully-fledged VPN server. Once activated, all internet traffic from your Tailnet (not just DNS queries) is tunnelled through your home internet connection. This is the perfect solution when using untrusted public Wi-Fi networks (e.g., in a hotel or at an airport), as all your traffic is encrypted and protected. If your home server is in the UK, you also gain a UK IP address while abroad.
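On TrueNAS the Tailscale app exposes these options as form fields; on a plain Linux host the same result can be achieved with the Tailscale CLI. A minimal sketch is shown below (the subnet is the article's example, and <tailnet-name-of-your-server> is a placeholder; both advertised routes and exit nodes still need to be approved in the Tailscale admin panel):

```bash
# Advertise the home LAN to the Tailnet and offer this machine as an exit node
sudo tailscale up --advertise-routes=192.168.1.0/24 --advertise-exit-node

# On a client device, select that machine as the exit node when needed
tailscale set --exit-node=<tailnet-name-of-your-server>
```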
Checking the Effectiveness of Ad Blocking
To find out how effective your ad-blocking filters are, you can visit https://adblock.turtlecute.org/. There, you will see what types of adverts are being blocked and which are still being displayed. This will help you to fine-tune your filter lists in AdGuard Home.
Summary: Advantages and Disadvantages
Creating such a system is an investment of time, but the benefits are invaluable.
Advantages:
Complete and Unified Protection: Blocks adverts and threats on all devices, on any network, with minimal configuration.
Centralised Management: A single place to configure rules for the entire household.
Increased Privacy and Security: Reduces tracking and encrypts traffic on public networks.
Performance: Faster page loading and lower mobile data consumption.
Disadvantages:
Requires a Server: Needs a 24/7 device like a TrueNAS server to be running.
Dependency on Home Connection: The speed of DNS responses and bandwidth (in Exit Node mode) outside the home depends on your internet’s upload speed.
The combination of AdGuard Home and Tailscale is a powerful tool for anyone who values a clean, fast, and secure internet. It is a declaration of digital independence that places control back into the hands of the user, away from advertising corporations.
Section 1: Introduction: Simplifying Home Lab Access with Nginx Proxy Manager on TrueNAS Scale
Modern home labs have evolved from simple setups into complex ecosystems running dozens of services, from media servers like Plex or Jellyfin, to home automation systems such as Home Assistant, to personal clouds and password managers. Managing access to each of these services, each operating on a unique combination of an IP address and port number, quickly becomes impractical, inconvenient, and, most importantly, insecure. Exposing multiple ports to the outside world increases the attack surface and complicates maintaining a consistent security policy.
The solution to this problem, employed for years in corporate environments, is the implementation of a central gateway or a single point of entry for all incoming traffic. In networking terminology, this role is fulfilled by a reverse proxy. This is an intermediary server that receives all requests from clients and then, based on the domain name, directs them to the appropriate service running on the internal network. Such an architecture not only simplifies access, allowing the use of easy-to-remember addresses (e.g., jellyfin.mydomain.co.uk instead of 192.168.1.50:8096), but also forms a key component of a security strategy.
In this context, two technologies are gaining particular popularity among enthusiasts: TrueNAS Scale and Nginx Proxy Manager. TrueNAS Scale, based on the Debian Linux system, has transformed the traditional NAS (Network Attached Storage) device into a powerful, hyper-converged infrastructure (HCI) platform, capable of natively running containerised applications and virtual machines. In turn, Nginx Proxy Manager (NPM) is a tool that democratises reverse proxy technology. It provides a user-friendly, graphical interface for the powerful but complex-to-configure Nginx server, making advanced features, such as automatic SSL certificate management, accessible without needing to edit configuration files from the command line.
This article provides a comprehensive overview of the process of deploying Nginx Proxy Manager on the TrueNAS Scale platform. The aim is not only to present “how-to” instructions but, above all, to explain why each step is necessary. The analysis will begin with an in-depth discussion of both technologies and their interactions. Then, a detailed installation process will be carried out, considering platform-specific challenges and their solutions, including the well-known issue of the application getting stuck in the “Deploying” state. Subsequently, using the practical example of a Jellyfin media server, the configuration of a proxy host will be demonstrated, along with advanced security options. The report will conclude with a summary of the benefits and suggest further steps to fully leverage the potential of this powerful duo.
Section 2: Tool Analysis: Nginx Proxy Manager and the TrueNAS Scale Application Ecosystem
Understanding the fundamental principles of how Nginx Proxy Manager works and the architecture in which it is deployed—the TrueNAS Scale application system—is crucial for successful installation, effective configuration, and, most importantly, efficient troubleshooting. These two components, though designed to work together, each have their own unique characteristics, the ignorance of which is the most common cause of failure.
At the core of NPM’s functionality lies the concept of a reverse proxy, which is fundamental to modern network architecture. Understanding how it works allows one to appreciate the value that NPM brings.
Definition and Functions of a Reverse Proxy
A reverse proxy is a server that acts as an intermediary on the server side. Unlike a traditional (forward) proxy, which acts on behalf of the client, a reverse proxy acts on behalf of the server (or a group of servers). It receives requests from clients on the internet and forwards them to the appropriate servers on the local network that actually store the content. To an external client, the reverse proxy is the only visible point of contact; the internal network structure remains hidden.
The key benefits of this solution are:
Security: Hiding the internal network topology and the actual IP addresses of application servers significantly hinders direct attacks on these services.
Centralised SSL/TLS Management (SSL Termination): Instead of configuring SSL certificates on each of a dozen application servers, you can manage them in one place—on the reverse proxy. Traffic encryption and decryption (SSL Termination) occurs at the proxy server, which offloads the backend servers.
Load Balancing: In more advanced scenarios, a reverse proxy can distribute traffic among multiple identical application servers, ensuring high availability and service scalability.
Simplified Access: It allows access to multiple services through standard ports 80 (HTTP) and 443 (HTTPS) using different subdomains, eliminating the need to remember and open multiple ports.
NPM as a Management Layer
It should be emphasised that Nginx Proxy Manager is not a new web server competing with Nginx. It is a management application, built on the open-source Nginx, which serves as a graphical user interface (GUI) for its reverse proxy functions. Instead of manually editing complex Nginx configuration files, the user can perform the same operations with a few clicks in an intuitive web interface.
The main features that have contributed to NPM’s popularity are:
Graphical User Interface: Based on the Tabler framework, the interface is clear and easy to use, which drastically lowers the entry barrier for users who are not Nginx experts.
SSL Automation: Built-in integration with Let’s Encrypt allows for the automatic, free generation of SSL certificates and their periodic renewal. This is one of the most important and appreciated features.
Docker-based Deployment: NPM is distributed as a ready-to-use Docker image, which makes its installation on any platform that supports containers extremely simple.
Access Management: The tool offers features for creating Access Control Lists (ACLs) and managing users with different permission levels, allowing for granular control over access to individual services.
Comparison: NPM vs. Traditional Nginx
The choice between Nginx Proxy Manager and manual Nginx configuration is a classic trade-off between simplicity and flexibility. The table below outlines the key differences between these two approaches.
| Aspect | Nginx Proxy Manager | Traditional Nginx |
| --- | --- | --- |
| Management Interface | Graphical User Interface (GUI) simplifying configuration. | Command Line Interface (CLI) and editing text files; requires technical knowledge. |
| SSL Configuration | Fully automated generation and renewal of Let's Encrypt certificates. | Manual configuration using tools like Certbot; greater control. |
| Learning Curve | Low; ideal for beginners and hobbyists. | Steep; requires understanding of Nginx directives and web server architecture. |
| Flexibility | Limited to features available in the GUI; advanced rules can be difficult to implement. | Full flexibility and the ability to create highly customised, complex configurations. |
| Scalability / Target User | Ideal for home labs and small to medium deployments. Hobbyist, small business owner, home lab user. | A better choice for large-scale, high-load corporate environments. Systems administrator, DevOps engineer, developer. |
This table clearly shows that NPM is a tool strategically tailored to the needs of its target audience—home lab enthusiasts. These users consciously sacrifice some advanced flexibility for the significant benefits of ease of use and speed of deployment.
Subsection 2.2: Application Architecture in TrueNAS Scale
To understand why installing NPM on TrueNAS Scale can encounter specific problems, it is necessary to know how this platform manages applications. It is not a typical Docker environment.
Foundations: Linux and Hyper-convergence
A key architectural change in TrueNAS Scale compared to its predecessor, TrueNAS CORE, was the switch from the FreeBSD operating system to Debian, a Linux distribution. This decision opened the door to native support for technologies that have dominated the cloud and containerisation world, primarily Docker containers and KVM-based virtualisation. As a result, TrueNAS Scale became a hyper-converged platform, combining storage, computing, and virtualisation functions.
The Application System
Applications are distributed through Catalogs, which function as repositories. These catalogs are further divided into so-called “trains,” which define the stability and source of the applications:
stable: The default train for official, iXsystems-tested applications.
enterprise: Applications verified for business use.
community: Applications created and maintained by the community. This is where Nginx Proxy Manager is located by default.
test: Applications in the development phase.
NPM’s inclusion in the community catalog means that while it is easily accessible, its technical support relies on the community, not directly on the manufacturer of TrueNAS.
Storage Management for Applications
Before any application can be installed, TrueNAS Scale requires the user to specify a ZFS pool that will be dedicated to storing application data. When an application is installed, its data (configuration, databases, etc.) must be saved somewhere persistently. TrueNAS Scale offers several options here, but the default and recommended for simplicity is ixVolume.
ixVolume is a special type of volume that automatically creates a dedicated, system-managed ZFS dataset within the selected application pool. This dataset is isolated, and the system assigns it very specific permissions. By default, the owner of this dataset becomes the system user apps with a user ID (UID) of 568 and a group ID (GID) of 568. The running application container also operates with the permissions of this very user.
This is the crux of the problem. The standard Docker image for Nginx Proxy Manager contains startup scripts (e.g., those from Certbot, the certificate handling tool) that, on first run, attempt to change the owner (chown) of data directories, such as /data or /etc/letsencrypt, to ensure they have the correct permissions. When the NPM container starts within the sandboxed TrueNAS application environment, its startup script, running as the unprivileged apps user (UID 568), tries to execute the chown operation on the ixVolume. This operation fails because the apps user is not the owner of the parent directories and does not have permission to change the owner of files on a volume managed by K3s. This permission error causes the container’s startup script to halt, and the container itself never reaches the “running” state, which manifests in the TrueNAS Scale interface as an endless “Deploying” status.
Section 3: Installing and Configuring Nginx Proxy Manager on TrueNAS Scale
The process of installing Nginx Proxy Manager on TrueNAS Scale is straightforward, provided that attention is paid to a few key configuration parameters that are often a source of problems. The following step-by-step instructions will guide you through this process, highlighting the critical decisions that need to be made.
Step 1: Preparing TrueNAS Scale
Before proceeding with the installation of any application, you must ensure that the application service in TrueNAS Scale is configured correctly.
Log in to the TrueNAS Scale web interface.
Navigate to the Apps section.
If the service is not yet configured, the system will prompt you to select a ZFS pool to be used for storing all application data. Select the appropriate pool and save the settings. After a moment, the service status should change to “Running”.
Step 2: Finding the Application
Nginx Proxy Manager is available in the official community catalog.
In the Apps section, go to the Discover tab.
In the search box, type nginx-proxy-manager.
The application should appear in the results. Ensure it comes from the community catalog.
Click the Install button to proceed to the configuration screen.
Step 3: Key Configuration Parameters
The installation screen presents many options. Most of them can be left with their default values, but a few sections require special attention.
Application Name
In the Application Name field, enter a name for the installation, for example, nginx-proxy-manager. This name will be used to identify the application in the system.
Network Configuration
This is the most important and most problematic stage of the configuration. By default, the TrueNAS Scale management interface uses the standard web ports: 80 for HTTP and 443 for HTTPS. Since Nginx Proxy Manager, to act as a gateway for all web traffic, should also listen on these ports, a direct conflict arises. There are two main strategies to solve this problem, each with its own set of trade-offs.
Strategy A (Recommended): Change TrueNAS Scale Ports This method is considered the “cleanest” from NPM’s perspective because it allows it to operate as it was designed.
Cancel the NPM installation and go to System Settings -> General. In the GUI settings card, change the Web Interface HTTP Port to a custom one, e.g., 880, and the Web Interface HTTPS Port to, e.g., 8443.
Save the changes. From this point on, access to the TrueNAS Scale interface will be available at http://<truenas-ip-address>:880 or https://<truenas-ip-address>:8443.
Return to the NPM installation and in the Network Configuration section, assign the HTTP Port to 80 and the HTTPS Port to 443.
Advantages: NPM runs on standard ports, which simplifies configuration and eliminates the need for port translation on the router.
Disadvantages: It changes the fundamental way of accessing the NAS itself. In rare cases, as noted on forums, this can cause unforeseen side effects, such as problems with SSH connections between TrueNAS systems.
Strategy B (Alternative): Use High Ports for NPM This method is less invasive to the TrueNAS configuration itself but shifts the complexity to the router level.
In the NPM configuration, under the Network Configuration section, leave the TrueNAS ports unchanged and assign high, unused ports to NPM, e.g., 30080 for HTTP and 30443 for HTTPS. TrueNAS Scale reserves ports below 9000 for the system, so you should choose values above this threshold.
After installing NPM, configure port forwarding on your edge router so that incoming internet traffic on port 80 is directed to port 30080 of the TrueNAS IP address, and traffic from port 443 is directed to port 30443.
Advantages: The TrueNAS Scale configuration remains untouched.
Disadvantages: Requires additional configuration on the router. Each proxied service will require explicit forwarding, which can be confusing.
The ideal solution would be to assign a dedicated IP address on the local network to NPM (e.g., using macvlan technology), which would completely eliminate the port conflict. Unfortunately, the graphical interface of the application installer in TrueNAS Scale does not provide this option in a simple way.
Storage Configuration
To ensure that the NPM configuration, including created proxy hosts and SSL certificates, survives updates or application redeployments, you must configure persistent storage.
In the Storage Configuration section, configure two volumes.
For Nginx Proxy Manager Data Storage (path /data) and Nginx Proxy Manager Certs Storage (path /etc/letsencrypt), select the ixVolume type.
Leaving these settings will ensure that TrueNAS creates dedicated ZFS datasets for the configuration and certificates, which will be independent of the application container itself.
Step 4: First Run and Securing the Application
After configuring the above parameters (and possibly applying the fixes from Section 4), click Install. After a few moments, the application should transition to the “Running” state.
Access to the NPM interface is available at http://<truenas-ip-address>:PORT, where PORT is the WebUI port configured during installation (defaults to 81 inside the container but is mapped to a higher port, e.g., 30020, if the TrueNAS ports were not changed).
The default login credentials are:
Email: admin@example.com
Password: changeme
Upon first login, the system will immediately prompt you to change these details. This is an absolutely crucial security step and must be done immediately.
Section 4: Troubleshooting the “Deploying” Issue: Diagnosis and Repair of Installation Errors
One of the most frequently encountered and frustrating problems when deploying Nginx Proxy Manager on TrueNAS Scale is the situation where the application gets permanently stuck in the “Deploying” state after installation. The user waits, refreshes the page, but the status never changes to “Running”. Viewing the container logs often does not provide a clear answer. This problem is not a bug in NPM itself but, as diagnosed earlier, a symptom of a fundamental permission conflict between the generic container and the specific, secured environment in TrueNAS Scale.
Problem Description and Root Cause
After clicking the “Install” button in the application wizard, TrueNAS Scale begins the deployment process. In the background, the Docker image is downloaded, ixVolumes are created, and the container is started with the specified configuration. The startup script inside the NPM container attempts to perform maintenance operations, including changing the owner of key directories. Because the container is running as a user with limited permissions (apps, UID 568) on a file system it does not fully control, this operation fails. The script halts its execution, and the container never signals to the system that it is ready to work. Consequently, from the perspective of the TrueNAS interface, the application remains forever in the deployment phase.
Fortunately, thanks to the work of the community and developers, there are proven and effective solutions to this problem. Interestingly, the evolution of these solutions perfectly illustrates the dynamics of open-source software development.
Solution 1: Using an Environment Variable (Recommended Method)
This is the modern, precise, and most secure solution to the problem. It was introduced by the creators of the NPM container specifically in response to problems reported by users of platforms like TrueNAS Scale. Instead of escalating permissions, the container is instructed to skip the problematic step.
To implement this solution:
During the application installation (or while editing it if it has already been created and is stuck), navigate to the Application Configuration section.
Find the Nginx Proxy Manager Configuration subsection and click Add next to Additional Environment Variables.
Configure the new environment variable as follows:
Variable Name: SKIP_CERTBOT_OWNERSHIP
Variable Value: true
Save the configuration and install or update the application.
Adding this flag informs the Certbot startup script inside the container to skip the chown (change owner) step for its configuration files. The script proceeds, the container starts correctly and reports readiness, and the application transitions to the “Running” state. This is the recommended method for all newer versions of TrueNAS Scale (Electric Eel, Dragonfish, and later).
Solution 2: Changing the User to Root (Historical Method)
This solution was the first one discovered by the community. It is a more “brute force” method that solves the problem by granting the container full permissions. Although effective, it is considered less elegant and potentially less secure from the perspective of the principle of least privilege.
To implement this solution:
During the installation or editing of the application, navigate to the User and Group Configuration section.
Change the value in the User ID field from the default 568 to 0.
Leave the Group ID unchanged or also set it to 0.
Save the configuration and deploy the application.
Setting the User ID to 0 causes the process inside the container to run with root user permissions. The root user has unlimited permissions, so the problematic chown operation executes flawlessly, and the container starts correctly. This method was particularly necessary in older versions of TrueNAS Scale (e.g., Dragonfish) and is documented as a working workaround. Although it still works, the environment variable method is preferred as it does not require escalating permissions for the entire container.
Verification
Regardless of the chosen method, after saving the changes and redeploying the application, you should observe its status in the Apps -> Installed tab. After a short while, the status should change from “Deploying” to “Running”, which means the problem has been successfully resolved and Nginx Proxy Manager is ready for configuration.
Section 5: Practical Application: Securing a Jellyfin Media Server
Theory and correct installation are just the beginning. The true power of Nginx Proxy Manager is revealed in practice when we start using it to manage access to our services. Jellyfin, a popular, free media server, is an excellent example to demonstrate this process, as its full functionality depends on one, often overlooked, setting in the proxy configuration. The following guide assumes that Jellyfin is already installed and running on the local network, accessible at IP_ADDRESS:PORT (e.g., 192.168.1.10:8096).
Step 1: DNS Configuration
Before NPM can direct traffic, the outside world needs to know where to send it.
Log in to your domain’s management panel (e.g., at your domain registrar or DNS provider like Cloudflare).
Create a new A record.
In the Name (or Host) field, enter the subdomain that will be used to access Jellyfin (e.g., jellyfin).
In the Value (or Points to) field, enter the public IP address of your home network (your router).
Step 2: Obtaining an SSL Certificate in NPM
Securing the connection with HTTPS is crucial. NPM makes this process trivial, especially when using the DNS Challenge method, which is more secure as it does not require opening any ports on your router.
In the NPM interface, go to SSL Certificates and click Add SSL Certificate, then select Let’s Encrypt.
In the Domain Names field, enter your subdomain, e.g., jellyfin.yourdomain.com. You can also generate a wildcard certificate at this stage (e.g., *.yourdomain.com), which will match all subdomains.
Enable the Use a DNS Challenge option.
From the DNS Provider list, select your DNS provider (e.g., Cloudflare).
In the Credentials File Content field, paste the API token obtained from your DNS provider. For Cloudflare, you need to generate a token with permissions to edit the DNS zone (Zone: DNS: Edit).
Accept the Let’s Encrypt terms of service and save the form. After a moment, NPM will use the API to temporarily add a TXT record in your DNS, which proves to Let’s Encrypt that you own the domain. The certificate will be generated and saved.
Step 3: Creating a Proxy Host
This is the heart of the configuration, where we link the domain, the certificate, and the internal service.
In NPM, go to Hosts -> Proxy Hosts and click Add Proxy Host.
A form with several tabs will open.
“Details” Tab
Domain Names: Enter the full domain name that was configured in DNS, e.g., jellyfin.yourdomain.com.
Scheme: Select http, as the communication between NPM and Jellyfin on the local network is typically not encrypted.
Forward Hostname / IP: Enter the local IP address of the server where Jellyfin is running, e.g., 192.168.1.10.
Forward Port: Enter the port on which Jellyfin is listening, e.g., 8096.
Websocket Support: This is an absolutely critical setting. You must tick this option. Jellyfin makes extensive use of WebSocket technology for real-time communication, for example, to update playback status on the dashboard or for the Syncplay feature to work. Without WebSocket support enabled, the Jellyfin main page will load correctly, but many key features will not work, leading to difficult-to-diagnose problems.
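Under the hood, that checkbox adds standard WebSocket upgrade handling to the Nginx configuration that NPM generates. The snippet below is roughly what it amounts to and is shown only to illustrate why the toggle matters; NPM produces it for you, so there is nothing to paste manually:

```nginx
# Approximately what NPM's "Websocket Support" toggle adds to the proxy configuration
proxy_http_version 1.1;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection "upgrade";
```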
“SSL” Tab
SSL Certificate: From the drop-down list, select the certificate generated in the previous step for the Jellyfin domain.
Force SSL: Enable this option to automatically redirect all HTTP connections to secure HTTPS.
HTTP/2 Support: Enabling this option can improve page loading performance.
After configuring both tabs, save the proxy host.
Step 4: Testing
After saving the configuration, Nginx will reload its settings in the background. It should now be possible to open a browser and enter the address https://jellyfin.yourdomain.com. You should see the Jellyfin login page, and the connection should be secured with an SSL certificate (a padlock icon will be visible in the address bar).
The default configuration is fully functional, but to enhance security, you can add extra HTTP headers that instruct the browser on how to behave. To do this, edit the created proxy host and go to the Advanced tab. In the Custom Nginx Configuration field, you can paste additional directives.
It’s worth noting that NPM has a quirk: add_header directives added directly in this field may not be applied. A safer approach is to create a Custom Location for the path / and paste the headers in its configuration field.
The following table presents recommended security headers.
| Header | Purpose | Recommended Value | Notes |
| --- | --- | --- | --- |
| Strict-Transport-Security | Forces the browser to communicate exclusively over HTTPS for a specified period. | | |
| X-XSS-Protection | A historical header intended to protect against Cross-Site Scripting (XSS) attacks. | add_header X-XSS-Protection "0" always; | The header is obsolete and can create new attack vectors. Modern browsers have better, built-in mechanisms. It is recommended to explicitly disable it (0). |
Applying these headers provides an additional layer of defence and is considered good practice in securing web applications. However, it is critical to use up-to-date recommendations, as in the case of X-XSS-Protection, where blindly copying it from older guides could weaken security.
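As an illustration, a Custom Location for the path / could carry directives like the ones below. The HSTS max-age shown is a commonly used value rather than one prescribed in this article, so adjust it to your own policy before enabling it:

```nginx
# Hypothetical contents of a Custom Location ("/"); the max-age value is only an example
add_header Strict-Transport-Security "max-age=63072000; includeSubDomains" always;
add_header X-XSS-Protection "0" always;
```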
Section 6: Conclusions and Next Steps
Combining Nginx Proxy Manager with the TrueNAS Scale platform creates an incredibly powerful and flexible environment for managing a home lab. As demonstrated in this report, this synergy allows for centralised access management, a drastic simplification of the deployment and maintenance of SSL/TLS security, and a professionalisation of the way users interact with their self-hosted services. The key to success, however, is not just blindly following instructions, but above all, understanding the fundamental principles of how both technologies work. The awareness that applications in TrueNAS Scale operate within a restrictive ecosystem is essential for effectively diagnosing and resolving specific problems, such as the “Deploying” stall error.
Summary of Strategic Benefits
Deploying NPM on TrueNAS Scale brings tangible benefits:
Centralisation and Simplicity: All incoming requests are managed from a single, intuitive panel, eliminating the chaos of multiple IP addresses and ports.
Enhanced Security: Automation of SSL certificates, hiding the internal network topology, and the ability to implement advanced security headers create a solid first line of defence.
Professional Appearance and Convenience: Using easy-to-remember, personalised subdomains (e.g., media.mydomain.co.uk) instead of technical IP addresses significantly improves the user experience.
Recommendations and Next Steps
After successfully deploying Nginx Proxy Manager and securing your first application, it is worth exploring its further capabilities to fully utilise the tool’s potential.
Explore Access Lists: NPM allows for the creation of Access Control Lists (ACLs), which can restrict access to specific proxy hosts based on the source IP address. This is an extremely useful feature for securing administrative panels. For example, you can create a rule that allows access to the TrueNAS Scale interface or the NPM panel itself only from IP addresses on the local network, blocking any access attempts from the outside.
Backup Strategy: The Nginx Proxy Manager configuration, stored in the ixVolume, is a critical asset. Its loss would mean having to reconfigure all proxy hosts and certificates. TrueNAS Scale offers built-in tools for automating backups. You should configure a Periodic Snapshot Task for the dataset containing the NPM application data (ix-applications/releases/nginx-proxy-manager) to regularly create snapshots of its state.
Securing Other Applications: The knowledge gained during the Jellyfin configuration is universal. It can now be applied to secure virtually any other web service running in your home lab, such as Home Assistant, a file server, a personal password manager (e.g., Vaultwarden, which is a Bitwarden implementation), or the AdGuard Home ad-blocking system. Remember to enable the Websocket Support option for any application that requires real-time communication.
Monitoring and Diagnostics: The NPM interface provides access logs and error logs for each proxy host. Regularly reviewing these logs can help in diagnosing access problems, identifying unauthorised connection attempts, and optimising the configuration.
Mastering Nginx Proxy Manager on TrueNAS Scale is an investment that pays for itself many times over in the form of increased security, convenience, and control over your digital ecosystem. It is another step on the journey from a simple user to a conscious architect of your own home infrastructure.
A Permanent IP Blacklist with Fail2ban, UFW, and Ipset
Introduction: Beyond Temporary Protection
In the digital world, where server attacks are a daily occurrence, merely reacting is not enough. Although tools like Fail2ban provide a basic line of defence, their temporary blocks leave a loophole—persistent attackers can return and try again after the ban expires. This article provides a detailed guide to building a fully automated, two-layer system that turns ephemeral bans into permanent, global blocks. The combination of Fail2ban, UFW, and the powerful Ipset tool creates a mechanism that permanently protects your server from known repeat offenders.
Layer One: Reaction with Fail2ban
The first responder to every attack is Fail2ban. This daemon monitors log files (e.g., the SSH authentication log or web-server error logs) for patterns indicating break-in attempts, such as repeated failed logins. When it detects such activity, it immediately blocks the attacker’s IP address by adding it to the firewall rules for a defined period (anything from 10 minutes to 30 days). This is an effective but short-term response.
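For reference, the detection thresholds and ban duration are set per jail. A minimal jail.local sketch for SSH might look like the one below; the values are illustrative, not a recommendation from this article:

```ini
# /etc/fail2ban/jail.local (illustrative values)
[sshd]
enabled = true
# failed attempts before a ban
maxretry = 5
# window in which those attempts must occur
findtime = 10m
# how long the temporary ban lasts
bantime = 30d
```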
Layer Two: Persistence with UFW and Ipset
For a ban to become permanent, we need a more robust, centralised method of managing IP addresses. This is where UFW and Ipset come in.
What is Ipset?
Ipset is a Linux kernel extension that allows you to manage sets of IP addresses, networks, or ports. It is a much more efficient solution than adding thousands of individual rules to a firewall. Instead, the firewall can refer to an entire set with a single rule.
Ipset Installation and Configuration
The first step is to install Ipset on your system. We use standard package managers for this.
```bash
sudo apt update
sudo apt install ipset
```
Next, we create two sets: blacklist for IPv4 addresses and blacklist_v6 for IPv6.
The maxelem parameter caps how many entries a set can hold, while hashsize sets the initial size of the underlying hash table; both matter for performance with large lists.
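The creation commands themselves are not shown above. A sketch consistent with the set names and types used later in this article (blacklist as hash:ip, blacklist_v6 as hash:net with family inet6) could be:

```bash
# IPv4 blacklist: single addresses
sudo ipset create blacklist hash:ip hashsize 4096 maxelem 65536
# IPv6 blacklist: hash:net with family inet6, matching the systemd service example further down
sudo ipset create blacklist_v6 hash:net family inet6 hashsize 4096 maxelem 65536
```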
Integrating Ipset with the UFW Firewall
For UFW to start using our sets, we must add the appropriate commands to its rules. We edit the UFW configuration files, adding rules that block traffic originating from addresses contained in our Ipset sets. For IPv4, we edit /etc/ufw/before.rules:
sudo nano /etc/ufw/before.rules
Immediately after *filter and :ufw-before-input [0:0], add:
```
# Rules for the permanent blacklist (ipset)
# Block any incoming traffic from IP addresses in the 'blacklist' set (IPv4)
-A ufw-before-input -m set --match-set blacklist src -j DROP
```
For IPv6, we edit /etc/ufw/before6.rules:
sudo nano /etc/ufw/before6.rules
Immediately after *filter and :ufw6-before-input [0:0], add:
```
# Rules for the permanent blacklist (ipset) IPv6
# Block any incoming traffic from IP addresses in the 'blacklist_v6' set
-A ufw6-before-input -m set --match-set blacklist_v6 src -j DROP
```
After adding the rules, we reload UFW for them to take effect:
sudo ufw reload
Script for Automatic Blacklist Updates
The core of the system is a script that acts as a bridge between Fail2ban and Ipset. Its job is to collect banned addresses, ensure they are unique, and synchronise them with the Ipset sets.
Create the script file:
sudo nano /usr/local/bin/update-blacklist.sh
Below is the content of the script. It works in several steps:
Creates a temporary, unique list of IP addresses from Fail2ban logs and the existing blacklist.
Creates temporary Ipset sets.
Reads addresses from the unique list and adds them to the appropriate temporary sets (distinguishing between IPv4 and IPv6).
Atomically swaps the old Ipset sets with the new, temporary ones, minimising the risk of protection gaps.
Destroys the old, temporary sets.
Returns a summary of the number of blocked addresses.
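The fragments below reference variables and temporary sets that the full script defines at the top. A minimal sketch of that preamble, using names consistent with the fragments and with the blacklist file mentioned later (/etc/fail2ban/blacklist.local), might be:

```bash
#!/bin/bash
# Assumed names, matching the fragments below
BLACKLIST_FILE="/etc/fail2ban/blacklist.local"
IPSET_NAME_V4="blacklist"
IPSET_NAME_V6="blacklist_v6"

# Fresh temporary sets that will be populated and then swapped into place
sudo ipset create "${IPSET_NAME_V4}_tmp" hash:ip hashsize 4096 maxelem 65536 -exist
sudo ipset create "${IPSET_NAME_V6}_tmp" hash:net family inet6 hashsize 4096 maxelem 65536 -exist
```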
```bash
# Create a unique list of banned IPs from the log and the existing blacklist file
(grep 'Ban' /var/log/fail2ban.log | awk '{print $(NF)}' && cat "$BLACKLIST_FILE") | sort -u > "$BLACKLIST_FILE.tmp"
mv "$BLACKLIST_FILE.tmp" "$BLACKLIST_FILE"

# Add IPs to the temporary sets
while IFS= read -r ip; do
  if [[ "$ip" == *":"* ]]; then
    sudo ipset add "${IPSET_NAME_V6}_tmp" "$ip"
  else
    sudo ipset add "${IPSET_NAME_V4}_tmp" "$ip"
  fi
done < "$BLACKLIST_FILE"

# Atomically swap the temporary sets with the active ones
sudo ipset swap "${IPSET_NAME_V4}_tmp" "$IPSET_NAME_V4"
sudo ipset swap "${IPSET_NAME_V6}_tmp" "$IPSET_NAME_V6"
```
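After the swap, the temporary sets hold the previous contents and can be discarded, and the script prints the summary mentioned in the step list above. A hedged sketch of that ending:

```bash
# The _tmp sets now contain the old (pre-swap) data and are no longer needed
sudo ipset destroy "${IPSET_NAME_V4}_tmp"
sudo ipset destroy "${IPSET_NAME_V6}_tmp"

# Summary of how many unique addresses are on the permanent blacklist
echo "Blacklist updated: $(wc -l < "$BLACKLIST_FILE") unique addresses."
```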
After creating the script, give it execute permissions:
sudo chmod +x /usr/local/bin/update-blacklist.sh
Automation and Persistence After a Reboot
To run the script without intervention, we use a cron schedule. Open the crontab editor for the root user and add a rule to run the script every hour:
sudo crontab -e
Add this line:
0 * * * * /usr/local/bin/update-blacklist.sh
Or to run it once a day at 6 a.m.:
0 6 * * * /usr/local/bin/update-blacklist.sh
The final, crucial step is to ensure the Ipset sets survive a reboot, as they are stored in RAM by default. We create a systemd service that will save their state before the server shuts down and load it again on startup.
sudo nano /etc/systemd/system/ipset-persistent.service

```ini
[Unit]
Description=Saves and restores ipset sets on boot/shutdown
Before=network-pre.target
ConditionFileNotEmpty=/etc/ipset.rules
```
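The unit file above is cut off after the [Unit] section. Reconstructed from the behaviour described below (restore on start, save on stop), the remaining sections might look like this:

```ini
[Service]
Type=oneshot
RemainAfterExit=yes
# On boot: recreate the ipset sets from the saved dump
ExecStart=/sbin/ipset restore -exist -f /etc/ipset.rules
# On shutdown/reboot: dump the current sets to disk
ExecStop=/sbin/ipset save -f /etc/ipset.rules

[Install]
WantedBy=multi-user.target
```

Then seed the rules file once and enable the service:

```bash
sudo ipset save -f /etc/ipset.rules
sudo systemctl daemon-reload
sudo systemctl enable --now ipset-persistent.service
```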
The entire system is an automated chain of events that works in the background to protect your server from attacks. Here is the flow of information and actions:
Attack Response (Fail2ban):
Someone tries to break into the server (e.g., by repeatedly entering the wrong password via SSH).
Fail2ban, monitoring system logs (/var/log/fail2ban.log), detects this pattern.
It immediately adds the attacker’s IP address to a temporary firewall rule, blocking their access for a specified time.
Permanent Banning (Script and Cron):
Every hour (as set in cron), the system runs the update-blacklist.sh script.
The script reads the Fail2ban logs, finds all addresses that have been banned (lines containing “Ban”), and then compares them with the existing local blacklist (/etc/fail2ban/blacklist.local).
It creates a unique list of all banned addresses.
It then creates temporary ipset sets (blacklist_tmp and blacklist_v6_tmp) and adds all addresses from the unique list to them.
It performs an ipset swap operation, which atomically replaces the old, active sets with the new, updated ones.
UFW, thanks to the previously defined rules, immediately starts blocking the new addresses that have appeared in the updated ipset sets.
Persistence After Reboot (systemd Service):
Ipset’s operation is volatile—the sets only exist in memory. The ipset-persistent.service solves this problem.
Before shutdown/reboot: systemd runs the ExecStop=/sbin/ipset save -f /etc/ipset.rules command. This saves the current state of all ipset sets to a file on the disk.
After power-on/reboot: systemd runs the ExecStart command, which restores the sets. It reads all blocked addresses from the /etc/ipset.rules file and automatically recreates the ipset sets in memory.
Thanks to this, even if the server is rebooted, the IP blacklist remains intact, and protection is active from the first moments after the system starts.
Summary and Verification
The system you have built is a fully automated, multi-layered protection mechanism. Attackers are temporarily banned by Fail2ban, and their addresses are automatically added to a permanent blacklist, which is instantly blocked by UFW and Ipset. The systemd service ensures that the blacklist survives server reboots, protecting against repeat offenders permanently. To verify its operation, you can use the following commands:
```bash
sudo ufw status verbose
sudo ipset list blacklist
sudo ipset list blacklist_v6
sudo systemctl status ipset-persistent.service
```
How to Create a Reliable IP Whitelist in UFW and Ipset
Introduction: Why a Whitelist is Crucial
When configuring advanced firewall rules, especially those that automatically block IP addresses (like in systems with Fail2ban), there is a risk of accidentally blocking yourself or key services. A whitelist is a mechanism that acts like a VIP pass for your firewall—IP addresses on this list will always have access, regardless of other, more restrictive blocking rules.
This guide will show you, step-by-step, how to create a robust and persistent whitelist using UFW (Uncomplicated Firewall) and ipset. As an example, we will use the IP address 203.0.113.44 (from the reserved documentation range), which we want to add as trusted.
Step 1: Create a Dedicated Ipset Set for the Whitelist
The first step is to create a separate “container” for our trusted IP addresses. Using ipset is much more efficient than adding many individual rules to iptables.
Open a terminal and enter the following command:
sudo ipset create whitelist hash:ip
What did we do?
ipset create: The command to create a new set.
whitelist: The name of our set. It’s short and unambiguous.
hash:ip: The type of set. hash:ip is optimised for storing and very quickly looking up single IPv4 addresses.
Step 2: Add a Trusted IP Address
Now that we have the container ready, let’s add our example trusted IP address to it.
sudo ipset add whitelist 203.0.113.44
You can repeat this command for every address you want to add to the whitelist. To check the contents of the list, use the command:
sudo ipset list whitelist
Step 3: Modify the Firewall – Giving Priority to the Whitelist
This is the most important step. We need to modify the UFW rules so that connections from addresses on the whitelist are accepted immediately, before the firewall starts processing any blocking rules (including those from the ipset blacklist or Fail2ban).
Open the before.rules configuration file. This is the file where rules processed before the main UFW rules are located.
sudo nano /etc/ufw/before.rules
Go to the beginning of the file and find the *filter section. Just below the :ufw-before-input [0:0] line, add our new snippet. Placing it at the very top ensures it will be processed first.
```
*filter
:ufw-before-input [0:0]
# Rule for the whitelist (ipset) ALWAYS HAS PRIORITY
# Accept any traffic from IP addresses in the 'whitelist' set
-A ufw-before-input -m set --match-set whitelist src -j ACCEPT
```
-A ufw-before-input: We add the rule to the ufw-before-input chain.
-m set --match-set whitelist src: Condition: if the source (src) IP address matches the whitelist set…
-j ACCEPT: Action: “immediately accept (ACCEPT) the packet and stop processing further rules for this packet.”
Save the file and reload UFW:
sudo ufw reload
From this point on, any connection from the address 203.0.113.44 will be accepted immediately.
Step 4: Ensuring Whitelist Persistence
Ipset sets are stored in memory and disappear after a server reboot. To make our whitelist persistent, we need to ensure it is automatically loaded every time the system starts. We will use our previously created ipset-persistent.service for this.
Update the systemd service to “teach” it about the existence of the new whitelist set.
Find the ExecStart line and add the create command for whitelist. If you already have other sets, simply add whitelist to the line. An example of an updated line:
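The example line itself is missing here; one consistent with the full ExecStart shown later in the subnet-blocking article would be:

```ini
ExecStart=/bin/bash -c "/sbin/ipset create whitelist hash:ip --exist; /sbin/ipset create blacklist hash:ip --exist; /sbin/ipset create blacklist_v6 hash:net family inet6 --exist; /sbin/ipset restore -f /etc/ipset.rules"
```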
Save the current state of all sets to the file. This command will overwrite the old /etc/ipset.rules file with a new version that includes information about your whitelist.
sudo ipset save > /etc/ipset.rules
Restart the service to ensure it is running with the new configuration:
sudo systemctl restart ipset-persistent.service
Summary
Congratulations! You have created a solid and reliable whitelist mechanism. With it, you can securely manage your server, confident that trusted IP addresses like 203.0.113.44 will never be accidentally blocked. Remember to only add fully trusted addresses to this list, such as your home or office IP address.
How to Effectively Block IP Addresses and Subnets on a Linux Server
Blocking single IP addresses is easy, but what if attackers use multiple addresses from the same network? Manually banning each one is inefficient and time-consuming.
In this article, you will learn how to use ipset and iptables to effectively block entire subnets, automating the process and saving valuable time.
Why is Blocking Entire Subnets Better?
Many attacks, especially brute-force types, are carried out from multiple IP addresses belonging to the same operator or from the same pool of addresses (subnet). Blocking just one of them is like patching a small hole in a large dam—the rest of the traffic can still get through.
Instead, you can block an entire subnet, for example, 45.148.10.0/24. This notation means you are blocking 256 addresses at once, which is much more effective.
Script for Automatic Subnet Blocking
To automate the process, you can use the following bash script. This script is interactive—it asks you to provide the subnet to block, then adds it to an ipset list and saves it to a file, making the block persistent.
Let’s analyse the script step-by-step:
```bash
#!/bin/bash

# The name of the ipset list to which subnets will be added
BLACKLIST_NAME="blacklist_nets"
# The file where blocked subnets will be appended
BLACKLIST_FILE="/etc/fail2ban/blacklist_net.local"

# 1. Create the blacklist file if it doesn't exist
touch "$BLACKLIST_FILE"

# 2. Check if the ipset list already exists. If not, create it.
# Using "hash:net" allows for storing subnets, which is key.
if ! sudo ipset list $BLACKLIST_NAME >/dev/null 2>&1; then
  sudo ipset create $BLACKLIST_NAME hash:net maxelem 65536
fi

# 3. Loop to prompt the user for subnets to block.
# The loop ends when the user types "exit".
while true; do
  read -p "Enter the subnet address to block (e.g., 192.168.1.0/24) or type 'exit': " subnet
  if [ "$subnet" == "exit" ]; then
    break
  elif [[ "$subnet" =~ ^([0-9]{1,3}\.){3}[0-9]{1,3}\/[0-9]{1,2}$ ]]; then
    # Check if the subnet is not already in the file to avoid duplicates
    if ! grep -q "^$subnet$" "$BLACKLIST_FILE"; then
      echo "$subnet" | sudo tee -a "$BLACKLIST_FILE" > /dev/null
      # Add the subnet to the ipset list
      sudo ipset add $BLACKLIST_NAME $subnet
      echo "Subnet $subnet added."
    else
      echo "Subnet $subnet is already on the list."
    fi
  else
    # The entered format is not correct
    echo "Error: Invalid format. Please provide the address in 'X.X.X.X/Y' format."
  fi
done

# 4. Add a rule in iptables that blocks all traffic from addresses on the ipset list.
# This ensures the rule is added only once.
if ! sudo iptables -C INPUT -m set --match-set $BLACKLIST_NAME src -j DROP >/dev/null 2>&1; then
  sudo iptables -I INPUT -m set --match-set $BLACKLIST_NAME src -j DROP
fi

# 5. Save the iptables rules to survive a reboot.
# This part checks which tool the system uses.
if command -v netfilter-persistent &> /dev/null; then
  sudo netfilter-persistent save
elif command -v service &> /dev/null && service iptables status >/dev/null 2>&1; then
  sudo service iptables save
fi

echo "Script finished. The '$BLACKLIST_NAME' list has been updated, and the iptables rules are active."
```
How to Use the Script
Save the script: Save the code above into a file, e.g., block_nets.sh.
Give permissions: Make sure the file has execute permissions: chmod +x block_nets.sh.
Run the script: Execute the script with root privileges: sudo ./block_nets.sh.
Provide subnets: The script will prompt you to enter subnet addresses. Simply type them in the X.X.X.X/Y format and press Enter. When you are finished, type exit.
Ensuring Persistence After a Server Reboot
Ipset sets are stored in RAM by default and disappear after a server restart. For the blocked addresses to remain active, you must use a systemd service that will load them at system startup.
If you already have such a service (e.g., ipset-persistent.service), you must update it to include the new blacklist_nets list.
Edit the service file: Open your service’s configuration file. sudo nano /etc/systemd/system/ipset-persistent.service
Update the ExecStart line: Find the ExecStart line and add the create command for the blacklist_nets set. An example updated ExecStart line (including the previous sets) should look like this:

```ini
ExecStart=/bin/bash -c "/sbin/ipset create whitelist hash:ip --exist; /sbin/ipset create blacklist hash:ip --exist; /sbin/ipset create blacklist_v6 hash:net family inet6 --exist; /sbin/ipset create blacklist_nets hash:net --exist; /sbin/ipset restore -f /etc/ipset.rules"
```
Reload the systemd configuration: sudo systemctl daemon-reload
Save the current state of all sets to the file: This command will overwrite the old /etc/ipset.rules file with a new version that contains information about all your lists, including blacklist_nets. sudo ipset save > /etc/ipset.rules
Restart the service: sudo systemctl restart ipset-persistent.service
With this method, you can simply and efficiently manage your server’s security, effectively blocking entire subnets that show suspicious activity, and be sure that these rules will remain active after every reboot.
Canonical, the company behind the world’s most popular Linux distribution, offers an extended subscription called Ubuntu Pro. This service, available for free for individual users on up to five machines, elevates the standard Ubuntu experience to the level of corporate security, compliance, and extended technical support. What exactly does this offer include, and is it worth using?
Ubuntu Pro is the answer to the growing demands for cybersecurity and stability of operating systems, both in commercial and home environments. The subscription integrates a range of advanced services that were previously reserved mainly for large enterprises, making them available to a wide audience. A key benefit is the extension of the system’s life cycle (LTS) from 5 to 10 years, which provides critical security updates for thousands of software packages.
A Detailed Review of the Services Offered with Ubuntu Pro
To fully understand the value of the subscription, you should look at its individual components. After activating Pro, the user gains access to a services panel that can be freely enabled and disabled depending on their needs.
1. ESM-Infra & ESM-Apps: Ten Years of Peace of Mind
The core of the Pro offering is the Expanded Security Maintenance (ESM) service, divided into two pillars:
esm-infra (Infrastructure): Guarantees security patches for over 2,300 packages from the Ubuntu main repository for 10 years. This means the operating system and its key components are protected against newly discovered vulnerabilities (CVEs) for much longer than in the standard LTS version.
esm-apps (Applications): Extends protection to over 23,000 packages from the community-supported universe repository. This is a huge advantage, as many popular applications, programming libraries, and tools we install every day come from there. Thanks to esm-apps, they also receive critical security updates for a decade.
In practice, this means that a production server or workstation with an LTS version of the system can run safely and stably for 10 years without the need for a major system upgrade.
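On an attached machine these services are managed with the pro command-line client. A quick sketch, where <YOUR_TOKEN> is a placeholder for the personal token from ubuntu.com/pro:

```bash
# Attach the machine to your free personal subscription, then enable both ESM streams
sudo pro attach <YOUR_TOKEN>
sudo pro enable esm-infra
sudo pro enable esm-apps
sudo pro status    # shows which services are enabled
```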
2. Livepatch: Kernel Updates Without a Restart
The Canonical Livepatch service is one of the most appreciated tools in environments requiring maximum uptime. It allows the installation of critical and high-risk security patches for the Linux kernel while it is running, without the need to reboot the computer. For server administrators running key services, this is a game-changing feature – it eliminates downtime and allows for an immediate response to threats.
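Livepatch is enabled through the same client; a short sketch of switching it on and checking which patches are live on the running kernel:

```bash
sudo pro enable livepatch
# Show the livepatch client state and the patches applied to the running kernel
canonical-livepatch status --verbose
```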
End of server restarts. The Livepatch service revolutionises Linux updates
Updating the operating system’s kernel without having to reboot the machine is becoming the standard in environments requiring continuous availability. The Canonical Livepatch service allows critical security patches to be installed in real-time, eliminating downtime and revolutionising the work of system administrators.
In a digital world where every minute of service unavailability can generate enormous losses, planned downtime for system updates is becoming an ever greater challenge. The answer to this problem is the Livepatch technology, offered by Canonical, the creators of the popular Ubuntu distribution. It allows for the deployment of the most important Linux kernel security patches without the need to restart the server.
How does Livepatch work?
The service runs in the background, monitoring for available security updates marked as critical or high priority. When such a patch is released, Livepatch applies it directly to the running kernel. This process is invisible to users and applications, which can operate without any interruptions.
“For administrators managing a fleet of servers on which a company’s business depends, this is a game-changing feature,” a cybersecurity expert comments. “Instead of planning maintenance windows in the middle of the night and risking complications, we can respond instantly to newly discovered threats, maintaining one hundred percent business continuity.”
Who benefits most?
This solution is particularly valuable in sectors such as finance, e-commerce, telecommunications, and healthcare, where systems must operate 24/7. With Livepatch, companies can meet rigorous service level agreements (SLAs) while maintaining the highest standard of security.
Eliminating the need to restart not only saves time but also minimises the risk associated with restarting complex application environments.
Technology such as Canonical Livepatch sets a new direction in IT infrastructure management. It shifts the focus from reactive problem-solving to proactive, continuous system protection. In an age of growing cyber threats, the ability to instantly patch vulnerabilities, without affecting service availability, is no longer a convenience, but a necessity.
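On a machine already attached to Ubuntu Pro, enabling and verifying Livepatch comes down to two commands (a minimal example; the status output will differ from system to system):

sudo pro enable livepatch
canonical-livepatch status

The second command lists the patches that have been applied to the currently running kernel.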
3. Landscape: Central Management of a Fleet of Systems
Landscape is a powerful tool for managing and administering multiple Ubuntu systems from a single, central dashboard. It enables remote updates, machine status monitoring, user and permission management, and task automation. Although its functionality may be limited in the free plan, in commercial environments it can save administrators hundreds of hours of work.
Landscape: How to Master a Fleet of Ubuntu Systems from One Place?
In today’s IT environments, where the number of servers and workstations can reach hundreds or even thousands, manually managing each system separately is not only inefficient but virtually impossible. Canonical, the company behind the most popular Linux distribution – Ubuntu, provides a solution to this problem: Landscape. It’s a powerful tool that allows administrators to centrally manage an entire fleet of machines, saving time and minimising the risk of errors.
What is Landscape?
Landscape is a system management platform that acts as a central command centre for all Ubuntu machines in your organisation. Regardless of whether they are physical servers in a server room, virtual machines in the cloud, or employees’ desktop computers, Landscape enables remote monitoring, management, and automation of key administrative tasks from a single, clear panel in the web browser.
The main goal of the tool is to simplify and automate repetitive tasks that consume most of administrators’ time. Instead of logging into each server separately to perform updates, you can do so for an entire group of machines with a few clicks.
Key Features in Practice
The strength of Landscape lies in its versatility. The most important functions include:
Remote Updates and Package Management: Landscape allows for the mass deployment of security and software updates on all connected systems. An administrator can create update profiles for different groups of servers (e.g., production, test) and schedule their installation at a convenient time, minimising the risk of downtime.
Real-time Monitoring and Alerts: The platform continuously monitors key system parameters, such as processor load, RAM usage, disk space availability, and component temperature. If predefined thresholds are exceeded, the system automatically sends alerts, allowing for a quick response before a problem escalates into a serious failure.
User and Permission Management: Creating, modifying, and deleting user accounts on multiple machines simultaneously becomes trivially simple. Landscape enables central management of permissions, which significantly increases the level of security and facilitates audits.
Task Automation: One of the most powerful features is the ability to remotely run scripts on any number of machines. This allows you to automate almost any task – from routine backups and the installation of specific software to comprehensive configuration audits.
Free Plan vs. Commercial Environments
Canonical offers Landscape on a subscription basis, but also provides a free “Landscape On-Premises” plan that allows you to manage up to 10 machines at no cost. This is an excellent option for small businesses, enthusiasts, or for testing purposes. Although the functionality in this plan may be limited compared to the full commercial versions, it provides a solid insight into the platform’s capabilities.
However, it is in large commercial environments that Landscape shows its true power. For companies managing dozens or hundreds of servers, investing in a license quickly pays for itself. Reducing the time needed for routine tasks from days to minutes translates into real financial savings and allows administrators to focus on more strategic projects. Experts estimate that implementing central management can save hundreds of hours of work per year.
Landscape is an indispensable tool for any organisation that takes the management of its Ubuntu-based infrastructure seriously. Centralisation, automation, and proactive monitoring are key elements that not only increase efficiency and security but also allow for scaling operations without a proportional increase in costs and human resources. In an age of digital transformation, effective management of a fleet of systems is no longer a luxury, but a necessity.
4. Real-time Kernel: Real-time Precision
For specific applications, such as industrial automation, robotics, telecommunications, or stock trading systems, predictability and determinism are crucial. The Real-time Kernel is a special version of the Ubuntu kernel with integrated PREEMPT_RT patches, which minimises delays and guarantees that the highest priority tasks are executed within strictly defined time frames.
In a world where machine decisions must be made in fractions of a second, standard operating systems are often unable to meet strict timing requirements. The answer to these challenges is the real-time operating system kernel (RTOS). Ubuntu, one of the most popular Linux distributions, is entering this highly specialised market with a new product: the Real-time Kernel.
What is it and why is it important?
The Real-time Kernel is a special version of the Ubuntu kernel in which a set of patches called PREEMPT_RT have been implemented. Their main task is to modify how the kernel manages tasks, so that the highest priority processes can pre-empt (interrupt) lower-priority ones almost immediately. In practice, this eliminates unpredictable delays (so-called latency) and guarantees that critical operations will be executed within a strictly defined, repeatable time window.
“The Ubuntu real-time kernel provides industrial-grade performance and resilience for software-defined manufacturing, monitoring, and operational technologies,” said Mark Shuttleworth, CEO of Canonical.
For sectors such as industrial automation, this means that PLC controllers on the assembly line can process data with absolute precision, ensuring continuity and integrity of production. In robotics, from assembly arms to autonomous vehicles, timing determinism is crucial for safety and smooth movement. Similarly, in telecommunications, especially in the context of 5G networks, the infrastructure must handle huge amounts of data with ultra-low latency, which is a necessary condition for service reliability. Stock trading systems, where milliseconds decide on transactions worth millions, also belong to the group of beneficiaries of this technology.
How does it work? Technical context
The PREEMPT_RT patches, developed for years by the Linux community, transform a standard kernel into a fully pre-emptible one. Mechanisms such as spinlocks (locks that protect against simultaneous access to data), which in a traditional kernel cannot be interrupted, become pre-emptible in the RT version. In addition, hardware interrupt handlers are transformed into threads with a specific priority, which allows for more precise management of processor time.
Thanks to these changes, the system is able to guarantee that a high-priority task will gain access to resources in a predictable, short time, regardless of the system’s load by other, less important processes.
The integration of PREEMPT_RT with the official Ubuntu kernel (available as part of the Ubuntu Pro subscription) is a significant step towards the democratisation of real-time systems. This simplifies the deployment of advanced solutions in industry, lowering the entry barrier for companies that until now had to rely on niche, often closed and expensive RTOS systems. The availability of a stable and supported real-time kernel in a popular operating system can accelerate innovation in the fields of the Internet of Things (IoT), autonomous vehicles, and smart factories, where precision and reliability are not an option but a necessity.
5. USG (Ubuntu Security Guide): Auditing and Security Hardening
USG is a tool for automating the processes of system hardening and auditing for compliance with rigorous security standards, such as CIS Benchmarks or DISA-STIG. Instead of manually configuring hundreds of system settings, an administrator can use USG to automatically apply recommended policies and generate a compliance report.
In an age of growing cyber threats and increasingly stringent compliance requirements, system administrators face the challenge of manually configuring hundreds of settings to secure IT infrastructure. Canonical, the company behind the popular Linux distribution, offers the Ubuntu Security Guide (USG) tool, which automates the processes of system hardening and auditing, ensuring compliance with key security standards, such as CIS Benchmarks and DISA-STIG.
What is the Ubuntu Security Guide and how does it work?
The Ubuntu Security Guide is an advanced command-line tool, available as part of the Ubuntu Pro subscription. Its main goal is to simplify and automate the tedious tasks associated with securing Ubuntu operating systems. Instead of manually editing configuration files, changing permissions, and verifying policies, administrators can use ready-made security profiles.
USG uses the industry-recognised OpenSCAP tool (an open implementation of the Security Content Automation Protocol, SCAP) as its backend, which ensures the consistency and reliability of the audits performed. The process is simple and is based on two key commands:
usg audit [profile] – Scans the system for compliance with the selected profile (e.g., cis_level1_server) and generates a detailed report in HTML format. This report indicates which security rules are met and which require intervention.
usg fix [profile] – Automatically applies configuration changes to adapt the system to the recommendations contained in the profile.
As Canonical emphasises in its official documentation, USG was designed to “simplify the DISA-STIG hardening process by leveraging automation.”
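A minimal end-to-end sketch of the workflow on a Pro-attached machine (profile name as used in the commands above; the tool itself still has to be installed before first use):

sudo pro enable usg
sudo apt install usg
sudo usg audit cis_level1_server
sudo usg fix cis_level1_server

The audit step produces the HTML compliance report, and the fix step applies the profile's recommendations.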
Compliance with CIS and DISA-STIG at Your Fingertips
For many organisations, especially in the public, financial, and defence sectors, compliance with international security standards is not just good practice but a legal and contractual obligation. CIS Benchmarks, developed by the Center for Internet Security, and DISA-STIG (Security Technical Implementation Guides), required by the US Department of Defence, are collections of hundreds of detailed configuration guidelines.
Manually implementing these standards is extremely time-consuming and prone to errors. USG addresses this problem by providing predefined profiles that map these complex requirements to specific, automated actions. Example configurations managed by USG include:
Password policies: Enforcing appropriate password length, complexity, and expiration period.
Firewall configuration: Blocking unused ports and restricting access to network services.
SSH security: Enforcing key-based authentication and disabling root account login.
File system: Setting restrictive mounting options, such as noexec and nosuid on critical partitions.
Deactivation of unnecessary services: Disabling unnecessary daemons and services to minimise the attack surface.
The ability to customise profiles using so-called “tailoring files” allows administrators to flexibly implement policies, taking into account the specific needs of their environment, without losing compliance with the general standard.
Consequences of Non-Compliance and the Role of Automation
Ignoring standards such as CIS or DISA-STIG carries serious consequences. Apart from the obvious increase in the risk of a successful cyberattack, organisations expose themselves to severe financial penalties, loss of certification, and serious reputational damage. Non-compliance can lead to the loss of key contracts, especially in the government sector.
Security experts agree that compliance automation tools are crucial in modern IT management. They allow not only for a one-time implementation of policies but also for continuous monitoring and maintenance of the desired security state in dynamically changing environments.
The Ubuntu Security Guide is a response to the growing complexity in the field of cybersecurity and regulations. By shifting the burden of manual configuration to an automated and repeatable process, USG allows administrators to save time, minimise the risk of human error, and provide measurable proof of compliance with global standards. In an era where security is the foundation of digital trust, tools like USG are becoming an indispensable part of the arsenal of every IT professional managing Ubuntu-based infrastructure.
6. Anbox Cloud: Android in the Cloud at Scale
Anbox Cloud is a platform that allows you to run the Android system in cloud containers. This is a solution aimed mainly at mobile application developers, companies in the gaming industry (cloud gaming), or automotive (infotainment systems). It enables mass application testing, process automation, and streaming of Android applications with ultra-low latency.
How to Install and Configure Ubuntu Pro? A Step-by-Step Guide
Activating Ubuntu Pro is simple and takes only a few minutes.
Requirements:
Ubuntu LTS version (e.g., 18.04, 20.04, 22.04, 24.04).
Access to an account with sudo privileges.
An Ubuntu One account (which can be created for free).
Step 1: Get your subscription token
Go to the ubuntu.com/pro website and log in to your Ubuntu One account.
You will be automatically redirected to your Ubuntu Pro dashboard.
In the dashboard, you will find a free personal token. Copy it.
Step 2: Connect your system to Ubuntu Pro
Open a terminal on your computer and execute the command below, pasting the copied string into the place of [YOUR_TOKEN]:
sudo pro attach [YOUR_TOKEN]
The system will connect to Canonical’s servers and automatically enable default services, such as esm-infra and livepatch.
Step 3: Manage services
You can check the status of your services at any time with the command:
pro status --all
You will see a list of all available services along with information on whether they are enabled or disabled.
To enable a specific service, use the enable command. For example, to activate esm-apps:
sudo pro enable esm-apps
Similarly, to disable a service, use the disable command:
sudo pro disable landscape
Alternative: Configuration via a graphical interface
On Ubuntu Desktop systems, you can also manage your subscription through a graphical interface. Open the “Software & Updates” application, go to the “Ubuntu Pro” tab, and follow the instructions to activate the subscription using your token.
Summary
Ubuntu Pro is a powerful set of tools that significantly increases the level of security, stability, and management capabilities of the Ubuntu system. Thanks to the generous free subscription offer for individual users, everyone can now take advantage of features that until recently were the domain of corporations. Whether you are a developer, a small server administrator, or simply a conscious user who cares about long-term support, activating Ubuntu Pro is a step that is definitely worth considering.
Every WordPress site administrator knows that feeling. You log into the dashboard, and there’s a message waiting for you: “A scheduled event has failed”. Your heart stops for a moment. Is the site down? Is it a serious crash?
Calm down! Before you start to panic, take a deep breath. This error, although it sounds serious, rarely means disaster. Most often, it’s simply a signal that the internal task scheduling mechanism in WordPress isn’t working optimally.
In this article, we’ll explain what this error is, why it appears, and how to fix it professionally in various server configurations.
What is WP-Cron?
WordPress needs to perform cyclical background tasks: publishing scheduled posts, creating backups, or scanning the site for viruses (as in the case of the wf_scan_monitor error from the Wordfence plugin). To handle these operations, it uses a built-in mechanism called WP-Cron.
The problem is that WP-Cron is not a real cron daemon known from Unix systems. It’s a “pseudo-cron” that has a fundamental flaw: it only runs when someone visits your website.
On sites with low traffic: If no one visits the site, tasks aren’t performed on time, which leads to errors.
On sites with high traffic: WP-Cron is called on every page load, which generates unnecessary server load.
In both cases, the solution is the same: disable the built-in WP-Cron and replace it with a stable, system-level cron job.
Scenario 1: A Single WordPress Site
This is the most basic and common configuration. The solution is simple and comes down to two steps.
Step 1: Disable the built-in WP-Cron mechanism
Edit the wp-config.php file in your site’s main directory and add the following line:
define('DISABLE_WP_CRON', true);
Step 2: Configure a system cron
Log into your server via SSH and type crontab -e to edit the list of system tasks. Then, add one of the following lines, which will properly call the WordPress cron mechanism every 5 minutes.
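Two typical variants, using either wget or curl, whichever is available on your server (these exact lines are an illustrative sketch, not taken from any particular setup):

*/5 * * * * wget -q -O - https://yourdomain.co.uk/wp-cron.php?doing_wp_cron >/dev/null 2>&1
*/5 * * * * curl -s https://yourdomain.co.uk/wp-cron.php?doing_wp_cron >/dev/null 2>&1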
Remember to replace yourdomain.co.uk with your actual address. From now on, tasks will be executed regularly, regardless of site traffic.
Scenario 2: Multiple Sites on a Standard Server
If you manage multiple sites, adding a separate line in crontab for each one is impractical and difficult to maintain. A better solution is to create a single script that will automatically find all WordPress installations and run their tasks.
Step 1: Create the script file
Create a file, e.g., /usr/local/bin/run_all_wp_crons.sh, and paste the following content into it. The script searches the /var/www/ directory for wp-config.php files.
#!/bin/bash
#
# Script to run cron jobs for all WordPress sites
#
# The main directory under which the site files are located
SITES_ROOT="/var/www/"

# Path to the PHP interpreter (may need to be adjusted)
PHP_EXECUTABLE="/usr/bin/php"

# Logging (optional, but useful for debugging)
LOG_FILE="/var/log/wp_cron_runner.log"

echo "Starting cron jobs: $(date)" >> $LOG_FILE

# Find all wp-config.php files and run wp-cron.php for them
find "$SITES_ROOT" -name "wp-config.php" -print0 | while IFS= read -r -d '' config_file; do
    # Extract the directory where WordPress is located
    WP_DIR=$(dirname "$config_file")

    # Determine the site owner from the ownership of the WordPress directory
    WP_USER=$(stat -c '%U' "$WP_DIR")

    if [ -z "$WP_USER" ]; then
        echo "-> WARNING: Failed to determine user for: $WP_DIR" >> $LOG_FILE
        continue
    fi

    # Check if the wp-cron.php file exists in this directory
    if [ -f "$WP_DIR/wp-cron.php" ]; then
        echo "-> Running cron for: $WP_DIR as user: $WP_USER" >> $LOG_FILE
        # Run wp-cron.php using PHP CLI, switching to the correct user
        su -s /bin/sh "$WP_USER" -c "(cd '$WP_DIR' && '$PHP_EXECUTABLE' wp-cron.php)"
    else
        echo "-> WARNING: Found wp-config, but no wp-cron.php in: $WP_DIR" >> $LOG_FILE
    fi
done
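Steps 2 and 3, which the ISPConfig scenario below refers back to, are a short sketch along these lines (the 15-minute schedule is just an example):

sudo chmod +x /usr/local/bin/run_all_wp_crons.sh

Then add a single entry to crontab -e:

*/15 * * * * /usr/local/bin/run_all_wp_crons.sh >/dev/null 2>&1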
Scenario 3: Multiple Sites Managed by ISPConfig
The ISPConfig panel uses a specific directory structure with symlinks, e.g., /var/www/yourdomain.co.uk points to /var/www/clients/client1/web1/. Using the script above could cause tasks to be executed twice.
To avoid this, you need to modify the script to search only the target clients directory.
Step 1: Create a script optimised for ISPConfig
Create the file /usr/local/bin/run_ispconfig_crons.sh. Note the change in the SITES_ROOT variable.
#!/bin/bash
#
# Script to run cron jobs for all WordPress sites
# Optimised for the ISPConfig directory structure
#
# We only search the directory with the actual site files
SITES_ROOT="/var/www/clients/"

# Path to the PHP interpreter
PHP_EXECUTABLE="/usr/bin/php"

# Optional log file to track progress
LOG_FILE="/var/log/wp_cron_runner.log"

echo "Starting cron jobs (ISPConfig): $(date)" >> $LOG_FILE

find "$SITES_ROOT" -name "wp-config.php" -print0 | while IFS= read -r -d '' config_file; do
    WP_DIR=$(dirname "$config_file")
    if [ -f "$WP_DIR/wp-cron.php" ]; then
        echo "-> Running cron for: $WP_DIR" >> $LOG_FILE
        (cd "$WP_DIR" && "$PHP_EXECUTABLE" wp-cron.php)
    fi
done
Steps 2 and 3 are analogous to Scenario 2: give the script execution permissions (chmod +x) and add a single line to crontab -e, pointing to the new script file.
Summary
The “A scheduled event has failed” error is not a reason to panic, but rather an invitation to improve your infrastructure. It’s a chance to move from the unreliable, built-in WordPress mechanism to a solid, professional system solution that guarantees stability and performance.
Regardless of your server configuration, you now have the tools to sleep soundly, knowing that your scheduled tasks are running like clockwork.
Managing a web server requires an understanding of the components that make up its architecture. Each element plays a crucial role in delivering content to users quickly and reliably. This article provides an in-depth analysis of a modern server configuration based on OpenLiteSpeed (OLS), explaining its fundamental mechanisms, its collaboration with the Redis caching system, and its methods of communication with external applications.
OpenLiteSpeed (OLS) – The System’s Core
The foundation of every website is the web server—the software responsible for receiving HTTP requests from browsers and returning the appropriate resources, such as HTML files, CSS, JavaScript, or images.
What is OpenLiteSpeed?
OpenLiteSpeed (OLS) is a high-performance, lightweight, open-source web server developed by LiteSpeed Technologies. Its key advantage over traditional servers, such as Apache in its default configuration, is its event-driven architecture.
Process-based model (e.g., Apache prefork): A separate process or thread is created for each simultaneous connection. This model is simple, but with high traffic, it leads to significant consumption of RAM and CPU resources, as each process, even if inactive, reserves resources.
Event-driven model (OpenLiteSpeed, Nginx): A single server worker process can handle hundreds or thousands of connections simultaneously. It uses non-blocking I/O operations and an event loop to manage requests. When a process is waiting for an operation (e.g., reading from a disk), it doesn’t block but instead moves on to handle another connection. This architecture provides much better scalability and lower resource consumption.
Key Features of OpenLiteSpeed
OLS offers a set of features that make it a powerful and flexible tool:
Graphical Administrative Interface (WebAdmin GUI): OLS has a built-in, browser-accessible admin panel that allows you to configure all aspects of the server—from virtual hosts and PHP settings to security rules—without needing to directly edit configuration files.
Built-in Caching Module (LSCache): One of OLS’s most important features is LSCache, an advanced and highly configurable full-page cache mechanism. When combined with dedicated plugins for CMS systems (e.g., WordPress), LSCache stores fully rendered HTML pages in memory. When the next request for the same page arrives, the server delivers it directly from the cache, completely bypassing the execution of PHP code and database queries.
Support for Modern Protocols (HTTP/3): OLS natively supports the latest network protocols, including HTTP/3 (based on QUIC). This provides lower latency and better performance, especially on unstable mobile connections.
Compatibility with Apache Rules: OLS can interpret mod_rewrite directives from .htaccess files, which is a standard in the Apache ecosystem. This significantly simplifies the migration process for existing applications without the need to rewrite complex URL rewriting rules.
Redis – In-Memory Data Accelerator
Caching is a fundamental optimisation technique that involves storing the results of costly operations in a faster access medium. In the context of web applications, Redis is one of the most popular tools for this task.
What is Redis?
Redis (REmote Dictionary Server) is an in-memory data structure store, most often used as a key-value database, cache, or message broker. Its power comes from the fact that it stores all data in RAM, not on a hard drive. Accessing RAM is orders of magnitude faster than accessing SSDs or HDDs, as it’s a purely electronic operation that bypasses slower I/O interfaces.
In a typical web application, Redis acts as an object cache. It stores the results of database queries, fragments of rendered HTML code, or complex PHP objects that are expensive to regenerate.
How Do OpenLiteSpeed and Redis Collaborate?
The LSCache and Redis caching mechanisms don’t exclude each other; rather, they complement each other perfectly, creating a multi-layered optimisation strategy.
Request flow (simplified):
A user sends a request for a dynamic page (e.g., a blog post).
OpenLiteSpeed receives the request. The first step is to check the LSCache.
LSCache Hit: If an up-to-date, fully rendered version of the page is in the LSCache, OLS returns it immediately. The process ends here. This is the fastest possible scenario.
LSCache Miss: If the page is not in the cache, OLS forwards the request to the appropriate external application (e.g., a PHP interpreter) to generate it.
The PHP application begins building the page. To do this, it needs to fetch data from the database (e.g., MySQL).
Before PHP executes costly database queries, it first checks the Redis object cache.
Redis Hit: If the required data (e.g., SQL query results) are in Redis, they are returned instantly. PHP uses this data to build the page, bypassing communication with the database.
Redis Miss: If the data is not in the cache, PHP executes the database queries, fetches the results, and then saves them to Redis for future requests.
PHP finishes generating the HTML page and returns it to OpenLiteSpeed.
OLS sends the page to the user and, at the same time, saves it to the LSCache so that subsequent requests can be served much faster.
This two-tiered strategy ensures that both the first and subsequent visits to a page are maximally optimised. LSCache eliminates the need to run PHP, while Redis drastically speeds up the page generation process itself when necessary.
Delegating Tasks – External Applications in OLS
Modern web servers are optimised to handle network connections and deliver static files (images, CSS). The execution of application code (dynamic content) is delegated to specialised external programmes. This division of responsibilities increases stability and security.
OpenLiteSpeed manages these programmes through the External Applications system. The most important types are described below:
LSAPI Application (LiteSpeed SAPI App): The most efficient and recommended method of communication with PHP, Python, or Ruby applications. LSAPI is a proprietary, optimised protocol that minimises communication overhead between the server and the application interpreter.
FastCGI Application: A more universal, standard protocol for communicating with external application processes. This is a good solution for applications that don’t support LSAPI. It works on a similar principle to LSAPI (by maintaining permanent worker processes), but with slightly more protocol overhead.
Web Server (Proxy): This type configures OLS to act as a reverse proxy. OLS receives a request from the client and then forwards it in its entirety to another server running in the background (the “backend”), e.g., an application server written in Node.js, Java, or Go. This is crucial for building microservices-based architectures.
CGI Application: The historical and slowest method. A new application process is launched for each request and is closed after returning a response. Due to the huge performance overhead, it’s only used for older applications that don’t support newer protocols.
OLS routes traffic to the appropriate application using Script Handlers, which map file extensions (e.g., .php) to a specific application, or Contexts, which map URL paths (e.g., /api/) to a proxy type application.
Communication Language – A Comparison of SAPI Architectures
SAPI (Server Application Programming Interface) is an interface that defines how a web server communicates with an application interpreter (e.g., PHP). The choice of SAPI implementation has a fundamental impact on the performance and stability of the entire system.
The Evolution of SAPI
CGI (Common Gateway Interface): The first standard. Stable, but inefficient due to launching a new process for each request.
Embedded Module (e.g., mod_php in Apache): The PHP interpreter is loaded directly into the server process. This provides very fast communication, but at the cost of stability (a PHP crash causes the server to crash) and security.
FastCGI: A compromise between performance and stability. It maintains a pool of independent, long-running PHP processes, which eliminates the cost of constantly launching them. Communication takes place via a socket, which provides isolation from the web server.
LSAPI (LiteSpeed SAPI): An evolution of the FastCGI model. It uses the same architecture with separate processes, but the communication protocol itself was designed from scratch to minimise overhead, which translates to even higher performance than standard FastCGI.
SAPI Architecture Comparison Table
| Feature | CGI | Embedded Module (mod_php) | FastCGI | LiteSpeed SAPI (LSAPI) |
| --- | --- | --- | --- | --- |
| Process Model | New process per request | Shared process with server | Permanent external processes | Permanent external processes |
| Performance | Low | Very high | High | Highest |
| Stability / Isolation | Excellent | Low | High | High |
| Resource Consumption | Very high | Moderate | Low | Very low |
| Overhead | High (process launch) | Minimal (shared memory) | Moderate (protocol) | Low (optimised protocol) |
| Main Advantage | Full isolation | Communication speed | Balanced performance & stability | Optimised performance & stability |
| Main Disadvantage | Very low performance | Instability, security issues | More complex configuration | Technology specific to LiteSpeed |
Comparison of Communication Sockets
Communication between processes (e.g., OLS and Redis, or OLS and PHP processes) occurs via sockets. The choice of socket type affects performance.
| Feature | TCP/IP Socket (on localhost) | Unix Domain Socket (UDS) |
| --- | --- | --- |
| Addressing | IP Address + Port (e.g., 127.0.0.1:6379) | File path (e.g., /var/run/redis.sock) |
| Scope | Same machine or over a network | Same machine only (IPC) |
| Performance (locally) | Lower | Higher |
| Overhead | Higher (goes through the network stack) | Minimal (bypasses the network stack) |
| Security Model | Firewall rules | File system permissions |
For local communication, UDS is a more efficient solution because it bypasses the entire operating system network stack, which reduces latency and CPU overhead. This is why it’s preferred in optimised configurations for connections between OLS, Redis, and LSAPI processes.
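The difference is easy to observe with redis-cli, which can reach the same Redis instance either way (the socket path is the one from the table above and must match the unixsocket setting in redis.conf):

redis-cli -h 127.0.0.1 -p 6379 ping
redis-cli -s /var/run/redis.sock ping

Both should answer PONG; the second connection simply never touches the network stack.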
Practical Implementation and Management
To translate theory into practice, let’s analyse a real server configuration for the virtual host solutionsinc.co.uk.
5.1 Analysis of the solutionsinc.co.uk Configuration Example
External App Definition:
In the “External App” panel, a LiteSpeed SAPI App named solutionsinc.co.uk has been defined. This is the central configuration point for handling the dynamic content of the site.
Address: UDS://tmp/lshttpd/solutionsinc.co.uk.sock. This line is crucial. It informs OLS that a Unix Domain Socket (UDS) will be used to communicate with the PHP application, not a TCP/IP network socket. The .sock file at this path is the physical endpoint of this efficient communication channel.
Command: /usr/local/lsws/lsphp84/bin/lsphp. This is the direct path to the executable file of the LiteSpeed PHP interpreter, version 8.4. OLS knows it should run this specific programme to process scripts.
Other parameters, such as LSAPI_CHILDREN = 50 and memory limits, are used for precise resource and performance management of the PHP process pool.
Linking with PHP Files (Script Handler):
The application definition alone isn’t enough. In the “Script Handler” panel, we tell OLS when to use it.
For the .php suffix (extension), LiteSpeed SAPI is set as the handler.
[VHost Level]: solutionsinc.co.uk is chosen as the Handler Name, which directly points to the application defined in the previous step.
Conclusion: From now on, every request for a file with the .php extension on this site will be passed through the UDS socket to one of the lsphp84 processes.
This configuration is an excellent example of an optimised and secure environment: OLS handles the connections, while dedicated, isolated lsphp84 processes execute the application code, communicating through the fastest available channel—a Unix domain socket.
5.2 Managing Unix Domain Sockets (.sock) and Troubleshooting
The .sock file, as seen in the solutionsinc.co.uk.sock example, isn’t a regular file. It’s a special file in Unix systems that acts as an endpoint for inter-process communication (IPC). Instead of communicating through the network layer (even locally), processes can write to and read data directly from this file, which is much faster.
When OpenLiteSpeed launches an external LSAPI application, it creates such a socket file. The PHP processes listen on this socket for incoming requests from OLS.
Practical tip: A ‘Stubborn’ .sock file
Sometimes, after making changes to the PHP configuration (e.g., modifying the php.ini file or installing a new extension) and restarting the OpenLiteSpeed server (lsws), the changes may not be visible on the site. This happens because the lsphp processes may not have been correctly restarted with the server, and OLS is still communicating with the old processes through the existing, “old” .sock file.
In such a situation, when a standard restart doesn’t help, an effective solution is to:
Stop the OpenLiteSpeed server.
Manually delete the relevant .sock file, for example, using the terminal command: rm /tmp/lshttpd/solutionsinc.co.uk.sock
Restart the OpenLiteSpeed server.
After the restart, OLS will not find an existing socket file and will be forced to create a new one. More importantly, it will launch a new pool of lsphp processes that will load the fresh configuration from the php.ini file. Deleting the .sock file acts as a hard reset of the communication channel between the server and the PHP application, guaranteeing that all components are initialised from scratch with the current settings.
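Put together, the whole "hard reset" of the communication channel looks roughly like this (the lsws service name and the socket path come from the example above and may differ on your installation):

sudo systemctl stop lsws
sudo rm /tmp/lshttpd/solutionsinc.co.uk.sock
sudo systemctl start lsws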
Summary
The server configuration presented is a precisely designed system in which each element plays a vital role.
OpenLiteSpeed acts as an efficient, event-driven core, managing connections.
LSCache provides instant delivery of pages from the full-page cache.
Redis acts as an object cache, drastically accelerating the generation of dynamic content when needed.
LSAPI UDS creates optimised communication channels, minimising overhead and latency.
An understanding of these dependencies allows for informed server management and optimisation to achieve maximum performance and reliability.
Sometimes it happens that the disk space allocated to our websites on our server turns out to be too small. If our operating system is installed on LVM (Logical Volume Manager), we can relatively easily expand the disk size to what we need. In this article, I will show you how to do it.
Our working environment:
Virtual machine running on XCP-ng
Ubuntu 20.04 Server Edition operating system
OpenLiteSpeed web server
CyberPanel management panel
Disk space based on LVM (Logical Volume Manager)
The 64GB allocated for websites on our server some time ago proved to be too little after a while. After exceeding 80% disk usage, the server slowed down and opening pages became uncomfortable.
What is LVM (Logical Volume Manager)?
LVM is an extremely flexible tool that allows you to conveniently manage disk space on your servers. An LVM volume can span several hard disks and partitions of different capacities, and we can change the size of the disk space on the fly, without even having to restart the computer or virtual machine.
Checking hard disk usage
To check the usage of our hard disk, use the df command with the -h parameter, which shows the disk size in a human-friendly format.
In the case of our system, the LVM disk size (/dev/mapper/ubuntu--vg-ubuntu--lv) is 62GB, and 56GB is occupied, which gives 95% disk space usage. This is definitely too little free space for the server to work efficiently. It’s time to allocate more space to the server.
Increasing disk size in XCP-ng
For our virtual machine, on which the Ubuntu system with the web server is installed, we allocated only 64GB, so the first step will be to enlarge its virtual hard disk. To perform this operation, we will have to shut the virtual machine down for a while. To do this, we launch the XCP-ng Center application, select the virtual machine we are interested in, and shut it down. We go to the Storage tab, click on the storage we want to enlarge, and click Properties. We select Size and Location and increase the size of the virtual hard disk. Then we can start our virtual machine again.
Checking free space in Volume Group
To display information about our volume groups, type vgs.
Our volume group is ubuntu-vg with a new size of 126.50GB and it has 63.25GB of free space. To display more information about the volume group, use the vgdisplay command.
Here we can see that the space allocated to our volume group is 63.25GB and we have another 63.25GB available, which we can add to our volume group.
Displaying a list of logical volumes
To display our logical volumes, type lvs.
In our case, the logical volume ubuntu-lv belongs to the ubuntu-vg volume group.
NOTE: Remember to replace our volume names with your own when typing commands.
Increasing the size of our logical volume
To assign more disk space to our volume group, we will use the lvextend command. Remember to replace ubuntu-vg and ubuntu-lv with the volumes used in your system.
The -L parameter with a leading + sign specifies the amount by which we want to increase our logical volume; in our case we are increasing it by 63.25GB.
Note: Remember that we do not specify units TB, GB, MB etc., but P for Petabytes, T for Terabytes, G for Gigabytes, M for Megabytes etc.
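In our environment the full command would therefore look like this (volume names as listed by lvs; the leading + means "grow by this amount" rather than "set the size to"):

sudo lvextend -L +63.25G /dev/ubuntu-vg/ubuntu-lv

Alternatively, lvextend -l +100%FREE assigns all remaining free space in the volume group to the logical volume.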
After re-entering the vgdisplay command, we see in the Alloc PE/Size field that the additional disk space has been correctly allocated.
Enlarging the file system
Our new disk space is not yet visible to the system. We must first enlarge the file system with the resize2fs command.
By giving the resize2fs command, we must indicate where our logical volume is mounted, in our case it is /dev/mapper/ubuntu--vg-ubuntu--lv.
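For example:

sudo resize2fs /dev/mapper/ubuntu--vg-ubuntu--lv

The command works online for ext4, so the file system can stay mounted while it grows.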
After this operation, our new disk space is already visible to the operating system and CyberPanel. 47% disk usage is a decent result and should be enough for a while.
Summary
As you can see, LVM has many advantages. If your operating system were installed on ordinary disks and partitions, you probably would not be able to avoid formatting the disks and installing the system from scratch. LVM, on the other hand, allows you to add a new disk to your computer, create a partition on it, and add it to an existing volume group to extend your logical volumes. Easy, fast, and enjoyable.
If you have any questions about increasing the capacity of LVM volumes, do not hesitate to ask them in the comments.
In this article, you will learn how to remove unused JavaScript from your site, which will help to speed up its loading time. The issue is that a significant portion of JavaScript code is not utilised on the page and slows it down unnecessarily, as the more code on a page, the longer it takes to load. And nobody likes a slow-running website. Therefore, it is crucial to be able to locate and remove unnecessary JavaScript from your website.
What is JavaScript?
JavaScript is a programming language that allows, among other things, for the creation of dynamically updated content and the management of multimedia, and much, much more. It’s incredible how many effects can be achieved with just a few lines of JavaScript code.
Through the Document Object Model (DOM) API, JavaScript is also used to dynamically edit HTML and CSS code to refresh content on web pages. When creating a page, remember that the website’s code typically loads and executes in the order it appears on the page. So, if JavaScript that uses HTML code is loaded and run before the HTML it needs has loaded, errors may occur on the page.
What is Unused JavaScript?
Unused JavaScript refers to resources that are not used or required for rendering and displaying the site’s content.
Despite this code not being used, the user’s browser still has to load it into memory. There is no benefit from this; it only slows down the page loading and burdens the user’s computer memory.
Why is it Worth Removing Unused JavaScript?
Loading unnecessary JavaScript can have a significant negative impact on site performance and the overall user experience. The “First Contentful Paint” (FCP) parameter, one of the main indicators in Google PageSpeed Insights that assesses user experience, is heavily influenced by JavaScript code.
Types of Unnecessary JavaScript Code
Let’s divide unused JavaScript code into two categories:
Non-critical JavaScript: This is used elsewhere on the site but does not need to be loaded first and block the loading of other site elements. Such code blocking the site’s loading worsens our FCP parameter.
Dead JavaScript: This is not used at all on the page and can be safely removed.
Disadvantages of Loading Unused JavaScript Code
Unused JavaScript has a negative impact on website performance and delays page loading. Our site’s position in Google search results depends, among other things, on its speed. This is one of the key parameters. Furthermore, the likelihood that users will leave our website and visit another increases if our site runs slowly. This affects the bounce rate, which is extremely important for SEO and causes Google to lower our site’s search rankings.
It is important to distinguish between two different things: the actual loading time of the site is not the same as how users perceive its loading time. The most important thing for the user is to see the first elements at the top of the page as quickly as possible and for them to be responsive. The rest of the page elements below can load later; it is important that the user sees a blank page for the shortest possible time.
Advantages of Removing Unused JavaScript Code
It is obvious that the more JavaScript, HTML, or CSS the browser has to download and execute, the longer the page takes to load. It doesn’t matter whether the JavaScript code is needed or not; if it is on the page, the browser will always download and run it.
This is especially important for mobile users on smartphones. Not only does unnecessary code use up our data allowance, but mobile devices are also not as powerful as desktop computers. To speed up your site’s loading, you should be able to find and remove unused JavaScript, or at least make it load later so it doesn’t block the initial stages of page loading.
How to Reduce the Amount of Unused JavaScript?
First, we will try to find large JavaScript files using GTMetrix, and then I will show you ways to remove the unwanted code.
Use GTMetrix to Find Large JavaScript Files
Go to the GTMetrix website and enter your website address in the appropriate field.
Click on the Waterfall tab.
Below, click on the JS tab, and you will see the JavaScript files. Click on Size to sort them from largest to smallest.
JavaScript Minification
JavaScript minification involves removing unnecessary code from the script.
NOTE: Improper code minification can damage your site. Before using this guide, be sure to make a full backup of your entire site, including the database.
Removing Unnecessary JavaScript Code with the LiteSpeed Cache Plugin
If your site runs on a LiteSpeed or OpenLiteSpeed web server, you can use the LiteSpeed Cache plugin to minify JavaScript code. If you also use CyberPanel, this plugin is installed by default.
Go to your WordPress Dashboard and click on LiteSpeed Cache.
Click on Page Optimization.
Click on JS Settings at the top, and then enable JS Minify.
Click Save Changes.
Removing Unwanted JavaScript Code in Elementor
If you use the Elementor plugin, you can remove unwanted JavaScript code with it.
Go to the WordPress Dashboard, click on Elementor on the left, and then on Settings.
Click the Experiments tab at the top.
Scroll down to the Stable Features section and enable the Improved Asset Loading option.
Go to the very bottom of the page and click Save Changes.
Delaying the Loading of Necessary but Non-Critical JavaScript with Async JavaScript
Go to your WordPress Dashboard and install the Async JavaScript plugin if you don’t have it.
Click on Settings, then Async JavaScript.
Click on the Settings tab at the top, then click the Apply Defer button.
Go to the very bottom of the page and click Save Settings.
Removing Unused Code with the Asset CleanUp Plugin
When certain files or plugins do not need to be loaded on a specific subpage of our site, we can disable them using the Asset CleanUp: Page Speed Booster plugin. This plugin is a powerful tool, but in inexperienced hands, it can cause damage to our site. Remember to make a backup of your site before editing it.
Install Asset CleanUp if you do not already have this plugin.
Go to the WordPress Dashboard, then click on Asset CleanUp and Settings on the left.
Click the Test Mode tab on the left and enable the Test Mode function. This is a very useful feature. When enabled, only logged-in administrators will see the changes made. If it turns out during testing that our site is not working correctly, only administrators will see it, and the pages will still work correctly for normal users of our site. Only after making sure that everything is working correctly can we disable the Test Mode, and from then on, the site will load faster without the unnecessary JavaScript code.
After making changes, scroll down the page and click Update All Settings.
On the left, click the Optimize JavaScript tab.
Enable the option Combine loaded JS (JavaScript) into fewer files.
Scroll to the bottom of the page and click Update All Settings to save the settings.
Summary
Loading unnecessary JavaScript code makes a site load more slowly because the user’s browser has to download, parse, compile, and execute it needlessly. Unused code consumes mobile data allowance and slows down page rendering. All of this worsens the user experience and lowers our site’s ranking in Google search results.
By minifying JavaScript and removing unnecessary code, you will speed up your site’s loading time and improve its overall functionality.
TrueNAS Scale, with its powerful web interface, makes installing and managing applications simple and intuitive. However, any advanced user will sooner or later discover that the real power and flexibility lie in the command line. It’s worth noting that since version 24.10 (Electric Eel), TrueNAS Scale has undergone a significant transformation, moving away from the previous k3s system (a lightweight Kubernetes distribution) in favour of native container management using Docker. This change has significantly simplified the architecture and made working directly with containers more accessible.
True freedom comes from a direct SSH connection, which bypasses the limitations of the in-browser terminal. It allows you to transform from a regular user into a knowledgeable administrator who can look ‘under the bonnet’ of any application, diagnose problems in real-time, and manage the system with a precision unavailable from the graphical user interface. This article is a comprehensive guide to managing applications in TrueNAS Scale using the terminal, based on the native Docker commands that have become the new foundation of the application system.
Step 1: Identifying Running Applications
Before we can start managing applications, we need to know what is actually running on our system. The graphical interface shows us the application names, but the terminal gives us insight into the actual containers.
Listing Containers: docker ps
The basic command is docker ps. It displays a list of all currently running containers.
docker ps
The output of this command is a table with key information:
CONTAINER ID: A unique identifier.
IMAGE: The name of the image from which the container was created.
STATUS: Information on how long the container has been running.
PORTS: Port mappings.
NAMES: The most important piece of information for us – the user-friendly name of the container, which we will use in subsequent commands (e.g., ix-jellyfin-jellyfin-1).
If you also want to see stopped containers, add the -a flag: docker ps -a.
Monitoring Resources in Real-Time: docker stats
An even better way to get a quick overview is docker stats. This command displays a dynamic, live-updating table showing CPU, RAM, and network resource usage for each container. It’s the perfect tool to identify at a glance which application is putting a load on the system.
docker stats
Step 2: Getting Inside a Container – docker exec
Once you’ve identified a container, you can get inside it to browse files, edit configurations, or perform advanced diagnostics.
docker exec -it ix-jellyfin-jellyfin-1 /bin/bash
Let’s break down this command:
docker exec: Execute a command in a running container.
-it: Key flags that signify an interactive session (-i) with a pseudo-terminal allocated (-t).
ix-jellyfin-jellyfin-1: The name of our container.
/bin/bash: The command we want to run inside – in this case, the Bash shell.
After running the command, the terminal prompt will change, indicating that you are now “inside”. You can move freely around the container’s file system using commands like ls, cd, etc. To exit and return to TrueNAS, simply type exit or use the shortcut Ctrl + D.
Why are Tools like top, ps, or nano Missing?
While working inside a container, you might encounter command not found errors. This is intentional. Many modern Docker images (including the official Jellyfin one) are so-called minimalist or “distroless” images. They do not contain any additional tools, only the application itself and its libraries. This is a best practice that increases security and reduces the image size.
In such cases, you must rely on the external tools provided by Docker itself.
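In practice, that means leaning on Docker's own tooling from the host side, for example (container name as in the earlier examples):

docker top ix-jellyfin-jellyfin-1

This lists the processes running inside the container even when the image itself ships no ps or top; similarly, docker logs, docker cp, and docker inspect (described below) cover most diagnostic needs without any tools inside the image.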
Step 3: Diagnostics and Troubleshooting
When an application isn’t working correctly, the terminal is your best friend.
Viewing Logs: docker logs
This is the most important diagnostic command. It displays everything the application has written to its logs.
docker logs ix-nextcloud-nextcloud-1
If you want to follow the logs in real-time, add the -f (--follow) flag:
docker logs -f ix-nextcloud-nextcloud-1
Detailed Inspection: docker inspect
The docker inspect command returns a vast amount of detailed information about a container in JSON format – its IP address, attached volumes, environment variables, and much more.
docker inspect ix-tailscale-tailscale-1
Step 4: Managing Files and the Application Lifecycle
The terminal gives you full control over the files and the state of your applications.
Copying Files: docker cp
This is an extremely useful command for transferring files between the TrueNAS system and a container, without needing to go inside it.
Copying from a container to TrueNAS (e.g., for a configuration backup):
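A sketch of such a backup (the container name follows the naming pattern above; the path inside the container and the destination dataset are assumptions, so adjust them to your pool layout):

docker cp ix-jellyfin-jellyfin-1:/config/config.xml /mnt/tank/backups/jellyfin-config.xml

Copying in the opposite direction, from TrueNAS into the container, works the same way with the arguments swapped.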
Managing the Application Lifecycle: docker stop, start and restart
Instead of clicking in the graphical interface, you can quickly manage your applications:
To stop an application:
docker stop ix-qbittorrent-qbittorrent-1
To start a stopped application:
docker start ix-qbittorrent-qbittorrent-1
To restart an application (the most common operation):
docker restart ix-qbittorrent-qbittorrent-1
From User to Administrator
Mastering a few basic Docker commands in the SSH terminal opens up a whole new dimension of managing TrueNAS Scale. You are no longer dependent on the limitations of the graphical interface and gain the tools to understand how your applications really work.
The ability to quickly check logs, monitor resources in real-time, edit any configuration file, or make a swift backup – all this makes working with the system more efficient and troubleshooting faster. Connecting via SSH is not just a convenience; it’s a fundamental tool for any conscientious administrator who wants full control over their server.