Category: Nginx

  • Nginx Proxy Manager on TrueNAS Scale: Installation, Configuration, and Troubleshooting

    Nginx Proxy Manager on TrueNAS Scale: Installation, Configuration, and Troubleshooting

    Section 1: Introduction: Simplifying Home Lab Access with Nginx Proxy Manager on TrueNAS Scale

    Modern home labs have evolved from simple setups into complex ecosystems running dozens of services, from media servers like Plex or Jellyfin, to home automation systems such as Home Assistant, to personal clouds and password managers. Managing access to each of these services, each operating on a unique combination of an IP address and port number, quickly becomes impractical, inconvenient, and, most importantly, insecure. Exposing multiple ports to the outside world increases the attack surface and complicates maintaining a consistent security policy.

    The solution to this problem, employed for years in corporate environments, is the implementation of a central gateway or a single point of entry for all incoming traffic. In networking terminology, this role is fulfilled by a reverse proxy. This is an intermediary server that receives all requests from clients and then, based on the domain name, directs them to the appropriate service running on the internal network. Such an architecture not only simplifies access, allowing the use of easy-to-remember addresses (e.g., jellyfin.mydomain.co.uk instead of 192.168.1.50:8096), but also forms a key component of a security strategy.

    In this context, two technologies are gaining particular popularity among enthusiasts: TrueNAS Scale and Nginx Proxy Manager. TrueNAS Scale, based on the Debian Linux system, has transformed the traditional NAS (Network Attached Storage) device into a powerful, hyper-converged infrastructure (HCI) platform, capable of natively running containerised applications and virtual machines. In turn, Nginx Proxy Manager (NPM) is a tool that democratises reverse proxy technology. It provides a user-friendly, graphical interface for the powerful but complex-to-configure Nginx server, making advanced features, such as automatic SSL certificate management, accessible without needing to edit configuration files from the command line.

    This article provides a comprehensive overview of the process of deploying Nginx Proxy Manager on the TrueNAS Scale platform. The aim is not only to present “how-to” instructions but, above all, to explain why each step is necessary. The analysis will begin with an in-depth discussion of both technologies and their interactions. Then, a detailed installation process will be carried out, considering platform-specific challenges and their solutions, including the well-known issue of the application getting stuck in the “Deploying” state. Subsequently, using the practical example of a Jellyfin media server, the configuration of a proxy host will be demonstrated, along with advanced security options. The article will conclude with a summary of the benefits and suggest further steps to fully leverage the potential of this powerful duo.

    Nginx Proxy Manager Login Page

    Section 2: Tool Analysis: Nginx Proxy Manager and the TrueNAS Scale Application Ecosystem

    Understanding the fundamental principles of how Nginx Proxy Manager works and the architecture in which it is deployed—the TrueNAS Scale application system—is crucial for successful installation, effective configuration, and, most importantly, efficient troubleshooting. These two components, though designed to work together, each have their own unique characteristics, and overlooking those characteristics is the most common cause of failure.

    Subsection 2.1: Understanding Nginx Proxy Manager (NPM)

    At the core of NPM’s functionality lies the concept of a reverse proxy, which is fundamental to modern network architecture. Understanding how it works allows one to appreciate the value that NPM brings.

    Definition and Functions of a Reverse Proxy

    A reverse proxy is a server that acts as an intermediary on the server side. Unlike a traditional (forward) proxy, which acts on behalf of the client, a reverse proxy acts on behalf of the server (or a group of servers). It receives requests from clients on the internet and forwards them to the appropriate servers on the local network that actually store the content. To an external client, the reverse proxy is the only visible point of contact; the internal network structure remains hidden.

    The key benefits of this solution are:

    • Security: Hiding the internal network topology and the actual IP addresses of application servers significantly hinders direct attacks on these services.
    • Centralised SSL/TLS Management (SSL Termination): Instead of configuring SSL certificates on each of a dozen application servers, you can manage them in one place—on the reverse proxy. Traffic encryption and decryption (SSL Termination) occurs at the proxy server, which offloads the backend servers.
    • Load Balancing: In more advanced scenarios, a reverse proxy can distribute traffic among multiple identical application servers, ensuring high availability and service scalability.
    • Simplified Access: It allows access to multiple services through standard ports 80 (HTTP) and 443 (HTTPS) using different subdomains, eliminating the need to remember and open multiple ports.
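
    To make these benefits concrete, below is a minimal sketch of the kind of hand-written Nginx server block that performs SSL termination and forwarding; the domain, backend address, and certificate paths are illustrative examples. NPM generates equivalent configuration for you behind the scenes.

    server {
        listen 443 ssl;
        server_name jellyfin.mydomain.co.uk;

        # SSL terminates here; the backend receives plain HTTP
        ssl_certificate     /etc/letsencrypt/live/mydomain.co.uk/fullchain.pem;
        ssl_certificate_key /etc/letsencrypt/live/mydomain.co.uk/privkey.pem;

        location / {
            proxy_pass http://192.168.1.50:8096;
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header X-Forwarded-Proto $scheme;
        }
    }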

    NPM as a Management Layer

    It should be emphasised that Nginx Proxy Manager is not a new web server competing with Nginx. It is a management application, built on the open-source Nginx, which serves as a graphical user interface (GUI) for its reverse proxy functions. Instead of manually editing complex Nginx configuration files, the user can perform the same operations with a few clicks in an intuitive web interface.

    The main features that have contributed to NPM’s popularity are:

    • Graphical User Interface: Based on the Tabler framework, the interface is clear and easy to use, which drastically lowers the entry barrier for users who are not Nginx experts.
    • SSL Automation: Built-in integration with Let’s Encrypt allows for the automatic, free generation of SSL certificates and their periodic renewal. This is one of the most important and appreciated features.
    • Docker-based Deployment: NPM is distributed as a ready-to-use Docker image, which makes its installation on any platform that supports containers extremely simple (see the sketch after this list).
    • Access Management: The tool offers features for creating Access Control Lists (ACLs) and managing users with different permission levels, allowing for granular control over access to individual services.
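
    As a point of reference for the Docker-based deployment mentioned above, outside of TrueNAS the same image can be started with plain Docker. This is a minimal sketch: the host paths are examples, while ports 80/443 (proxied traffic) and 81 (admin UI) are the image’s documented defaults.

    docker run -d --name npm \
      -p 80:80 -p 443:443 -p 81:81 \
      -v "$(pwd)/data:/data" \
      -v "$(pwd)/letsencrypt:/etc/letsencrypt" \
      jc21/nginx-proxy-manager:latest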

    Comparison: NPM vs. Traditional Nginx

    The choice between Nginx Proxy Manager and manual Nginx configuration is a classic trade-off between simplicity and flexibility. The comparison below outlines the key differences between the two approaches.

    • Management Interface: NPM provides a graphical user interface (GUI) that simplifies configuration; traditional Nginx relies on a command-line interface (CLI) and editing text files, which requires technical knowledge.
    • SSL Configuration: NPM fully automates the generation and renewal of Let’s Encrypt certificates; traditional Nginx requires manual configuration with tools like Certbot, but offers greater control.
    • Learning Curve: NPM’s is low, ideal for beginners and hobbyists; traditional Nginx’s is steep, requiring an understanding of Nginx directives and web server architecture.
    • Flexibility: NPM is limited to the features available in the GUI, and advanced rules can be difficult to implement; traditional Nginx offers full flexibility and the ability to create highly customised, complex configurations.
    • Scalability / Target User: NPM is ideal for home labs and small to medium deployments (hobbyists, small business owners, home lab users); traditional Nginx is a better choice for large-scale, high-load corporate environments (systems administrators, DevOps engineers, developers).

    This comparison clearly shows that NPM is a tool strategically tailored to the needs of its target audience—home lab enthusiasts. These users consciously sacrifice some advanced flexibility for the significant benefits of ease of use and speed of deployment.

    Nginx Proxy Manager Dashboard

    Subsection 2.2: Application Architecture in TrueNAS Scale

    To understand why installing NPM on TrueNAS Scale can encounter specific problems, it is necessary to know how this platform manages applications. It is not a typical Docker environment.

    Foundations: Linux and Hyper-convergence

    A key architectural change in TrueNAS Scale compared to its predecessor, TrueNAS CORE, was the switch from the FreeBSD operating system to Debian, a Linux distribution. This decision opened the door to native support for technologies that have dominated the cloud and containerisation world, primarily Docker containers and KVM-based virtualisation. As a result, TrueNAS Scale became a hyper-converged platform, combining storage, computing, and virtualisation functions.

    The Application System

    Applications are distributed through Catalogs, which function as repositories. These catalogs are further divided into so-called “trains,” which define the stability and source of the applications:

    • stable: The default train for official, iXsystems-tested applications.
    • enterprise: Applications verified for business use.
    • community: Applications created and maintained by the community. This is where Nginx Proxy Manager is located by default.
    • test: Applications in the development phase.

    NPM’s inclusion in the community catalog means that while it is easily accessible, its technical support relies on the community, not directly on the manufacturer of TrueNAS.

    Storage Management for Applications

    Before any application can be installed, TrueNAS Scale requires the user to specify a ZFS pool that will be dedicated to storing application data. When an application is installed, its data (configuration, databases, etc.) must be saved somewhere persistently. TrueNAS Scale offers several options here, but the default and recommended for simplicity is ixVolume.

    ixVolume is a special type of volume that automatically creates a dedicated, system-managed ZFS dataset within the selected application pool. This dataset is isolated, and the system assigns it very specific permissions. By default, the owner of this dataset becomes the system user apps with a user ID (UID) of 568 and a group ID (GID) of 568. The running application container also operates with the permissions of this very user.

    This is the crux of the problem. The standard Docker image for Nginx Proxy Manager contains startup scripts (e.g., those from Certbot, the certificate handling tool) that, on first run, attempt to change the owner (chown) of data directories, such as /data or /etc/letsencrypt, to ensure they have the correct permissions. When the NPM container starts within the sandboxed TrueNAS application environment, its startup script, running as the unprivileged apps user (UID 568), tries to execute the chown operation on the ixVolume. This operation fails because the apps user is not the owner of the parent directories and does not have permission to change the owner of files on a volume managed by K3s. This permission error causes the container’s startup script to halt, and the container itself never reaches the “running” state, which manifests in the TrueNAS Scale interface as an endless “Deploying” status.
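
    A quick way to see this conflict for yourself, assuming shell access to TrueNAS; the pool name and dataset path below are examples:

    # ixVolume datasets are owned by the unprivileged apps user (UID/GID 568)
    ls -ln /mnt/tank/ix-applications/releases/nginx-proxy-manager/volumes
    # drwxr-xr-x 2 568 568 ...

    # What the container's startup script effectively attempts on first run,
    # and what fails for a non-root user that does not own the files:
    chown -R 0:0 /etc/letsencrypt
    # chown: changing ownership of '/etc/letsencrypt': Operation not permitted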

    Section 3: Installing and Configuring Nginx Proxy Manager on TrueNAS Scale

    The process of installing Nginx Proxy Manager on TrueNAS Scale is straightforward, provided that attention is paid to a few key configuration parameters that are often a source of problems. The following step-by-step instructions will guide you through this process, highlighting the critical decisions that need to be made.

    Step 1: Preparing TrueNAS Scale

    Before proceeding with the installation of any application, you must ensure that the application service in TrueNAS Scale is configured correctly.

    1. Log in to the TrueNAS Scale web interface.
    2. Navigate to the Apps section.
    3. If the service is not yet configured, the system will prompt you to select a ZFS pool to be used for storing all application data. Select the appropriate pool and save the settings. After a moment, the service status should change to “Running”.

    Step 2: Finding the Application

    Nginx Proxy Manager is available in the official community catalog.

    1. In the Apps section, go to the Discover tab.
    2. In the search box, type nginx-proxy-manager.
    3. The application should appear in the results. Ensure it comes from the community catalog.
    4. Click the Install button to proceed to the configuration screen.

    Step 3: Key Configuration Parameters

    The installation screen presents many options. Most of them can be left with their default values, but a few sections require special attention.

    Application Name

    In the Application Name field, enter a name for the installation, for example, nginx-proxy-manager. This name will be used to identify the application in the system.

    Network Configuration

    This is the most important and most problematic stage of the configuration. By default, the TrueNAS Scale management interface uses the standard web ports: 80 for HTTP and 443 for HTTPS. Since Nginx Proxy Manager, to act as a gateway for all web traffic, should also listen on these ports, a direct conflict arises. There are two main strategies to solve this problem, each with its own set of trade-offs.

    • Strategy A (Recommended): Change TrueNAS Scale Ports
      This method is considered the “cleanest” from NPM’s perspective because it allows it to operate as it was designed.
    1. Cancel the NPM installation and go to System Settings -> General. In the GUI settings section, change the Web Interface HTTP Port to a custom one, e.g., 880, and the Web Interface HTTPS Port to, e.g., 8443.
    2. Save the changes. From this point on, access to the TrueNAS Scale interface will be available at http://<truenas-ip-address>:880 or https://<truenas-ip-address>:8443.
    3. Return to the NPM installation and in the Network Configuration section, assign the HTTP Port to 80 and the HTTPS Port to 443.
    • Advantages: NPM runs on standard ports, which simplifies configuration and eliminates the need for port translation on the router.
    • Disadvantages: It changes the fundamental way of accessing the NAS itself. In rare cases, as noted on forums, this can cause unforeseen side effects, such as problems with SSH connections between TrueNAS systems.
    • Strategy B (Alternative): Use High Ports for NPM
      This method is less invasive to the TrueNAS configuration itself but shifts the complexity to the router level.
    1. In the NPM configuration, under the Network Configuration section, leave the TrueNAS ports unchanged and assign high, unused ports to NPM, e.g., 30080 for HTTP and 30443 for HTTPS. TrueNAS Scale reserves ports below 9000 for the system, so you should choose values above this threshold.
    2. After installing NPM, configure port forwarding on your edge router so that incoming internet traffic on port 80 is directed to port 30080 of the TrueNAS IP address, and traffic from port 443 is directed to port 30443.
    • Advantages: The TrueNAS Scale configuration remains untouched.
    • Disadvantages: Requires additional configuration on the router. Each proxied service will require explicit forwarding, which can be confusing.

    The ideal solution would be to assign a dedicated IP address on the local network to NPM (e.g., using macvlan technology), which would completely eliminate the port conflict. Unfortunately, the graphical interface of the application installer in TrueNAS Scale does not provide this option in a simple way.
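
    Whichever strategy you choose, it helps to first confirm which process is currently bound to ports 80 and 443 on the NAS. A quick check using the ss utility available on Debian-based TrueNAS Scale:

    sudo ss -tlnp | grep -E ':(80|443)\s'
    # Expect the TrueNAS web UI to be listed until its ports are moved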

    Storage Configuration

    To ensure that the NPM configuration, including created proxy hosts and SSL certificates, survives updates or application redeployments, you must configure persistent storage.

    1. In the Storage Configuration section, configure two volumes.
    2. For Nginx Proxy Manager Data Storage (path /data) and Nginx Proxy Manager Certs Storage (path /etc/letsencrypt), select the ixVolume type.
    3. Leaving these settings will ensure that TrueNAS creates dedicated ZFS datasets for the configuration and certificates, which will be independent of the application container itself.

    Step 4: First Run and Securing the Application

    After configuring the above parameters (and possibly applying the fixes from Section 4), click Install. After a few moments, the application should transition to the “Running” state.

    1. Access to the NPM interface is available at http://<truenas-ip-address>:PORT, where PORT is the WebUI port configured during installation (defaults to 81 inside the container but is mapped to a higher port, e.g., 30020, if the TrueNAS ports were not changed).
    2. The default login credentials are:
    • Email: admin@example.com
    • Password: changeme
    3. Upon first login, the system will immediately prompt you to change these details. This is an absolutely crucial security step and must be done immediately.
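
    A quick sanity check from another machine on the LAN confirms the interface is reachable; the IP address and port below are examples, so substitute the WebUI port from your installation:

    curl -I http://192.168.1.100:30020
    # An HTTP 200 response indicates the NPM login page is being served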

    Section 4: Troubleshooting the “Deploying” Issue: Diagnosis and Repair of Installation Errors

    One of the most frequently encountered and frustrating problems when deploying Nginx Proxy Manager on TrueNAS Scale is the situation where the application gets permanently stuck in the “Deploying” state after installation. The user waits, refreshes the page, but the status never changes to “Running”. Viewing the container logs often does not provide a clear answer. This problem is not a bug in NPM itself but, as diagnosed earlier, a symptom of a fundamental permission conflict between the generic container and the specific, secured environment in TrueNAS Scale.

    Nginx Proxy Manager Log

    Problem Description and Root Cause

    After clicking the “Install” button in the application wizard, TrueNAS Scale begins the deployment process. In the background, the Docker image is downloaded, ixVolumes are created, and the container is started with the specified configuration. The startup script inside the NPM container attempts to perform maintenance operations, including changing the owner of key directories. Because the container is running as a user with limited permissions (apps, UID 568) on a file system it does not fully control, this operation fails. The script halts its execution, and the container never signals to the system that it is ready to work. Consequently, from the perspective of the TrueNAS interface, the application remains forever in the deployment phase.

    Fortunately, thanks to the work of the community and developers, there are proven and effective solutions to this problem. Interestingly, the evolution of these solutions perfectly illustrates the dynamics of open-source software development.

    Solution 1: Using an Environment Variable (Recommended Method)

    This is the modern, precise, and most secure solution to the problem. It was introduced by the creators of the NPM container specifically in response to problems reported by users of platforms like TrueNAS Scale. Instead of escalating permissions, the container is instructed to skip the problematic step.

    To implement this solution:

    1. During the application installation (or while editing it if it has already been created and is stuck), navigate to the Application Configuration section.
    2. Find the Nginx Proxy Manager Configuration subsection and click Add next to Additional Environment Variables.
    3. Configure the new environment variable as follows:
    • Variable Name: SKIP_CERTBOT_OWNERSHIP
    • Variable Value: true
    4. Save the configuration and install or update the application.

    Adding this flag informs the Certbot startup script inside the container to skip the chown (change owner) step for its configuration files. The script proceeds, the container starts correctly and reports readiness, and the application transitions to the “Running” state. This is the recommended method for all newer versions of TrueNAS Scale (Dragonfish, Electric Eel, and later).

    Solution 2: Changing the User to Root (Historical Method)

    This solution was the first one discovered by the community. It is a more “brute force” method that solves the problem by granting the container full permissions. Although effective, it is considered less elegant and potentially less secure from the perspective of the principle of least privilege.

    To implement this solution:

    1. During the installation or editing of the application, navigate to the User and Group Configuration section.
    2. Change the value in the User ID field from the default 568 to 0.
    3. Leave the Group ID unchanged or also set it to 0.
    4. Save the configuration and deploy the application.

    Setting the User ID to 0 causes the process inside the container to run with root user permissions. The root user has unlimited permissions, so the problematic chown operation executes flawlessly, and the container starts correctly. This method was particularly necessary in older versions of TrueNAS Scale (e.g., Dragonfish) and is documented as a working workaround. Although it still works, the environment variable method is preferred as it does not require escalating permissions for the entire container.

    Verification

    Regardless of the chosen method, after saving the changes and redeploying the application, you should observe its status in the Apps -> Installed tab. After a short while, the status should change from “Deploying” to “Running”, which means the problem has been successfully resolved and Nginx Proxy Manager is ready for configuration.

    Section 5: Practical Application: Securing a Jellyfin Media Server

    Theory and correct installation are just the beginning. The true power of Nginx Proxy Manager is revealed in practice when we start using it to manage access to our services. Jellyfin, a popular, free media server, is an excellent example to demonstrate this process, as its full functionality depends on one, often overlooked, setting in the proxy configuration. The following guide assumes that Jellyfin is already installed and running on the local network, accessible at IP_ADDRESS:PORT (e.g., 192.168.1.10:8096).

    Step 1: DNS Configuration

    Before NPM can direct traffic, the outside world needs to know where to send it.

    1. Log in to your domain’s management panel (e.g., at your domain registrar or DNS provider like Cloudflare).
    2. Create a new A record.
    3. In the Name (or Host) field, enter the subdomain that will be used to access Jellyfin (e.g., jellyfin).
    4. In the Value (or Points to) field, enter the public IP address of your home network (your router).

    Step 2: Obtaining an SSL Certificate in NPM

    Securing the connection with HTTPS is crucial. NPM makes this process trivial, especially when using the DNS Challenge method, which is more secure as it does not require opening any ports on your router.

    1. In the NPM interface, go to SSL Certificates and click Add SSL Certificate, then select Let’s Encrypt.
    2. In the Domain Names field, enter your subdomain, e.g., jellyfin.yourdomain.com. You can also generate a wildcard certificate at this stage (e.g., *.yourdomain.com), which will match all subdomains.
    3. Enable the Use a DNS Challenge option.
    4. From the DNS Provider list, select your DNS provider (e.g., Cloudflare).
    5. In the Credentials File Content field, paste the API token obtained from your DNS provider. For Cloudflare, you need to generate a token with permissions to edit the DNS zone (Zone: DNS: Edit).
    6. Accept the Let’s Encrypt terms of service and save the form. After a moment, NPM will use the API to temporarily add a TXT record in your DNS, which proves to Let’s Encrypt that you own the domain. The certificate will be generated and saved.
    Nginx Proxy Manager SSL
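
    While the DNS challenge is in progress, you can watch the temporary validation record appear; _acme-challenge is the standard record name used by Let’s Encrypt, and the domain below is an example:

    dig TXT _acme-challenge.jellyfin.yourdomain.com +short
    # A short-lived token string means the challenge record has been published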

    Step 3: Creating a Proxy Host

    This is the heart of the configuration, where we link the domain, the certificate, and the internal service.

    1. In NPM, go to Hosts -> Proxy Hosts and click Add Proxy Host.
    2. A form with several tabs will open.

    “Details” Tab

    • Domain Names: Enter the full domain name that was configured in DNS, e.g., jellyfin.yourdomain.com.
    • Scheme: Select http, as the communication between NPM and Jellyfin on the local network is typically not encrypted.
    • Forward Hostname / IP: Enter the local IP address of the server where Jellyfin is running, e.g., 192.168.1.10.
    • Forward Port: Enter the port on which Jellyfin is listening, e.g., 8096.
    • Websocket Support: This is an absolutely critical setting. You must tick this option. Jellyfin makes extensive use of WebSocket technology for real-time communication, for example, to update playback status on the dashboard or for the Syncplay feature to work. Without WebSocket support enabled, the Jellyfin main page will load correctly, but many key features will not work, leading to difficult-to-diagnose problems.
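
    For the curious, ticking Websocket Support corresponds to the standard Nginx upgrade-handshake directives sketched below; NPM writes the equivalent into its generated configuration for you:

    proxy_http_version 1.1;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection "upgrade";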

    “SSL” Tab

    • SSL Certificate: From the drop-down list, select the certificate generated in the previous step for the Jellyfin domain.
    • Force SSL: Enable this option to automatically redirect all HTTP connections to secure HTTPS.
    • HTTP/2 Support: Enabling this option can improve page loading performance.

    After configuring both tabs, save the proxy host.

    Step 4: Testing

    After saving the configuration, Nginx will reload its settings in the background. It should now be possible to open a browser and enter the address https://jellyfin.yourdomain.com. You should see the Jellyfin login page, and the connection should be secured with an SSL certificate (a padlock icon will be visible in the address bar).
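
    The same test can be performed from the command line; the domain is an example, and -sI fetches only the response headers:

    curl -sI https://jellyfin.yourdomain.com | head -n 3
    # A 200 (or a redirect to the login page) without a certificate warning
    # means both the proxying and the SSL certificate are working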

    Subsection 5.1: Advanced Security Hardening (Optional)

    The default configuration is fully functional, but to enhance security, you can add extra HTTP headers that instruct the browser on how to behave. To do this, edit the created proxy host and go to the Advanced tab. In the Custom Nginx Configuration field, you can paste additional directives.

    It’s worth noting that NPM has a quirk: add_header directives added directly in this field may not be applied. A safer approach is to create a Custom Location for the path / and paste the headers in its configuration field.

    The following list presents recommended security headers.

    • Strict-Transport-Security: forces the browser to communicate exclusively over HTTPS for a specified period. Recommended value: add_header Strict-Transport-Security "max-age=31536000; includeSubDomains; preload" always; Deploy with caution: it is wise to start with a lower max-age and omit preload until you are certain about the configuration.
    • X-Frame-Options: protects against “clickjacking” attacks by preventing the page from being embedded in an <iframe> on another site. Recommended value: add_header X-Frame-Options "SAMEORIGIN" always; SAMEORIGIN allows embedding only within the same domain.
    • X-Content-Type-Options: prevents attacks that rely on the browser misinterpreting MIME types (“MIME-sniffing”). Recommended value: add_header X-Content-Type-Options "nosniff" always; This is a standard and safe setting.
    • Referrer-Policy: controls what referrer information is sent during navigation. Recommended value: add_header Referrer-Policy "origin-when-cross-origin"; A good compromise between privacy and usability.
    • X-XSS-Protection: a historical header intended to protect against Cross-Site Scripting (XSS) attacks. Recommended value: add_header X-XSS-Protection "0" always; The header is obsolete and can create new attack vectors; modern browsers have better built-in mechanisms, so it is recommended to explicitly disable it (0).

    Applying these headers provides an additional layer of defence and is considered good practice in securing web applications. However, it is critical to use up-to-date recommendations, as in the case of X-XSS-Protection, where blindly copying it from older guides could weaken security.
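
    Assembled from the list above, a ready-to-paste snippet for the Custom Location configuration field might look as follows. This is a conservative sketch: the max-age is reduced and preload omitted, in line with the caution noted earlier.

    add_header Strict-Transport-Security "max-age=15768000; includeSubDomains" always;
    add_header X-Frame-Options "SAMEORIGIN" always;
    add_header X-Content-Type-Options "nosniff" always;
    add_header Referrer-Policy "origin-when-cross-origin";
    add_header X-XSS-Protection "0" always;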

    Section 6: Conclusions and Next Steps

    Combining Nginx Proxy Manager with the TrueNAS Scale platform creates an incredibly powerful and flexible environment for managing a home lab. As demonstrated in this report, this synergy allows for centralised access management, a drastic simplification of the deployment and maintenance of SSL/TLS security, and a professionalisation of the way users interact with their self-hosted services. The key to success, however, is not just blindly following instructions, but above all, understanding the fundamental principles of how both technologies work. The awareness that applications in TrueNAS Scale operate within a restrictive ecosystem is essential for effectively diagnosing and resolving specific problems, such as the “Deploying” stall error.

    Summary of Strategic Benefits

    Deploying NPM on TrueNAS Scale brings tangible benefits:

    • Centralisation and Simplicity: All incoming requests are managed from a single, intuitive panel, eliminating the chaos of multiple IP addresses and ports.
    • Enhanced Security: Automation of SSL certificates, hiding the internal network topology, and the ability to implement advanced security headers create a solid first line of defence.
    • Professional Appearance and Convenience: Using easy-to-remember, personalised subdomains (e.g., media.mydomain.co.uk) instead of technical IP addresses significantly improves the user experience.

    Recommendations and Next Steps

    After successfully deploying Nginx Proxy Manager and securing your first application, it is worth exploring its further capabilities to fully utilise the tool’s potential.

    • Explore Access Lists: NPM allows for the creation of Access Control Lists (ACLs), which can restrict access to specific proxy hosts based on the source IP address. This is an extremely useful feature for securing administrative panels. For example, you can create a rule that allows access to the TrueNAS Scale interface or the NPM panel itself only from IP addresses on the local network, blocking any access attempts from the outside.
    • Backup Strategy: The Nginx Proxy Manager configuration, stored in the ixVolume, is a critical asset. Its loss would mean having to reconfigure all proxy hosts and certificates. TrueNAS Scale offers built-in tools for automating backups. You should configure a Periodic Snapshot Task for the dataset containing the NPM application data (ix-applications/releases/nginx-proxy-manager) to regularly create snapshots of its state.
    • Securing Other Applications: The knowledge gained during the Jellyfin configuration is universal. It can now be applied to secure virtually any other web service running in your home lab, such as Home Assistant, a file server, a personal password manager (e.g., Vaultwarden, which is a Bitwarden implementation), or the AdGuard Home ad-blocking system. Remember to enable the Websocket Support option for any application that requires real-time communication.
    • Monitoring and Diagnostics: The NPM interface provides access logs and error logs for each proxy host. Regularly reviewing these logs can help in diagnosing access problems, identifying unauthorised connection attempts, and optimising the configuration.

    Mastering Nginx Proxy Manager on TrueNAS Scale is an investment that pays for itself many times over in the form of increased security, convenience, and control over your digital ecosystem. It is another step on the journey from a simple user to a conscious architect of your own home infrastructure.

  • Your Server is Secure: A Guide to Permanently Blocking Attacks

    Your Server is Secure: A Guide to Permanently Blocking Attacks

    A Permanent IP Blacklist with Fail2ban, UFW, and Ipset

    Introduction: Beyond Temporary Protection

    In the digital world, where server attacks are a daily occurrence, merely reacting is not enough. Although tools like Fail2ban provide a basic line of defence, their temporary blocks leave a loophole—persistent attackers can return and try again after the ban expires. This article provides a detailed guide to building a fully automated, two-layer system that turns ephemeral bans into permanent, global blocks. The combination of Fail2ban, UFW, and the powerful Ipset tool creates a mechanism that permanently protects your server from known repeat offenders.

    Layer One: Reaction with Fail2ban

    The first responder to every attack is Fail2ban. This daemon monitors log files (e.g., the SSH authentication log or web server access logs) for patterns indicating break-in attempts, such as multiple failed login attempts. When it detects such activity, it immediately blocks the attacker’s IP address by adding it to the firewall rules for a defined period (e.g., 10 minutes or 30 days). This is an effective but short-term response.
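
    For reference, this ban behaviour is controlled by jail settings. A minimal sketch of /etc/fail2ban/jail.local, in which every value is an example to adjust:

    [sshd]
    enabled  = true
    maxretry = 5      # failed attempts before a ban
    findtime = 10m    # window in which the attempts must occur
    bantime  = 30d    # how long the temporary ban lasts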

    Layer Two: Persistence with UFW and Ipset

    For a ban to become permanent, we need a more robust, centralised method of managing IP addresses. This is where UFW and Ipset come in.

    What is Ipset?

    Ipset is a Linux kernel extension that allows you to manage sets of IP addresses, networks, or ports. It is a much more efficient solution than adding thousands of individual rules to a firewall. Instead, the firewall can refer to an entire set with a single rule.

    Ipset Installation and Configuration

    The first step is to install Ipset on your system. We use standard package managers for this.

    sudo apt update
    sudo apt install ipset

    Next, we create two sets: blacklist for IPv4 addresses and blacklist_v6 for IPv6.

    sudo ipset create blacklist hash:ip hashsize 4096
    sudo ipset create blacklist_v6 hash:net family inet6 hashsize 4096

    The hashsize parameter sets the initial size of the underlying hash table (the set grows it as needed); a hard limit on the number of entries is set separately with maxelem. Sizing these appropriately is crucial for performance on large lists.
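
    You can confirm both sets were created and inspect their parameters with the terse listing mode:

    sudo ipset list -t blacklist
    sudo ipset list -t blacklist_v6
    # -t prints only the header: type, hashsize, maxelem, number of entries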

    Integrating Ipset with the UFW Firewall

    For UFW to start using our sets, we must add the appropriate commands to its rules. We edit the UFW configuration files, adding rules that block traffic originating from addresses contained in our Ipset sets. For IPv4, we edit /etc/ufw/before.rules:

    sudo nano /etc/ufw/before.rules

    Immediately after *filter and :ufw-before-input [0:0], add:

    # Rules for the permanent blacklist (ipset)
    # Block any incoming traffic from IP addresses in the 'blacklist' set (IPv4)
    -A ufw-before-input -m set --match-set blacklist src -j DROP

    For IPv6, we edit /etc/ufw/before6.rules:

    sudo nano /etc/ufw/before6.rules

    Immediately after *filter and :ufw6-before-input [0:0], add:

    # Rules for the permanent blacklist (ipset) IPv6
    # Block any incoming traffic from IP addresses in the 'blacklist_v6' set
    -A ufw6-before-input -m set --match-set blacklist_v6 src -j DROP

    After adding the rules, we reload UFW for them to take effect:

    sudo ufw reload
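
    To confirm the new rules were loaded into the underlying iptables chains that UFW manages:

    sudo iptables -L ufw-before-input -n | grep match-set
    sudo ip6tables -L ufw6-before-input -n | grep match-set
    # Each command should print one DROP rule referencing the respective set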

    Script for Automatic Blacklist Updates

    The core of the system is a script that acts as a bridge between Fail2ban and Ipset. Its job is to collect banned addresses, ensure they are unique, and synchronise them with the Ipset sets.

    Create the script file:

    sudo nano /usr/local/bin/update-blacklist.sh

    Below is the content of the script. It works in several steps:

    1. Creates a temporary, unique list of IP addresses from Fail2ban logs and the existing blacklist.
    2. Creates temporary Ipset sets.
    3. Reads addresses from the unique list and adds them to the appropriate temporary sets (distinguishing between IPv4 and IPv6).
    4. Atomically swaps the old Ipset sets with the new, temporary ones, minimising the risk of protection gaps.
    5. Destroys the old, temporary sets.
    6. Returns a summary of the number of blocked addresses.

    #!/bin/bash

    BLACKLIST_FILE="/etc/fail2ban/blacklist.local"
    IPSET_NAME_V4="blacklist"
    IPSET_NAME_V6="blacklist_v6"

    touch "$BLACKLIST_FILE"

    # Create a unique list of banned IPs from the log and the existing blacklist file
    (grep 'Ban' /var/log/fail2ban.log | awk '{print $(NF)}' && cat "$BLACKLIST_FILE") | sort -u > "$BLACKLIST_FILE.tmp"
    mv "$BLACKLIST_FILE.tmp" "$BLACKLIST_FILE"

    # Create temporary ipsets
    sudo ipset create "${IPSET_NAME_V4}_tmp" hash:ip hashsize 4096 --exist
    sudo ipset create "${IPSET_NAME_V6}_tmp" hash:net family inet6 hashsize 4096 --exist

    # Add IPs to the temporary sets
    while IFS= read -r ip; do
        if [[ "$ip" == *":"* ]]; then
            sudo ipset add "${IPSET_NAME_V6}_tmp" "$ip"
        else
            sudo ipset add "${IPSET_NAME_V4}_tmp" "$ip"
        fi
    done < "$BLACKLIST_FILE"

    # Atomically swap the temporary sets with the active ones
    sudo ipset swap "${IPSET_NAME_V4}_tmp" "$IPSET_NAME_V4"
    sudo ipset swap "${IPSET_NAME_V6}_tmp" "$IPSET_NAME_V6"

    # Destroy the temporary sets
    sudo ipset destroy "${IPSET_NAME_V4}_tmp"
    sudo ipset destroy "${IPSET_NAME_V6}_tmp"

    # Count the number of entries
    COUNT_V4=$(sudo ipset list "$IPSET_NAME_V4" | wc -l)
    COUNT_V6=$(sudo ipset list "$IPSET_NAME_V6" | wc -l)

    # Subtract header lines from count
    let COUNT_V4=$COUNT_V4-7
    let COUNT_V6=$COUNT_V6-7

    # Ensure count is not negative
    [ $COUNT_V4 -lt 0 ] && COUNT_V4=0
    [ $COUNT_V6 -lt 0 ] && COUNT_V6=0

    echo "Blacklist and ipset updated. Blocked IPv4: $COUNT_V4, Blocked IPv6: $COUNT_V6"
    exit 0

    After creating the script, give it execute permissions:

    sudo chmod +x /usr/local/bin/update-blacklist.sh
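
    A manual test run verifies the pipeline end to end before handing it over to cron; 198.51.100.7 is an address from a documentation range, used purely as an example:

    echo "198.51.100.7" | sudo tee -a /etc/fail2ban/blacklist.local
    sudo /usr/local/bin/update-blacklist.sh
    sudo ipset test blacklist 198.51.100.7
    # "198.51.100.7 is in set blacklist." confirms the address is now blocked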

    Automation and Persistence After a Reboot

    To run the script without intervention, we use a cron schedule. Open the crontab editor for the root user and add a rule to run the script every hour:

    sudo crontab -e

    Add this line:

    0 * * * * /usr/local/bin/update-blacklist.sh

    Or to run it once a day at 6 a.m.:

    0 6 * * * /usr/local/bin/update-blacklist.sh

    The final, crucial step is to ensure the Ipset sets survive a reboot, as they are stored in RAM by default. We create a systemd service that will save their state before the server shuts down and load it again on startup.

    sudo nano /etc/systemd/system/ipset-persistent.service
    [Unit]
    Description=Saves and restores ipset sets on boot/shutdown
    Before=network-pre.target
    ConditionFileNotEmpty=/etc/ipset.rules

    [Service]
    Type=oneshot
    RemainAfterExit=yes
    ExecStart=/bin/bash -c "/sbin/ipset create blacklist hash:ip --exist; /sbin/ipset create blacklist_v6 hash:net family inet6 --exist; /sbin/ipset restore -f /etc/ipset.rules"
    ExecStop=/sbin/ipset save -f /etc/ipset.rules

    [Install]
    WantedBy=multi-user.target

    Finally, save the current sets once so the unit’s file condition is met, then enable and start the service:

    sudo ipset save -f /etc/ipset.rules
    sudo systemctl daemon-reload
    sudo systemctl enable --now ipset-persistent.service

    How Does It Work in Practice?

    The entire system is an automated chain of events that works in the background to protect your server from attacks. Here is the flow of information and actions:

    1. Attack Response (Fail2ban):
    • Someone tries to break into the server (e.g., by repeatedly entering the wrong password via SSH).
    • Fail2ban, monitoring the relevant service logs, detects this pattern and records the ban in its own log (/var/log/fail2ban.log).
    • It immediately adds the attacker’s IP address to a temporary firewall rule, blocking their access for a specified time.
    2. Permanent Banning (Script and Cron):
    • Every hour (as set in cron), the system runs the update-blacklist.sh script.
    • The script reads the Fail2ban logs, finds all addresses that have been banned (lines containing “Ban”), and then compares them with the existing local blacklist (/etc/fail2ban/blacklist.local).
    • It creates a unique list of all banned addresses.
    • It then creates temporary ipset sets (blacklist_tmp and blacklist_v6_tmp) and adds all addresses from the unique list to them.
    • It performs an ipset swap operation, which atomically replaces the old, active sets with the new, updated ones.
    • UFW, thanks to the previously defined rules, immediately starts blocking the new addresses that have appeared in the updated ipset sets.
    3. Persistence After Reboot (systemd Service):
    • Ipset’s operation is volatile—the sets only exist in memory. The ipset-persistent.service solves this problem.
    • Before shutdown/reboot: systemd runs the ExecStop=/sbin/ipset save -f /etc/ipset.rules command. This saves the current state of all ipset sets to a file on the disk.
    • After power-on/reboot: systemd runs the ExecStart command, which restores the sets. It reads all blocked addresses from the /etc/ipset.rules file and automatically recreates the ipset sets in memory.

    Thanks to this, even if the server is rebooted, the IP blacklist remains intact, and protection is active from the first moments after the system starts.

    Summary and Verification

    The system you have built is a fully automated, multi-layered protection mechanism. Attackers are temporarily banned by Fail2ban, and their addresses are automatically added to a permanent blacklist, which is instantly blocked by UFW and Ipset. The systemd service ensures that the blacklist survives server reboots, protecting against repeat offenders permanently. To verify its operation, you can use the following commands:

    sudo ufw status verbose
    sudo ipset list blacklist
    sudo ipset list blacklist_v6
    sudo systemctl status ipset-persistent.service

    How to Create a Reliable IP Whitelist in UFW and Ipset

    Introduction: Why a Whitelist is Crucial

    When configuring advanced firewall rules, especially those that automatically block IP addresses (like in systems with Fail2ban), there is a risk of accidentally blocking yourself or key services. A whitelist is a mechanism that acts like a VIP pass for your firewall—IP addresses on this list will always have access, regardless of other, more restrictive blocking rules.

    This guide will show you, step-by-step, how to create a robust and persistent whitelist using UFW (Uncomplicated Firewall) and ipset. As an example, we will use the IP address 203.0.113.44 (from the TEST-NET-3 documentation range), which we want to add as trusted.

    Step 1: Create a Dedicated Ipset Set for the Whitelist

    The first step is to create a separate “container” for our trusted IP addresses. Using ipset is much more efficient than adding many individual rules to iptables.

    Open a terminal and enter the following command:

    sudo ipset create whitelist hash:ip

    What did we do?

    • ipset create: The command to create a new set.
    • whitelist: The name of our set. It’s short and unambiguous.
    • hash:ip: The type of set. hash:ip is optimised for storing and very quickly looking up single IPv4 addresses.

    Step 2: Add a Trusted IP Address

    Now that we have the container ready, let’s add our example trusted IP address to it.

    sudo ipset add whitelist 203.0.113.44

    You can repeat this command for every address you want to add to the whitelist. To check the contents of the list, use the command:

    sudo ipset list whitelist

    Step 3: Modify the Firewall – Giving Priority to the Whitelist

    This is the most important step. We need to modify the UFW rules so that connections from addresses on the whitelist are accepted immediately, before the firewall starts processing any blocking rules (including those from the ipset blacklist or Fail2ban).

    Open the before.rules configuration file. This is the file where rules processed before the main UFW rules are located.

    sudo nano /etc/ufw/before.rules

    Go to the beginning of the file and find the *filter section. Just below the :ufw-before-input [0:0] line, add our new snippet. Placing it at the very top ensures it will be processed first.

    *filter
    :ufw-before-input [0:0]
    # Rule for the whitelist (ipset) ALWAYS HAS PRIORITY
    # Accept any traffic from IP addresses in the 'whitelist' set
    -A ufw-before-input -m set --match-set whitelist src -j ACCEPT

    • -A ufw-before-input: We add the rule to the ufw-before-input chain.
    • -m set --match-set whitelist src: Condition: if the source (src) IP address matches the whitelist set…
    • -j ACCEPT: Action: immediately accept (ACCEPT) the packet and stop processing further rules for this packet.

    Save the file and reload UFW:

    sudo ufw reload

    From this point on, any connection from the address 203.0.113.44 will be accepted immediately.
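
    You can verify both the membership and the rule’s position at the top of the chain, using the example address:

    sudo ipset test whitelist 203.0.113.44
    sudo iptables -L ufw-before-input -n --line-numbers | head -n 5
    # The ACCEPT rule matching the whitelist set should appear before any DROP rules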

    Step 4: Ensuring Whitelist Persistence

    Ipset sets are stored in memory and disappear after a server reboot. To make our whitelist persistent, we need to ensure it is automatically loaded every time the system starts. We will use our previously created ipset-persistent.service for this.

    Update the systemd service to “teach” it about the existence of the new whitelist set.

    sudo nano /etc/systemd/system/ipset-persistent.service

    Find the ExecStart line and add the create command for whitelist. If you already have other sets, simply add whitelist to the line. An example of an updated line:

    ExecStart=/bin/bash -c "/sbin/ipset create whitelist hash:ip --exist; /sbin/ipset create blacklist hash:ip --exist; /sbin/ipset create blacklist_v6 hash:net family inet6 --exist; /sbin/ipset restore -f /etc/ipset.rules"

    Reload the systemd configuration:

    sudo systemctl daemon-reload

    Save the current state of all sets to the file. This command overwrites the old /etc/ipset.rules file with a new version that includes your whitelist (the -f flag writes the file as root; a plain shell redirect combined with sudo would run without root permissions):

    sudo ipset save -f /etc/ipset.rules

    Restart the service to ensure it is running with the new configuration:

    sudo systemctl restart ipset-persistent.service

    Summary

    Congratulations! You have created a solid and reliable whitelist mechanism. With it, you can securely manage your server, confident that trusted IP addresses like 203.0.113.44 will never be accidentally blocked. Remember to only add fully trusted addresses to this list, such as your home or office IP address.

    How to Effectively Block IP Addresses and Subnets on a Linux Server

    Blocking single IP addresses is easy, but what if attackers use multiple addresses from the same network? Manually banning each one is inefficient and time-consuming.

    In this article, you will learn how to use ipset and iptables to effectively block entire subnets, automating the process and saving valuable time.

    Why is Blocking Entire Subnets Better?

    Many attacks, especially brute-force types, are carried out from multiple IP addresses belonging to the same operator or from the same pool of addresses (subnet). Blocking just one of them is like patching a small hole in a large dam—the rest of the traffic can still get through.

    Instead, you can block an entire subnet, for example, 45.148.10.0/24. This notation means you are blocking 256 addresses at once, which is much more effective.

    Script for Automatic Subnet Blocking

    To automate the process, you can use the following bash script. This script is interactive—it asks you to provide the subnet to block, then adds it to an ipset list and saves it to a file, making the block persistent.

    Let’s analyse the script step-by-step:

    #!/bin/bash

    # The name of the ipset list to which subnets will be added
    BLACKLIST_NAME="blacklist_nets"
    # The file where blocked subnets will be appended
    BLACKLIST_FILE="/etc/fail2ban/blacklist_net.local"

    # 1. Create the blacklist file if it doesn't exist
    touch "$BLACKLIST_FILE"

    # 2. Check if the ipset list already exists. If not, create it.
    # Using "hash:net" allows for storing subnets, which is key.
    if ! sudo ipset list $BLACKLIST_NAME >/dev/null 2>&1; then
        sudo ipset create $BLACKLIST_NAME hash:net maxelem 65536
    fi

    # 3. Loop to prompt the user for subnets to block.
    # The loop ends when the user types "exit".
    while true; do
        read -p "Enter the subnet address to block (e.g., 192.168.1.0/24) or type 'exit': " subnet
        if [ "$subnet" == "exit" ]; then
            break
        elif [[ "$subnet" =~ ^([0-9]{1,3}\.){3}[0-9]{1,3}\/[0-9]{1,2}$ ]]; then
            # Check if the subnet is not already in the file to avoid duplicates
            if ! grep -q "^$subnet$" "$BLACKLIST_FILE"; then
                echo "$subnet" | sudo tee -a "$BLACKLIST_FILE" > /dev/null
                # Add the subnet to the ipset list
                sudo ipset add $BLACKLIST_NAME $subnet
                echo "Subnet $subnet added."
            else
                echo "Subnet $subnet is already on the list."
            fi
        else
            # The entered value is not a valid X.X.X.X/Y subnet
            echo "Error: Invalid format. Please provide the address in 'X.X.X.X/Y' format."
        fi
    done

    # 4. Add a rule in iptables that blocks all traffic from addresses on the ipset list.
    # This ensures the rule is added only once.
    if ! sudo iptables -C INPUT -m set --match-set $BLACKLIST_NAME src -j DROP >/dev/null 2>&1; then
        sudo iptables -I INPUT -m set --match-set $BLACKLIST_NAME src -j DROP
    fi

    # 5. Save the iptables rules to survive a reboot.
    # This part checks which tool the system uses.
    if command -v netfilter-persistent &> /dev/null; then
        sudo netfilter-persistent save
    elif command -v service &> /dev/null && service iptables status >/dev/null 2>&1; then
        sudo service iptables save
    fi

    echo "Script finished. The '$BLACKLIST_NAME' list has been updated, and the iptables rules are active."

    How to Use the Script

    1. Save the script: Save the code above into a file, e.g., block_nets.sh.
    2. Give permissions: Make sure the file has execute permissions: chmod +x block_nets.sh.
    3. Run the script: Execute the script with root privileges: sudo ./block_nets.sh.
    4. Provide subnets: The script will prompt you to enter subnet addresses. Simply type them in the X.X.X.X/Y format and press Enter. When you are finished, type exit.

    Ensuring Persistence After a Server Reboot

    Ipset sets are stored in RAM by default and disappear after a server restart. For the blocked addresses to remain active, you must use a systemd service that will load them at system startup.

    If you already have such a service (e.g., ipset-persistent.service), you must update it to include the new blacklist_nets list.

    1. Edit the service file: Open your service’s configuration file.
      sudo nano /etc/systemd/system/ipset-persistent.service
    2. Update the ExecStart line: Find the ExecStart line and add the create command for the blacklist_nets set. An example updated ExecStart line should look like this (including previous sets):
      ExecStart=/bin/bash -c "/sbin/ipset create whitelist hash:ip --exist; /sbin/ipset create blacklist hash:ip --exist; /sbin/ipset create blacklist_v6 hash:net family inet6 --exist; /sbin/ipset create blacklist_nets hash:net --exist; /sbin/ipset restore -f /etc/ipset.rules"
    3. Reload the systemd configuration:
      sudo systemctl daemon-reload
    4. Save the current state of all sets to the file: This command will overwrite the old /etc/ipset.rules file with a new version that contains information about all your lists, including blacklist_nets.
      sudo ipset save -f /etc/ipset.rules
    5. Restart the service:
      sudo systemctl restart ipset-persistent.service

    With this method, you can simply and efficiently manage your server’s security, effectively blocking entire subnets that show suspicious activity, and be sure that these rules will remain active after every reboot.

  • WordPress and the “A scheduled event has failed” error

    WordPress and the “A scheduled event has failed” error

    Every WordPress site administrator knows that feeling. You log into the dashboard, and there’s a message waiting for you: “A scheduled event has failed”. Your heart stops for a moment. Is the site down? Is it a serious crash?

    Calm down! Before you start to panic, take a deep breath. This error, although it sounds serious, rarely means disaster. Most often, it’s simply a signal that the internal task scheduling mechanism in WordPress isn’t working optimally.

    In this article, we’ll explain what this error is, why it appears, and how to fix it professionally in various server configurations.

    What is WP-Cron?

    WordPress needs to perform cyclical background tasks: publishing scheduled posts, creating backups, or scanning the site for viruses (as in the case of the wf_scan_monitor error from the Wordfence plugin). To handle these operations, it uses a built-in mechanism called WP-Cron.

    The problem is that WP-Cron is not a real cron daemon known from Unix systems. It’s a “pseudo-cron” that has a fundamental flaw: it only runs when someone visits your website.

    • On sites with low traffic: If no one visits the site, tasks aren’t performed on time, which leads to errors.
    • On sites with high traffic: WP-Cron is called on every page load, which generates unnecessary server load.

    In both cases, the solution is the same: disable the built-in WP-Cron and replace it with a stable, system-level cron job.

    Scenario 1: A Single WordPress Site

    This is the most basic and common configuration. The solution is simple and comes down to two steps.

    Step 1: Disable the built-in WP-Cron mechanism

    Edit the wp-config.php file in your site’s main directory and add the following line:

    define('DISABLE_WP_CRON', true);

    Step 2: Configure a system cron

    Log into your server via SSH and type crontab -e to edit the list of system tasks. Then, add one of the following lines, which will properly call the WordPress cron mechanism every 5 minutes.

    • wget method: */5 * * * * wget -q -O - "https://yourdomain.co.uk/wp-cron.php?doing_wp_cron" >/dev/null 2>&1
    • curl method: */5 * * * * curl -s "https://yourdomain.co.uk/wp-cron.php?doing_wp_cron" >/dev/null 2>&1

    Remember to replace yourdomain.co.uk with your actual address. From now on, tasks will be executed regularly, regardless of site traffic.
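
    If WP-CLI is installed on the server, you can additionally confirm that scheduled events are now firing on time; run it from the site’s root directory:

    wp cron event list
    # Overdue entries in the next_run_relative column would indicate that
    # cron is still not firing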

    Scenario 2: Multiple Sites on a Standard Server

    If you manage multiple sites, adding a separate line in crontab for each one is impractical and difficult to maintain. A better solution is to create a single script that will automatically find all WordPress installations and run their tasks.

    Step 1: Create the script file

    Create a file, e.g., /usr/local/bin/run_all_wp_crons.sh, and paste the following content into it. The script searches the /var/www/ directory for wp-config.php files.

    #!/bin/bash
    #
    # Script to run cron jobs for all WordPress sites
    # Searches SITES_ROOT for wp-config.php files
    SITES_ROOT="/var/www/"

    # Path to the PHP interpreter (may need to be adjusted)
    PHP_EXECUTABLE="/usr/bin/php"

    # Logging (optional, but useful for debugging)
    LOG_FILE="/var/log/wp_cron_runner.log"

    echo "Starting cron jobs: $(date)" >> $LOG_FILE

    # Find all wp-config.php files and run wp-cron.php for them
    find "$SITES_ROOT" -name "wp-config.php" -print0 | while IFS= read -r -d $'\0' config_file; do

        # Extract the directory where WordPress is located
        WP_DIR=$(dirname "$config_file")

        # Determine the system user who owns the installation directory,
        # so the cron job runs with the site's own permissions
        WP_USER=$(stat -c '%U' "$WP_DIR")

        if [ -z "$WP_USER" ]; then
            echo "-> WARNING: Failed to determine user for: $WP_DIR" >> $LOG_FILE
            continue
        fi

        # Check if the wp-cron.php file exists in this directory
        if [ -f "$WP_DIR/wp-cron.php" ]; then
            echo "-> Running cron for: $WP_DIR as user: $WP_USER" >> $LOG_FILE
            # Run wp-cron.php using PHP CLI, switching to the correct user
            su -s /bin/sh "$WP_USER" -c "(cd '$WP_DIR' && '$PHP_EXECUTABLE' wp-cron.php)"
        else
            echo "-> WARNING: Found wp-config.php, but no wp-cron.php in: $WP_DIR" >> $LOG_FILE
        fi
    done

    echo "Finished: $(date)" >> $LOG_FILE
    echo "---" >> $LOG_FILE

    Step 2: Grant the script execution permissions

    chmod +x /usr/local/bin/run_all_wp_crons.sh

    Step 3: Create a single cron job to manage everything

    Now your crontab can contain just one line:

    */5 * * * * /usr/local/bin/run_all_wp_crons.sh >/dev/null 2>&1

    Scenario 3: Multiple Sites with ISPConfig Panel

    The ISPConfig panel uses a specific directory structure with symlinks, e.g., /var/www/yourdomain.co.uk points to /var/www/clients/client1/web1/. Using the script above could cause tasks to be executed twice.

    To avoid this, you need to modify the script to search only the target clients directory.

    Step 1: Create a script optimised for ISPConfig

    Create the file /usr/local/bin/run_ispconfig_crons.sh. Note the change in the SITES_ROOT variable.

    #!/bin/bash
    # Script to run cron jobs for all WordPress sites
    # Optimised for ISPConfig directory structure
    # We only search the directory with the actual site files
    SITES_ROOT="/var/www/clients/"

    # Path to the PHP interpreter
    PHP_EXECUTABLE="/usr/bin/php"

    # Optional log file to track progress
    LOG_FILE="/var/log/wp_cron_runner.log"

    echo "Starting cron jobs (ISPConfig): $(date)" >> $LOG_FILE

    find "$SITES_ROOT" -name "wp-config.php" -print0 | while IFS= read -r -d '' config_file; do
        WP_DIR=$(dirname "$config_file")
        if [ -f "$WP_DIR/wp-cron.php" ]; then
            echo "-> Running cron for: $WP_DIR" >> $LOG_FILE
            (cd "$WP_DIR" && "$PHP_EXECUTABLE" wp-cron.php)
        fi
    done

    echo "Finished: $(date)" >> $LOG_FILE
    echo "---" >> $LOG_FILE

    Steps 2 and 3 are analogous to Scenario 2: give the script execution permissions (chmod +x) and add a single line to crontab -e, pointing to the new script file.
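
    For completeness, those two remaining steps look like this:

    chmod +x /usr/local/bin/run_ispconfig_crons.sh
    crontab -e
    # then add the single line:
    # */5 * * * * /usr/local/bin/run_ispconfig_crons.sh >/dev/null 2>&1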

    Summary

    The “A scheduled event has failed” error is not a reason to panic, but rather an invitation to improve your infrastructure. It’s a chance to move from the unreliable, built-in WordPress mechanism to a solid, professional system solution that guarantees stability and performance.

    Regardless of your server configuration, you now have the tools to sleep soundly, knowing that your scheduled tasks are running like clockwork.

  • OpenLiteSpeed (OLS) with Redis. Fast Cache for WordPress Sites.

    OpenLiteSpeed (OLS) with Redis. Fast Cache for WordPress Sites.

    Managing a web server requires an understanding of the components that make up its architecture. Each element plays a crucial role in delivering content to users quickly and reliably. This article provides an in-depth analysis of a modern server configuration based on OpenLiteSpeed (OLS), explaining its fundamental mechanisms, its collaboration with the Redis caching system, and its methods of communication with external applications.

    OpenLiteSpeed (OLS) – The System’s Core

    The foundation of every website is the web server—the software responsible for receiving HTTP requests from browsers and returning the appropriate resources, such as HTML files, CSS, JavaScript, or images.

    What is OpenLiteSpeed?

    OpenLiteSpeed (OLS) is a high-performance, lightweight, open-source web server developed by LiteSpeed Technologies. Its key advantage over traditional servers, such as Apache in its default configuration, is its event-driven architecture.

    • Process-based model (e.g., Apache prefork): A separate process or thread is created for each simultaneous connection. This model is simple, but with high traffic, it leads to significant consumption of RAM and CPU resources, as each process, even if inactive, reserves resources.
    • Event-driven model (OpenLiteSpeed, Nginx): A single server worker process can handle hundreds or thousands of connections simultaneously. It uses non-blocking I/O operations and an event loop to manage requests. When a process is waiting for an operation (e.g., reading from a disk), it doesn’t block but instead moves on to handle another connection. This architecture provides much better scalability and lower resource consumption.

    Key Features of OpenLiteSpeed

    OLS offers a set of features that make it a powerful and flexible tool:

    • Graphical Administrative Interface (WebAdmin GUI): OLS has a built-in, browser-accessible admin panel that allows you to configure all aspects of the server—from virtual hosts and PHP settings to security rules—without needing to directly edit configuration files.
    • Built-in Caching Module (LSCache): One of OLS’s most important features is LSCache, an advanced and highly configurable full-page cache mechanism. When combined with dedicated plugins for CMS systems (e.g., WordPress), LSCache stores fully rendered HTML pages in memory. When the next request for the same page arrives, the server delivers it directly from the cache, completely bypassing the execution of PHP code and database queries.
    • Support for Modern Protocols (HTTP/3): OLS natively supports the latest network protocols, including HTTP/3 (based on QUIC). This provides lower latency and better performance, especially on unstable mobile connections.
    • Compatibility with Apache Rules: OLS can interpret mod_rewrite directives from .htaccess files, which is a standard in the Apache ecosystem. This significantly simplifies the migration process for existing applications without the need to rewrite complex URL rewriting rules.

    Redis – In-Memory Data Accelerator

    Caching is a fundamental optimisation technique that involves storing the results of costly operations in a faster access medium. In the context of web applications, Redis is one of the most popular tools for this task.

    What is Redis?

    Redis (REmote Dictionary Server) is an in-memory data structure store, most often used as a key-value database, cache, or message broker. Its power comes from the fact that it stores all data in RAM, not on a hard drive. Accessing RAM is orders of magnitude faster than accessing SSDs or HDDs, as it’s a purely electronic operation that bypasses slower I/O interfaces.

    In a typical web application, Redis acts as an object cache. It stores the results of database queries, fragments of rendered HTML code, or complex PHP objects that are expensive to regenerate.

    How Do OpenLiteSpeed and Redis Collaborate?

    The LSCache and Redis caching mechanisms don’t exclude each other; rather, they complement each other perfectly, creating a multi-layered optimisation strategy.

    Request flow (simplified):

    1. A user sends a request for a dynamic page (e.g., a blog post).
    2. OpenLiteSpeed receives the request. The first step is to check the LSCache.
      • LSCache Hit: If an up-to-date, fully rendered version of the page is in the LSCache, OLS returns it immediately. The process ends here. This is the fastest possible scenario.
      • LSCache Miss: If the page is not in the cache, OLS forwards the request to the appropriate external application (e.g., a PHP interpreter) to generate it.
    3. The PHP application begins building the page. To do this, it needs to fetch data from the database (e.g., MySQL).
    4. Before PHP executes costly database queries, it first checks the Redis object cache.
      • Redis Hit: If the required data (e.g., SQL query results) are in Redis, they are returned instantly. PHP uses this data to build the page, bypassing communication with the database.
      • Redis Miss: If the data is not in the cache, PHP executes the database queries, fetches the results, and then saves them to Redis for future requests.
    5. PHP finishes generating the HTML page and returns it to OpenLiteSpeed.
    6. OLS sends the page to the user and, at the same time, saves it to the LSCache so that subsequent requests can be served much faster.

    This two-tiered strategy ensures that both the first and subsequent visits to a page are maximally optimised. LSCache eliminates the need to run PHP, while Redis drastically speeds up the page generation process itself when necessary.
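
    The Redis half of this pattern is easy to demonstrate from the shell. Below is a minimal sketch using redis-cli; the key name, value, and TTL are purely illustrative and not taken from any real plugin:

    KEY="wp:query:recent_posts"

    # Cache lookup: redis-cli prints an empty string for a missing key
    if VALUE=$(redis-cli GET "$KEY") && [ -n "$VALUE" ]; then
        echo "Redis hit: $VALUE"                        # served straight from RAM
    else
        echo "Redis miss - running the expensive query..."
        VALUE="(result of an expensive SQL query)"      # placeholder for real work
        redis-cli SET "$KEY" "$VALUE" EX 300 >/dev/null # cache it for 5 minutes
    fi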

    Delegating Tasks – External Applications in OLS

    Modern web servers are optimised to handle network connections and deliver static files (images, CSS). The execution of application code (dynamic content) is delegated to specialised external programmes. This division of responsibilities increases stability and security.

    OpenLiteSpeed manages these programmes through the External Applications system. The most important types are described below:

    • LSAPI Application (LiteSpeed SAPI App): The most efficient and recommended method of communication with PHP, Python, or Ruby applications. LSAPI is a proprietary, optimised protocol that minimises communication overhead between the server and the application interpreter.
    • FastCGI Application: A more universal, standard protocol for communicating with external application processes. This is a good solution for applications that don’t support LSAPI. It works on a similar principle to LSAPI (by maintaining permanent worker processes), but with slightly more protocol overhead.
    • Web Server (Proxy): This type configures OLS to act as a reverse proxy. OLS receives a request from the client and then forwards it in its entirety to another server running in the background (the “backend”), e.g., an application server written in Node.js, Java, or Go. This is crucial for building microservices-based architectures.
    • CGI Application: The historical and slowest method. A new application process is launched for each request and is closed after returning a response. Due to the huge performance overhead, it’s only used for older applications that don’t support newer protocols.

    OLS routes traffic to the appropriate application using Script Handlers, which map file extensions (e.g., .php) to a specific application, or Contexts, which map URL paths (e.g., /api/) to a proxy type application.

    Communication Language – A Comparison of SAPI Architectures

    SAPI (Server Application Programming Interface) is an interface that defines how a web server communicates with an application interpreter (e.g., PHP). The choice of SAPI implementation has a fundamental impact on the performance and stability of the entire system.

    The Evolution of SAPI

    1. CGI (Common Gateway Interface): The first standard. Stable, but inefficient due to launching a new process for each request.
    2. Embedded Module (e.g., mod_php in Apache): The PHP interpreter is loaded directly into the server process. This provides very fast communication, but at the cost of stability (a PHP crash causes the server to crash) and security.
    3. FastCGI: A compromise between performance and stability. It maintains a pool of independent, long-running PHP processes, which eliminates the cost of constantly launching them. Communication takes place via a socket, which provides isolation from the web server.
    4. LSAPI (LiteSpeed SAPI): An evolution of the FastCGI model. It uses the same architecture with separate processes, but the communication protocol itself was designed from scratch to minimise overhead, which translates to even higher performance than standard FastCGI.

    SAPI Architecture Comparison Table

    | Feature | CGI | Embedded Module (mod_php) | FastCGI | LiteSpeed SAPI (LSAPI) |
    |---|---|---|---|---|
    | Process Model | New process per request | Shared process with server | Permanent external processes | Permanent external processes |
    | Performance | Low | Very high | High | Highest |
    | Stability / Isolation | Excellent | Low | High | High |
    | Resource Consumption | Very high | Moderate | Low | Very low |
    | Overhead | High (process launch) | Minimal (shared memory) | Moderate (protocol) | Low (optimised protocol) |
    | Main Advantage | Full isolation | Communication speed | Balanced performance & stability | Optimised performance & stability |
    | Main Disadvantage | Very low performance | Instability, security issues | More complex configuration | Technology specific to LiteSpeed |

    Comparison of Communication Sockets

    Communication between processes (e.g., OLS and Redis, or OLS and PHP processes) occurs via sockets. The choice of socket type affects performance.

    | Feature | TCP/IP Socket (on localhost) | Unix Domain Socket (UDS) |
    |---|---|---|
    | Addressing | IP address + port (e.g., 127.0.0.1:6379) | File path (e.g., /var/run/redis.sock) |
    | Scope | Same machine or over a network | Same machine only (IPC) |
    | Performance (locally) | Lower | Higher |
    | Overhead | Higher (goes through the network stack) | Minimal (bypasses the network stack) |
    | Security Model | Firewall rules | File system permissions |

    For local communication, UDS is a more efficient solution because it bypasses the entire operating system network stack, which reduces latency and CPU overhead. This is why it’s preferred in optimised configurations for connections between OLS, Redis, and LSAPI processes.
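
    Both variants are easy to compare with redis-cli; the socket path below depends on the unixsocket directive in your redis.conf, so treat it as an example:

    # TCP socket on localhost - still traverses the loopback network stack
    redis-cli -h 127.0.0.1 -p 6379 PING

    # Unix domain socket - bypasses the network stack entirely
    redis-cli -s /var/run/redis/redis-server.sock PING

    # redis-cli also has a built-in latency test, handy for comparing transports
    redis-cli -h 127.0.0.1 -p 6379 --latency
    redis-cli -s /var/run/redis/redis-server.sock --latency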

    Practical Implementation and Management

    To translate theory into practice, let’s analyse a real server configuration for the virtual host solutionsinc.co.uk.

    Analysis of the solutionsinc.co.uk Configuration Example

    1. External App Definition:
      • In the “External App” panel, a LiteSpeed SAPI App named solutionsinc.co.uk has been defined. This is the central configuration point for handling the dynamic content of the site.
      • Address: UDS://tmp/lshttpd/solutionsinc.co.uk.sock. This line is crucial. It informs OLS that a Unix Domain Socket (UDS) will be used to communicate with the PHP application, not a TCP/IP network socket. The .sock file at this path is the physical endpoint of this efficient communication channel.
      • Command: /usr/local/lsws/lsphp84/bin/lsphp. This is the direct path to the executable file of the LiteSpeed PHP interpreter, version 8.4. OLS knows it should run this specific programme to process scripts.
      • Other parameters, such as LSAPI_CHILDREN = 50 and memory limits, are used for precise resource and performance management of the PHP process pool.
    2. Linking with PHP Files (Script Handler):
      • The application definition alone isn’t enough. In the “Script Handler” panel, we tell OLS when to use it.
      • For the .php suffix (extension), LiteSpeed SAPI is set as the handler.
      • [VHost Level]: solutionsinc.co.uk is chosen as the Handler Name, which directly points to the application defined in the previous step.
      • Conclusion: From now on, every request for a file with the .php extension on this site will be passed through the UDS socket to one of the lsphp84 processes.

    This configuration is an excellent example of an optimised and secure environment: OLS handles the connections, while dedicated, isolated lsphp84 processes execute the application code, communicating through the fastest available channel—a Unix domain socket.

    Managing Unix Domain Sockets (.sock) and Troubleshooting

    The .sock file, as seen in the solutionsinc.co.uk.sock example, isn’t a regular file. It’s a special file in Unix systems that acts as an endpoint for inter-process communication (IPC). Instead of communicating through the network layer (even locally), processes can write to and read data directly from this file, which is much faster.

    When OpenLiteSpeed launches an external LSAPI application, it creates such a socket file. The PHP processes listen on this socket for incoming requests from OLS.

    Practical tip: A ‘Stubborn’ .sock file

    Sometimes, after making changes to the PHP configuration (e.g., modifying the php.ini file or installing a new extension) and restarting the OpenLiteSpeed server (lsws), the changes may not be visible on the site. This happens because the lsphp processes may not have been correctly restarted with the server, and OLS is still communicating with the old processes through the existing, “old” .sock file.

    In such a situation, when a standard restart doesn’t help, an effective solution is to:

    1. Stop the OpenLiteSpeed server.
    2. Manually delete the relevant .sock file, for example, using the terminal command: rm /tmp/lshttpd/solutionsinc.co.uk.sock
    3. Restart the OpenLiteSpeed server.

    After the restart, OLS will not find an existing socket file and will be forced to create a new one. More importantly, it will launch a new pool of lsphp processes that load the fresh configuration from the php.ini file. Deleting the .sock file acts as a hard reset of the communication channel between the server and the PHP application, guaranteeing that all components are initialised from scratch with the current settings.
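
    As a single terminal sequence (assuming a default OpenLiteSpeed installation under /usr/local/lsws):

    # 1. Stop OpenLiteSpeed
    /usr/local/lsws/bin/lswsctrl stop

    # 2. Remove the stale socket so a fresh one must be created
    rm /tmp/lshttpd/solutionsinc.co.uk.sock

    # 3. Start the server again - new lsphp processes load the current php.ini
    /usr/local/lsws/bin/lswsctrl start

    # Optional sanity check: the socket should reappear with a fresh timestamp
    ls -l /tmp/lshttpd/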

    Summary

    The server configuration presented is a precisely designed system in which each element plays a vital role.

    • OpenLiteSpeed acts as an efficient, event-driven core, managing connections.
    • LSCache provides instant delivery of pages from the full-page cache.
    • Redis acts as an object cache, drastically accelerating the generation of dynamic content when needed.
    • LSAPI UDS creates optimised communication channels, minimising overhead and latency.

    An understanding of these dependencies allows for informed server management and optimisation to achieve maximum performance and reliability.

  • Ubuntu Server 24.04, ISPConfig (Nginx) & OpenLiteSpeed: A Modern, High-Performance Web and Email Server Without CyberPanel

    Introduction

    Hello there, fellow server administrators, Linux enthusiasts, and all those who don’t run screaming at the mere mention of a “reverse proxy”. Today, I’d like to share the story of a migration that – much like a decent bit of sci-fi – features a few plot twists, surprises, and, with any luck, a happy ending (well, at least for now!).

    For the past few years, my go-to solution for managing both web and email servers has been OpenLiteSpeed paired with CyberPanel. But, as fate (or rather, the CyberPanel developers) would have it, Ubuntu 24.04 LTS simply isn’t on their agenda. And so it’s been for more than three years now… Updates? Forget it! Support for newer Ubuntu releases? You must be joking! So here we are in June 2025, after countless requests and mounting anticipation, and CyberPanel still refuses to support the latest Ubuntu. Frankly, it doesn’t look like that’s about to change any time soon.

    But every cloud has a silver lining… So I decided to take matters into my own hands. The result? A 21st-century hybrid: Ubuntu Server 24.04, ISPConfig with Nginx acting as a reverse proxy, and OpenLiteSpeed, all without the dead weight of CyberPanel. Why this particular combination? Because I like to keep control, I value performance and flexibility, and – let’s be honest – I don’t want to be held hostage by some creaky old admin panel.

    This setup brings a few clear advantages:

    • A fresh, supported operating system with long-term support (not some digital fossil from 2022)
    • Complete control over your configuration, free from the shackles imposed by CyberPanel
    • The performance of OpenLiteSpeed, combined with the straightforward management of ISPConfig
    • Trouble-free handling of both websites and email (and if you’ve ever tried to wrangle mail on OLS + CyberPanel, you know exactly what digital purgatory looks like)
    • Modernity, flexibility, and readiness for the future – and, as a bonus, fewer grey hairs along the way

    Is this solution for everyone? Probably not – but if you’re fed up with waiting for panel updates, looking for something genuinely efficient, and like having everything under your control, stick around. In the next sections, I’ll walk you through setting up a server like this, step by step – no dark magic or needless frustration required.

    Why create a separate client for each website in ISPConfig?

    When setting up a server with ISPConfig, keeping things tidy and secure should be your top priority. One of the best practices is to create a separate client for every website you plan to host. This might seem a bit more effort at first, but it pays off hugely in the long run, both in terms of security and organisation.

    Benefits of using a dedicated client for each website:

    1. Data isolation – Each client (and therefore their sites, emails, databases) operates in its own “sandbox”. If one site is ever compromised, the others are much better protected.
    2. Easier administration – Managing permissions, backups, and resource limits is much clearer and can be done per client.
    3. Transparent billing – If you offer hosting as a service, it’s much easier to keep track of accounts and invoices for each client.
    4. Secure email separation – Email accounts for each client are separate, so a spam incident or security breach in one does not affect the others.
    5. Simpler migrations or deletions – Removing a website (along with the client) has no effect on the other sites on your server.

    In this article, I’ll walk you through the process of adding a new client for a real-world example: solutionsinc.co.uk.

    Limits and security in ISPConfig – why do they matter?

    Once you’ve created a client in ISPConfig, one of the most important steps is setting usage limits and making sure the account is secure. ISPConfig gives you granular control over how many resources (websites, email accounts, databases, etc.) each client can use – a perfect tool for both commercial hosting and private projects.

    Examples of limits you can set:

    • Maximum number of web domains and subdomains
    • Disk space quota (Web Quota)
    • Data transfer quota (Traffic Quota)
    • Number of FTP accounts, SSH users
    • Access to specific technologies (PHP, Python, Ruby, SSL, Let’s Encrypt, etc.)
    • Limits for emails, databases, cron jobs, and other services

    Defining these limits helps you keep everything tidy, prevents accidental server overloads, and protects against abuse (such as from a misconfigured app or a DDoS attack).

    Strong password – your first line of defence

    A crucial part of setting up any client account is choosing a strong, complex password – ideally a random one generated by a password manager. I recommend using a password up to 64 characters long, mixing upper- and lower-case letters, numbers, and special characters. With a password manager, you won’t need to memorise it, and your account will be much safer against attacks.

    Remember: Weak, repetitive or short passwords are practically an open invitation for cybercriminals!

    Description of the “Web Limits” options in ISPConfig

    • Webservers: The web server(s) where the client’s websites will be hosted. Usually, you select from servers previously defined in the panel.
    • Max. number of web domains: The maximum number of domains the client can add. -1 means unlimited.
    • Web Quota: The disk space limit for the client’s website files (in MB). -1 = unlimited.
    • Traffic Quota: Monthly data transfer limit (in MB) for all the client’s sites. -1 = unlimited.
    • PHP Options: Enables/disables PHP and lets you choose the handler (e.g. PHP-FPM, Disabled). Important for both security and performance.
    • CGI available: Whether the client can run CGI scripts (usually disabled for security reasons).
    • SSI available: Allows use of Server Side Includes.
    • Perl available / Ruby available / Python available: Whether these scripting languages are available to the client’s sites.
    • SuEXEC forced: Forces scripts to run with the domain user’s permissions, increasing security.
    • Custom error docs available: Whether the client can set up custom error pages (e.g. 404.html).
    • Wildcard subdomain available: Allows support for wildcard subdomains like *.yourdomain.com.
    • SSL available: Whether the client can enable SSL on their sites (https).
    • Let’s Encrypt available: Enables free SSL certificate generation from Let’s Encrypt for the client’s domains.
    • Max. number of web aliasdomains: The maximum number of alias domains (extra domains pointing to the same site).
    • Max. number of web subdomains: The maximum number of subdomains the client can create.
    • Max. number of FTP users: Number of FTP users the client can create.
    • Max. number of Shell users: Number of SSH users (usually set to 0 for security).
    • SSH-Chroot Options: Whether SSH users are restricted (chrooted) to their home directory (“Jailkit”).
    • Max. number of Webdav users: Limit for the number of WebDAV users.
    • Backupfunction available: Whether the client can perform their own backups via the panel.
    • Show web server config selection: Allows the client to select additional web server configuration options (for advanced users).

    Description of the “Email Limits” options in ISPConfig

    • Mailservers: The mail server where the client’s email domains and mailboxes are hosted.
    • Max. number of email domains: Maximum number of email domains the client can create (-1 = unlimited).
    • Max. number of mailboxes: Maximum number of mailboxes for the client (-1 = unlimited).
    • Max. number of email aliases: Limit for email aliases (additional addresses that forward to mailboxes).
    • Max. number of domain aliases: Number of domain aliases (extra domains mapped to the same mailboxes).
    • Max. number of mailing lists: Limit for mailing lists (group distribution lists).
    • Max. number of email forwarders: Maximum number of email forwarders (forwards).
    • Max. number of email catchall accounts: Number of “catchall” accounts that receive all mail sent to non-existent addresses in the domain.
    • Max. number of email routes: Number of email routing rules (advanced – redirecting mail to other servers based on custom rules).
    • Max. number of email white / blacklist entries: Maximum number of entries on the client’s white/blacklist.
    • Max. number of email filters: Limit for email filters (automatic sorting, labelling, etc.).
    • Max. number of fetchmail accounts: Number of external fetchmail accounts (to collect mail from other servers).
    • Mailbox quota: Mailbox size limit (in MB). -1 = unlimited.
    • Max. number of spamfilter white / blacklist filters: Number of white/blacklist rules in the spam filter.
    • Max. number of spamfilter users: Number of users with their own spamfilter settings.
    • Max. number of spamfilter policies: Number of spamfilter policies (sets of rules).
    • E-mail backup function available: Whether the client can create email backups via the panel.

    Creating a database user for your website – ISPConfig

    Once you’ve set up your client and defined all necessary limits, the next step is to create a dedicated database user for your website (e.g. solutionsinc.co.uk). Go to the Sites tab and find the database management section.

    1. Select your client from the list (e.g. SolutionsInc).
    2. Enter the database username – ISPConfig automatically suggests a prefix linked to the client (e.g. c2_). This ensures each user is unique and easy to identify.
    3. Set a strong database password. Use the Generate Password button and choose a long, random password (ideally stored in your password manager). Strong passwords are essential for security, and you won’t need to remember them.
    4. Repeat the password in the confirmation field.
    5. Click Save to create the user.

    Creating a separate database user for each website is a key security step – if the password is ever compromised, only that single site is affected. Even a serious application bug won’t give an attacker access to data from other sites on the server.

    Description of the “Domain” tab options when creating a website in ISPConfig

    • Server: The web server on which the domain will be hosted. Select the appropriate server from the available list.
    • Client: The client to whom this domain/website will be assigned.
    • IPv4-Address: The IPv4 address assigned to this domain (default is *, meaning any available IP on the server).
    • IPv6-Address: IPv6 address, if used (optional).
    • Domain: The domain name you wish to add (e.g. solutionsinc.co.uk).
    • Harddisk Quota: The disk space limit for this particular site (in MB). -1 means unlimited.
    • Traffic Quota: The monthly data transfer limit for this site (in MB). -1 means unlimited.
    • CGI: Allow running CGI scripts on the site (usually disabled for security reasons).
    • SSI: Enable Server Side Includes support.
    • Own Error-Documents: Allows you to set up custom error pages (e.g. 404.html).
    • Auto-Subdomain: Default subdomain that will be automatically added (usually www).
    • SSL: Tick this if you have your own SSL certificate and want to manually upload certificates via the panel.
    • Let’s Encrypt SSL: Select this if you want ISPConfig to automatically generate a free Let’s Encrypt SSL certificate for this domain (no need to have your own certificate).
    • PHP: If you intend to use Nginx as a Reverse Proxy for OpenLiteSpeed, you must select PHP-FPM and exactly the same PHP version as used by OLS. Other modes (e.g. Disabled) will not work properly in this setup.
    • Web server config: Additional web server configuration options (advanced, can usually be left as default).
    • Active: Whether the website should be active (enabled by default – the site will work once saved).

    Description of the “Redirect” tab options in ISPConfig (Reverse Proxy)

    • Redirect Type: Defines the type of redirect for the domain. If you are setting up Nginx as a reverse proxy for OpenLiteSpeed, make sure to select proxy here. This ensures HTTP/S traffic is properly forwarded to the backend (OLS).
    • Redirect Path: The target address (URL/backend) where the traffic should be proxied. For reverse proxy, enter the backend address here (e.g. http://127.0.0.1:8088/ – do not use port 8080, as this port is used by ISPConfig!).
    • SEO Redirect: Optional SEO redirects (e.g. 301, 302, non-www→www). Usually set to “No redirect” unless you have specific SEO requirements.
    • Rewrite Rules: Field for custom rewrite rules, compatible with the nginx_http_rewrite_module. Here you can add additional HTTP instructions, e.g. break, if, return, rewrite, set (full list on the nginx documentation site).
    • Rewrite HTTP to HTTPS: If checked, automatically redirects all HTTP traffic to HTTPS. Recommended for sites requiring SSL.

    Note: For reverse proxy setups, it is essential to set Redirect Type to proxy and specify the correct backend address in Redirect Path.

    SSL tab in ISPConfig – what should you set here?

    • If you have ticked Let’s Encrypt SSL in the Domain tab, you don’t need to fill in anything here. The certificate will be generated automatically and these fields can remain empty.
    • If you are using your own SSL certificates, paste the relevant content into the fields below:
      • SSL Key: The private key
      • SSL Request: The Certificate Signing Request (CSR) – optional, if you use it
      • SSL Certificate: The actual SSL certificate (public certificate)
      • SSL Bundle: Any intermediate certificates/CA Bundle (if required by your certificate provider)
    • SSL Domain: The domain for which the certificate is generated/installed (autofilled)
    • SSL Action: By default, “Save certificate” – saves the details you enter

    Tip: If you switch certificate type (for example, from a custom certificate to Let’s Encrypt or vice versa), remember to untick any unnecessary options in the Domain tab and save your changes.

    Statistics tab in ISPConfig – your own web analytics

    If you want an independent statistics system for your website (other than what’s provided by WordPress or Google Analytics), this is the place. Here you can choose which program will generate and display detailed traffic statistics for your site. Here’s a brief overview of the available options:

    • AWStats: The most popular tool for detailed website statistics. It analyses server logs and presents readable charts, traffic summaries, referrers, search phrases, and more. Features a web interface and multi-language support.
    • GoAccess: A modern, real-time log analyser with its own web interface. Very fast, provides clear summaries of key metrics (unique visitors, most popular pages, error codes, etc.). Slightly more technical than AWStats.
    • Webalizer: An older but very lightweight and fast log analyser. Shows basic traffic stats, hourly/daily graphs, top visited pages, but with less detail than AWStats.
    • None: No statistics will be collected. Useful if you use only external analytics solutions or want to save system resources.

    Tip: Make sure to set a password for the statistics panel if you want to keep access restricted!

    Backup tab – Your safety net (and peace of mind)

    This tab lets you set up automatic backups of your website and database, or make a manual backup whenever you’re feeling sensible (or have a sudden flash of paranoia – both are valid).

    Available options:

    • Backup interval: How often to run backups (e.g. daily, weekly, monthly, or never – but I definitely don’t recommend that last one!).
    • Number of backup copies: How many recent backups to keep on the server.
    • Excluded Directories: Folders to skip during backup (like cache or temp data).
    • Compression options: Compress your backups so they don’t fill up your server – highly recommended!
    • Encryption options: Encrypt your backups, so even if someone gets their hands on them, your data stays safe.

    Manual backup: Two handy buttons – make a database backup or a web files backup in a single click.

    Anecdote: There are two kinds of people: those who make backups, and those who will start making them… right after their first real disaster. Trust me, you want to be in the first group!

    Options tab – crucial for Reverse Proxy (Nginx → OLS)

    If you use Nginx as a Reverse Proxy for OpenLiteSpeed (or another backend), you must configure the correct headers in the Proxy Directives field. Paste the following lines there:

    proxy_set_header Host $host;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header X-Forwarded-Proto $scheme;

    Why is this important?

    • proxy_set_header Host $host;
      Passes the original host name (domain) that the visitor used. This ensures the backend (OLS) knows which virtual host the request is for and serves the correct website.
    • proxy_set_header X-Real-IP $remote_addr;
      Forwards the real client IP address (not the proxy’s address). This is crucial for logging, statistics, and security features – you’ll always see the true visitor’s IP.
    • proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
      Adds the client’s original IP address to the X-Forwarded-For header. Especially useful if your traffic passes through multiple proxies, so you can trace the full request path.
    • proxy_set_header X-Forwarded-Proto $scheme;
      Tells the backend whether the original request used HTTPS or HTTP. Essential for your apps to generate correct return links (e.g., https:// instead of http://).

    Summary:
    Without these headers, your backend (OLS) won’t know who’s really visiting your site, which domain they’re using, or whether they’re using HTTPS. The consequences? Incorrect logs, broken redirects, SSL issues, and even security holes.
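
    If you want to verify that these headers actually reach the backend, a throwaway probe works well. A sketch, assuming an example ISPConfig document root – adjust the path and remember to delete the file afterwards:

    # Create a temporary probe in the site's document root (path is an example)
    cat > /var/www/clients/client1/web1/web/probe.php <<'EOF'
    <?php
    header('Content-Type: text/plain');
    echo "Host: ",              $_SERVER['HTTP_HOST']              ?? '-', "\n";
    echo "X-Real-IP: ",         $_SERVER['HTTP_X_REAL_IP']         ?? '-', "\n";
    echo "X-Forwarded-For: ",   $_SERVER['HTTP_X_FORWARDED_FOR']   ?? '-', "\n";
    echo "X-Forwarded-Proto: ", $_SERVER['HTTP_X_FORWARDED_PROTO'] ?? '-', "\n";
    EOF

    # Request it through Nginx and check what OLS actually received
    curl -s https://solutionsinc.co.uk/probe.php

    # Clean up
    rm /var/www/clients/client1/web1/web/probe.php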

    Final step: configuring OpenLiteSpeed for Reverse Proxy

    Once all ISPConfig settings are in place, head over to the OpenLiteSpeed admin panel at http://SERVER_IP:7080.

    What to do:

    1. Go to the Listeners section.
    2. Delete all listeners except for the Listener Default on port 8088 (this is where OLS will receive requests from the Nginx reverse proxy).
    3. Click the magnifying glass icon for your listener and open the Virtual Host Mappings tab.
    4. Add a mapping for your site, e.g. solutionsinc.co.uk (or whichever domain you configured in ISPConfig).

    Why this way?

    • SSL management (certificates, renewals, redirects, etc.) is now fully handled by Nginx and ISPConfig – no need to set up SSL in OLS.
    • Other listener settings can be left as-is – from now on, Nginx will handle all the “heavy lifting”.

    Summary:
    From now on, Nginx will handle all incoming traffic (including HTTPS) and forward clean requests to OLS over the local port (8088). This is a perfect blend of performance, flexibility, and security.
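
    A quick way to confirm the mapping before touching DNS is to ask OLS directly on the loopback port, presenting the right Host header:

    # 200 (or an expected redirect) means the Virtual Host Mapping works;
    # a 404 usually means the mapping for this domain is missing or wrong
    curl -s -o /dev/null -w '%{http_code}\n' \
         -H 'Host: solutionsinc.co.uk' http://127.0.0.1:8088/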

    Virtual Hosts > Basic in OpenLiteSpeed – how to fill in the fields correctly?

    1. Virtual Host Name: Enter the virtual host’s name, e.g. solutionsinc.co.uk.
    2. Virtual Host Root: Use exactly the path from the Document Root field in ISPConfig (e.g. /var/www/clients/client1/web1/), making sure to include the trailing slash /. This ensures OLS serves files from the right location.
    3. Config File: Enter $SERVER_ROOT/conf/vhosts/$VH_NAME/vhost.conf. When you try to save, OLS will warn you that this file does not exist – click the link below the field to create it automatically. You’ll then be able to save your settings.
    4. suEXEC User and suEXEC Group:
      Select the exact user and group that ISPConfig created for your domain. You can check this in ISPConfig under Sites > Website > Options.
      This is crucial – it ensures OLS runs PHP scripts with the correct permissions, improving security and isolating your site from other users’ files.
    5. External App Set UID Mode: Choose DocRoot UID. This ensures external apps (like PHP) run as the user assigned to your site’s directory.
    6. In the Security section, you’ll find options that are essential for the secure and correct functioning of your virtual host. Set each of them to Yes:
      • Follow Symbolic Link:
        Allows OLS to follow symbolic links in your site directory. This is required by some applications, frameworks, or during software updates.
      • Enable Scripts/ExtApps:
        Lets you run scripts (like PHP) and external applications. Without this, your website won’t be able to process PHP or use any dynamic features.
      • Restrained:
        Enables “restrained” mode – increases security by limiting which system commands the virtual host can execute.
      Remember: all these options must be set to Yes for your site to work correctly and securely with ISPConfig and a reverse proxy setup.

    General tab – OpenLiteSpeed Virtual Host

    The most important options:

    • Document Root:
      The directory where your website’s files are located. Paste the path from ISPConfig (Sites → Website → Document Root), then add your site’s directory, usually /web/, at the end, e.g. /var/www/clients/client1/web1/web/. Always check this path in ISPConfig, as it may differ!

    Other options:

    • Domain Name:
      (Optional) Main domain name for this vhost.
    • Domain Aliases:
      Alternative domains that should also point to this vhost (e.g. www and non-www versions).
    • Administrator Email:
      Email address of the site admin (for error notifications).
    • Enable GZIP Compression:
      Enables GZIP compression on the server – speeds up page loading.
    • Enable Brotli Compression:
      Alternative, more efficient compression method than GZIP.
    • Enable GeoLocation Lookup:
      Lets the server detect users’ country (e.g. for stats).
    • cgroups:
      Optional resource limits for this vhost (CPU/RAM quotas).

    Index Files section:

    • Use Server Index Files:
      Should be set to “No” so OLS uses the index files defined below, not global server defaults.
    • Index Files:
      List of filenames that should be treated as the site’s entry point (e.g. index.php, index.html). The first found file will be used.
    • Auto Index:
      Allows automatic directory index if no index file exists (recommended to leave off for security reasons).
    • Auto Index URI:
      Lets you define a URL to show the auto-generated index.

    Customized Error Pages:

    • Specify custom error pages here (e.g. for 404 or 500 errors).

    Expires Settings:

    • Enable Expires, Expires Default, Expires By Type:
      Control how browsers cache static files (how long files should be kept). Usually left as default; you can manage cache with .htaccess or your app settings.

    File Upload:

    • Temporary File Path, Temporary File Permission, Pass Upload Data by File Path:
      Advanced file upload settings – where temp files go, their permissions, and whether upload data is passed by path (usually leave as default).

    php.ini Override:

    • Lets you specify custom PHP settings just for this vhost – if you don’t need to tweak (like upload_max_filesize), leave it blank.

    Log tab – separate logs for every site

    Why use separate logs?

    • Isolation: Per-site logs make management, troubleshooting, and auditing much easier – no need to search a massive, shared file.
    • Security: If you have multiple clients or projects, separate logs help maintain data privacy and organisation.
    • Easier analysis: You can quickly spot attacks, errors, or unusual requests for each website.

    Virtual Host Log (error log)

    • Use Server’s Log:
      Set to “No” to keep this site’s error logs separate from global server logs. Recommended – makes it much easier to spot issues for a specific site.
    • File Name:
      Path to this site’s error log (e.g. /var/www/clients/client1/solutionsinc.co.uk/log/error.log). Keeping logs in the domain directory makes backup and access easier.
    • Log Level:
      How detailed the logs should be (DEBUG, INFO, NOTICE, WARN, ERROR, CRIT).
      DEBUG is the most detailed – great for troubleshooting, but for production use WARN or ERROR.
    • Rolling Size (bytes):
      Max log file size before a new one is created (e.g. 10M). Prevents disk space exhaustion.
    • Keep Days:
      How many days to keep old logs. Useful for auditing and investigating past incidents.
    • Compress Archive:
      Whether to archive and compress old logs. Recommended if you have lots of logs – saves disk space.

    Access Log

    • Log Control:
      Choose “Own Log File” so each vhost has its own access log. Makes it much easier to analyse traffic for individual sites.
    • File Name:
      Path to the access log file (e.g. /var/www/clients/client1/solutionsinc.co.uk/log/access_log).
    • Piped Logger:
      Advanced – logs can be processed “live” by external programs. Leave empty unless needed.
    • Log Format:
      Log entry format. The default works for most needs, but you can customise it for special analysis.
    • Log Headers:
      Tick “Referrer”, “UserAgent”, “Host” – this ensures logs have key info about visitors, sources, and devices.
    • Rolling Size (bytes):
      Max log file size before rolling over (e.g. 50M).
    • Keep Days:
      How long to keep access logs (e.g. 365 – a full year’s history).
    • Compress Archive:
      Set to “Yes” so old logs are compressed automatically.
    • Bytes log:
      Optional: a file for tracking bytes transferred (mainly for advanced analysis).

    Recommended settings:

    • For production, set Log Level to WARN or ERROR; use DEBUG only during setup and testing.
    • Always compress archived logs.
    • Keep logs for at least 30 days – 90 or more is ideal if you have the space.
    • For access logs: always use a separate file per domain, and always include Referrer, UserAgent, and Host.

    Security tab – explanation of all options

    Note:
    You do NOT need to configure any of these settings for your site to work with Nginx as a Reverse Proxy!
    Unless you are an advanced user, it is best to get your site working first, then come back here to experiment with security settings.


    Section: LS reCAPTCHA

    • Enable reCAPTCHA, Site Key, Secret Key, reCAPTCHA Type, Max Tries, Concurrent Request Limit
      Allows you to enable reCAPTCHA mechanisms (protection against bots and brute-force attacks at the server level). You need to provide Google keys and set attempt limits.
      Practice: Use only if you know exactly what you are doing – misconfiguration may block access to your website!

    Section: Containers

    • Bubblewrap Container
      Runs the vhost in an isolated Bubblewrap container (extra security, as apps are “cut off” from the rest of the system).
      Advanced! Not recommended for beginners.
    • Namespace Container, Additional Namespace Template File
      Lets you isolate the vhost in its own Linux namespace (further boosts security, but requires knowledge of Linux containers).

    Section: Access Control

    • Allowed List
      List of IP addresses that are allowed access to the site.
    • Denied List
      List of IP addresses that are denied access. Warning: If set incorrectly, you might accidentally lock yourself out of your own website!

    Section: Realm List

    • Here you can set up “realms” – server-level authentication zones (like password-protecting a directory).

    Practical summary

    • For your site to work with Nginx as Reverse Proxy, you do NOT need to set anything here.
    • These features are mainly for advanced admins – for most users, it’s best to leave them at default until your site is working smoothly.
    • Recommendation: Get your site up and running first, check everything is OK, then (optionally) return here to tweak security settings.

    External App tab – options and recommendations

    This tab is responsible for connecting your web server to the PHP interpreter, so your PHP applications work correctly.

    Key fields:

    • Name:
      Name of the external application, e.g. solutionsinc.co.uk. Use your domain name for clarity.
    • Address:
      IMPORTANT!
      The socket for communicating with PHP, e.g. UDS://tmp/lshttpd/solutionsinc.co.uk.sock
      UDS (Unix Domain Socket) must be uppercase! This enables faster and more secure communication than TCP.
    • Notes:
      Any optional notes/description.
    • Max Connections:
      Max number of simultaneous PHP connections. 50 is good for most sites. Increase for heavy-traffic sites.
    • Environment:
      Additional environment variables. Example: LSAPI_CHILDREN=50 – sets the number of PHP child processes.
    • Initial Request Timeout (secs):
      How long to wait for the first PHP response (e.g. 600 seconds). Increase for slower servers or long-running scripts.
    • Retry Timeout (secs):
      How long the server waits and retries if connecting to PHP fails.
    • Persistent Connection:
      “Yes” is best – keeps PHP connections alive for faster handling of multiple requests.
    • Connection Keep-Alive Timeout:
      How long (in seconds) to keep PHP connections open after serving a request (default 1).
    • Response Buffering:
      “No” means responses are sent to the client immediately – recommended for dynamic websites.
    • Start By Server:
      “Yes (Through CGI Daemon)” – the server starts PHP automatically; this is safer and more convenient.
    • Command:
      CRUCIAL!
      Path to the PHP interpreter, e.g. /usr/local/lsws/lsphp83/bin/lsphp
      Make sure this file exists and matches your desired PHP version. How to check:
      In your server console, type ls -l /usr/local/lsws/lsphp83/bin/lsphp – if the file exists, the path is correct. If not, check whether PHP 8.3 is installed via LiteSpeed (use the LiteSpeed manager or the lsphp command).
    • Back Log:
      Max number of pending PHP requests (100 is recommended).
    • Instances:
      Number of app instances (i.e., separate PHP processes). Normally 1 unless you have special needs.
    • Run As User / Run As Group:
      Must match the user and group defined in ISPConfig for this site (e.g. web1/client1). This ensures each vhost runs with only its own permissions – much better security.
    • umask:
      Permission mask for new files – leave blank unless you have a reason.
    • Run On Start Up, Max Idle Time, Priority:
      Advanced – normally leave as default.
    • Memory Soft Limit / Hard Limit:
      Soft/hard RAM limits for the PHP process (in bytes).
      Recommended: 2047M (2GB). Adjust for your server and application needs.
    • Process Soft Limit / Hard Limit:
      Limit the number of processes (soft/hard).
      Soft = warning, hard = cap. Example: Soft 400, Hard 500.

    Script Handler tab – what does it do?

    This section determines which application handles specific script file types (e.g. PHP) at the Virtual Host level. Without the right handler, PHP simply won’t work!

    Key options:

    • Suffixes
      Enter the file extension(s) you want this handler to process.
      Typically, just enter: php. This ensures all files ending with .php are processed by PHP.

    Suffix – what is it and how should you set it?

    Suffix specifies which script file extensions will be handled by this script handler. Each suffix must be unique within your configuration.

    Syntax:

    • Comma-delimited list without the period (“.” character is prohibited).
    • Example: php,php83

    Important notes (based on OLS documentation):

    • The server will automatically add a special MIME type (application/x-httpd-[suffix]) for the first suffix in the list.
      • For example, for php83, MIME type application/x-httpd-php83 will be added.
    • If you wish to use additional suffixes (like php53,php74), you must manually set up the corresponding MIME types in the “MIME Settings” after the first one.
    • Although this field lists suffixes, script handlers actually use MIME types, not suffixes, to decide which scripts to process.
    • Only specify suffixes you really need – avoid listing unused extensions, as this could introduce security or configuration risks.

    Example of correct configuration:

    To handle only .php files:

    php

    To also handle .php83 files:

    php,php83

    Remember: for extra suffixes, add the appropriate MIME types in your server’s settings!

    • Handler Type
      The type of script handler.
      LiteSpeed SAPI is the fastest and most direct way to run PHP on OpenLiteSpeed – it links directly to your PHP process as configured in the External App tab.
    • Handler Name
      Select the application you defined in the External App section.
      If you have several virtual hosts, you can have different PHP versions/configurations for each.
      Tip:
      It should say [VHost Level]: solutionsinc.co.uk – meaning the handler set up specifically for this website.

    Why is this important?

    • Without a proper handler for PHP, your server won’t know how to execute .php files (it might try to send them as plain text instead of running the code!).
    • Using LiteSpeed SAPI ensures the best performance, security, and PHP compatibility.
    • If you host multiple sites, each with different PHP needs (e.g. one site needs PHP 8.3, another needs PHP 8.1), you can assign a different interpreter to each virtual host.

    Rewrite tab – what does it do?

    The Rewrite tab allows you to manage URL rewriting rules (“mod_rewrite”), which are essential for:

    • Clean/friendly URLs (for WordPress, Prestashop, Laravel, etc.).
    • Forced redirects (e.g. http→https, non-www→www).
    • Custom rewrite logic (folder masking, domain aliases, etc.).

    Field descriptions

    Rewrite Control

    • Enable Rewrite
      Enables the rewrite engine (mod_rewrite) for this Virtual Host.
      Important: Without this, no .htaccess or custom rewrite rules will work!
    • Auto Load from .htaccess
      Automatically loads rewrite rules from .htaccess files found in your site’s directories.
      Important:
      • For WordPress and most CMSes, this must be “Yes”.
      • If you want maximum performance, you can place rewrite rules directly in “Rewrite Rules” and turn this off.
    • Log Level
      Specifies the detail level of the rewrite engine’s debug output.
      • Value range: 0–9
        • 0 – disables rewrite logging.
        • 9 – produces the most detailed debug log.
      • The higher the value, the more information you get about how rewrite rules are processed.
      • For this setting to take effect, the server and/or virtual host error log must be set to at least INFO.
      • Especially useful for testing or debugging rewrite rules.
      • Syntax: Integer between 0 and 9.

    Rewrite Map
    Lets you define rewrite maps (advanced usage – e.g. dynamic redirects using patterns or files).

    • Most users don’t need this.

    Rewrite Rules
    Here you can manually enter mod_rewrite rules (Apache style).

    Example:

    RewriteEngine On

    RewriteCond %{HTTPS} !=on

    RewriteRule ^ https://%{HTTP_HOST}%{REQUEST_URI} [L,R=301]

    Rules here override those from .htaccess.


    Why is this important?

    • Without rewrite, many apps will not work properly (no pretty URLs, 404 errors).
    • Automatic .htaccess loading allows you to use ready-made .htaccess files from popular CMSes.
    • Manual rules are useful for maximum control or performance.

    Context Tab – Field Descriptions

    1. URI

    • Description: The path (URI) of the directory or subdirectory to which you want to apply this context.
    • Crucial! This must match exactly the folder where your website is located (e.g. /web/ for /var/www/clients/client1/web1/web/).
    • Note: If the URI ends with a /, all subdirectories beneath it are included in this context. If your site is in the root, simply use /.

    2. LSAPI App

    • Description: Select the LiteSpeed SAPI application (e.g. PHP) that should handle this context. The dropdown list contains applications defined under External App.
    • Why it matters: This links PHP (or another language) processing to your website. Without it, PHP will not work in this directory!

    3. Notes

    • Description: An informational field where you can add your own notes.
    • Tip: Useful for larger installations or when testing, but not required for basic setups.

    4. Header Operations

    • Description: Allows you to add, append, or unset HTTP response headers (e.g. for cache control or security).
    • Syntax: Similar to Apache’s mod_headers.
    • Example: set Cache-control no-cache and append Cache-control no-store
    • Tip: Handy if you want to control caching or security headers for a particular directory.

    5. Realm / Authentication Name / Require (Authorised Users/Groups) / Access Allowed / Access Denied / Authorizer

    • Description: Options for restricting access to this directory (you can set up basic authentication, or allow only certain users or groups).
    • Tip: Leave these blank unless you want to protect a folder (such as an admin area or testing section) with a password or by user group.

    6. Add Default Charset

    • Description: Controls whether a default character set is added to HTTP responses.
    • Default: Off
    • Set if: You need to enforce a particular character encoding (for example, UTF-8) for your website.

    7. Customized Default Charset

    • Description: Lets you specify your own default character set (e.g. UTF-8, ISO-8859-2).
    • Set if: Your site requires a specific encoding.

    8. Enable GeoLocation Lookup

    • Description: Enables geographical IP lookup for users visiting this context.
    • Tip: Leave off unless you specifically need to personalise content or restrict access by country.

    Why is the Context tab important?

    • The Context tab allows you to precisely control how the server handles specific directories or subdirectories – for example, using a different PHP version for /admin/, securing a login area, or setting security headers for a particular folder.
    • The URI field is essential: if this path is incorrect, your settings will not take effect.
    • This flexibility goes far beyond what typical shared hosting offers, letting you tailor configuration to your needs.

    Summary

    After completing all of the above steps – from ISPConfig setup, to configuring Nginx as a reverse proxy, and fine-tuning the options in OpenLiteSpeed – your WordPress site or application should be up and running, provided you’ve set your DNS records correctly (for example, in CloudFlare). Without those, even the most beautifully configured server will be as empty as the office on a Friday at 5pm!

    If everything works – congratulations! You can now put the kettle on and enjoy the feeling of running a truly robust, modern server setup. And if things aren’t quite right… double-check your DNS, your logs, and maybe question the wisdom of midnight sysadmin adventures. Good luck, and may your uptime be as solid as your sense of humour!