Category: LiteSpeed EN

  • Your Server is Secure: A Guide to Permanently Blocking Attacks

    A Permanent IP Blacklist with Fail2ban, UFW, and Ipset

    Introduction: Beyond Temporary Protection

    In the digital world, where server attacks are a daily occurrence, merely reacting is not enough. Although tools like Fail2ban provide a basic line of defence, their temporary blocks leave a loophole—persistent attackers can return and try again after the ban expires. This article provides a detailed guide to building a fully automated, two-layer system that turns ephemeral bans into permanent, global blocks. The combination of Fail2ban, UFW, and the powerful Ipset tool creates a mechanism that permanently protects your server from known repeat offenders.

    Layer One: Reaction with Fail2ban

At the start of every attack response is Fail2ban. This daemon monitors log files (e.g., /var/log/auth.log for SSH, Apache error logs for web attacks) for patterns indicating break-in attempts, such as repeated failed logins. When it detects such activity, it immediately blocks the attacker's IP address by adding it to the firewall rules for a defined period (e.g., 10 minutes or 30 days, depending on the jail's bantime). This is an effective but short-term response.
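To make the "Ban" pattern concrete, here is a hedged sketch of how such log lines can be parsed. The sample lines mimic fail2ban's NOTICE entries, but real timestamps and jail names will differ; note that matching ' Ban ' with surrounding spaces avoids accidentally catching 'Unban' lines:

```shell
# Sample fail2ban-style log lines (illustrative, not real output).
log='2024-05-01 10:00:00,123 fail2ban.actions [999]: NOTICE [sshd] Ban 198.51.100.7
2024-05-01 10:05:00,456 fail2ban.actions [999]: NOTICE [sshd] Unban 198.51.100.7
2024-05-01 10:06:00,789 fail2ban.actions [999]: NOTICE [sshd] Ban 203.0.113.9'

# " Ban " (with spaces) skips the Unban line; $NF is the last field, the IP.
printf '%s\n' "$log" | grep ' Ban ' | awk '{print $NF}'
```

This is exactly the extraction step the update script later in this article relies on.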

    Layer Two: Persistence with UFW and Ipset

    For a ban to become permanent, we need a more robust, centralised method of managing IP addresses. This is where UFW and Ipset come in.

    What is Ipset?

    Ipset is a Linux kernel extension that allows you to manage sets of IP addresses, networks, or ports. It is a much more efficient solution than adding thousands of individual rules to a firewall. Instead, the firewall can refer to an entire set with a single rule.
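The efficiency claim is easy to see in miniature: with plain iptables you would emit one rule per blocked address, while an ipset-backed firewall needs a single rule regardless of set size. A sketch (printed strings only; no root or live firewall involved, and 192.0.2.0/24 is a reserved documentation network):

```shell
# Illustrative comparison: per-IP rules grow linearly with the number of
# attackers, while the ipset approach stays at exactly one rule.
per_ip_rules=$(for i in $(seq 1 254); do
    echo "iptables -A INPUT -s 192.0.2.$i -j DROP"
done)

echo "per-IP rules needed: $(printf '%s\n' "$per_ip_rules" | wc -l)"
# With ipset, the single rule below covers the whole 'blacklist' set:
#   iptables -A INPUT -m set --match-set blacklist src -j DROP
echo "ipset rules needed: 1"
```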

    Ipset Installation and Configuration

    The first step is to install Ipset on your system. We use standard package managers for this.

    sudo apt update
    sudo apt install ipset

    Next, we create two sets: blacklist for IPv4 addresses and blacklist_v6 for IPv6.

    sudo ipset create blacklist hash:ip hashsize 4096
    sudo ipset create blacklist_v6 hash:net family inet6 hashsize 4096

The hashsize parameter sets the initial size of the underlying hash table; the maximum number of entries is controlled separately by maxelem (default 65536). Sizing both sensibly matters for performance on large sets.

    Integrating Ipset with the UFW Firewall

    For UFW to start using our sets, we must add the appropriate commands to its rules. We edit the UFW configuration files, adding rules that block traffic originating from addresses contained in our Ipset sets. For IPv4, we edit /etc/ufw/before.rules:

    sudo nano /etc/ufw/before.rules

    Immediately after *filter and :ufw-before-input [0:0], add:

# Rules for the permanent blacklist (ipset)
# Block any incoming traffic from IP addresses in the 'blacklist' set (IPv4)
-A ufw-before-input -m set --match-set blacklist src -j DROP

    For IPv6, we edit /etc/ufw/before6.rules:

    sudo nano /etc/ufw/before6.rules

    Immediately after *filter and :ufw6-before-input [0:0], add:

# Rules for the permanent blacklist (ipset) IPv6
# Block any incoming traffic from IP addresses in the 'blacklist_v6' set
-A ufw6-before-input -m set --match-set blacklist_v6 src -j DROP

    After adding the rules, we reload UFW for them to take effect:

    sudo ufw reload

    Script for Automatic Blacklist Updates

    The core of the system is a script that acts as a bridge between Fail2ban and Ipset. Its job is to collect banned addresses, ensure they are unique, and synchronise them with the Ipset sets.

    Create the script file:

    sudo nano /usr/local/bin/update-blacklist.sh

    Below is the content of the script. It works in several steps:

    1. Creates a temporary, unique list of IP addresses from Fail2ban logs and the existing blacklist.
    2. Creates temporary Ipset sets.
    3. Reads addresses from the unique list and adds them to the appropriate temporary sets (distinguishing between IPv4 and IPv6).
    4. Atomically swaps the old Ipset sets with the new, temporary ones, minimising the risk of protection gaps.
    5. Destroys the old, temporary sets.
    6. Returns a summary of the number of blocked addresses.

#!/bin/bash

BLACKLIST_FILE="/etc/fail2ban/blacklist.local"
IPSET_NAME_V4="blacklist"
IPSET_NAME_V6="blacklist_v6"

touch "$BLACKLIST_FILE"

# Create a unique list of banned IPs from the log and the existing blacklist file.
# Match " Ban " with surrounding spaces so "Unban" lines are not picked up.
(grep ' Ban ' /var/log/fail2ban.log | awk '{print $NF}' && cat "$BLACKLIST_FILE") | sort -u > "$BLACKLIST_FILE.tmp"
mv "$BLACKLIST_FILE.tmp" "$BLACKLIST_FILE"

# Create temporary ipsets (and empty them, in case a previous run was interrupted)
sudo ipset create "${IPSET_NAME_V4}_tmp" hash:ip hashsize 4096 -exist
sudo ipset create "${IPSET_NAME_V6}_tmp" hash:net family inet6 hashsize 4096 -exist
sudo ipset flush "${IPSET_NAME_V4}_tmp"
sudo ipset flush "${IPSET_NAME_V6}_tmp"

# Add IPs to the temporary sets (IPv6 addresses contain a colon)
while IFS= read -r ip; do
    [ -z "$ip" ] && continue   # skip blank lines
    if [[ "$ip" == *":"* ]]; then
        sudo ipset add "${IPSET_NAME_V6}_tmp" "$ip" -exist
    else
        sudo ipset add "${IPSET_NAME_V4}_tmp" "$ip" -exist
    fi
done < "$BLACKLIST_FILE"

# Atomically swap the temporary sets with the active ones
sudo ipset swap "${IPSET_NAME_V4}_tmp" "$IPSET_NAME_V4"
sudo ipset swap "${IPSET_NAME_V6}_tmp" "$IPSET_NAME_V6"

# Destroy the temporary sets (which now hold the old entries)
sudo ipset destroy "${IPSET_NAME_V4}_tmp"
sudo ipset destroy "${IPSET_NAME_V6}_tmp"

# Count the entries by reading the set headers instead of guessing how
# many header lines `ipset list` prints
COUNT_V4=$(sudo ipset list -t "$IPSET_NAME_V4" | awk '/Number of entries/ {print $4}')
COUNT_V6=$(sudo ipset list -t "$IPSET_NAME_V6" | awk '/Number of entries/ {print $4}')

echo "Blacklist and ipset updated. Blocked IPv4: ${COUNT_V4:-0}, Blocked IPv6: ${COUNT_V6:-0}"
exit 0

    After creating the script, give it execute permissions:

    sudo chmod +x /usr/local/bin/update-blacklist.sh
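The script's IPv4/IPv6 dispatch hinges on one observation: every IPv6 address contains a colon and no IPv4 address does. The check can be exercised on its own (the function name here is illustrative, not part of the script):

```shell
# Classify an address the same way the update script does: anything
# containing a colon goes to the IPv6 set, everything else to the IPv4 set.
classify_ip() {
    case "$1" in
        *:*) echo "ipv6" ;;  # would go into blacklist_v6
        *)   echo "ipv4" ;;  # would go into blacklist
    esac
}

classify_ip 203.0.113.9    # prints: ipv4
classify_ip 2001:db8::1    # prints: ipv6
```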

    Automation and Persistence After a Reboot

    To run the script without intervention, we use a cron schedule. Open the crontab editor for the root user and add a rule to run the script every hour:

    sudo crontab -e

    Add this line:

    0 * * * * /usr/local/bin/update-blacklist.sh

    Or to run it once a day at 6 a.m.:

    0 6 * * * /usr/local/bin/update-blacklist.sh

    The final, crucial step is to ensure the Ipset sets survive a reboot, as they are stored in RAM by default. We create a systemd service that will save their state before the server shuts down and load it again on startup.

    sudo nano /etc/systemd/system/ipset-persistent.service

[Unit]
Description=Saves and restores ipset sets on boot/shutdown
Before=network-pre.target
ConditionFileNotEmpty=/etc/ipset.rules

[Service]
Type=oneshot
RemainAfterExit=yes
ExecStart=/bin/bash -c "/sbin/ipset create blacklist hash:ip -exist; /sbin/ipset create blacklist_v6 hash:net family inet6 -exist; /sbin/ipset restore -exist -f /etc/ipset.rules"
ExecStop=/sbin/ipset save -f /etc/ipset.rules

[Install]
WantedBy=multi-user.target

Finally, seed the rules file once (the ConditionFileNotEmpty check skips the service while /etc/ipset.rules is missing or empty), then enable and start the service:

    sudo ipset save | sudo tee /etc/ipset.rules > /dev/null
    sudo systemctl daemon-reload
    sudo systemctl enable --now ipset-persistent.service

    How Does It Work in Practice?

    The entire system is an automated chain of events that works in the background to protect your server from attacks. Here is the flow of information and actions:

1. Attack Response (Fail2ban):
• Someone tries to break into the server (e.g., by repeatedly entering the wrong password via SSH).
• Fail2ban, monitoring the service logs, detects this pattern and records the ban in its own log (/var/log/fail2ban.log).
• It immediately adds the attacker's IP address to a temporary firewall rule, blocking their access for a specified time.
2. Permanent Banning (Script and Cron):
• Every hour (as set in cron), the system runs the update-blacklist.sh script.
• The script reads the Fail2ban log, finds all addresses that have been banned (lines containing " Ban "), and merges them with the existing local blacklist (/etc/fail2ban/blacklist.local).
• It creates a unique list of all banned addresses.
• It then creates temporary ipset sets (blacklist_tmp and blacklist_v6_tmp) and adds all addresses from the unique list to them.
• It performs an ipset swap operation, which atomically replaces the old, active sets with the new, updated ones.
• UFW, thanks to the previously defined rules, immediately starts blocking the new addresses that have appeared in the updated ipset sets.
3. Persistence After Reboot (systemd Service):
• Ipset's operation is volatile: the sets only exist in memory. The ipset-persistent.service solves this problem.
• Before shutdown/reboot: systemd runs the ExecStop command (ipset save -f /etc/ipset.rules), which saves the current state of all ipset sets to a file on disk.
• After power-on/reboot: systemd runs the ExecStart command, which restores the sets. It reads all blocked addresses from the /etc/ipset.rules file and automatically recreates the ipset sets in memory.
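The merge-and-deduplicate step from the script can be exercised in isolation with throwaway data (the temp file below is a stand-in for /etc/fail2ban/blacklist.local; the addresses come from reserved documentation ranges):

```shell
# Stand-alone demo of the dedup step: merge new bans with an existing
# blacklist file and keep each address exactly once.
blacklist=$(mktemp)
printf '203.0.113.9\n198.51.100.7\n' > "$blacklist"

new_bans='198.51.100.7
192.0.2.33'

# Same pattern as the update script: concatenate, sort -u, replace.
(printf '%s\n' "$new_bans" && cat "$blacklist") | sort -u > "$blacklist.tmp"
mv "$blacklist.tmp" "$blacklist"

merged=$(cat "$blacklist")
echo "$merged"    # three unique addresses; 198.51.100.7 appears only once
rm -f "$blacklist"
```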

    Thanks to this, even if the server is rebooted, the IP blacklist remains intact, and protection is active from the first moments after the system starts.

    Summary and Verification

    The system you have built is a fully automated, multi-layered protection mechanism. Attackers are temporarily banned by Fail2ban, and their addresses are automatically added to a permanent blacklist, which is instantly blocked by UFW and Ipset. The systemd service ensures that the blacklist survives server reboots, protecting against repeat offenders permanently. To verify its operation, you can use the following commands:

    sudo ufw status verbose
    sudo ipset list blacklist
    sudo ipset list blacklist_v6
    sudo systemctl status ipset-persistent.service

    How to Create a Reliable IP Whitelist in UFW and Ipset

    Introduction: Why a Whitelist is Crucial

    When configuring advanced firewall rules, especially those that automatically block IP addresses (like in systems with Fail2ban), there is a risk of accidentally blocking yourself or key services. A whitelist is a mechanism that acts like a VIP pass for your firewall—IP addresses on this list will always have access, regardless of other, more restrictive blocking rules.

This guide will show you, step-by-step, how to create a robust and persistent whitelist using UFW (Uncomplicated Firewall) and ipset. As an example, we will use the IP address 203.0.113.10 (a reserved documentation address; substitute your own), which we want to add as trusted.

    Step 1: Create a Dedicated Ipset Set for the Whitelist

    The first step is to create a separate “container” for our trusted IP addresses. Using ipset is much more efficient than adding many individual rules to iptables.

    Open a terminal and enter the following command:

    sudo ipset create whitelist hash:ip

    What did we do?

    • ipset create: The command to create a new set.
    • whitelist: The name of our set. It’s short and unambiguous.
    • hash:ip: The type of set. hash:ip is optimised for storing and very quickly looking up single IPv4 addresses.

    Step 2: Add a Trusted IP Address

    Now that we have the container ready, let’s add our example trusted IP address to it.

sudo ipset add whitelist 203.0.113.10

    You can repeat this command for every address you want to add to the whitelist. To check the contents of the list, use the command:

    sudo ipset list whitelist

    Step 3: Modify the Firewall – Giving Priority to the Whitelist

    This is the most important step. We need to modify the UFW rules so that connections from addresses on the whitelist are accepted immediately, before the firewall starts processing any blocking rules (including those from the ipset blacklist or Fail2ban).

    Open the before.rules configuration file. This is the file where rules processed before the main UFW rules are located.

    sudo nano /etc/ufw/before.rules

    Go to the beginning of the file and find the *filter section. Just below the :ufw-before-input [0:0] line, add our new snippet. Placing it at the very top ensures it will be processed first.

    *filter
    :ufw-before-input [0:0]
# Rule for the whitelist (ipset) ALWAYS HAS PRIORITY
# Accept any traffic from IP addresses in the 'whitelist' set
-A ufw-before-input -m set --match-set whitelist src -j ACCEPT

• -A ufw-before-input: We add the rule to the ufw-before-input chain.
• -m set --match-set whitelist src: Condition: if the source (src) IP address matches the whitelist set…
• -j ACCEPT: Action: "immediately accept (ACCEPT) the packet and stop processing further rules for this packet."

    Save the file and reload UFW:

    sudo ufw reload

From this point on, any connection from the address 203.0.113.10 will be accepted immediately.

    Step 4: Ensuring Whitelist Persistence

    Ipset sets are stored in memory and disappear after a server reboot. To make our whitelist persistent, we need to ensure it is automatically loaded every time the system starts. We will use our previously created ipset-persistent.service for this.

    Update the systemd service to “teach” it about the existence of the new whitelist set.

    sudo nano /etc/systemd/system/ipset-persistent.service

    Find the ExecStart line and add the create command for whitelist. If you already have other sets, simply add whitelist to the line. An example of an updated line:

ExecStart=/bin/bash -c "/sbin/ipset create whitelist hash:ip -exist; /sbin/ipset create blacklist hash:ip -exist; /sbin/ipset create blacklist_v6 hash:net family inet6 -exist; /sbin/ipset restore -exist -f /etc/ipset.rules"

    Reload the systemd configuration:

    sudo systemctl daemon-reload

    Save the current state of all sets to the file. This command will overwrite the old /etc/ipset.rules file with a new version that includes information about your whitelist.

sudo ipset save | sudo tee /etc/ipset.rules > /dev/null

    Restart the service to ensure it is running with the new configuration:

    sudo systemctl restart ipset-persistent.service

    Summary

Congratulations! You have created a solid and reliable whitelist mechanism. With it, you can securely manage your server, confident that trusted IP addresses like 203.0.113.10 will never be accidentally blocked. Remember to only add fully trusted addresses to this list, such as your home or office IP address.

    How to Effectively Block IP Addresses and Subnets on a Linux Server

    Blocking single IP addresses is easy, but what if attackers use multiple addresses from the same network? Manually banning each one is inefficient and time-consuming.

    In this article, you will learn how to use ipset and iptables to effectively block entire subnets, automating the process and saving valuable time.

    Why is Blocking Entire Subnets Better?

    Many attacks, especially brute-force types, are carried out from multiple IP addresses belonging to the same operator or from the same pool of addresses (subnet). Blocking just one of them is like patching a small hole in a large dam—the rest of the traffic can still get through.

    Instead, you can block an entire subnet, for example, 45.148.10.0/24. This notation means you are blocking 256 addresses at once, which is much more effective.
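The arithmetic behind that claim is 2^(32 − prefix length), which a one-line helper can confirm (the function name is illustrative):

```shell
# Number of IPv4 addresses covered by a given prefix length.
cidr_size() { echo $(( 1 << (32 - $1) )); }

cidr_size 24   # a /24 such as 45.148.10.0/24 covers 256 addresses
cidr_size 16   # a /16 covers 65536 addresses
```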

    Script for Automatic Subnet Blocking

    To automate the process, you can use the following bash script. This script is interactive—it asks you to provide the subnet to block, then adds it to an ipset list and saves it to a file, making the block persistent.

    Let’s analyse the script step-by-step:

#!/bin/bash

# The name of the ipset list to which subnets will be added
BLACKLIST_NAME="blacklist_nets"
# The file where blocked subnets will be appended
BLACKLIST_FILE="/etc/fail2ban/blacklist_net.local"

# 1. Create the blacklist file if it doesn't exist
touch "$BLACKLIST_FILE"

# 2. Check if the ipset list already exists. If not, create it.
# Using "hash:net" allows for storing subnets, which is key.
if ! sudo ipset list $BLACKLIST_NAME >/dev/null 2>&1; then
    sudo ipset create $BLACKLIST_NAME hash:net maxelem 65536
fi

# 3. Loop to prompt the user for subnets to block.
# The loop ends when the user types "exit".
while true; do
    read -p "Enter the subnet address to block (e.g., 192.168.1.0/24) or type 'exit': " subnet
    if [ "$subnet" == "exit" ]; then
        break
    elif [[ "$subnet" =~ ^([0-9]{1,3}\.){3}[0-9]{1,3}\/[0-9]{1,2}$ ]]; then
        # Check if the subnet is not already in the file to avoid duplicates
        if ! grep -q "^$subnet$" "$BLACKLIST_FILE"; then
            echo "$subnet" | sudo tee -a "$BLACKLIST_FILE" > /dev/null
            # Add the subnet to the ipset list
            sudo ipset add $BLACKLIST_NAME $subnet
            echo "Subnet $subnet added."
        else
            echo "Subnet $subnet is already on the list."
        fi
    else
        # The entered text did not match the expected format
        echo "Error: Invalid format. Please provide the address in 'X.X.X.X/Y' format."
    fi
done

# 4. Add a rule in iptables that blocks all traffic from addresses on the ipset list.
# The -C check ensures the rule is added only once.
if ! sudo iptables -C INPUT -m set --match-set $BLACKLIST_NAME src -j DROP >/dev/null 2>&1; then
    sudo iptables -I INPUT -m set --match-set $BLACKLIST_NAME src -j DROP
fi

# 5. Save the iptables rules to survive a reboot.
# This part checks which tool the system uses.
if command -v netfilter-persistent &> /dev/null; then
    sudo netfilter-persistent save
elif command -v service &> /dev/null && service iptables status >/dev/null 2>&1; then
    sudo service iptables save
fi

echo "Script finished. The '$BLACKLIST_NAME' list has been updated, and the iptables rules are active."
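If you want to sanity-check the validation logic without running the whole interactive script, the regex from step 3 can be extracted into a standalone bash function (the name is illustrative):

```shell
#!/bin/bash
# The same format check as step 3 of the script. Like the script's regex,
# it validates shape only: 999.0.0.0/24 would still pass, because octet
# ranges are not checked.
is_cidr() {
    [[ "$1" =~ ^([0-9]{1,3}\.){3}[0-9]{1,3}/[0-9]{1,2}$ ]]
}

is_cidr 192.168.1.0/24 && echo "valid"
is_cidr 192.168.1.0    || echo "invalid"
```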

    How to Use the Script

    1. Save the script: Save the code above into a file, e.g., block_nets.sh.
    2. Give permissions: Make sure the file has execute permissions: chmod +x block_nets.sh.
    3. Run the script: Execute the script with root privileges: sudo ./block_nets.sh.
    4. Provide subnets: The script will prompt you to enter subnet addresses. Simply type them in the X.X.X.X/Y format and press Enter. When you are finished, type exit.

    Ensuring Persistence After a Server Reboot

    Ipset sets are stored in RAM by default and disappear after a server restart. For the blocked addresses to remain active, you must use a systemd service that will load them at system startup.

    If you already have such a service (e.g., ipset-persistent.service), you must update it to include the new blacklist_nets list.

    1. Edit the service file: Open your service’s configuration file.
      sudo nano /etc/systemd/system/ipset-persistent.service
    2. Update the ExecStart line: Find the ExecStart line and add the create command for the blacklist_nets set. An example updated ExecStart line should look like this (including previous sets):
      ExecStart=/bin/bash -c “/sbin/ipset create whitelist hash:ip –exist; /sbin/ipset create blacklist hash:ip –exist; /sbin/ipset create blacklist_v6 hash:net family inet6 –exist; /sbin/ipset create blacklist_nets hash:net –exist; /sbin/ipset restore -f /etc/ipset.rules”
    3. Reload the systemd configuration:
      sudo systemctl daemon-reload
    4. Save the current state of all sets to the file: This command will overwrite the old /etc/ipset.rules file with a new version that contains information about all your lists, including blacklist_nets.
      sudo ipset save > /etc/ipset.rules
    5. Restart the service:
      sudo systemctl restart ipset-persistent.service

    With this method, you can simply and efficiently manage your server’s security, effectively blocking entire subnets that show suspicious activity, and be sure that these rules will remain active after every reboot.

  • OpenLiteSpeed (OLS) with Redis. Fast Cache for WordPress Sites.

    Managing a web server requires an understanding of the components that make up its architecture. Each element plays a crucial role in delivering content to users quickly and reliably. This article provides an in-depth analysis of a modern server configuration based on OpenLiteSpeed (OLS), explaining its fundamental mechanisms, its collaboration with the Redis caching system, and its methods of communication with external applications.

    OpenLiteSpeed (OLS) – The System’s Core

    The foundation of every website is the web server—the software responsible for receiving HTTP requests from browsers and returning the appropriate resources, such as HTML files, CSS, JavaScript, or images.

    What is OpenLiteSpeed?

    OpenLiteSpeed (OLS) is a high-performance, lightweight, open-source web server developed by LiteSpeed Technologies. Its key advantage over traditional servers, such as Apache in its default configuration, is its event-driven architecture.

• Process-based model (e.g., Apache prefork): A separate process (or, in threaded MPMs, a thread) is created for each simultaneous connection. This model is simple, but under high traffic it leads to significant RAM and CPU consumption, as each process reserves resources even when idle.
    • Event-driven model (OpenLiteSpeed, Nginx): A single server worker process can handle hundreds or thousands of connections simultaneously. It uses non-blocking I/O operations and an event loop to manage requests. When a process is waiting for an operation (e.g., reading from a disk), it doesn’t block but instead moves on to handle another connection. This architecture provides much better scalability and lower resource consumption.

    Key Features of OpenLiteSpeed

    OLS offers a set of features that make it a powerful and flexible tool:

    • Graphical Administrative Interface (WebAdmin GUI): OLS has a built-in, browser-accessible admin panel that allows you to configure all aspects of the server—from virtual hosts and PHP settings to security rules—without needing to directly edit configuration files.
    • Built-in Caching Module (LSCache): One of OLS’s most important features is LSCache, an advanced and highly configurable full-page cache mechanism. When combined with dedicated plugins for CMS systems (e.g., WordPress), LSCache stores fully rendered HTML pages in memory. When the next request for the same page arrives, the server delivers it directly from the cache, completely bypassing the execution of PHP code and database queries.
    • Support for Modern Protocols (HTTP/3): OLS natively supports the latest network protocols, including HTTP/3 (based on QUIC). This provides lower latency and better performance, especially on unstable mobile connections.
    • Compatibility with Apache Rules: OLS can interpret mod_rewrite directives from .htaccess files, which is a standard in the Apache ecosystem. This significantly simplifies the migration process for existing applications without the need to rewrite complex URL rewriting rules.

    Redis – In-Memory Data Accelerator

    Caching is a fundamental optimisation technique that involves storing the results of costly operations in a faster access medium. In the context of web applications, Redis is one of the most popular tools for this task.

    What is Redis?

Redis (REmote Dictionary Server) is an in-memory data structure store, most often used as a key-value database, cache, or message broker. Its power comes from the fact that it keeps all data in RAM rather than on disk. Accessing RAM is orders of magnitude faster than accessing SSDs or HDDs, as it's a purely electronic operation that bypasses the slower storage I/O path.

    In a typical web application, Redis acts as an object cache. It stores the results of database queries, fragments of rendered HTML code, or complex PHP objects that are expensive to regenerate.

    How Do OpenLiteSpeed and Redis Collaborate?

    The LSCache and Redis caching mechanisms don’t exclude each other; rather, they complement each other perfectly, creating a multi-layered optimisation strategy.

    Request flow (simplified):

    1. A user sends a request for a dynamic page (e.g., a blog post).
    2. OpenLiteSpeed receives the request. The first step is to check the LSCache.
      • LSCache Hit: If an up-to-date, fully rendered version of the page is in the LSCache, OLS returns it immediately. The process ends here. This is the fastest possible scenario.
      • LSCache Miss: If the page is not in the cache, OLS forwards the request to the appropriate external application (e.g., a PHP interpreter) to generate it.
    3. The PHP application begins building the page. To do this, it needs to fetch data from the database (e.g., MySQL).
    4. Before PHP executes costly database queries, it first checks the Redis object cache.
      • Redis Hit: If the required data (e.g., SQL query results) are in Redis, they are returned instantly. PHP uses this data to build the page, bypassing communication with the database.
      • Redis Miss: If the data is not in the cache, PHP executes the database queries, fetches the results, and then saves them to Redis for future requests.
    5. PHP finishes generating the HTML page and returns it to OpenLiteSpeed.
    6. OLS sends the page to the user and, at the same time, saves it to the LSCache so that subsequent requests can be served much faster.

    This two-tiered strategy ensures that both the first and subsequent visits to a page are maximally optimised. LSCache eliminates the need to run PHP, while Redis drastically speeds up the page generation process itself when necessary.
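Steps 4 and 5 of the flow are the classic cache-aside pattern. Below is a minimal sketch, using a bash associative array as a stand-in for Redis and a dummy db_query function in place of a real SQL round trip (all names here are assumptions for illustration, not an actual WordPress/Redis API):

```shell
#!/bin/bash
# Cache-aside sketch: check the cache first, fall back to the "database"
# on a miss, then populate the cache so the next request is a hit.
declare -A cache   # stand-in for Redis

db_query() {
    # Pretend this is an expensive database round trip.
    echo "rows-for:$1"
}

get_data() {
    local key="$1"
    if [[ -n "${cache[$key]+set}" ]]; then
        echo "HIT ${cache[$key]}"          # Redis hit: no database work
    else
        local result
        result=$(db_query "$key")          # Redis miss: query the database
        cache[$key]="$result"              # ...and cache it for next time
        echo "MISS $result"
    fi
}

get_data post_42   # first request: miss, falls through to the "database"
get_data post_42   # second request: served from the cache
```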

    Delegating Tasks – External Applications in OLS

    Modern web servers are optimised to handle network connections and deliver static files (images, CSS). The execution of application code (dynamic content) is delegated to specialised external programmes. This division of responsibilities increases stability and security.

    OpenLiteSpeed manages these programmes through the External Applications system. The most important types are described below:

    • LSAPI Application (LiteSpeed SAPI App): The most efficient and recommended method of communication with PHP, Python, or Ruby applications. LSAPI is a proprietary, optimised protocol that minimises communication overhead between the server and the application interpreter.
    • FastCGI Application: A more universal, standard protocol for communicating with external application processes. This is a good solution for applications that don’t support LSAPI. It works on a similar principle to LSAPI (by maintaining permanent worker processes), but with slightly more protocol overhead.
    • Web Server (Proxy): This type configures OLS to act as a reverse proxy. OLS receives a request from the client and then forwards it in its entirety to another server running in the background (the “backend”), e.g., an application server written in Node.js, Java, or Go. This is crucial for building microservices-based architectures.
    • CGI Application: The historical and slowest method. A new application process is launched for each request and is closed after returning a response. Due to the huge performance overhead, it’s only used for older applications that don’t support newer protocols.

    OLS routes traffic to the appropriate application using Script Handlers, which map file extensions (e.g., .php) to a specific application, or Contexts, which map URL paths (e.g., /api/) to a proxy type application.

    Communication Language – A Comparison of SAPI Architectures

    SAPI (Server Application Programming Interface) is an interface that defines how a web server communicates with an application interpreter (e.g., PHP). The choice of SAPI implementation has a fundamental impact on the performance and stability of the entire system.

    The Evolution of SAPI

    1. CGI (Common Gateway Interface): The first standard. Stable, but inefficient due to launching a new process for each request.
    2. Embedded Module (e.g., mod_php in Apache): The PHP interpreter is loaded directly into the server process. This provides very fast communication, but at the cost of stability (a PHP crash causes the server to crash) and security.
    3. FastCGI: A compromise between performance and stability. It maintains a pool of independent, long-running PHP processes, which eliminates the cost of constantly launching them. Communication takes place via a socket, which provides isolation from the web server.
    4. LSAPI (LiteSpeed SAPI): An evolution of the FastCGI model. It uses the same architecture with separate processes, but the communication protocol itself was designed from scratch to minimise overhead, which translates to even higher performance than standard FastCGI.

    SAPI Architecture Comparison Table

| Feature | CGI | Embedded Module (mod_php) | FastCGI | LiteSpeed SAPI (LSAPI) |
| --- | --- | --- | --- | --- |
| Process Model | New process per request | Shared process with server | Permanent external processes | Permanent external processes |
| Performance | Low | Very high | High | Highest |
| Stability / Isolation | Excellent | Low | High | High |
| Resource Consumption | Very high | Moderate | Low | Very low |
| Overhead | High (process launch) | Minimal (shared memory) | Moderate (protocol) | Low (optimised protocol) |
| Main Advantage | Full isolation | Communication speed | Balanced performance & stability | Optimised performance & stability |
| Main Disadvantage | Very low performance | Instability, security issues | More complex configuration | Technology specific to LiteSpeed |

    Comparison of Communication Sockets

    Communication between processes (e.g., OLS and Redis, or OLS and PHP processes) occurs via sockets. The choice of socket type affects performance.

| Feature | TCP/IP Socket (on localhost) | Unix Domain Socket (UDS) |
| --- | --- | --- |
| Addressing | IP address + port (e.g., 127.0.0.1:6379) | File path (e.g., /var/run/redis.sock) |
| Scope | Same machine or over a network | Same machine only (IPC) |
| Performance (locally) | Lower | Higher |
| Overhead | Higher (goes through the network stack) | Minimal (bypasses the network stack) |
| Security Model | Firewall rules | File system permissions |

    For local communication, UDS is a more efficient solution because it bypasses the entire operating system network stack, which reduces latency and CPU overhead. This is why it’s preferred in optimised configurations for connections between OLS, Redis, and LSAPI processes.

    Practical Implementation and Management

    To translate theory into practice, let’s analyse a real server configuration for the virtual host solutionsinc.co.uk.

    5.1 Analysis of the solutionsinc.co.uk Configuration Example

    1. External App Definition:
      • In the “External App” panel, a LiteSpeed SAPI App named solutionsinc.co.uk has been defined. This is the central configuration point for handling the dynamic content of the site.
      • Address: UDS://tmp/lshttpd/solutionsinc.co.uk.sock. This line is crucial. It informs OLS that a Unix Domain Socket (UDS) will be used to communicate with the PHP application, not a TCP/IP network socket. The .sock file at this path is the physical endpoint of this efficient communication channel.
      • Command: /usr/local/lsws/lsphp84/bin/lsphp. This is the direct path to the executable file of the LiteSpeed PHP interpreter, version 8.4. OLS knows it should run this specific programme to process scripts.
      • Other parameters, such as LSAPI_CHILDREN = 50 and memory limits, are used for precise resource and performance management of the PHP process pool.
    2. Linking with PHP Files (Script Handler):
      • The application definition alone isn’t enough. In the “Script Handler” panel, we tell OLS when to use it.
      • For the .php suffix (extension), LiteSpeed SAPI is set as the handler.
      • [VHost Level]: solutionsinc.co.uk is chosen as the Handler Name, which directly points to the application defined in the previous step.
      • Conclusion: From now on, every request for a file with the .php extension on this site will be passed through the UDS socket to one of the lsphp84 processes.

    This configuration is an excellent example of an optimised and secure environment: OLS handles the connections, while dedicated, isolated lsphp84 processes execute the application code, communicating through the fastest available channel—a Unix domain socket.

    5.2 Managing Unix Domain Sockets (.sock) and Troubleshooting

    The .sock file, as seen in the solutionsinc.co.uk.sock example, isn’t a regular file. It’s a special file in Unix systems that acts as an endpoint for inter-process communication (IPC). Instead of communicating through the network layer (even locally), processes can write to and read data directly from this file, which is much faster.

    When OpenLiteSpeed launches an external LSAPI application, it creates such a socket file. The PHP processes listen on this socket for incoming requests from OLS.
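Before troubleshooting anything else, it is worth confirming that the socket file actually exists and is of socket type. A minimal sketch, using the path from this example (adjust it to your own virtual host):

```shell
# Path of the vhost's LSAPI socket (taken from this example; adjust per vhost)
SOCK=/tmp/lshttpd/solutionsinc.co.uk.sock

# [ -S ] is true only for socket-type files, not regular files or directories
if [ -S "$SOCK" ]; then
    echo "OK: $SOCK is a live Unix domain socket"
else
    echo "Missing: $SOCK has not been created (is OLS running?)"
fi
```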

    Practical tip: A ‘Stubborn’ .sock file

    Sometimes, after making changes to the PHP configuration (e.g., modifying the php.ini file or installing a new extension) and restarting the OpenLiteSpeed server (lsws), the changes may not be visible on the site. This happens because the lsphp processes may not have been correctly restarted with the server, and OLS is still communicating with the old processes through the existing, “old” .sock file.

    In such a situation, when a standard restart doesn’t help, an effective solution is to:

    1. Stop the OpenLiteSpeed server.
    2. Manually delete the relevant .sock file, for example, using the terminal command: rm /tmp/lshttpd/solutionsinc.co.uk.sock
    3. Restart the OpenLiteSpeed server.

When OLS restarts and does not find an existing socket file, it is forced to create a new one. More importantly, it launches a fresh pool of lsphp processes that load the current configuration from the php.ini file. Deleting the .sock file acts as a hard reset of the communication channel between the server and the PHP application, guaranteeing that all components are initialised from scratch with the current settings.
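Assuming a systemd-managed installation where the service is named lsws (an assumption; some installations use /usr/local/lsws/bin/lswsctrl instead), the three steps above can be run as:

```shell
# 1. Stop OpenLiteSpeed so no process holds the socket open
sudo systemctl stop lsws

# 2. Remove the stale socket for this virtual host (path from the example above)
sudo rm -f /tmp/lshttpd/solutionsinc.co.uk.sock

# 3. Start OLS again; it recreates the socket and spawns fresh lsphp workers
sudo systemctl start lsws
```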

    Summary

    The server configuration presented is a precisely designed system in which each element plays a vital role.

    • OpenLiteSpeed acts as an efficient, event-driven core, managing connections.
    • LSCache provides instant delivery of pages from the full-page cache.
    • Redis acts as an object cache, drastically accelerating the generation of dynamic content when needed.
    • LSAPI UDS creates optimised communication channels, minimising overhead and latency.

    An understanding of these dependencies allows for informed server management and optimisation to achieve maximum performance and reliability.

  • LiteSpeed vs Apache vs Nginx Web Server: Which is Better?

    LiteSpeed vs Apache vs Nginx Web Server: Which is Better?

    When looking for hosting for your website, you need to consider not only the disk space, the type of hard drives (SSD, HDD, NVMe), the monthly transfer limit, or the number of databases, but also which web server will be handling your site. There are many different web servers on the market, but the three most popular—Apache, Nginx, and LiteSpeed—capture over 42% of the market share (August 2025). I am not including the Cloudflare server here, as it operates on a slightly different principle than these three. All of them are very stable, well-developed, and feature-rich; however, there are significant differences between them that will affect your website’s performance and ease of use.

    What is a web server?

    A web server is software that handles requests sent by visitors to a site and sends back a complete page for display in their web browsers. Every time you type a website address into your browser, you send a request to an HTTP/HTTPS server. This server either displays an HTML file in the case of static pages or dynamically generates a PHP page stored in a database, as is the case with sites built on WordPress, Joomla, or Drupal.

    We will compare the three most popular web servers so you can choose the best solution for your website. For some time now, the most popular web server has been Nginx, which has surpassed Apache. In third place, and climbing rapidly, is LiteSpeed.

    Web Servers: Top 10 Market Share – August 2025

Server | Market Share
Nginx | 24%
Cloudflare | 15%
Apache | 14%
OpenResty | 6%
Google Servers | 5%
LiteSpeed | 4%
Microsoft | 1%
Sun | 0%
NCSA | 0%
Other | 31%

Source: netcraft.com
*Sites may use several web servers simultaneously, which is why the percentages do not add up to 100%.

    Web servers market share 2025

    Apache

    Let’s start with Apache, as it is the oldest of the web servers presented here. This open-source server, created in 1995, was the undisputed leader in popularity for a long time. It practically dominated on Linux machines, and even on Windows computers, it was often chosen over the commercial IIS from Microsoft.

    Nginx

Nginx was created to address the shortcomings of the Apache server and initially functioned only as a reverse proxy server and load balancer. It was later transformed into a fully-fledged web server. It can serve the same sites as Apache, although it does not read .htaccess files, so Apache-specific rules have to be converted to Nginx's own configuration format when migrating.

    LiteSpeed

LiteSpeed is the youngest web server of the three. It was designed as a drop-in replacement for Apache: it supports .htaccess files, mod_rewrite, and mod_security, so existing Apache sites can be transferred with little or no reconfiguration.

    Main Differences

Feature | Apache | Nginx | LiteSpeed
Architecture | Process-based | Event-driven | Event-driven
Speed | 826 req/sec | 6 thousand req/sec | 69 thousand req/sec
Caching | W3 Total Cache | FastCGI cache | LiteSpeed Cache
Supported OS | All Unix, Windows | All Unix, Windows | Ubuntu 14-20, Debian 8+, CentOS 7+, FreeBSD 9+, Linux Kernel 3.0+
Ease of Config | .htaccess file | .conf files | GUI, .htaccess
Security | Modsecurity, DDoS | Modsecurity | Modsecurity, reCaptcha, WP brute-force, DDoS
Control Panels | cPanel, Kloxo, ZPanel, Ajenti, OpenPanel | cPanel, aaPanel, Vesta, Hestia CP | cPanel, Plesk, DirectAdmin, CyberPanel, CloudPages
Plugins | Many plugins | Many plugins | Control Panel plugins and API
Prog. Languages | PHP, Python, Perl | PHP, Python, Perl, Ruby, JavaScript, Go, Java servlet | All scripting languages
HTTP/3 | No support | Planned | Supported
CMS | WordPress, Magento, Joomla, etc. | WordPress, Magento, Joomla, etc. | WordPress, Magento, Joomla, etc.

    Architecture

Apache

Apache has a process-based architecture: each HTTP request is handled by a separate process, and all of these processes are managed by a single parent process. This is Apache's main drawback, because a process-based architecture consumes a great deal of RAM. While this isn't a major issue on low-traffic sites, it becomes noticeable on popular websites: under heavy load, performance and page loading speed drop drastically.

Nginx

The Nginx web server works completely differently. Its architecture is event-driven: there is one main process and several worker processes that manage all the traffic on the site. This architecture is much more efficient, so with Nginx there is no such drop in performance, even for heavily loaded websites.

LiteSpeed

LiteSpeed's architecture, like Nginx's, is event-driven. Therefore, just as with Nginx, the drop in performance as the number of visitors grows is much smaller than with Apache.

    Speed

    For low-traffic websites, the speed of all three web servers is at a similar level. But the more active users a site has, the more Apache starts to fall behind the other two. Admittedly, after installing W3 Total Cache on Apache, things improve slightly, but with 100 concurrent visitors, Nginx and LiteSpeed still outperform Apache by a significant margin.

    Nginx with FastCGI Cache is much faster than Apache with W3 Total Cache, but it is LiteSpeed with the LiteSpeed Cache for WordPress plugin installed that shows the real advantage. For a WordPress-based site, the Apache server was able to handle 826.5 requests per second, Nginx handled 6,025 requests per second, while LiteSpeed handled a staggering 69,618 requests per second.

    Web Server Speed Comparison

Server | Requests/sec | MB/sec | Errors
LiteSpeed | 69,618.5 | 270.38 | 0
Nginx | 6,025.3 | 24.5 | 0
Apache | 826.5 | 3.08 | 0
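The benchmark tool used for these figures isn't named here. As a rough way to reproduce this kind of requests-per-second measurement yourself, an HTTP load generator such as wrk can be used (the URL and parameters below are illustrative, not those of the original test):

```shell
# Illustrative load test: 4 threads, 100 concurrent connections, 30 seconds.
# Replace the URL with a site you own; never benchmark servers without permission.
wrk -t4 -c100 -d30s https://example.com/
```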

    Test Environment

    • Web Servers Tested:
      • LiteSpeed Web Server v5.4.1
      • Nginx v1.16.1
      • Apache v2.4.41
    • WordPress:
      • WordPress version: 5.2.2
      • LiteSpeed cache: LiteSpeed Cache for WordPress
      • Nginx cache: FastCGI Cache
      • Apache cache: W3 Total Cache
    • Client & Server Machine:
      • RAM: 1GB
      • Processors: 1
      • CPU Threads: 1
      • Processor Model: Virtual CPU 6db7dc0e7704
      • Disk: NVMe SSD
    • Network:
      • Bandwidth: 9.02 Gbits/sec
      • Latency: 0.302 ms
    • Cloud VM:
      • Vultr High Frequency Compute 1GB VM

    Caching

    The cache is used to temporarily store frequently used data. A web server’s cache stores frequently visited web pages and other resources. This reduces the server’s load, increases the site’s overall performance/throughput, and shortens page load times.

    Apache

Apache has various caching modules, such as mod_cache, mod_cache_disk, and mod_file_cache, along with the htcacheclean utility for pruning the disk cache. You can enable them on your Apache server to improve the performance of frequently visited pages.
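On Debian-based systems (an assumption; other distributions enable modules differently), the disk cache can be switched on with Apache's module helper scripts:

```shell
# Enable the generic cache module plus its disk backend, then restart Apache
sudo a2enmod cache cache_disk
sudo systemctl restart apache2
```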

    Nginx

    You can enable caching on an Nginx server using cPanel or Plesk if you have them installed, or directly in the Nginx configuration files.

    LiteSpeed

    You can very easily enable the cache in LiteSpeed using plugins for:

    • WordPress
    • Magento
    • Joomla
    • PrestaShop
    • OpenCart
    • Drupal
    • XenForo
    • Laravel
    • Shopware
    • CS-Cart
    • MediaWiki

LiteSpeed Cache also offers several other unique features, such as the Cache Crawler. The Cache Crawler scans your website when it is not under load, identifies the most frequently visited pages, and moves them into the cache to speed up your site even further. LiteSpeed Cache also improves the performance of online shops by caching customers' shopping baskets.
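A quick way to check whether LiteSpeed's full-page cache is actually serving a given page (the domain below is a placeholder) is to inspect the x-litespeed-cache response header, which reports a hit or a miss:

```shell
# Fetch only the response headers and filter for the LSCache status header.
# 'hit' means the page came from the cache; 'miss' means it was freshly generated.
curl -sI https://example.com/ | grep -i x-litespeed-cache
```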

    Supported Operating Systems

    Apache

As the oldest of the three web servers, Apache supports the most operating systems. It runs on all Unix/Linux systems: CentOS, RedHat, Fedora, Ubuntu, OpenSUSE, etc. It is also the only one of the three with full support for Microsoft Windows.

    Nginx

You can also install Nginx on all Unix/Linux systems; a Windows port exists, but it has significant limitations and is not recommended for production use.

    LiteSpeed

    You can install LiteSpeed on CentOS 7+, Ubuntu 14.04+, Debian 8+, FreeBSD 9+, Fedora 31+, and Linux Kernel 3.0+.

    *As of January 2023, you can install LiteSpeed on Ubuntu 22.04; however, CyberPanel does not work on this version of the system. If you want to use CyberPanel with its convenient GUI, you should stay on Ubuntu 20.04.

    Ease of Configuration

    If you are just starting your journey with web servers, ease of use may be important to you. It is much more pleasant to manage a web server from a browser with a convenient graphical interface than by using the CLI or editing configuration files.

    Apache

    Apache is most commonly configured by editing the .htaccess file. This is where you set up redirects, password protection, custom error messages, indexing, and much more. However, editing this file requires some knowledge of web server configuration, without which you can easily make a mistake and completely disable your site. Therefore, always make a backup before editing this file.

    Nginx

    The Nginx server is configured using .conf configuration files. By default, Nginx does not have a control panel with a graphical interface, but you can install one of several available Control Panels. Some of them are free, like Hestia Control Panel, while others require payment.

    LiteSpeed

    The free OpenLiteSpeed installs by default with a Dashboard featuring a convenient, graphical user interface. Additionally, you can install one of several control panels, for example, the excellent and free CyberPanel, from which you can manage the entire server. You can install a new website, install WordPress with LiteSpeed Cache, install SSL certificates with a single click (both free via Let’s Encrypt and your own paid certificates), configure DNS, FTP, SSH, create backups, change the PHP version for each site separately, install a mail server, and much more.

    Security

    All three described web servers take security very seriously.

Apache additionally has the largest and most vigilant community of developers, which responds quickly to any detected security vulnerabilities. It also offers various configuration parameters to protect a site from DDoS attacks and privilege escalation, though implementing them requires a bit of IT knowledge.

    With Nginx, in addition to the community, security is handled by F5, the company that acquired the rights to Nginx. It has extensive documentation on security and potential threats.

    LiteSpeed is also very secure and is continuously and efficiently developed. Any detected security vulnerabilities are patched promptly.

    Plugins

    Plugins allow you to extend the capabilities of a web server.

Apache probably has the most extensive list of plugins, including modules for managing SQL connections, compressing data, and executing CGI scripts.

    There are also many plugins for Nginx, which are written by the developer community. Thanks to them, you can, for example, manage HTTPS SSL authentication or dynamically block IP addresses.

In terms of the sheer number of plugins, LiteSpeed may seem the weakest at first glance, but this impression is misleading: many features that must be installed separately in Apache or Nginx come as standard in LiteSpeed.

    CMS Support

    All three servers support Content Management Systems (CMS) without any issues, including:

    • Joomla
    • Drupal
    • Magento
    • OpenCart
    • PrestaShop
    • Shopware
    • MediaWiki
    • and others

    Summary

    Each web server has its advantages and disadvantages. However, LiteSpeed seems to be the most future-proof at present, although Nginx has not yet had its final say, especially as it is backed by the large American corporation, F5. Apache, on the other hand, has the largest community and is the best documented. However, we are of the opinion that once you try the convenient graphical user interface of CyberPanel and the LiteSpeed dashboard, you will not want to go back to Apache.