Category: Docker EN

  • WireGuard on TrueNAS Scale: How to Build a Secure and Efficient Bridge Between Your Local Network and VPS Servers

    In today’s digital world, where remote work and distributed infrastructure are becoming the norm, secure access to network resources is not so much a luxury as an absolute necessity. Virtual Private Networks (VPNs) have long been the answer to these needs, yet traditional solutions can be complicated and slow. Enter WireGuard—a modern VPN protocol that is revolutionising the way we think about secure tunnels. Combined with the power of the TrueNAS Scale system and the simplicity of the WG-Easy application, we can create an exceptionally efficient and easy-to-manage solution.

    This article is a comprehensive guide that will walk you through the process of configuring a secure WireGuard VPN tunnel step by step. We will connect a TrueNAS Scale server, running on your home or company network, with a fleet of public VPS servers. Our goal is to create intelligent “split-tunnel” communication, ensuring that only necessary traffic is routed through the VPN, thereby maintaining maximum internet connection performance.

    What Is WireGuard and Why Is It a Game-Changer?

    Before we delve into the technical configuration, it’s worth understanding why WireGuard is gaining such immense popularity. Designed from the ground up with simplicity and performance in mind, it represents a breath of fresh air compared to older, more cumbersome protocols like OpenVPN or IPsec.

    The main advantages of WireGuard include:

    • Minimalism and Simplicity: The WireGuard source code consists of just a few thousand lines, in contrast to the hundreds of thousands for its competitors. This not only facilitates security audits but also significantly reduces the potential attack surface.
    • Unmatched Performance: By operating at the kernel level of the operating system and utilising modern cryptography, WireGuard offers significantly higher transfer speeds and lower latency. In practice, this means smoother access to files and services.
    • Modern Cryptography: WireGuard uses the latest, proven cryptographic algorithms such as ChaCha20, Poly1305, Curve25519, BLAKE2s, and SipHash24, ensuring the highest level of security.
    • Ease of Configuration: The model, based on the exchange of public keys similar to SSH, is far more intuitive than the complicated certificate management found in other VPN systems.

    The Power of TrueNAS Scale and the Convenience of WG-Easy

    TrueNAS Scale is a modern, free operating system for building network-attached storage (NAS) servers, based on the solid foundations of Linux. Its greatest advantage is its support for containerised applications (Docker/Kubernetes), which allows for easy expansion of its functionality. Running a WireGuard server directly on a device that is already operating 24/7 and storing our data is an extremely energy- and cost-effective solution.

    This is where the WG-Easy application comes in—a graphical user interface that transforms the process of managing a WireGuard server from editing configuration files in a terminal to simple clicks in a web browser. Thanks to WG-Easy, we can create profiles for new devices in moments, generate their configurations, and monitor the status of connections.

    Step 1: Designing the Network Architecture – The Foundation of Stability

    Before we launch any software, we must create a solid plan. Correctly designing the topology and IP addressing is the key to a stable and secure solution.

    The “Hub-and-Spoke” Model: Your Command Centre

    Our network will operate based on a “hub-and-spoke” model.

    • Hub: The central point (server) of our network will be TrueNAS Scale. All other devices will connect to it.
    • Spokes: Our VPS servers will be the clients (peers), or the “spokes” connected to the central hub.

    In this model, all communication flows through the TrueNAS server by default. This means that for one VPS to communicate with another, the traffic must pass through the central hub.

    To avoid chaos, we will create a dedicated subnet for our virtual network. In this guide, we will use 10.8.0.0/24.

    | Device Role      | Host Identifier | VPN IP Address |
    | ---------------- | --------------- | -------------- |
    | Server (Hub)     | TrueNAS-Scale   | 10.8.0.1       |
    | Client 1 (Spoke) | VPS1            | 10.8.0.2       |
    | Client 2 (Spoke) | VPS2            | 10.8.0.3       |
    | Client 3 (Spoke) | VPS3            | 10.8.0.4       |

    The Fundamental Rule: One Client, One Identity

    A tempting thought arises: is it possible to create a single configuration file for all VPS servers? Absolutely not. This would be a breach of a fundamental WireGuard security principle. Identity in this network is not based on a username and password, but on a unique pair of cryptographic keys. Using the same configuration on multiple machines is like giving the same house key to many different people—the server would be unable to distinguish between them, which would lead to routing chaos and a security breakdown.
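
    If you are curious what such an identity looks like in practice, a key pair can be generated by hand with the standard WireGuard tools; WG-Easy does exactly this for you each time you create a profile:

    # Generate a private key and derive the matching public key (one pair per client)
    wg genkey | tee privatekey | wg pubkey > publickey
    # The private key never leaves the client; only the public key is shared with the server.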

    Step 2: Prerequisite – Opening the Gateway to the World

    The most common pitfall when configuring a home server is forgetting about the router. Your TrueNAS server is on a local area network (LAN) and has a private IP address (e.g., 192.168.0.13), which makes it invisible from the internet. For the VPS servers to connect to it, you must configure port forwarding on your router.

    You need to create a rule that directs packets arriving from the internet on a specific port straight to your TrueNAS server.

    • Protocol: UDP (WireGuard uses UDP exclusively)
    • External Port: 51820 (the standard WireGuard port)
    • Internal IP Address: The IP address of your TrueNAS server on the LAN
    • Internal Port: 51820

    Without this rule, your VPN server will never work.

    Step 3: Hub Configuration – Launching the Server on TrueNAS

    Launch the WG-Easy application on your TrueNAS server. The configuration process boils down to creating a separate profile for each client (each VPS server).

    Click “New” and fill in the form for the first VPS, paying special attention to the fields below:

    | Field Name in WG-Easy | Example Value (for VPS1) | Explanation |
    | --- | --- | --- |
    | Name | VPS1-Public | A readable label to help you identify the client. |
    | IPv4 Address | 10.8.0.2 | A unique IP address for this VPS within the VPN, according to our plan. |
    | Allowed IPs | 192.168.0.0/24, 10.8.0.0/24 | The heart of the "split-tunnel" configuration. It tells the client (VPS) that only traffic to your local network (LAN) and to other devices on the VPN should be sent through the tunnel. All other traffic (e.g., to Google) will take the standard route. |
    | Server Allowed IPs | 10.8.0.2/32 | A critical security setting. It informs the TrueNAS server to only accept packets from this specific client from its assigned IP address. The /32 mask prevents IP spoofing. |
    | Persistent Keepalive | 25 | An instruction for the client to send a small "keep-alive" packet every 25 seconds. This prevents the connection from being terminated by routers and firewalls along the way. |

    After filling in the fields, save the configuration. Repeat this process for each subsequent VPS server, remembering to assign them consecutive IP addresses (10.8.0.3, 10.8.0.4, etc.).

    Once you save the profile, WG-Easy will generate a .conf configuration file for you. Treat this file like a password—it contains the client’s private key! Download it and prepare to upload it to the VPS server.
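
    For orientation, the downloaded file will look roughly like the sketch below; the keys and the endpoint are placeholders here, and your file will contain the actual values generated by WG-Easy:

    [Interface]
    # The client's private key - treat it like a password
    PrivateKey = <VPS1-private-key>
    Address = 10.8.0.2/24

    [Peer]
    # The public key and public endpoint of the TrueNAS hub
    PublicKey = <server-public-key>
    Endpoint = <your-public-IP-or-dynamic-DNS-name>:51820
    # Split-tunnel: only the LAN and VPN subnets are routed through the tunnel
    AllowedIPs = 192.168.0.0/24, 10.8.0.0/24
    PersistentKeepalive = 25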

    Step 4: Spoke Configuration – Activating Clients on the VPS Servers

    Now it’s time to bring our “spokes” to life. Assuming your VPS servers are running Linux (e.g., Debian/Ubuntu), the process is very straightforward.

    1. Install WireGuard tools:
      sudo apt update && sudo apt install wireguard-tools -y
    2. Upload and secure the configuration file: Copy the previously downloaded wg0.conf file to the /etc/wireguard/ directory on the VPS server. Then, change its permissions so that only the administrator can read it:
      # On the VPS server:
      sudo mv /path/to/your/wg0.conf /etc/wireguard/wg0.conf
      sudo chmod 600 /etc/wireguard/wg0.conf
    3. Start the tunnel: Use a simple command to activate the connection. The interface name (wg0) is derived from the configuration file name.
      sudo wg-quick up wg0
    4. Ensure automatic start-up: To have the VPN tunnel start automatically after every server reboot, enable the corresponding system service:
      sudo systemctl enable wg-quick@wg0.service

    Repeat these steps on each VPS server, using the unique configuration file generated for each one.

    Step 5: Verification and Diagnostics – Checking if Everything Works

    After completing the configuration, it’s time for the final test.

    Checking the Connection Status

    On both the TrueNAS server and each VPS, execute the command:

    sudo wg show

    Look for two key pieces of information in the output:

    • latest handshake: This should show a recent time (e.g., “a few seconds ago”). This is proof that the client and server have successfully connected.
    • transfer: received and sent values greater than zero indicate that data is actually flowing through the tunnel.
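
    For reference, a healthy peer entry in the wg show output looks something like this (the endpoint and counters below are illustrative):

    peer: <public key of VPS1>
      endpoint: 203.0.113.10:51820
      allowed ips: 10.8.0.2/32
      latest handshake: 32 seconds ago
      transfer: 1.25 MiB received, 2.89 MiB sent
      persistent keepalive: every 25 seconds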

    The Final Test: Validating the “Split-Tunnel”

    This is the test that will confirm we have achieved our main goal. Log in to one of the VPS servers and perform the following tests:

    1. Test connectivity within the VPN: Try to ping the TrueNAS server using its VPN and LAN addresses.
      ping 10.8.0.1       # VPN address of the TrueNAS server
      ping 192.168.0.13  # LAN address of the TrueNAS server (use your own)

      If you receive replies, it means that traffic to your local network is being correctly routed through the tunnel.
    2. Test the path to the internet: Use the traceroute tool to check the route packets take to a public website.
      traceroute google.com

      The result of this command is crucial. The first “hop” on the route must be the default gateway address of your VPS hosting provider, not the address of your VPN server (10.8.0.1). If this is the case—congratulations! Your “split-tunnel” configuration is working perfectly.
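
      A complementary check is to ask the internet which address your VPS presents. With a working split-tunnel this must be the VPS provider's public IP, not the IP of your home connection (ifconfig.me is just one of many such services):

      # Run on the VPS; the reply should be the VPS's own public IP
      curl ifconfig.me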

    Troubleshooting Common Problems

    • No “handshake”: The most common cause is a connection issue. Double-check the UDP port 51820 forwarding configuration on your router, as well as any firewalls in the path (on TrueNAS, on the VPS, and in your cloud provider’s panel).
    • There is a “handshake”, but ping doesn’t work: The problem usually lies in the Allowed IPs configuration. Ensure the server has the correct client VPN address entered (e.g., 10.8.0.2/32), and the client has the networks it’s trying to reach in its configuration (e.g., 192.168.0.0/24).
    • All traffic is going through the VPN (full-tunnel): This means that in the client’s configuration file, under the [Peer] section, the Allowed IPs field is set to 0.0.0.0/0. Correct this setting in the WG-Easy interface, download the new configuration file, and update it on the client.

    Creating your own secure and efficient VPN server based on TrueNAS Scale and WireGuard is well within reach. It is a powerful solution that not only enhances security but also gives you complete control over your network infrastructure.

  • Wazuh on Your Own Server: Digital Sovereignty at the Cost of Complexity

    Faced with the rising costs of commercial solutions and escalating cyber threats, the free security platform Wazuh is gaining popularity as a powerful alternative. However, the decision to self-host it on one’s own servers represents a fundamental trade-off: organisations gain unprecedented control over their data and system, but in return, they must contend with significant technical complexity, hidden operational costs, and full responsibility for their own security. This report analyses for whom this path is a strategic advantage and for whom it may prove to be a costly trap.

    Introduction – The Democratisation of Cybersecurity in an Era of Growing Threats

    The contemporary digital landscape is characterised by a paradox: while threats are becoming increasingly advanced and widespread, the costs of professional defence tools remain an insurmountable barrier for many organisations. Industry reports paint a grim picture, pointing to a sharp rise in ransomware attacks, which are evolving from data encryption to outright blackmail, and the ever-wider use of artificial intelligence by cybercriminals to automate and scale attacks. In this challenging environment, solutions like Wazuh are emerging as a response to the growing demand for accessible, yet powerful, tools to protect IT infrastructure.

    Wazuh is defined as a free, open-source security platform that unifies the capabilities of two key technologies: XDR (Extended Detection and Response) and SIEM (Security Information and Event Management). Its primary goal is to protect digital assets regardless of where they operate—from traditional on-premises servers in a local data centre, through virtual environments, to dynamic containers and distributed resources in the public cloud.

    The rise in Wazuh’s popularity is directly linked to the business model of dominant players in the SIEM market, such as Splunk. Their pricing, often based on the volume of data processed, can generate astronomical costs for growing companies, making advanced security a luxury. Wazuh, being free, eliminates this licensing barrier, which makes it particularly attractive to small and medium-sized enterprises (SMEs), public institutions, non-profit organisations, and all entities with limited budgets but who cannot afford to compromise on security.

    The emergence of such a powerful, free tool signals a fundamental shift in the cybersecurity market. One could speak of a democratisation of advanced defence mechanisms. Traditionally, SIEM/XDR-class platforms were the domain of large corporations with dedicated Security Operations Centres (SOCs) and substantial budgets. Meanwhile, cybercriminals do not limit their activities to the largest targets; SMEs are equally, and sometimes even more, vulnerable to attacks. Wazuh fills this critical gap, giving smaller organisations access to functionalities that were, until recently, beyond their financial reach. This represents a paradigm shift, where access to robust digital defence is no longer solely dependent on purchasing power but begins to depend on technical competence and the strategic decision to invest in a team.

    To fully understand Wazuh’s unique position, it is worth comparing it with key players in the market.

    Table 1: Positioning Wazuh Against the Competition

    | Criterion | Wazuh | Splunk | Elastic Security |
    | --- | --- | --- | --- |
    | Cost Model | Open-source software, free. Paid options include technical support and a managed cloud service (SaaS). | Commercial. Licensing is based mainly on the daily volume of data processed, which can lead to high costs at a large scale. | "Open core" model. Basic functions are free; advanced ones (e.g., machine learning) are available in paid subscriptions. Prices are based on resources, not data volume. |
    | Main Functionalities | Integrated XDR and SIEM. Strong emphasis on endpoint security (FIM, vulnerability detection, configuration assessment) and log analysis. | A leader in log analysis and SIEM. An extremely powerful query language (SPL) and broad analytical capabilities. Considered the standard in large SOCs. | An integrated security platform (SIEM + endpoint protection) built on the powerful Elasticsearch search engine. High flexibility and scalability. |
    | Deployment Options | Self-hosting (on-premises / private cloud) or the official Wazuh Cloud service (SaaS). | Self-hosting (on-premises) or the Splunk Cloud service (SaaS). | Self-hosting (on-premises) or the Elastic Cloud service (SaaS). |
    | Target Audience | SMEs, organisations with technical expertise, entities with strict data sovereignty requirements, security enthusiasts. | Large enterprises, mature Security Operations Centres (SOCs), organisations with large security budgets and a need for advanced analytics. | Organisations seeking a flexible, scalable platform, often with an existing Elastic ecosystem. Development and DevOps teams. |

    This comparison clearly shows that Wazuh is not a simple clone of commercial solutions. Its strength lies in the specific niche it occupies: it offers enterprise-class functionalities without licensing costs, in exchange requiring greater technical involvement from the user and the assumption of full responsibility for implementation and maintenance.

    Anatomy of a Defender – How Does the Wazuh Architecture Work?

    Understanding the technical foundations of Wazuh is crucial for assessing the real complexity and potential challenges associated with its self-hosted deployment. At first glance, the architecture is elegant and logical; however, its scalability, one of its greatest advantages, simultaneously becomes its greatest operational challenge in a self-hosted model.

    The Agent-Server Model: The Eyes and Ears of the System

    At the core of the Wazuh architecture is a model based on an agent-server relationship. A lightweight, multi-platform Wazuh agent is installed on every monitored system—be it a Linux server, a Windows workstation, a Mac computer, or even cloud instances. The agent runs in the background, consuming minimal system resources, and its task is to continuously collect telemetry data. It gathers system and application logs, monitors the integrity of critical files, scans for vulnerabilities, inventories installed software and running processes, and detects intrusion attempts. All this data is then securely transmitted in near real-time to the central component—the Wazuh server.
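
    On the agent side, this relationship is expressed by a short block in the agent's ossec.conf that points at the manager. A minimal sketch, assuming the manager is reachable at a placeholder hostname, looks like this:

    <ossec_config>
      <client>
        <server>
          <!-- IP address or hostname of your Wazuh server -->
          <address>wazuh-manager.example.local</address>
          <port>1514</port>
          <protocol>tcp</protocol>
        </server>
      </client>
    </ossec_config>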

    Central Components: The Brain of the Operation

    A Wazuh deployment, even in its simplest form, consists of three key central components that together form a complete analytical system.

    1. Wazuh Server: This is the heart of the entire system. It receives data sent by all registered agents. Its main task is to process this stream of information. The server uses advanced decoders to normalise and structure logs from various sources and then passes them through a powerful analytical engine. This engine, based on a predefined and configurable set of rules, correlates events and identifies suspicious activities, security policy violations, or Indicators of Compromise (IoCs). When an event or series of events matches a rule with a sufficiently high priority, the server generates a security alert.
    2. Wazuh Indexer: This is a specialised and highly scalable database, designed for the rapid indexing, storage, and searching of vast amounts of data. Technologically, the Wazuh Indexer is a fork of the OpenSearch project, which in turn was created from the Elasticsearch source code. All events collected by the server (both those that generated an alert and those that did not) and the alerts themselves are sent to the indexer. This allows security analysts to search through terabytes of historical data in seconds for traces of an attack, which is fundamental for threat hunting and forensic analysis processes.
    3. Wazuh Dashboard: This is the user interface for the entire platform, implemented as a web application. Like the indexer, it is based on the OpenSearch Dashboards project (formerly known as Kibana). The dashboard allows for the visualisation of data in the form of charts, tables, and maps, browsing and analysing alerts, managing agent and server configurations, and generating compliance reports. It is here that analysts spend most of their time, monitoring the security posture of the entire organisation.

    Security and Scalability of the Architecture

    A key aspect to emphasise is the security of the platform itself. Communication between the agent and the server occurs by default over port 1514/TCP and is protected by AES encryption (with a 256-bit key). Each agent must be registered and authenticated before the server will accept data from it. This ensures the confidentiality and integrity of the transmitted logs, preventing them from being intercepted or modified in transit.
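
    Enrolment can be performed in several ways (through the dashboard, via variables at package installation, or manually). One simple option is the agent-auth tool shipped with the agent; the manager address below is a placeholder:

    # Run on the endpoint; requires the manager's enrolment service (port 1515) to be reachable
    sudo /var/ossec/bin/agent-auth -m wazuh-manager.example.local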

    The Wazuh architecture was designed with scalability in mind. For small deployments, such as home labs or Proof of Concept tests, all three central components can be installed on a single, sufficiently powerful machine using a simplified installation script. However, in production environments monitoring hundreds or thousands of endpoints, such an approach quickly becomes inadequate. The official documentation and user experiences unequivocally indicate that to ensure performance and High Availability, it is necessary to implement a distributed architecture. This means separating the Wazuh server, indexer, and dashboard onto separate hosts. Furthermore, to handle the enormous volume of data and ensure resilience to failures, both the server and indexer components can be configured as multi-node clusters.

    It is at this point that the fundamental challenge of self-hosting becomes apparent. While an “all-in-one” installation is relatively simple, designing, implementing, and maintaining a distributed, multi-node Wazuh cluster is an extremely complex task. It requires deep knowledge of Linux systems administration, networking, and, above all, OpenSearch cluster management. The administrator must take care of aspects such as the correct replication and allocation of shards (index fragments), load balancing between nodes, configuring disaster recovery mechanisms, regularly creating backups, and planning updates for the entire technology stack. The decision to deploy Wazuh on a large scale in a self-hosted model is therefore not a one-time installation act. It is a commitment to the continuous management of a complex, distributed system, whose cost and complexity grow non-linearly with the scale of operations.

    The Strategic Decision – Full Control on Your Own Server versus the Convenience of the Cloud

    The choice of Wazuh deployment model—self-hosting on one’s own infrastructure (on-premises) versus using a ready-made cloud service (SaaS)—is one of the most important strategic decisions facing any organisation considering this platform. This is not merely a technical choice, but a fundamental decision concerning resource allocation, risk acceptance, and business priorities. An analysis of both approaches reveals a profound trade-off between absolute control and operational convenience.

    The Case for Self-Hosting: The Fortress of Data Sovereignty

    Organisations that decide to self-deploy and maintain Wazuh on their own servers are primarily driven by the desire for maximum control and independence. In this model, it is they, not an external provider, who define every aspect of the system’s operation—from hardware configuration, through data storage and retention policies, to the finest details of analytical rules. The open-source nature of Wazuh gives them an additional, powerful advantage: the ability to modify and adapt the platform to unique, often non-standard needs, which is impossible with closed, commercial solutions.

    However, the main driving force for many companies, especially in Europe, is the concept of data sovereignty. This is not just a buzzword, but a hard legal and strategic requirement. Data sovereignty means that digital data is subject to the laws and jurisdiction of the country in which it is physically stored and processed. In the context of stringent regulations such as Europe’s GDPR, the American HIPAA for medical data, or the PCI DSS standard for the payment card industry, keeping sensitive logs and security incident data within one’s own, controlled data centre is often the simplest and most secure way to ensure compliance.

    This choice also has a geopolitical dimension. Edward Snowden’s revelations about the PRISM programme run by the US NSA made the world aware that data stored in the clouds of American tech giants could be subject to access requests from US government agencies under laws such as the CLOUD Act. For many European companies, public institutions, or entities in the defence industry, the risk that their operational data and security logs could be made available to a foreign government is unacceptable. Self-hosting Wazuh in a local data centre, within the European Union, completely eliminates this risk, ensuring full digital sovereignty.

    The Reality of Self-Hosting: Hidden Costs and Responsibility

    The promise of free software is tempting, but the reality of a self-hosted deployment quickly puts the concept of “free” to the test. An analysis of the Total Cost of Ownership (TCO) reveals a series of hidden expenses that go far beyond the zero cost of the licence.

    • Capital Expenditure (CapEx): At the outset, the organisation must make significant investments in physical infrastructure. This includes purchasing powerful servers (with large amounts of RAM and fast processors), disk arrays capable of storing terabytes of logs, and networking components. Costs associated with providing appropriate server room conditions, such as uninterruptible power supplies (UPS), air conditioning, and physical access control systems, must also be considered.
    • Operational Expenditure (OpEx): This is where the largest, often underestimated, expenses lie. Firstly, the ongoing electricity and cooling bills. Secondly, and most importantly, personnel costs. Wazuh is not a “set it and forget it” system. As numerous users report, it requires constant attention, tuning, and maintenance. The default configuration can generate tens of thousands of alerts per day, leading to “alert fatigue” and rendering the system useless. To prevent this, a qualified security analyst or engineer is needed to constantly fine-tune rules and decoders, eliminate false positives, and develop the platform. For larger, distributed deployments, maintaining system stability can become a full-time job. One experienced user bluntly stated, “I’m losing my mind having to fix Wazuh every single day.” According to an analysis cited by GitHub, the total cost of a self-hosted solution can be up to 5.25 times higher than its cloud equivalent.

    Moreover, in the self-hosted model, the entire responsibility for security rests on the organisation’s shoulders. This includes not only protection against external attacks but also regular backups, testing disaster recovery procedures, and bearing the full consequences (financial and reputational) in the event of a successful breach and data leak.

    The Cloud Alternative: Convenience as a Service (SaaS)

    For organisations that want to leverage the power of Wazuh but are not ready to take on the challenges of self-hosting, there is an official alternative: Wazuh Cloud. This is a Software as a Service (SaaS) model, where the provider (the company Wazuh) takes on the entire burden of managing the server infrastructure, and the client pays a monthly or annual subscription for a ready-to-use service.

    The advantages of this approach are clear:

    • Lower Barrier to Entry and Predictable Costs: The subscription model eliminates the need for large initial hardware investments (CapEx) and converts them into a predictable, monthly operational cost (OpEx), which is often lower in the short and medium term.
    • Reduced Operational Burden: Issues such as server maintenance, patch installation, software updates, scaling resources in response to growing load, and ensuring high availability are entirely the provider’s responsibility. This frees up the internal IT team to focus on strategic tasks rather than “firefighting.”
    • Access to Expert Knowledge: Cloud clients benefit from the knowledge and experience of Wazuh engineers who manage hundreds of deployments daily. This guarantees optimal configuration and platform stability.

    Of course, convenience comes at a price. The main disadvantage is a partial loss of control over the system and data. The organisation must trust the security policies and procedures of the provider. Most importantly, depending on the location of the Wazuh Cloud data centres, the same data sovereignty issues that the self-hosted model avoids may arise.

    Ultimately, the choice between self-hosting and the cloud is not an assessment of which option is “better” in an absolute sense. It is a strategic allocation of risk and resources. The self-hosted model is a conscious acceptance of operational risk (failures, configuration errors, staff shortages) in exchange for minimising the risk associated with data sovereignty and third-party control. In contrast, the cloud model is a transfer of operational risk to the provider in exchange for accepting the risk associated with entrusting data and potential legal-geopolitical implications. For a financial sector company in the EU, the risk of a GDPR breach may be much higher than the risk of a server failure, which strongly inclines them towards self-hosting. For a dynamic tech start-up without regulated data, the cost of hiring a dedicated specialist and the operational risk may be unacceptable, making the cloud the obvious choice.

    Table 2: Decision Analysis: Self-Hosting vs. Wazuh Cloud

    | Criterion | Self-Hosting (On-Premises) | Wazuh Cloud (SaaS) |
    | --- | --- | --- |
    | Total Cost of Ownership (TCO) | High initial cost (hardware, CapEx). Significant, often unpredictable operational costs (personnel, energy, OpEx). Potentially lower in the long term at a large scale and with constant utilisation. | Low initial cost (no CapEx). Predictable, recurring subscription fees (OpEx). Usually more cost-effective in the short and medium term. Potentially higher in the long run. |
    | Control and Customisation | Absolute control over hardware, software, data, and configuration. Ability to modify source code and deeply integrate with existing systems. | Limited control. Configuration within the options provided by the supplier. No ability to modify source code or access the underlying infrastructure. |
    | Security and Responsibility | Full responsibility for physical and digital security, backups, disaster recovery, and regulatory compliance rests with the organisation. | Shared responsibility. The provider is responsible for the security of the cloud infrastructure. The organisation is responsible for configuring security policies and managing access. |
    | Deployment and Maintenance | Complex and time-consuming deployment, especially in a distributed architecture. Requires continuous maintenance, monitoring, updating, and tuning by qualified personnel. | Quick and simple deployment (service activation). Maintenance, updates, and ensuring availability are entirely the provider's responsibility, minimising the burden on the internal IT team. |
    | Scalability | Scalability is possible but requires careful planning, purchase of additional hardware, and manual reconfiguration of the cluster. It can be a slow and costly process. | High flexibility and scalability. Resources (computing power, disk space) can be dynamically increased or decreased depending on needs, often with a few clicks. |
    | Data Sovereignty | Full data sovereignty. The organisation has 100% control over the physical location of its data, which facilitates compliance with local legal and regulatory requirements (e.g., GDPR). | Dependent on the location of the provider's data centres. May pose challenges related to GDPR compliance if data is stored outside the EU. Potential risk of access on demand by foreign governments. |

    Voices from the Battlefield – A Balanced Analysis of Expert and User Opinions

    A theoretical analysis of a platform’s capabilities and architecture is one thing, but its true value is verified in the daily work of security analysts and system administrators. The voices of users from around the world, from small businesses to large enterprises, paint a nuanced picture of Wazuh—a tool that is incredibly powerful, but also demanding. An analysis of opinions gathered from industry portals such as Gartner, G2, Reddit, and specialist forums allows us to identify both its greatest advantages and its most serious challenges.

    The Praise – What Works Brilliantly?

    Several key strengths that attract organisations to Wazuh are repeatedly mentioned in reviews and case studies.

    • Cost as a Game-Changer: For many users, the fundamental advantage is the lack of licensing fees. One information security manager stated succinctly: “It costs me nothing.” This financial accessibility is seen as crucial, especially for smaller entities. Wazuh is often described as a “great, out-of-the-box SOC solution for small to medium businesses” that could not otherwise afford this type of technology.
    • Powerful, Built-in Functionalities: Users regularly praise specific modules that deliver immediate value. File Integrity Monitoring (FIM) and Vulnerability Detection are at the forefront. One reviewer described them as the “biggest advantages” of the platform. FIM is key to detecting unauthorised changes to critical system files, which can indicate a successful attack, while the vulnerability module automatically scans systems for known, unpatched software. The platform’s ability to support compliance with regulations such as HIPAA or PCI DSS is also a frequently highlighted asset, allowing organisations to verify their security posture with a few clicks.
    • Flexibility and Customisation: The open nature of Wazuh is seen as a huge advantage by technical teams. The ability to customise rules, write their own decoders, and integrate with other tools gives a sense of complete control. “I personally love the flexibility of Wazuh, as a system administrator I can think of any use case and I know I’ll be able to leverage Wazuh to pull the logs and create the alerts I need,” wrote Joanne Scott, a lead administrator at one of the companies using the platform.

    The Criticism – Where Do the Challenges Lie?

    Equally numerous and consistent are the voices pointing to significant difficulties and challenges that must be considered before deciding on deployment.

    • Complexity and a Steep Learning Curve: This is the most frequently raised issue. Even experienced security specialists admit that the platform is not intuitive. One expert described it as having a “steep learning curve for newcomers.” Another user noted that “the initial installation and configuration can be a bit complicated, especially for users without much experience in SIEM systems.” This confirms that Wazuh requires dedicated time for learning and experimentation.
    • The Need for Tuning and “Alert Fatigue”: This is probably the biggest operational challenge. Users agree that the default, “out-of-the-box” configuration of Wazuh generates a huge amount of noise—low-priority alerts that flood analysts and make it impossible to detect real threats. One team reported receiving “25,000 to 50,000 low-level alerts per day” from just two monitored endpoints. Without an intensive and, importantly, continuous process of tuning rules, disabling irrelevant alerts, and creating custom ones tailored to the specific environment, the system is practically useless. One of the more blunt comments on a Reddit forum stated that “out of the box it’s kind of shitty.”
    • Performance and Stability at Scale: While Wazuh performs well in small and medium-sized environments, deployments involving hundreds or thousands of agents can encounter serious stability problems. In one dramatic post on a Google Groups forum, an administrator managing 175 agents described daily problems with agents disconnecting and server services hanging, forcing him to restart the entire infrastructure daily. This shows that scaling Wazuh requires not only more powerful hardware but also deep knowledge of optimising its components.
    • Documentation and Support for Different Systems: Although Wazuh has extensive online documentation, some users find it insufficient for more complex problems. There are also complaints that the predefined decoders (pieces of code responsible for parsing logs) work great for Windows systems but are often outdated or incomplete for other platforms, including popular network devices. This forces administrators to search for unofficial, community-created solutions on platforms like GitHub, which introduces an additional element of risk and uncertainty.

    An analysis of these starkly different opinions leads to a key conclusion. Wazuh should not be seen as a ready-to-use product that can simply be “switched on.” It is rather a powerful security framework—a set of advanced tools and capabilities from which a qualified team must build an effective defence system. Its final value depends 90% on the quality of the implementation, configuration, and competence of the team, and only 10% on the software itself. The users who succeed are those who talk about “configuring,” “customising,” and “integrating.” Those who encounter problems are often those who expected a ready-made solution and were overwhelmed by the default configuration. The story of one expert who, during a simulated attack on a default Wazuh installation, “didn’t catch a single thing” is the best proof of this. An investment in a self-hosted Wazuh is really an investment in the people who will manage it.

    Consequences of the Choice – Risk and Reward in the Open-Source Ecosystem

    The decision to base critical security infrastructure on a self-hosted, open-source solution like Wazuh goes beyond a simple technical assessment of the tool itself. It is a strategic immersion into the broader ecosystem of Open Source Software (OSS), which brings with it both enormous benefits and serious, often underestimated, risks.

    The Ubiquity and Hidden Risks of Open-Source Software

    Open-source software has become the foundation of the modern digital economy. According to the 2025 “Open Source Security and Risk Analysis” (OSSRA) report, as many as 97% of commercial applications contain OSS components. They form the backbone of almost every system, from operating systems to libraries used in web applications. However, this ubiquity has its dark side. The same report reveals alarming statistics:

    • 86% of the applications studied contained at least one vulnerability in the open-source components they used.
    • 91% of applications contained components that were outdated and had newer, more secure versions available.
    • 81% of applications contained high or critical risk vulnerabilities, many of which already had publicly available patches.

    One of the biggest challenges is the problem of transitive dependencies. This means that a library a developer consciously adds to a project itself depends on dozens of other libraries, which in turn depend on others. This creates a complex and difficult-to-trace chain of dependencies, meaning organisations often have no idea exactly which components are running in their systems and what risks they carry. This is the heart of the software supply chain security problem.

    By choosing to self-host Wazuh, an organisation takes on full responsibility for managing not only the platform itself but its entire technology stack. This includes the operating system it runs on, the web server, and, above all, key components like the Wazuh Indexer (OpenSearch) and its numerous dependencies. This means it is necessary to track security bulletins for all these elements and react immediately to newly discovered vulnerabilities.

    The Advantages of the Open-Source Model: Transparency and the Power of Community

    In opposition to these risks, however, stand fundamental advantages that make the open-source model so attractive, especially in the field of security.

    • Transparency and Trust: In the case of commercial, closed-source solutions (“black boxes”), the user must fully trust the manufacturer’s declarations regarding security. In the open-source model, the source code is publicly available. This provides the opportunity to conduct an independent security audit and verify that the software does not contain hidden backdoors or serious flaws. This transparency builds fundamental trust, which is invaluable in the context of systems designed to protect a company’s most valuable assets.
    • The Power of Community: Wazuh boasts one of the largest and most active communities in the open-source security world. Users have numerous support channels at their disposal, such as the official Slack, GitHub forums, a dedicated subreddit, and Google Groups. It is there, in the heat of real-world problems, that custom decoders, innovative rules, and solutions to problems not found in the official documentation are created. This collective wisdom is an invaluable resource, especially for teams facing unusual challenges.
    • Avoiding Vendor Lock-in: By choosing a commercial solution, an organisation becomes dependent on a single vendor—their product development strategy, pricing policy, and software lifecycle. If the vendor decides to raise prices, end support for a product, or go bankrupt, the client is left with a serious problem. Open source provides freedom. An organisation can use the software indefinitely, modify and develop it, and even use the services of another company specialising in support for that solution if they are not satisfied with the official support.

    This duality of the open-source nature leads to a deeper conclusion. The decision to self-host Wazuh fundamentally changes the organisation’s role in the security ecosystem. It ceases to be merely a passive consumer of a ready-made security product and becomes an active manager of software supply chain risk. When a company buys a commercial SIEM, it pays the vendor to take responsibility for managing the risk associated with the components from which its product is built. It is the vendor who must patch vulnerabilities in libraries, update dependencies, and guarantee the security of the entire stack. By choosing the free, self-hosted Wazuh, the organisation consciously (or not) takes on all this responsibility itself. To do this in a mature way, it is no longer enough to just know how to configure rules in Wazuh. It becomes necessary to implement advanced software management practices, such as Software Composition Analysis (SCA) to identify all components and their vulnerabilities, and to maintain an up-to-date “Software Bill of Materials” (SBOM) for the entire infrastructure. This significantly raises the bar for competency requirements and shows that the decision to self-host has deep, structural consequences for the entire IT and security department.

    The Verdict – Who is Self-Hosted Wazuh For?

    The analysis of the Wazuh platform in a self-hosted model leads to an unequivocal conclusion: it is a solution with enormous potential, but burdened with equally great responsibility. The key trade-off that runs through every aspect of this technology can be summarised as follows: self-hosted Wazuh offers unparalleled control, absolute data sovereignty, and zero licensing costs, but in return requires significant, often underestimated, investments in hardware and, above all, in highly qualified personnel capable of managing a complex and demanding system that requires constant attention.

    This is not a solution for everyone. Attempting to implement it without the appropriate resources and awareness of its nature is a straight path to frustration, a false sense of security, and ultimately, project failure.

    Profile of the Ideal Candidate

    Self-hosted Wazuh is the optimal, and often the only right, choice for organisations that meet most of the following criteria:

    • They have a mature and competent technical team: They have an internal security and IT team (or the budget to hire/train one) that is not afraid of working with the command line, writing scripts, analysing logs at a low level, and managing a complex Linux infrastructure.
    • They have strict data sovereignty requirements: They operate in highly regulated industries (financial, medical, insurance), in public administration, or in the defence sector, where laws (e.g., GDPR) or internal policies categorically require that sensitive data never leaves physically controlled infrastructure.
    • They operate at a large scale where licensing costs become a barrier: They are large enough that the licensing costs of commercial SIEM systems, which increase with data volume, become prohibitive. In such a case, investing in a dedicated team to manage a free solution becomes economically justified over a period of several years.
    • They understand they are implementing a framework, not a finished product: They accept the fact that Wazuh is a set of powerful building blocks, not a ready-made house. They are prepared for a long-term, iterative process of tuning, customising, and improving the system to fully match the specifics of their environment and risk profile.
    • They have a need for deep customisation: Their security requirements are so unique that standard, commercial solutions cannot meet them, and the ability to modify the source code and create custom integrations is a key value.

    Questions for Self-Assessment

    For all other organisations, especially smaller ones with limited human resources and without strict sovereignty requirements, a much safer and more cost-effective solution will likely be to use the Wazuh Cloud service or another commercial SIEM/XDR solution.

    Before making the final, momentous decision, every technical leader and business manager should ask themselves and their team a series of honest questions:

    1. Have we realistically assessed the Total Cost of Ownership (TCO)? Does our budget account not only for servers but also for the full-time equivalents of specialists who will manage this platform 24/7, including their salaries, training, and the time needed to learn?
    2. Do we have the necessary expertise in our team? Do we have people capable of advanced rule tuning, managing a distributed cluster, diagnosing performance issues, and responding to failures in the middle of the night? If not, are we prepared to invest in their recruitment and development?
    3. What is our biggest risk? Are we more concerned about operational risk (system failure, human error, inadequate monitoring) or regulatory and geopolitical risk (breach of data sovereignty, third-party access)? How does the answer to this question influence our decision?
    4. Are we ready for full responsibility? Do we understand that by choosing self-hosting, we are taking responsibility not only for the configuration of Wazuh but for the security of the entire software supply chain on which it is based, including the regular patching of all its components?

    Only an honest answer to these questions will allow you to avoid a costly mistake and make a choice that will genuinely strengthen your organisation’s cybersecurity, rather than creating an illusion of it.

    Integrating Logs from Docker Applications with Wazuh SIEM

    In modern IT environments, containerisation using Docker has become the standard. It enables the rapid deployment and scaling of applications but also introduces new challenges in security monitoring. By default, logs generated by applications running in containers are isolated from the host system, which complicates their analysis by SIEM systems like Wazuh.

    In this post, we will show you how to break down this barrier. We will guide you step-by-step through the configuration process that will allow the Wazuh agent to read, analyse, and generate alerts from the logs of any application running in a Docker container. We will use the password manager Vaultwarden as a practical example.

    The Challenge: Why is Accessing Docker Logs Difficult?

    Docker containers have their own isolated file systems. Applications inside them most often send their logs to “standard output” (stdout/stderr), which is captured by Docker’s logging mechanism. The Wazuh agent, running on the host system, does not have default access to this stream or to the container’s internal files.

    To enable monitoring, we must make the application logs visible to the Wazuh agent. The best and cleanest way to do this is to configure the container to write its logs to a file and then share that file externally using a Docker volume.

    Step 1: Exposing Application Logs Outside the Container

    Our goal is to make the application’s log file appear in the host server’s file system. We will achieve this by modifying the docker-compose.yml file.

    1. Configure the application to log to a file: Many Docker images allow you to define the path to a log file using an environment variable. In the case of Vaultwarden, this is LOG_FILE.
    2. Map a volume: Create a mapping between a directory on the host server and a directory inside the container where the logs are saved.

    Here is an example of what a fragment of the docker-compose.yml file for Vaultwarden with the correct logging configuration might look like:

    version: "3"

    services:
      vaultwarden:
        image: vaultwarden/server:latest
        container_name: vaultwarden
        restart: unless-stopped
        volumes:
          # Volume for application data (database, attachments, etc.)
          - ./data:/data
        ports:
          - "8080:80"
        environment:
          # This variable instructs the application to write logs to a file inside the container
          - LOG_FILE=/data/vaultwarden.log

    What happened here?

    • LOG_FILE=/data/vaultwarden.log: We are telling the application to create a vaultwarden.log file in the /data directory inside the container.
    • ./data:/data: We are mapping the /data directory from the container to a data subdirectory in the location where the docker-compose.yml file is located (on the host).

    After saving the changes and restarting the container (docker-compose down && docker-compose up -d), the log file will be available on the server at a path like /opt/vaultwarden/data/vaultwarden.log.
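
    Before touching Wazuh, it is worth confirming on the host that the file actually exists and is growing; the path below assumes the compose file lives in /opt/vaultwarden:

    # On the Docker host:
    ls -l /opt/vaultwarden/data/vaultwarden.log
    tail -f /opt/vaultwarden/data/vaultwarden.log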

    Step 2: Configuring the Wazuh Agent to Monitor the File

    Now that the logs are accessible on the host, we need to instruct the Wazuh agent to read them.

    Open the agent’s configuration file:

    sudo nano /var/ossec/etc/ossec.conf

    Add the following block within the <ossec_config> section:

    <localfile>
      <location>/opt/vaultwarden/data/vaultwarden.log</location>
      <log_format>syslog</log_format>
    </localfile>

    Restart the agent to apply the changes:

    sudo systemctl restart wazuh-agent

    From now on, every new line in the vaultwarden.log file will be sent to the Wazuh manager.

    Step 3: Translating Logs into the Language of Wazuh (Decoders)

    The Wazuh manager is now receiving raw log lines, but it doesn’t know how to interpret them. We need to create decoders that will “teach” it to extract key information, such as the attacker’s IP address or the username.

    On the Wazuh manager server, edit the local decoders file:

    sudo nano /var/ossec/etc/decoders/local_decoder.xml

    Add the following decoders:

    <!-- Decoder for Vaultwarden logs -->
    <decoder name="vaultwarden">
      <prematch>vaultwarden::api::identity</prematch>
    </decoder>

    <!-- Decoder for failed login attempts in Vaultwarden -->
    <decoder name="vaultwarden-failed-login">
      <parent>vaultwarden</parent>
      <prematch>Username or password is incorrect. Try again. IP: </prematch>
      <regex>IP: (\S+)\. Username: (\S+)\.$</regex>
      <order>srcip, user</order>
    </decoder>
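
    You can verify the decoders before restarting anything by feeding a sample line to the wazuh-logtest tool on the manager; the log line below is an illustrative reconstruction of Vaultwarden's failed-login message:

    sudo /var/ossec/bin/wazuh-logtest
    # Then paste a line such as:
    # [2024-01-15 10:23:45][vaultwarden::api::identity][ERROR] Username or password is incorrect. Try again. IP: 203.0.113.5. Username: admin@example.com.
    # The output should show the vaultwarden-failed-login decoder with srcip and user extracted.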

    Step 4: Creating Rules and Generating Alerts

    Once Wazuh can understand the logs, we can create rules that will generate alerts.

    On the manager server, edit the local rules file:

    sudo nano /var/ossec/etc/rules/local_rules.xml

    Add the following rule group:

    <group name="vaultwarden,">
      <rule id="100105" level="5">
        <decoded_as>vaultwarden</decoded_as>
        <description>Vaultwarden: Failed login attempt for user $(user) from IP address: $(srcip).</description>
        <group>authentication_failed,</group>
      </rule>

      <rule id="100106" level="10" frequency="6" timeframe="120">
        <if_matched_sid>100105</if_matched_sid>
        <description>Vaultwarden: Multiple failed login attempts (possible brute-force attack) from IP address: $(srcip).</description>
        <mitre>
          <id>T1110</id>
        </mitre>
        <group>authentication_failures,</group>
      </rule>
    </group>

    Note: Ensure that the rule id is unique and does not appear anywhere else in the local_rules.xml file. Change it if necessary.

    Step 5: Restart and Verification

    Finally, restart the Wazuh manager to load the new decoders and rules:

    sudo systemctl restart wazuh-manager

    To test the configuration, make several failed login attempts to your Vaultwarden application. After a short while, you should see level 5 alerts in the Wazuh dashboard for each attempt, and after exceeding the threshold (6 attempts in 120 seconds), a critical level 10 alert indicating a brute-force attack.

    Summary

    Integrating logs from applications running in Docker containers with the Wazuh system is a key element in building a comprehensive security monitoring system. The scheme presented above—exposing logs to the host via a volume and then analysing them with custom decoders and rules—is a universal approach that you can apply to virtually any application, not just Vaultwarden. This gives you full visibility of events across your entire infrastructure, regardless of the technology it runs on.

  • Full Control Over Applications in TrueNAS Scale via SSH

    TrueNAS Scale, with its powerful web interface, makes installing and managing applications simple and intuitive. However, any advanced user will sooner or later discover that the real power and flexibility lie in the command line. It’s worth noting that since version 24.10 (Electric Eel), TrueNAS Scale has undergone a significant transformation, moving away from the previous k3s system (a lightweight Kubernetes distribution) in favour of native container management using Docker. This change has significantly simplified the architecture and made working directly with containers more accessible.

    True freedom comes from a direct SSH connection, which bypasses the limitations of the in-browser terminal. It allows you to transform from a regular user into a knowledgeable administrator who can look ‘under the bonnet’ of any application, diagnose problems in real-time, and manage the system with a precision unavailable from the graphical user interface. This article is a comprehensive guide to managing applications in TrueNAS Scale using the terminal, based on the native Docker commands that have become the new foundation of the application system.

    Step 1: Identifying Running Applications

    Before we can start managing applications, we need to know what is actually running on our system. The graphical interface shows us the application names, but the terminal gives us insight into the actual containers.

    Listing Containers: docker ps

    The basic command is docker ps. It displays a list of all currently running containers.

    docker ps
    

    The output of this command is a table with key information:

    • CONTAINER ID: A unique identifier.
    • IMAGE: The name of the image from which the container was created.
    • STATUS: Information on how long the container has been running.
    • PORTS: Port mappings.
    • NAMES: The most important piece of information for us – the user-friendly name of the container, which we will use in subsequent commands (e.g., ix-jellyfin-jellyfin-1).

    If you also want to see stopped containers, add the -a flag: docker ps -a.

    Monitoring Resources in Real-Time: docker stats

    An even better way to get a quick overview is docker stats. This command displays a dynamic, live-updating table showing CPU, RAM, and network resource usage for each container. It’s the perfect tool to identify at a glance which application is putting a load on the system.

    docker stats
    

    Step 2: Getting Inside a Container – docker exec

    Once you’ve identified a container, you can get inside it to browse files, edit configurations, or perform advanced diagnostics.

    docker exec -it ix-jellyfin-jellyfin-1 /bin/bash
    

    Let’s break down this command:

    • docker exec: Execute a command in a running container.
    • -it: Key flags that signify an interactive session (-i) with a pseudo-terminal allocated (-t).
    • ix-jellyfin-jellyfin-1: The name of our container.
    • /bin/bash: The command we want to run inside – in this case, the Bash shell.

    After running the command, the terminal prompt will change, indicating that you are now “inside”. You can move freely around the container’s file system using commands like ls, cd, etc. To exit and return to TrueNAS, simply type exit or use the shortcut Ctrl + D.

    Why are Tools like top, ps, or nano Missing?

    While working inside a container, you might encounter command not found errors. This is intentional. Many modern Docker images (including the official Jellyfin one) are so-called minimalist or “distroless” images. They do not contain any additional tools, only the application itself and its libraries. This is a best practice that increases security and reduces the image size.

    In such cases, you must rely on the external tools provided by Docker itself.
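
    For example, instead of running ps inside the container, you can ask Docker to list its processes from the outside; and if /bin/bash is absent, the more minimal /bin/sh is often still available:

    # List the processes of a container without entering it
    docker top ix-jellyfin-jellyfin-1

    # Fall back to a minimal shell when bash is not included in the image
    docker exec -it ix-jellyfin-jellyfin-1 /bin/sh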

    Step 3: Diagnostics and Troubleshooting

    When an application isn’t working correctly, the terminal is your best friend.

    Viewing Logs: docker logs

    This is the most important diagnostic command. It displays everything the application has written to its logs.

    docker logs ix-nextcloud-nextcloud-1
    

    If you want to follow the logs in real-time, add the -f (--follow) flag:

    docker logs -f ix-nextcloud-nextcloud-1
    

    Detailed Inspection: docker inspect

    The docker inspect command returns a vast amount of detailed information about a container in JSON format – its IP address, attached volumes, environment variables, and much more.

    docker inspect ix-tailscale-tailscale-1
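
    The JSON output can also be filtered with a Go template when you only need a single value, such as the container's IP address:

    docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' ix-tailscale-tailscale-1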
    

    Step 4: Managing Files and the Application Lifecycle

    The terminal gives you full control over the files and the state of your applications.

    Copying Files: docker cp

    This is an extremely useful command for transferring files between the TrueNAS system and a container, without needing to go inside it.

    Copying from a container to TrueNAS (e.g., for a configuration backup):

    docker cp ix-nginx-proxy-manager-npm-1:/data/nginx /mnt/YourPool/backups/
    

    Copying from TrueNAS to a container:

    docker cp /mnt/YourPool/data/new-certificate.pem ix-nginx-proxy-manager-npm-1:/data/custom_ssl/
    

    Controlling the Application State

    Instead of clicking in the graphical interface, you can quickly manage your applications:

    To stop an application:

    docker stop ix-qbittorrent-qbittorrent-1
    

    To start a stopped application:

    docker start ix-qbittorrent-qbittorrent-1
    

    To restart an application (the most common operation):

    docker restart ix-qbittorrent-qbittorrent-1
    

    From User to Administrator

    Mastering a few basic Docker commands in the SSH terminal opens up a whole new dimension of managing TrueNAS Scale. You are no longer dependent on the limitations of the graphical interface and gain the tools to understand how your applications really work.

    The ability to quickly check logs, monitor resources in real-time, edit any configuration file, or make a swift backup – all this makes working with the system more efficient and troubleshooting faster. Connecting via SSH is not just a convenience; it’s a fundamental tool for any conscientious administrator who wants full control over their server.