In today’s digital world, our lives—both personal and professional—are recorded as data. From family photos to crucial company databases, losing this information can be catastrophic. Despite this, many individuals and businesses still treat backups as an afterthought. Here is a complete guide to building a powerful, automated, and secure backup system using free tools: UrBackup, TrueNAS Scale, and Tailscale.
Why You Need a Plan B: The Importance of Backups
Imagine one morning your laptop’s hard drive refuses to work. Or the server that runs your business falls victim to a ransomware attack, and all your files are encrypted. These aren’t scenarios from science-fiction films but everyday reality. Hardware failure, human error, malicious software, or even theft—the threats are numerous.
A backup is your insurance policy. It’s the only way to quickly and painlessly recover valuable data in the event of a disaster, minimising downtime and financial loss. Without it, rebuilding lost information is often impossible or astronomically expensive.
The Golden Rule: The 3-2-1 Strategy
In the world of data security, there is a simple but incredibly effective principle known as the 3-2-1 strategy. It states that you should have:
THREE copies of your data (the original and two backups).
On TWO different storage media (e.g., a disk in your computer and a disk in a NAS server).
ONE copy in a different location (off-site), in case of fire, flood, or theft at your main location.
Having three copies of your data drastically reduces the risk of losing them all simultaneously. If one drive fails, you have a second. If the entire office is destroyed, you have a copy in the cloud or at home.
A Common Misconception: Why RAID is NOT a Backup
Many NAS server users mistakenly believe that a RAID configuration absolves them of the responsibility to make backups. This is a dangerous error.
RAID (Redundant Array of Independent Disks) is a technology that provides redundancy and high availability, not data security. It protects against the physical failure of a hard drive. Depending on the configuration (e.g., RAID 1, RAID 5, RAID 6, or their RAID-Z equivalents in TrueNAS), the array can survive the failure of one or even two drives at the same time, allowing them to be replaced without data loss or system downtime.
However, RAID will not protect you from:
Human error: An accidental file deletion is instantly replicated across all drives in the array.
Ransomware attack: Encrypted files are immediately synchronised across all drives.
Power failure or RAID controller failure: This can lead to the corruption of the entire array.
Theft or natural disaster: Losing the entire device means losing all the data.
Remember: Redundancy protects against hardware failure; a backup protects against data loss.
Your Backup Hub: UrBackup on TrueNAS Scale
Creating a robust backup system doesn’t have to involve expensive subscriptions. An ideal solution is to combine the TrueNAS Scale operating system with the UrBackup application.
TrueNAS Scale: A powerful, free operating system for building NAS servers. It is based on Linux and offers advanced features such as the ZFS file system and support for containerised applications.
UrBackup: An open-source, client-server software for creating backups. It is extremely efficient and flexible, allowing for the backup of both individual files and entire disk images.
The TrueNAS Protective Shield: ZFS Snapshots
One of the most powerful features of TrueNAS, resulting from its use of the ZFS file system, is snapshots. A snapshot is an instantly created, read-only image of the entire file system at a specific moment. It works like freezing your data in time.
Why is this so important in the context of ransomware?
When ransomware attacks and encrypts files on a network share, these changes affect the “live” version of the data. However, previously taken snapshots remain untouched and unchanged because they are inherently read-only. In the event of an attack, you can restore the entire dataset to its pre-infection state in seconds, completely nullifying the effects of the attack.
You can configure TrueNAS to automatically create snapshots (e.g., every hour) and retain them for a specified period. This creates an additional, incredibly powerful layer of protection that perfectly complements the backups performed by UrBackup.
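Under the hood this is plain ZFS, so the same mechanics can be sketched from the command line; the pool/dataset name tank/backups below is purely illustrative:

# Create a read-only, point-in-time snapshot of the dataset
zfs snapshot tank/backups@manual-2025-01-01
# List the snapshots available for the dataset
zfs list -t snapshot -r tank/backups
# After an incident, roll the dataset back (-r discards any snapshots newer than the target)
zfs rollback -r tank/backups@manual-2025-01-01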
Advantages and Disadvantages of the Solution
Advantages:
Full control and privacy: Your data is stored on your own hardware.
No licence fees: The software is completely free.
Exceptional efficiency: Incremental backups save space and time.
Flexibility: Supports Windows, macOS, Linux, physical servers, and VPS.
Disk image backups: Ability to perform a “bare-metal restore” of an entire system.
Disadvantages:
Requires your own hardware: You need to have a NAS server.
Initial configuration: Requires some technical knowledge.
Full responsibility: The user is responsible for the security and operation of the server.
Step-by-Step: Installation and Configuration
1. Installing UrBackup on TrueNAS Scale
Log in to the TrueNAS web interface.
Navigate to the Apps section.
Search for the UrBackup application and click Install.
In the most important configuration step, you must specify the path where backups will be stored (e.g., /mnt/YourPool/backups).
After the installation is complete, start the application and go to its web interface.
2. Basic Server Configuration
In the UrBackup interface, go to Settings. The most important initial options are:
Path to backup storage: This should already be set during installation.
Backup intervals: Set how often incremental backups (e.g., every few hours) and full backups (e.g., every few weeks) should be performed.
Email settings: Configure email notifications to receive reports on the status of your backups.
3. Installing the Client on Computers
The process of adding a computer to the backup system consists of two stages: registering it on the server and installing the software on the client machine.
a) Adding a new client on the server:
In the UrBackup interface, go to the Status tab.
Click the blue + Add new client button.
Select the option Add new internet/active client. This is recommended as it works both on the local network and over the internet (e.g., via Tailscale).
Enter a unique name for the new client (e.g., “Annas-Laptop” or “Web-Server”) and click Add client.
b) Installing the software on the client machine:
After adding the client on the server, while still on the Status tab, you will see buttons to Download client for Windows and Download client for Linux.
Click the appropriate button and select the name of the client you just added from the drop-down list.
Download the prepared installation file (.exe or .sh). It is already fully configured to connect to your server.
Run the installer on the client computer and follow the instructions.
After a few minutes, the new client should connect to the server and appear on the list with an “Online” status, ready for its first backup.
Security Above All: Tailscale Enters the Scene
How can you securely back up computers located outside your local network? The ideal solution is Tailscale. It creates a secure, private network (a mesh VPN) between all your devices, regardless of their location.
Why use Tailscale with UrBackup?
Simplicity: Installation and configuration take minutes.
“Zero Trust” Security: Every connection is end-to-end encrypted.
Stable IP Addresses: Each device receives a static IP address from the 100.x.x.x range, which does not change even when the device moves to a different physical location (see the example below).
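On a Linux client, the whole process boils down to a few commands; the install script URL is Tailscale's official one, and the final command prints the stable address to point UrBackup at:

curl -fsSL https://tailscale.com/install.sh | sh
# Authenticate this device to your tailnet (opens a login URL)
sudo tailscale up
# Print the device's stable 100.x.x.x address
tailscale ip -4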
What to do if the IP address changes?
If for some reason you need to change the IP address of the UrBackup server (e.g., after switching from another VPN to Tailscale), the procedure is simple:
Update the address on the UrBackup server: In Settings -> Internet/Active Clients, enter the new, correct server address (e.g., urbackup://100.x.x.x).
Download the updated installer: On the Status tab, click Download client, select the offline client from the list, and download a new installation script for it.
Run the installer on the client: Running the new installer will automatically update the configuration on the client machine.
Managing and Monitoring Backups
The UrBackup interface provides all the necessary tools to supervise the system.
Status: The main dashboard showing a list of all clients, their online/offline status, and the status of their last backup.
Activities: A live view of currently running operations, such as file indexing or data transfer.
Backups: A list of all completed backups for each client, with the ability to browse and restore files.
Logs: A detailed record of all events, errors, and warnings—an invaluable tool for diagnosing problems.
Statistics: Charts and tables showing disk space usage by individual clients over time.
Backing Up Databases: Do It Right!
Never back up a database by simply copying its files from the disk while the service is running! This risks creating an inconsistent copy that will be useless. The correct method is to perform a “dump” using tools like mysqldump or mariadb-dump.
The Recommended Method: Each Database to a Separate File
The most flexible solution is to save each database to a separate, compressed file; the script below does this automatically. It should be run periodically (e.g., via cron) just before UrBackup's scheduled backup.
#!/bin/bash

# Configuration
BACKUP_DIR="/var/backups/mysql"
DB_USER="root"
DB_PASS="YourSuperSecretPassword"

# Check if user and password are provided
if [ -z "$DB_USER" ] || [ -z "$DB_PASS" ]; then
    echo "Error: DB_USER or DB_PASS variables are not set in the script."
    exit 1
fi

# Create backup directory if it doesn't exist
mkdir -p "$BACKUP_DIR"

# Get a list of all databases, excluding system databases
DATABASES=$(mysql -u "$DB_USER" -p"$DB_PASS" -e "SHOW DATABASES;" | tr -d " " | grep -vE "(Database|information_schema|performance_schema|mysql|sys)")

# Loop through each database
for db in $DATABASES; do
    echo "Dumping database: $db"
    # Perform the dump and compress on the fly
    mysqldump -u "$DB_USER" -p"$DB_PASS" --databases "$db" | gzip > "$BACKUP_DIR/$db-$(date +%Y-%m-%d).sql.gz"
    # Check mysqldump's exit status via PIPESTATUS ($? alone would report gzip's status)
    if [ "${PIPESTATUS[0]}" -eq 0 ]; then
        echo "Dump of database $db completed successfully."
    else
        echo "Error during dump of database $db."
    fi
done

# Optional: Remove old backups (older than 7 days)
find "$BACKUP_DIR" -type f -name "*.sql.gz" -mtime +7 -exec rm {} \;

echo "Backup process for all databases has finished."
Your Digital Fortress
Having a solid, automated backup strategy is not a luxury but an absolute necessity. Combining the power of TrueNAS Scale with its ZFS snapshots, the flexibility of UrBackup, and the security of Tailscale allows you to build a multi-layered, enterprise-class defence system at zero software cost.
It is an investment of time that provides priceless peace of mind. Remember, however, that no system is entirely maintenance-free. Regularly monitoring logs, checking email reports, and, most importantly, periodically performing test restores of your files are the final, crucial elements that turn a good backup system into an impregnable fortress protecting your most valuable asset—your data.
Faced with the rising costs of commercial solutions and escalating cyber threats, the free security platform Wazuh is gaining popularity as a powerful alternative. However, the decision to self-host it on one’s own servers represents a fundamental trade-off: organisations gain unprecedented control over their data and system, but in return, they must contend with significant technical complexity, hidden operational costs, and full responsibility for their own security. This report analyses for whom this path is a strategic advantage and for whom it may prove to be a costly trap.
Introduction – The Democratisation of Cybersecurity in an Era of Growing Threats
The contemporary digital landscape is characterised by a paradox: while threats are becoming increasingly advanced and widespread, the costs of professional defence tools remain an insurmountable barrier for many organisations. Industry reports paint a grim picture, pointing to a sharp rise in ransomware attacks, which are evolving from data encryption to outright blackmail, and the ever-wider use of artificial intelligence by cybercriminals to automate and scale attacks. In this challenging environment, solutions like Wazuh are emerging as a response to the growing demand for accessible, yet powerful, tools to protect IT infrastructure.
Wazuh is defined as a free, open-source security platform that unifies the capabilities of two key technologies: XDR (Extended Detection and Response) and SIEM (Security Information and Event Management). Its primary goal is to protect digital assets regardless of where they operate—from traditional on-premises servers in a local data centre, through virtual environments, to dynamic containers and distributed resources in the public cloud.
The rise in Wazuh’s popularity is directly linked to the business model of dominant players in the SIEM market, such as Splunk. Their pricing, often based on the volume of data processed, can generate astronomical costs for growing companies, making advanced security a luxury. Wazuh, being free, eliminates this licensing barrier, which makes it particularly attractive to small and medium-sized enterprises (SMEs), public institutions, non-profit organisations, and all entities with limited budgets but who cannot afford to compromise on security.
The emergence of such a powerful, free tool signals a fundamental shift in the cybersecurity market. One could speak of a democratisation of advanced defence mechanisms. Traditionally, SIEM/XDR-class platforms were the domain of large corporations with dedicated Security Operations Centres (SOCs) and substantial budgets. Meanwhile, cybercriminals do not limit their activities to the largest targets; SMEs are equally, and sometimes even more, vulnerable to attacks. Wazuh fills this critical gap, giving smaller organisations access to functionalities that were, until recently, beyond their financial reach. This represents a paradigm shift, where access to robust digital defence is no longer solely dependent on purchasing power but begins to depend on technical competence and the strategic decision to invest in a team.
To fully understand Wazuh’s unique position, it is worth comparing it with key players in the market.
Table 1: Positioning Wazuh Against the Competition
Cost Model
Wazuh: Open-source software, free. Paid options include technical support and a managed cloud service (SaaS).
Splunk: Commercial. Licensing is based mainly on the daily volume of data processed, which can lead to high costs at a large scale.
Elastic Security: “Open core” model. Basic functions are free, advanced ones (e.g., machine learning) are available in paid subscriptions. Prices are based on resources, not data volume.

Main Functionalities
Wazuh: Integrated XDR and SIEM. Strong emphasis on endpoint security (FIM, vulnerability detection, configuration assessment) and log analysis.
Splunk: A leader in log analysis and SIEM. An extremely powerful query language (SPL) and broad analytical capabilities. Considered the standard in large SOCs.
Elastic Security: An integrated security platform (SIEM + endpoint protection) built on the powerful Elasticsearch search engine. High flexibility and scalability.

Deployment Options
Wazuh: Self-hosting (On-Premises / Private Cloud) or the official Wazuh Cloud service (SaaS).
Splunk: Self-hosting (On-Premises) or the Splunk Cloud service (SaaS).
Elastic Security: Self-hosting (On-Premises) or the Elastic Cloud service (SaaS).

Target Audience
Wazuh: SMEs, organisations with technical expertise, entities with strict data sovereignty requirements, security enthusiasts.
Splunk: Large enterprises, mature Security Operations Centres (SOCs), organisations with large security budgets and a need for advanced analytics.
Elastic Security: Organisations seeking a flexible, scalable platform, often with an existing Elastic ecosystem. Development and DevOps teams.
This comparison clearly shows that Wazuh is not a simple clone of commercial solutions. Its strength lies in the specific niche it occupies: it offers enterprise-class functionalities without licensing costs, in exchange requiring greater technical involvement from the user and the assumption of full responsibility for implementation and maintenance.
Anatomy of a Defender – How Does the Wazuh Architecture Work?
Understanding the technical foundations of Wazuh is crucial for assessing the real complexity and potential challenges associated with its self-hosted deployment. At first glance, the architecture is elegant and logical; however, its scalability, one of its greatest advantages, simultaneously becomes its greatest operational challenge in a self-hosted model.
The Agent-Server Model: The Eyes and Ears of the System
At the core of the Wazuh architecture is a model based on an agent-server relationship. A lightweight, multi-platform Wazuh agent is installed on every monitored system—be it a Linux server, a Windows workstation, a Mac computer, or even cloud instances. The agent runs in the background, consuming minimal system resources, and its task is to continuously collect telemetry data. It gathers system and application logs, monitors the integrity of critical files, scans for vulnerabilities, inventories installed software and running processes, and detects intrusion attempts. All this data is then securely transmitted in near real-time to the central component—the Wazuh server.
Central Components: The Brain of the Operation
A Wazuh deployment, even in its simplest form, consists of three key central components that together form a complete analytical system.
Wazuh Server: This is the heart of the entire system. It receives data sent by all registered agents. Its main task is to process this stream of information. The server uses advanced decoders to normalise and structure logs from various sources and then passes them through a powerful analytical engine. This engine, based on a predefined and configurable set of rules, correlates events and identifies suspicious activities, security policy violations, or Indicators of Compromise (IoCs). When an event or series of events matches a rule with a sufficiently high priority, the server generates a security alert.
Wazuh Indexer: This is a specialised and highly scalable database, designed for the rapid indexing, storage, and searching of vast amounts of data. Technologically, the Wazuh Indexer is a fork of the OpenSearch project, which in turn was created from the Elasticsearch source code. All events collected by the server (both those that generated an alert and those that did not) and the alerts themselves are sent to the indexer. This allows security analysts to search through terabytes of historical data in seconds for traces of an attack, which is fundamental for threat hunting and forensic analysis processes.
Wazuh Dashboard: This is the user interface for the entire platform, implemented as a web application. Like the indexer, it is based on the OpenSearch Dashboards project (formerly known as Kibana). The dashboard allows for the visualisation of data in the form of charts, tables, and maps, browsing and analysing alerts, managing agent and server configurations, and generating compliance reports. It is here that analysts spend most of their time, monitoring the security posture of the entire organisation.
Security and Scalability of the Architecture
A key aspect to emphasise is the security of the platform itself. Communication between the agent and the server occurs by default over port 1514/TCP and is protected by AES encryption (with a 256-bit key). Each agent must be registered and authenticated before the server will accept data from it. This ensures the confidentiality and integrity of the transmitted logs, preventing them from being intercepted or modified in transit.
The Wazuh architecture was designed with scalability in mind. For small deployments, such as home labs or Proof of Concept tests, all three central components can be installed on a single, sufficiently powerful machine using a simplified installation script. However, in production environments monitoring hundreds or thousands of endpoints, such an approach quickly becomes inadequate. The official documentation and user experiences unequivocally indicate that to ensure performance and High Availability, it is necessary to implement a distributed architecture. This means separating the Wazuh server, indexer, and dashboard onto separate hosts. Furthermore, to handle the enormous volume of data and ensure resilience to failures, both the server and indexer components can be configured as multi-node clusters.
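For reference, such a single-host test deployment can be stood up with the official installation assistant; the version in the URL below (4.7) is illustrative and should be matched to the current release:

# Download and run Wazuh's all-in-one installation assistant
curl -sO https://packages.wazuh.com/4.7/wazuh-install.sh
sudo bash ./wazuh-install.sh -a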
It is at this point that the fundamental challenge of self-hosting becomes apparent. While an “all-in-one” installation is relatively simple, designing, implementing, and maintaining a distributed, multi-node Wazuh cluster is an extremely complex task. It requires deep knowledge of Linux systems administration, networking, and, above all, OpenSearch cluster management. The administrator must take care of aspects such as the correct replication and allocation of shards (index fragments), load balancing between nodes, configuring disaster recovery mechanisms, regularly creating backups, and planning updates for the entire technology stack. The decision to deploy Wazuh on a large scale in a self-hosted model is therefore not a one-time installation act. It is a commitment to the continuous management of a complex, distributed system, whose cost and complexity grow non-linearly with the scale of operations.
The Strategic Decision – Full Control on Your Own Server versus the Convenience of the Cloud
The choice of Wazuh deployment model—self-hosting on one’s own infrastructure (on-premises) versus using a ready-made cloud service (SaaS)—is one of the most important strategic decisions facing any organisation considering this platform. This is not merely a technical choice, but a fundamental decision concerning resource allocation, risk acceptance, and business priorities. An analysis of both approaches reveals a profound trade-off between absolute control and operational convenience.
The Case for Self-Hosting: The Fortress of Data Sovereignty
Organisations that decide to self-deploy and maintain Wazuh on their own servers are primarily driven by the desire for maximum control and independence. In this model, it is they, not an external provider, who define every aspect of the system’s operation—from hardware configuration, through data storage and retention policies, to the finest details of analytical rules. The open-source nature of Wazuh gives them an additional, powerful advantage: the ability to modify and adapt the platform to unique, often non-standard needs, which is impossible with closed, commercial solutions.
However, the main driving force for many companies, especially in Europe, is the concept of data sovereignty. This is not just a buzzword, but a hard legal and strategic requirement. Data sovereignty means that digital data is subject to the laws and jurisdiction of the country in which it is physically stored and processed. In the context of stringent regulations such as Europe’s GDPR, the American HIPAA for medical data, or the PCI DSS standard for the payment card industry, keeping sensitive logs and security incident data within one’s own, controlled data centre is often the simplest and most secure way to ensure compliance.
This choice also has a geopolitical dimension. Edward Snowden’s revelations about the PRISM programme run by the US NSA made the world aware that data stored in the clouds of American tech giants could be subject to access requests from US government agencies under laws such as the CLOUD Act. For many European companies, public institutions, or entities in the defence industry, the risk that their operational data and security logs could be made available to a foreign government is unacceptable. Self-hosting Wazuh in a local data centre, within the European Union, completely eliminates this risk, ensuring full digital sovereignty.
The Reality of Self-Hosting: Hidden Costs and Responsibility
The promise of free software is tempting, but the reality of a self-hosted deployment quickly puts the concept of “free” to the test. An analysis of the Total Cost of Ownership (TCO) reveals a series of hidden expenses that go far beyond the zero cost of the licence.
Capital Expenditure (CapEx): At the outset, the organisation must make significant investments in physical infrastructure. This includes purchasing powerful servers (with large amounts of RAM and fast processors), disk arrays capable of storing terabytes of logs, and networking components. Costs associated with providing appropriate server room conditions, such as uninterruptible power supplies (UPS), air conditioning, and physical access control systems, must also be considered.
Operational Expenditure (OpEx): This is where the largest, often underestimated, expenses lie. Firstly, the ongoing electricity and cooling bills. Secondly, and most importantly, personnel costs. Wazuh is not a “set it and forget it” system. As numerous users report, it requires constant attention, tuning, and maintenance. The default configuration can generate tens of thousands of alerts per day, leading to “alert fatigue” and rendering the system useless. To prevent this, a qualified security analyst or engineer is needed to constantly fine-tune rules and decoders, eliminate false positives, and develop the platform. For larger, distributed deployments, maintaining system stability can become a full-time job. One experienced user bluntly stated, “I’m losing my mind having to fix Wazuh every single day.” According to an analysis cited on GitHub, the total cost of a self-hosted solution can be up to 5.25 times higher than its cloud equivalent.
Moreover, in the self-hosted model, the entire responsibility for security rests on the organisation’s shoulders. This includes not only protection against external attacks but also regular backups, testing disaster recovery procedures, and bearing the full consequences (financial and reputational) in the event of a successful breach and data leak.
The Cloud Alternative: Convenience as a Service (SaaS)
For organisations that want to leverage the power of Wazuh but are not ready to take on the challenges of self-hosting, there is an official alternative: Wazuh Cloud. This is a Software as a Service (SaaS) model, where the provider (the company Wazuh) takes on the entire burden of managing the server infrastructure, and the client pays a monthly or annual subscription for a ready-to-use service.
The advantages of this approach are clear:
Lower Barrier to Entry and Predictable Costs: The subscription model eliminates the need for large initial hardware investments (CapEx) and converts them into a predictable, monthly operational cost (OpEx), which is often lower in the short and medium term.
Reduced Operational Burden: Issues such as server maintenance, patch installation, software updates, scaling resources in response to growing load, and ensuring high availability are entirely the provider’s responsibility. This frees up the internal IT team to focus on strategic tasks rather than “firefighting.”
Access to Expert Knowledge: Cloud clients benefit from the knowledge and experience of Wazuh engineers who manage hundreds of deployments daily. This guarantees optimal configuration and platform stability.
Of course, convenience comes at a price. The main disadvantage is a partial loss of control over the system and data. The organisation must trust the security policies and procedures of the provider. Most importantly, depending on the location of the Wazuh Cloud data centres, the same data sovereignty issues that the self-hosted model avoids may arise.
Ultimately, the choice between self-hosting and the cloud is not an assessment of which option is “better” in an absolute sense. It is a strategic allocation of risk and resources. The self-hosted model is a conscious acceptance of operational risk (failures, configuration errors, staff shortages) in exchange for minimising the risk associated with data sovereignty and third-party control. In contrast, the cloud model is a transfer of operational risk to the provider in exchange for accepting the risk associated with entrusting data and potential legal-geopolitical implications. For a financial sector company in the EU, the risk of a GDPR breach may be much higher than the risk of a server failure, which strongly inclines them towards self-hosting. For a dynamic tech start-up without regulated data, the cost of hiring a dedicated specialist and the operational risk may be unacceptable, making the cloud the obvious choice.
Table 2: Decision Analysis: Self-Hosting vs. Wazuh Cloud
Total Cost of Ownership (TCO)
Self-Hosting (On-Premises): High initial cost (hardware, CapEx). Significant, often unpredictable operational costs (personnel, energy, OpEx). Potentially lower in the long term at a large scale and with constant utilisation.
Wazuh Cloud (SaaS): Low initial cost (no CapEx). Predictable, recurring subscription fees (OpEx). Usually more cost-effective in the short and medium term. Potentially higher in the long run.

Control and Customisation
Self-Hosting (On-Premises): Absolute control over hardware, software, data, and configuration. Ability to modify source code and deeply integrate with existing systems.
Wazuh Cloud (SaaS): Limited control. Configuration within the options provided by the supplier. No ability to modify source code or access the underlying infrastructure.

Security and Responsibility
Self-Hosting (On-Premises): Full responsibility for physical and digital security, backups, disaster recovery, and regulatory compliance rests with the organisation.
Wazuh Cloud (SaaS): Shared responsibility. The provider is responsible for the security of the cloud infrastructure. The organisation is responsible for configuring security policies and managing access.

Deployment and Maintenance
Self-Hosting (On-Premises): Complex and time-consuming deployment, especially in a distributed architecture. Requires continuous maintenance, monitoring, updating, and tuning by qualified personnel.
Wazuh Cloud (SaaS): Quick and simple deployment (service activation). Maintenance, updates, and ensuring availability are entirely the provider’s responsibility, minimising the burden on the internal IT team.

Scalability
Self-Hosting (On-Premises): Scalability is possible but requires careful planning, purchase of additional hardware, and manual reconfiguration of the cluster. It can be a slow and costly process.
Wazuh Cloud (SaaS): High flexibility and scalability. Resources (computing power, disk space) can be dynamically increased or decreased depending on needs, often with a few clicks.

Data Sovereignty
Self-Hosting (On-Premises): Full data sovereignty. The organisation has 100% control over the physical location of its data, which facilitates compliance with local legal and regulatory requirements (e.g., GDPR).
Wazuh Cloud (SaaS): Dependent on the location of the provider’s data centres. May pose challenges related to GDPR compliance if data is stored outside the EU. Potential risk of access on demand by foreign governments.
Voices from the Battlefield – A Balanced Analysis of Expert and User Opinions
A theoretical analysis of a platform’s capabilities and architecture is one thing, but its true value is verified in the daily work of security analysts and system administrators. The voices of users from around the world, from small businesses to large enterprises, paint a nuanced picture of Wazuh—a tool that is incredibly powerful, but also demanding. An analysis of opinions gathered from industry portals such as Gartner, G2, Reddit, and specialist forums allows us to identify both its greatest advantages and its most serious challenges.
The Praise – What Works Brilliantly?
Several key strengths that attract organisations to Wazuh are repeatedly mentioned in reviews and case studies.
Cost as a Game-Changer: For many users, the fundamental advantage is the lack of licensing fees. One information security manager stated succinctly: “It costs me nothing.” This financial accessibility is seen as crucial, especially for smaller entities. Wazuh is often described as a “great, out-of-the-box SOC solution for small to medium businesses” that could not otherwise afford this type of technology.
Powerful, Built-in Functionalities: Users regularly praise specific modules that deliver immediate value. File Integrity Monitoring (FIM) and Vulnerability Detection are at the forefront. One reviewer described them as the “biggest advantages” of the platform. FIM is key to detecting unauthorised changes to critical system files, which can indicate a successful attack, while the vulnerability module automatically scans systems for known, unpatched software. The platform’s ability to support compliance with regulations such as HIPAA or PCI DSS is also a frequently highlighted asset, allowing organisations to verify their security posture with a few clicks.
Flexibility and Customisation: The open nature of Wazuh is seen as a huge advantage by technical teams. The ability to customise rules, write their own decoders, and integrate with other tools gives a sense of complete control. “I personally love the flexibility of Wazuh, as a system administrator I can think of any use case and I know I’ll be able to leverage Wazuh to pull the logs and create the alerts I need,” wrote Joanne Scott, a lead administrator at one of the companies using the platform.
The Criticism – Where Do the Challenges Lie?
Equally numerous and consistent are the voices pointing to significant difficulties and challenges that must be considered before deciding on deployment.
Complexity and a Steep Learning Curve: This is the most frequently raised issue. Even experienced security specialists admit that the platform is not intuitive. One expert described it as having a “steep learning curve for newcomers.” Another user noted that “the initial installation and configuration can be a bit complicated, especially for users without much experience in SIEM systems.” This confirms that Wazuh requires dedicated time for learning and experimentation.
The Need for Tuning and “Alert Fatigue”: This is probably the biggest operational challenge. Users agree that the default, “out-of-the-box” configuration of Wazuh generates a huge amount of noise—low-priority alerts that flood analysts and make it impossible to detect real threats. One team reported receiving “25,000 to 50,000 low-level alerts per day” from just two monitored endpoints. Without an intensive and, importantly, continuous process of tuning rules, disabling irrelevant alerts, and creating custom ones tailored to the specific environment, the system is practically useless. One of the more blunt comments on a Reddit forum stated that “out of the box it’s kind of shitty.”
Performance and Stability at Scale: While Wazuh performs well in small and medium-sized environments, deployments involving hundreds or thousands of agents can encounter serious stability problems. In one dramatic post on a Google Groups forum, an administrator managing 175 agents described daily problems with agents disconnecting and server services hanging, forcing him to restart the entire infrastructure daily. This shows that scaling Wazuh requires not only more powerful hardware but also deep knowledge of optimising its components.
Documentation and Support for Different Systems: Although Wazuh has extensive online documentation, some users find it insufficient for more complex problems. There are also complaints that the predefined decoders (pieces of code responsible for parsing logs) work great for Windows systems but are often outdated or incomplete for other platforms, including popular network devices. This forces administrators to search for unofficial, community-created solutions on platforms like GitHub, which introduces an additional element of risk and uncertainty.
An analysis of these starkly different opinions leads to a key conclusion. Wazuh should not be seen as a ready-to-use product that can simply be “switched on.” It is rather a powerful security framework—a set of advanced tools and capabilities from which a qualified team must build an effective defence system. Its final value depends 90% on the quality of the implementation, configuration, and competence of the team, and only 10% on the software itself. The users who succeed are those who talk about “configuring,” “customising,” and “integrating.” Those who encounter problems are often those who expected a ready-made solution and were overwhelmed by the default configuration. The story of one expert who, during a simulated attack on a default Wazuh installation, “didn’t catch a single thing” is the best proof of this. An investment in a self-hosted Wazuh is really an investment in the people who will manage it.
Consequences of the Choice – Risk and Reward in the Open-Source Ecosystem
The decision to base critical security infrastructure on a self-hosted, open-source solution like Wazuh goes beyond a simple technical assessment of the tool itself. It is a strategic immersion into the broader ecosystem of Open Source Software (OSS), which brings with it both enormous benefits and serious, often underestimated, risks.
The Ubiquity and Hidden Risks of Open-Source Software
Open-source software has become the foundation of the modern digital economy. According to the 2025 “Open Source Security and Risk Analysis” (OSSRA) report, as many as 97% of commercial applications contain OSS components. They form the backbone of almost every system, from operating systems to libraries used in web applications. However, this ubiquity has its dark side. The same report reveals alarming statistics:
86% of the applications studied contained at least one vulnerability in the open-source components they used.
91% of applications contained components that were outdated and had newer, more secure versions available.
81% of applications contained high or critical risk vulnerabilities, many of which already had publicly available patches.
One of the biggest challenges is the problem of transitive dependencies. This means that a library a developer consciously adds to a project itself depends on dozens of other libraries, which in turn depend on others. This creates a complex and difficult-to-trace chain of dependencies, meaning organisations often have no idea exactly which components are running in their systems and what risks they carry. This is the heart of the software supply chain security problem.
By choosing to self-host Wazuh, an organisation takes on full responsibility for managing not only the platform itself but its entire technology stack. This includes the operating system it runs on, the web server, and, above all, key components like the Wazuh Indexer (OpenSearch) and its numerous dependencies. This means it is necessary to track security bulletins for all these elements and react immediately to newly discovered vulnerabilities.
The Advantages of the Open-Source Model: Transparency and the Power of Community
In opposition to these risks, however, stand fundamental advantages that make the open-source model so attractive, especially in the field of security.
Transparency and Trust: In the case of commercial, closed-source solutions (“black boxes”), the user must fully trust the manufacturer’s declarations regarding security. In the open-source model, the source code is publicly available. This provides the opportunity to conduct an independent security audit and verify that the software does not contain hidden backdoors or serious flaws. This transparency builds fundamental trust, which is invaluable in the context of systems designed to protect a company’s most valuable assets.
The Power of Community: Wazuh boasts one of the largest and most active communities in the open-source security world. Users have numerous support channels at their disposal, such as the official Slack, GitHub forums, a dedicated subreddit, and Google Groups. It is there, in the heat of real-world problems, that custom decoders, innovative rules, and solutions to problems not found in the official documentation are created. This collective wisdom is an invaluable resource, especially for teams facing unusual challenges.
Avoiding Vendor Lock-in: By choosing a commercial solution, an organisation becomes dependent on a single vendor—their product development strategy, pricing policy, and software lifecycle. If the vendor decides to raise prices, end support for a product, or go bankrupt, the client is left with a serious problem. Open source provides freedom. An organisation can use the software indefinitely, modify and develop it, and even use the services of another company specialising in support for that solution if they are not satisfied with the official support.
This duality of the open-source nature leads to a deeper conclusion. The decision to self-host Wazuh fundamentally changes the organisation’s role in the security ecosystem. It ceases to be merely a passive consumer of a ready-made security product and becomes an active manager of software supply chain risk. When a company buys a commercial SIEM, it pays the vendor to take responsibility for managing the risk associated with the components from which its product is built. It is the vendor who must patch vulnerabilities in libraries, update dependencies, and guarantee the security of the entire stack. By choosing the free, self-hosted Wazuh, the organisation consciously (or not) takes on all this responsibility itself. To do this in a mature way, it is no longer enough to just know how to configure rules in Wazuh. It becomes necessary to implement advanced software management practices, such as Software Composition Analysis (SCA) to identify all components and their vulnerabilities, and to maintain an up-to-date “Software Bill of Materials” (SBOM) for the entire infrastructure. This significantly raises the bar for competency requirements and shows that the decision to self-host has deep, structural consequences for the entire IT and security department.
The Verdict – Who is Self-Hosted Wazuh For?
The analysis of the Wazuh platform in a self-hosted model leads to an unequivocal conclusion: it is a solution with enormous potential, but burdened with equally great responsibility. The key trade-off that runs through every aspect of this technology can be summarised as follows: self-hosted Wazuh offers unparalleled control, absolute data sovereignty, and zero licensing costs, but in return requires significant, often underestimated, investments in hardware and, above all, in highly qualified personnel capable of managing a complex and demanding system that requires constant attention.
This is not a solution for everyone. Attempting to implement it without the appropriate resources and awareness of its nature is a straight path to frustration, a false sense of security, and ultimately, project failure.
Profile of the Ideal Candidate
Self-hosted Wazuh is the optimal, and often the only right, choice for organisations that meet most of the following criteria:
They have a mature and competent technical team: They have an internal security and IT team (or the budget to hire/train one) that is not afraid of working with the command line, writing scripts, analysing logs at a low level, and managing a complex Linux infrastructure.
They have strict data sovereignty requirements: They operate in highly regulated industries (financial, medical, insurance), in public administration, or in the defence sector, where laws (e.g., GDPR) or internal policies categorically require that sensitive data never leaves physically controlled infrastructure.
They operate at a large scale where licensing costs become a barrier: They are large enough that the licensing costs of commercial SIEM systems, which increase with data volume, become prohibitive. In such a case, investing in a dedicated team to manage a free solution becomes economically justified over a period of several years.
They understand they are implementing a framework, not a finished product: They accept the fact that Wazuh is a set of powerful building blocks, not a ready-made house. They are prepared for a long-term, iterative process of tuning, customising, and improving the system to fully match the specifics of their environment and risk profile.
They have a need for deep customisation: Their security requirements are so unique that standard, commercial solutions cannot meet them, and the ability to modify the source code and create custom integrations is a key value.
Questions for Self-Assessment
For all other organisations, especially smaller ones with limited human resources and without strict sovereignty requirements, a much safer and more cost-effective solution will likely be to use the Wazuh Cloud service or another commercial SIEM/XDR solution.
Before making the final, momentous decision, every technical leader and business manager should ask themselves and their team a series of honest questions:
Have we realistically assessed the Total Cost of Ownership (TCO)? Does our budget account not only for servers but also for the full-time equivalents of specialists who will manage this platform 24/7, including their salaries, training, and the time needed to learn?
Do we have the necessary expertise in our team? Do we have people capable of advanced rule tuning, managing a distributed cluster, diagnosing performance issues, and responding to failures in the middle of the night? If not, are we prepared to invest in their recruitment and development?
What is our biggest risk? Are we more concerned about operational risk (system failure, human error, inadequate monitoring) or regulatory and geopolitical risk (breach of data sovereignty, third-party access)? How does the answer to this question influence our decision?
Are we ready for full responsibility? Do we understand that by choosing self-hosting, we are taking responsibility not only for the configuration of Wazuh but for the security of the entire software supply chain on which it is based, including the regular patching of all its components?
Only an honest answer to these questions will allow you to avoid a costly mistake and make a choice that will genuinely strengthen your organisation’s cybersecurity, rather than creating an illusion of it.
Integrating Logs from Docker Applications with Wazuh SIEM
In modern IT environments, containerisation using Docker has become the standard. It enables the rapid deployment and scaling of applications but also introduces new challenges in security monitoring. By default, logs generated by applications running in containers are isolated from the host system, which complicates their analysis by SIEM systems like Wazuh.
In this post, we will show you how to break down this barrier. We will guide you step-by-step through the configuration process that will allow the Wazuh agent to read, analyse, and generate alerts from the logs of any application running in a Docker container. We will use the password manager Vaultwarden as a practical example.
The Challenge: Why is Accessing Docker Logs Difficult?
Docker containers have their own isolated file systems. Applications inside them most often send their logs to “standard output” (stdout/stderr), which is captured by Docker’s logging mechanism. The Wazuh agent, running on the host system, does not have default access to this stream or to the container’s internal files.
To enable monitoring, we must make the application logs visible to the Wazuh agent. The best and cleanest way to do this is to configure the container to write its logs to a file and then share that file externally using a Docker volume.
Step 1: Exposing Application Logs Outside the Container
Our goal is to make the application’s log file appear in the host server’s file system. We will achieve this by modifying the docker-compose.yml file.
Configure the application to log to a file: Many Docker images allow you to define the path to a log file using an environment variable. In the case of Vaultwarden, this is LOG_FILE.
Map a volume: Create a mapping between a directory on the host server and a directory inside the container where the logs are saved.
Here is an example of what a fragment of the docker-compose.yml file for Vaultwarden with the correct logging configuration might look like:
version: "3"

services:
  vaultwarden:
    image: vaultwarden/server:latest
    container_name: vaultwarden
    restart: unless-stopped
    volumes:
      # Volume for application data (database, attachments, etc.)
      - ./data:/data
    ports:
      - "8080:80"
    environment:
      # This variable instructs the application to write logs to a file inside the container
      - LOG_FILE=/data/vaultwarden.log
What happened here?
LOG_FILE=/data/vaultwarden.log: We are telling the application to create a vaultwarden.log file in the /data directory inside the container.
./data:/data: We are mapping the /data directory from the container to a data subdirectory in the location where the docker-compose.yml file is located (on the host).
After saving the changes and restarting the container (docker-compose down && docker-compose up -d), the log file will be available on the server at a path like /opt/vaultwarden/data/vaultwarden.log.
Step 2: Configuring the Wazuh Agent to Monitor the File
Now that the logs are accessible on the host, we need to instruct the Wazuh agent to read them.
Open the agent’s configuration file:
sudo nano /var/ossec/etc/ossec.conf
Add the following block within the <ossec_config> section:
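<localfile>
  <log_format>syslog</log_format>
  <location>/opt/vaultwarden/data/vaultwarden.log</location>
</localfile>

Adjust the location path to wherever the volume from Step 1 exposes the log on your host; syslog is the appropriate log_format for plain, line-based text files. Restart the agent (sudo systemctl restart wazuh-agent) for the change to take effect.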
From now on, every new line in the vaultwarden.log file will be sent to the Wazuh manager.
Step 3: Translating Logs into the Language of Wazuh (Decoders)
The Wazuh manager is now receiving raw log lines, but it doesn’t know how to interpret them. We need to create decoders that will “teach” it to extract key information, such as the attacker’s IP address or the username.
On the Wazuh manager server, edit the local decoders file:
sudo nano /var/ossec/etc/decoders/local_decoder.xml
Add the following decoders:
<!-- Decoder for Vaultwarden logs -->
<decoder name="vaultwarden">
  <prematch>vaultwarden::api::identity</prematch>
</decoder>

<!-- Decoder for failed login attempts in Vaultwarden -->
<decoder name="vaultwarden-failed-login">
  <parent>vaultwarden</parent>
  <prematch>Username or password is incorrect. Try again. IP: </prematch>
  <regex>IP: (\S+)\. Username: (\S+)\.$</regex>
  <order>srcip, user</order>
</decoder>
Step 4: Creating Rules and Generating Alerts
Once Wazuh can understand the logs, we can create rules that will generate alerts.
On the manager server, edit the local rules file:
sudo nano /var/ossec/etc/rules/local_rules.xml
Add the following rule group:
<group name="vaultwarden,">
  <rule id="100105" level="5">
    <decoded_as>vaultwarden</decoded_as>
    <description>Vaultwarden: Failed login attempt for user $(user) from IP address: $(srcip).</description>
    <group>authentication_failed,</group>
  </rule>

  <!-- Correlation rule: 6 failed attempts from the same source within 120 seconds escalate to level 10 -->
  <rule id="100106" level="10" frequency="6" timeframe="120">
    <if_matched_sid>100105</if_matched_sid>
    <same_source_ip />
    <description>Vaultwarden: Multiple failed login attempts (possible brute-force attack) from IP address: $(srcip).</description>
    <group>authentication_failures,</group>
  </rule>
</group>

Note: Ensure that the rule ids (100105, 100106) are unique and do not appear anywhere else in the local_rules.xml file. Change them if necessary.
Step 5: Restart and Verification
Finally, restart the Wazuh manager to load the new decoders and rules:
sudo systemctl restart wazuh-manager
To test the configuration, make several failed login attempts to your Vaultwarden application. After a short while, you should see level 5 alerts in the Wazuh dashboard for each attempt, and after exceeding the threshold (6 attempts in 120 seconds), a critical level 10 alert indicating a brute-force attack.
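You can also watch the alerts arrive on the manager itself from the command line:

# Follow the manager's alert log, filtering for the new Vaultwarden rules
sudo tail -f /var/ossec/logs/alerts/alerts.log | grep -i vaultwarden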
Summary
Integrating logs from applications running in Docker containers with the Wazuh system is a key element in building a comprehensive security monitoring system. The scheme presented above—exposing logs to the host via a volume and then analysing them with custom decoders and rules—is a universal approach that you can apply to virtually any application, not just Vaultwarden. This gives you full visibility of events across your entire infrastructure, regardless of the technology it runs on.
In early 2023, the number of attacks on FreePBX Asterisk systems increased. The attack vector exploited by the hackers is the ARI (Asterisk REST Interface). To gain access to ARI, an attacker must know the ARI username and password, as well as the login details for the FreePBX administrative interface, which is why it is so important to use strong, hard-to-crack passwords. The new version of FreePBX displays the error: Change ARI Username Password.
The ARI user and its password are created during the FreePBX installation. The username consists of about 15 random characters, and the password of about 30. The FreePBX developers discovered that, for reasons unknown, the generated username and password are not unique on some systems.
This does not look like an error in Asterisk or FreePBX itself, so their versions are irrelevant here. If there has been a leak of ARI data, the hacker can gain access to our FreePBX system regardless of its version.
How to get rid of the “Change ARI Username Password” error
To patch the security hole, we must create a new ARI user and a new password for it. Log in to your FreePBX system's shell and set both values with fwconsole. The setting keys shown below (FPBX_ARI_USER, FPBX_ARI_PASSWORD) are the commonly referenced ones but may differ between FreePBX versions, so verify them under Advanced Settings first:
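# Setting key names may differ by FreePBX version; verify before running
fwconsole setting FPBX_ARI_USER NEW_ARI_USERNAME
fwconsole setting FPBX_ARI_PASSWORD RANDOM_PASSWORD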
In place of NEW_ARI_USERNAME, enter about 15 random alphanumeric characters, and in place of RANDOM_PASSWORD, about 30. Next, we need to reload the settings with the command:
fwconsole reload
Finally, all you have to do is restart FreePBX with the command:
fwconsole restart
After the restart, the “Change ARI Username Password” error message should disappear.
Summary
FreePBX is an extremely secure system. However, even the most secure system will be vulnerable to hacking if easy-to-crack passwords are used and the configuration is incorrect.
Over time, phishing attacks have become increasingly sophisticated. Thanks to public information campaigns on television, more people have become aware of the threats and know what to look out for after receiving messages from uncertain sources. However, criminals are also constantly adapting their attack methods to changing circumstances, and many people still fall victim to these types of attacks, losing confidential or private data, and even their savings. The matter is even more serious when we talk about phishing attacks on companies that hold confidential data of their customers in their databases. How is it possible that companies still fall victim to such attacks?
Lack of Employee Security Training
Training employees costs money, so in the name of savings many business owners forgo cyber-security training altogether. Untrained employees do not know how to avoid ever-newer security threats. Without proper training, they may not realise how serious a threat phishing poses to the company, how to detect it, or how to protect themselves against it. A lack of employee training can end in disaster for the enterprise and cause many problems, including legal ones. Training is essential simply to recognise a phishing message in the first place.
As long as business owners ignore the problem of a lack of employee training, companies will continue to fall victim to phishing attacks. The cost incurred for training in cyber-crime can pay for itself in the future, and ignoring this type of threat can come back to haunt them.
The problem affects small companies to an even greater extent than large enterprises, as large companies can usually allocate more funds for training, because the cost per employee for such training will be lower than in small firms with a few or a dozen employees. Furthermore, the IT infrastructure of large companies is generally much better protected against cyber-attacks than in small businesses.
Money
Cyber-criminals make money from phishing attacks. Often, large sums of money. Obtaining confidential data, for example, login details for banking websites, from unsuspecting employees is much easier than hacking directly into the banks’ websites. That is why, despite the passage of time, phishing attacks are still going strong. New, ever more sophisticated methods of phishing attacks are constantly emerging.
Cyber-criminals are often able to invest considerable funds in purchasing software and hardware to carry out these types of attacks. This, combined with the unawareness of untrained company employees, means that data-phishing sites are now detected by the hundreds of thousands. According to the Anti-Phishing Working Group, over a million phishing attacks were detected in the first quarter of 2022 alone; in March 2022, over 384,000 data-phishing sites were discovered. This is a really serious problem for private individuals and an even bigger problem for companies.
Careless Employees
Sometimes it is not the company itself that is responsible for falling victim to phishing, but the carelessness and negligence of individual employees, even despite appropriate training being conducted. Clicking on links and entering confidential data on websites without thinking can result in the leakage of login data. Any employee with access to websites at work can fall victim to phishing.
Easy Access to Software for Criminals
In the past, only a handful of hackers worldwide had the skills to write software for effective phishing attacks. Today, in the age of the ubiquitous internet, criminals with enough cash can easily acquire professional tools and software for carrying out phishing campaigns. That is why the number of these attacks grows year on year.
Companies Are Looking for Savings
Recent years, 2020-2022 (the coronavirus pandemic, high energy prices), have not been easy for entrepreneurs. It is no wonder, then, that companies looking for savings tighten their belts and give up on employee training. However, skimping on cyber-security is a saving that can prove very costly later.
Summary
The problem of phishing attacks, especially those targeting companies, grows year on year, and the methods used become ever more sophisticated. Protecting company data and the confidential data of clients is therefore essential, which makes professional security training for office employees one of the best investments a business can make; many companies, such as Network Masters or Securitum, offer it both online and in person. It is equally important to properly secure the company’s IT infrastructure itself: a good-quality firewall can automatically detect and block many types of attack on company systems, including phishing.
When creating a website, you should pay special attention to its security. A breach of your site’s security can cause significant problems: if your site allows user logins, or worse, runs an online shop (e.g., WooCommerce) with a customer database, a breach could mean a personal data leak and serious consequences, up to and including legal action. In this article, you will learn how to remove pop-up ad malware from a WordPress site.
What’s more, the sheer variety of WordPress plugins and themes means that security breaches of this CMS are not uncommon. The scale of the problem is striking: by some estimates, WordPress sites face up to 90,000 attacks per minute worldwide.
It’s not that WordPress isn’t secure enough – it’s simply the most popular CMS in the world, and therefore the most frequently attacked. Its popularity is so great that, according to the company Sucuri, attacks on the WordPress platform account for 90% of all attacks on CMS systems.
Symptoms of a Malware Infection
To effectively combat malware that has infected your website, you first need to know that your site is infected at all. You may not even realise that your site has been compromised and is quietly sending spam about pornography or potency pills to thousands of people. Below, we will help you recognise the symptoms of a malware infection.
One of the most obvious signs of a malware infection is visible changes to your site that you didn’t make, or strange meta descriptions in the SERPs (Search Engine Results Pages). Other common symptoms include pop-ups, adverts, and users being redirected to a completely different, spammy website.
As a WordPress site administrator, you must bear in mind that you cannot see some changes from the back-end (Dashboard). Only users visiting your site will see the annoying pop-ups.
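If you want to check what visitors actually see, the minimal sketch below fetches your site’s public HTML the way an anonymous visitor would and searches it for patterns that ad-injecting malware commonly leaves behind. The URL is a placeholder, and the pattern list is illustrative, not exhaustive.

```python
import re
import urllib.request

SITE_URL = "https://example.com"  # placeholder, replace with your site's address

# Common (but by no means exhaustive) traces of injected ad malware.
SUSPICIOUS_PATTERNS = [
    r"eval\s*\(",                       # obfuscated JavaScript is often run through eval()
    r"atob\s*\(",                       # base64 decoding inside injected scripts
    r"document\.write\s*\(",            # a classic ad-injection technique
    r"<iframe[^>]+display\s*:\s*none",  # hidden iframes
]

request = urllib.request.Request(SITE_URL, headers={"User-Agent": "Mozilla/5.0"})
html = urllib.request.urlopen(request, timeout=15).read().decode("utf-8", errors="replace")

for pattern in SUSPICIOUS_PATTERNS:
    for match in re.finditer(pattern, html, re.IGNORECASE):
        # Print a little context around each hit for manual review.
        start = max(match.start() - 40, 0)
        print(f"Possible injection ({pattern}): ...{html[start:match.end() + 40]}...")
```

Treat every hit as a lead for manual review rather than proof of infection; legitimate scripts use some of these constructs too.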
Another very obvious symptom of infection will be the blacklisting of your site by Google. Google may then warn users trying to access your site with a large red alert, and in extreme cases, even remove your site from the Google SERPs.
A website infected with malware may also be taken down by your hosting provider if it is on a shared server, to protect the owners of other websites hosted on the same server.
If your site does not appear in Google search results, or does not open when you type its address into the browser, it is quite possible that it has been infected and removed from the Google index or from your provider’s server.
How to Clean a Website of Malware
Use a Malware Removal Plugin
If you can log in to your WordPress dashboard, the quickest way to get rid of malware is to use plugins designed for this purpose.
Plugins such as Wordfence, Sucuri Security, or iThemes Security are among the best tools for protecting a WordPress-based site.
If you have no experience in administering web servers (Apache, LiteSpeed, Nginx) or Linux, this is the fastest and safest method to combat malware.
Manual Malware Removal
Manually removing malware is time-consuming and, if files are edited or deleted incorrectly, can leave your website completely unusable. However, if you cannot access the WordPress dashboard to install the necessary plugins, manually finding and removing the malware is often the only option. If you have no experience in this, it is safest to consult a company that does it professionally, to avoid causing even more damage.
Create a Site Backup
Before you do anything to remove the malware from your site, first create a backup of the website so that you can restore it in case of complications. Creating backups should become a habit; it will save you a lot of time and stress whenever website problems strike.
To manually back up your site’s files, log in to your hosting account over FTP or SFTP, or via a panel such as CyberPanel. Then compress the contents of the wp-content folder and download the archive to your computer’s hard drive.
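If you prefer to script this step, here is a minimal sketch in Python, assuming you can run it on the server (for example over SSH). The WordPress root path is an example; adjust it to your installation.

```python
import tarfile
from datetime import datetime
from pathlib import Path

SITE_ROOT = Path("/var/www/html")  # example WordPress root, adjust to your server

archive = Path.home() / f"wp-content-backup-{datetime.now():%Y%m%d-%H%M%S}.tar.gz"

with tarfile.open(archive, "w:gz") as tar:
    # Store the folder under its own name so the archive unpacks cleanly.
    tar.add(SITE_ROOT / "wp-content", arcname="wp-content")

print(f"Backup written to {archive}")
```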
If your hosting has a snapshot backup option, this is also a good choice, or you can use one of the many WordPress plugins for this purpose.
You will also need to back up the .htaccess file. It is hidden by default in some file managers, so make sure the option to show hidden (system) files is enabled.
It is equally important to back up the database, as this is where most of the information displayed on our site is stored. Bear in mind that some malware can hide in the database itself.
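For a MySQL or MariaDB database, a dump via the standard mysqldump tool is the usual approach. The sketch below wraps it in Python; the database name and user are placeholders.

```python
import subprocess
from datetime import datetime

DB_NAME = "wordpress"  # placeholder database name
DB_USER = "wp_user"    # placeholder database user

dump_file = f"wordpress-db-{datetime.now():%Y%m%d-%H%M%S}.sql"

with open(dump_file, "w") as out:
    # A bare -p makes mysqldump prompt for the password interactively,
    # which keeps it out of the process list and your shell history.
    subprocess.run(
        ["mysqldump", f"--user={DB_USER}", "-p", DB_NAME],
        stdout=out,
        check=True,
    )

print(f"Database dump written to {dump_file}")
```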
Reinstall WordPress
Before you start looking for malware in your files, install a clean copy of WordPress. By downloading a fresh version from the official website, you can be sure that the core files are free of malware.
Check the Files
The most difficult part of the task is ahead of you – checking all the WordPress files. This is a tedious and lengthy process, as you will have to check each file and directory one by one to identify infected files.
First, compare all the WordPress files with the files from your backup, then move on to checking the theme and plugin files. If you have no experience in writing and editing HTML, CSS, JavaScript, or PHP files, it may be very difficult to spot suspicious lines of code.
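One practical variation, sketched below, is to hash every core file and compare it against a freshly downloaded, unpacked copy of the same WordPress version, which cannot itself be infected. The paths are examples, and wp-content is skipped because it always differs from a clean install.

```python
import hashlib
from pathlib import Path

CLEAN_ROOT = Path("/tmp/wordpress")  # unpacked clean download (example path)
LIVE_ROOT = Path("/var/www/html")    # live site root (example path)

def sha256(path: Path) -> str:
    return hashlib.sha256(path.read_bytes()).hexdigest()

# Core files that differ from (or are missing compared to) the clean copy.
for clean_file in CLEAN_ROOT.rglob("*"):
    if not clean_file.is_file():
        continue
    rel = clean_file.relative_to(CLEAN_ROOT)
    if rel.parts[0] == "wp-content":
        continue  # themes, plugins and uploads are checked separately
    live_file = LIVE_ROOT / rel
    if not live_file.exists():
        print(f"MISSING on live site: {rel}")
    elif sha256(live_file) != sha256(clean_file):
        print(f"MODIFIED: {rel}")

# Extra PHP files in core directories are a classic sign of a backdoor.
# (wp-config.php will show up here; that one is expected.)
for live_file in LIVE_ROOT.rglob("*.php"):
    rel = live_file.relative_to(LIVE_ROOT)
    if rel.parts[0] == "wp-content":
        continue
    if not (CLEAN_ROOT / rel).exists():
        print(f"EXTRA FILE, not in a clean install: {rel}")
```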
After reviewing all the WordPress files, check the contents of the .htaccess file. This is a very important file, and even after installing a clean version of WordPress, if you have an infected .htaccess file, you may still leave a backdoor open for hackers to reinfect your site with malware.
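If you are not sure what to look for in .htaccess, the minimal sketch below flags directives that are commonly abused in hijacked files. The path and pattern list are illustrative, and every hit still needs manual review.

```python
import re
from pathlib import Path

HTACCESS = Path("/var/www/html/.htaccess")  # example path

# Directives and fragments often found in hijacked .htaccess files.
SUSPICIOUS = [
    r"auto_prepend_file",          # forces a PHP file to run before every request
    r"auto_append_file",           # the same trick, run after every request
    r"RewriteRule\s+.*https?://",  # rewrite rules redirecting to an external site
    r"base64",                     # obfuscated payloads
]

for line_no, line in enumerate(HTACCESS.read_text().splitlines(), start=1):
    for pattern in SUSPICIOUS:
        if re.search(pattern, line, re.IGNORECASE):
            print(f"Line {line_no} looks suspicious: {line.strip()}")
```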
Finally, after thoroughly checking and cleaning all files, you should also clear your browser and web server cache, as files in the cache can also be infected and reinfect your site.
Reinstall Themes and Plugins
Reinstall clean versions of your themes and plugins, and remember to install only trusted ones. Most malware infections are caused not by WordPress itself but by poor-quality plugins and themes, or worse, by copies installed from untrustworthy sources. Such plugins are often infected before you even install them.
Reset All WordPress and phpMyAdmin User Passwords
This is an absolutely necessary step when dealing with malware. Often, the weakest link on a website is not WordPress itself but weak passwords that are easy to guess or to break by brute force. It is also worth checking your site’s user list for suspicious accounts with high privileges that were not there before.
It is also essential to log in to the database via phpMyAdmin and check whether any suspicious accounts have appeared that should not be there. Reset all database passwords too, as anyone with access to your WordPress database can cause enormous damage and change the content of your site.
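As an illustration, the following sketch lists every account with administrator privileges straight from the database. It assumes the default wp_ table prefix, the mysql command-line client, and placeholder credentials; adjust all of these to your installation.

```python
import subprocess

# Which accounts hold administrator rights? (Assumes the default wp_ prefix.)
QUERY = """
SELECT u.ID, u.user_login, u.user_email
FROM wp_users u
JOIN wp_usermeta m ON m.user_id = u.ID
WHERE m.meta_key = 'wp_capabilities'
  AND m.meta_value LIKE '%administrator%';
"""

# The mysql client prompts for the password because of the bare -p flag.
subprocess.run(
    ["mysql", "--user=wp_user", "-p", "wordpress", "-e", QUERY],
    check=True,
)
```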
Restore Photos and Other Files
After you have finished checking the files, you can restore your photos and other site files.
However, be careful and review the directories one by one before restoring the files. While multimedia files pose little danger of reinfecting the site, any JavaScript or PHP file found among them should immediately raise a red flag: such files do not belong in your photo folders and most likely contain malware.
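A quick way to perform this check is to list every PHP and JavaScript file under the uploads directory before restoring it, as in this minimal sketch (the path is an example):

```python
from pathlib import Path

UPLOADS = Path("/var/www/html/wp-content/uploads")  # example path

# Executable code has no business living among your media files.
suspects = [p for p in UPLOADS.rglob("*") if p.suffix.lower() in {".php", ".js"}]

for path in suspects:
    print(f"Does not belong in uploads: {path}")

print(f"{len(suspects)} suspicious file(s) found.")
```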
Checking the Database Backup
Operations on database files are very complicated, and a person without extensive knowledge of how MySQL, PostgreSQL, or MariaDB works will likely do more harm than good. Therefore, if you suspect your WordPress database is infected with malware, it is best to contact a company that handles this professionally.
Summary
Prevention is better than cure. Therefore, if you do not yet have a plugin installed to improve WordPress security, it is high time to download and install one.
Furthermore, it is worth using suitably long and strong passwords to make them harder for hackers to crack by brute force.
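If you need inspiration, here is a minimal sketch that generates such a password with Python’s standard secrets module, which is designed for cryptographically secure randomness:

```python
import secrets
import string

ALPHABET = string.ascii_letters + string.digits + string.punctuation

def generate_password(length: int = 24) -> str:
    # secrets.choice draws from the OS's cryptographically secure source.
    return "".join(secrets.choice(ALPHABET) for _ in range(length))

print(generate_password())
```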
Keep your WordPress core, themes, and plugins updated. The latest versions usually patch security vulnerabilities found in previous versions.