Technology that runs on Linux operating systems can see a dramatic improvement from the implementation of a load balancing system, with benefits that include marked improvements to overall performance, speed, reliability, availability, and user experience.
What’s more, Linux can be used in conjunction with multiple forms of load balancing technology, including Nginx, HAProxy, Apache, and Keepalived. This means that there are multiple options when it comes to configuration.
But many IT professionals lack experience with Linux load balancer configuration and deployment, so many are left wondering, “How can I configure a load balancer in Linux?” The process differs a bit depending upon the exact technology and platform that you happen to be working with, but most IT professionals will find that configuring Linux load balancing is a relatively straightforward process.
What is Linux? And How Does Linux Load Balancing Work?
Developed by Linus Torvalds and released in September 1991, Linux is one of the most popular open-source operating systems on the planet. By 2027, it’s estimated that the global Linux market value will top an incredible $15.64 billion.
Nearly half of all developers opt to use Linux — the same Unix-like technology that powers all of the world’s 500 fastest supercomputers. It’s also estimated that Linux is running just under 40% of all websites (with an identifiable operating system) and approximately 85% of mobile devices worldwide.
Corporate heavyweights such as Facebook, Amazon, McDonald’s, Google, NASA, and Dell use Linux operating systems to get the job done. In fact, SpaceX leveraged Linux technology to launch a total of 65 missions as of late 2022, while Hollywood SFX developers use Linux to achieve 90% of all special effects that you see on the silver screen.
With stats such as these, the power of the Linux kernel — the digital creation that’s at the heart of all Linux operating systems — is clear, so it’s no wonder that an increasing number of people are turning to Linux operating systems to drive their technology forward. But there is always room for improvement, and that is where load balancing comes into play.
Linux load balancing can be used with any server-reliant technology, including web servers, network servers, and software systems. These technologies use different types of load balancers, including HTTP load balancers, network load balancers, and software load balancers.
Configuring Apache Load Balancing in Linux
Apache, formally known as the Apache HTTP Server, is one of several options for a Linux load balancing configuration. In fact, Apache servers are currently the most popular choice for use on Linux operating systems. This open-source solution helps to boost performance on high-traffic websites, resulting in a marked improvement in speed, performance, and overall user experience.
An Apache HTTP load balancer is configured as a reverse proxy using the mod_proxy module. This configuration involves a central hub server, the actual load balancer, which processes incoming client requests and dispatches them to servers in a pool or cluster comprising at least two machines. You can also configure related features in Apache, including failover nodes, hot spares, and hot standbys.
Apache works in conjunction with a number of different Linux distributions, and the load balancer configuration process varies somewhat for each one. The following steps configure an Apache HTTP load balancer on a CentOS 7 distribution; a configuration for Ubuntu, which uses Debian architecture, is covered in the next section.
- STEP 1: Create and configure at least three virtual machines (VMs) that will collectively serve as the HTTP load balancing architecture. One VM acts as the hub server, intercepting and evaluating incoming client requests and dispatching them across a pool or cluster of servers. The two (or more) additional VMs make up that server cluster and actually process the client requests.
- STEP 2: Install the Apache HTTP server using the “yum” command.
# yum install -y httpd
- STEP 3: Start and enable httpd.service.
# systemctl start httpd.service
# systemctl enable httpd.service
Created symlink from /etc/systemd/system/multi-user.target.wants/httpd.service to /usr/lib/systemd/system/httpd.service.
- STEP 4: Allow HTTP service in Linux firewall.
# firewall-cmd --permanent --add-service=http
# firewall-cmd --reload
- STEP 5: Navigate to the Apache server URL in a browser to verify that the Apache server is up and running. You should see a default test page displayed.
- STEP 6: To configure the HTTP load balancer and reverse proxy, first verify from the command line that the mod_proxy module is available.
# httpd -M | grep proxy
- STEP 7: Create a configuration file for the load balancer in /etc/httpd/conf.d/.
- STEP 8: Add the following reverse proxy configuration to that file.
ProxyPass "/app" "balancer://appset/"
ProxyPassReverse "/app" "balancer://appset/"
- STEP 9: Restart httpd.service.
# systemctl restart httpd.service
- STEP 10: Navigate to the Apache server URL in a browser to verify that the reverse proxy is up and running. It should forward you to one of the load balancer server URLs that were established in the first step. If you refresh the page, it should forward to the second VM’s URL. Repeat this refresh until you’ve confirmed that all virtual machines are receiving incoming client requests.
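For reference, the two ProxyPass directives in step 8 point at a balancer named appset, whose member servers can be defined in the same configuration file. A minimal sketch, assuming a file named loadbalancer.conf and two backend VMs at the placeholder addresses 192.168.1.11 and 192.168.1.12:

```apache
# Hypothetical /etc/httpd/conf.d/loadbalancer.conf; the backend
# addresses are placeholders for the VMs created in step 1.
<Proxy "balancer://appset">
    BalancerMember "http://192.168.1.11:80"
    BalancerMember "http://192.168.1.12:80"
</Proxy>

ProxyPass "/app" "balancer://appset/"
ProxyPassReverse "/app" "balancer://appset/"
```

Requests to /app are then distributed across the two BalancerMember entries; add one BalancerMember line per additional VM in the cluster.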
Once this process is complete, it is recommended that you activate and configure the Apache HTTP server’s built-in Balancer Manager feature. Note that this load balancer management app has no authentication by default, so authentication should be configured to prevent unauthorized individuals from accessing your platform.
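One way to enable Balancer Manager while addressing the authentication caveat is to protect its endpoint with basic authentication. A minimal sketch, assuming a password file created beforehand with the htpasswd utility (the file path is a placeholder):

```apache
# Hypothetical snippet for /etc/httpd/conf.d/ enabling Balancer Manager
# behind basic auth; the AuthUserFile path is a placeholder.
<Location "/balancer-manager">
    SetHandler balancer-manager
    AuthType Basic
    AuthName "Balancer Manager"
    AuthUserFile "/etc/httpd/.htpasswd"
    Require valid-user
</Location>
```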
Configuring Linux Load Balancing With Ubuntu
The process for configuring a load balancer in Linux using Ubuntu is quite similar. It involves enabling four Apache modules: proxy, proxy_http, proxy_balancer, and lbmethod_byrequests, which can be activated with the following command.
$ sudo a2enmod proxy proxy_http proxy_balancer lbmethod_byrequests
Once the four modules are installed and active, install Flask and create two (or more) backend servers running on ports 8080 and 8081; these are the servers that will compose the server pool. Once they are configured, run the “curl” command against each server to verify that it is operational, using these steps:
STEP 1: Run the server with this command:
$ FLASK_APP=~/backend.py flask run --port=8080 >/dev/null 2>&1 &
STEP 2: Enter the following “curl” command. If the servers are operational, you should see a standard “Hello World” response.
$ curl http://127.0.0.1:8080/
Repeat the process for the other servers by swapping out “8080” for the appropriate port number.
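The steps above reference a Flask app saved at ~/backend.py without showing its contents. A minimal hypothetical sketch that returns the expected “Hello World” response:

```python
# Hypothetical contents of ~/backend.py: a minimal Flask backend for
# load balancer testing.
from flask import Flask

app = Flask(__name__)

@app.route("/")
def hello():
    # Each backend returns a simple response; in practice you might
    # include the port number so you can see which backend answered.
    return "Hello World"
```

The same file can be reused for every backend, since the port is chosen at launch time via the --port flag.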
Next, configure the Apache HTTP load balancer by modifying the default site configuration file as follows.
STEP 3: Access the file.
$ sudo vi /etc/apache2/sites-available/000-default.conf
STEP 4: Add these lines inside the “VirtualHost” tag. Note that the two backend port numbers are referenced here; if you are using additional servers in your server cluster, add a BalancerMember line for each one.
<Proxy balancer://mycluster>
    BalancerMember http://127.0.0.1:8080
    BalancerMember http://127.0.0.1:8081
</Proxy>
ProxyPass / balancer://mycluster/
ProxyPassReverse / balancer://mycluster/
STEP 5: Restart the Apache server so these changes are reflected. Use the following command.
$ sudo service apache2 restart
This completes the configuration process for a Linux load balancer with an Ubuntu distribution.
Linux Load Balancing Using HAProxy and Keepalived
HAProxy and Keepalived are two additional options for load balancing on Linux.
Running on active and passive LVS routers, the Keepalived daemon uses the Virtual Router Redundancy Protocol (VRRP) to monitor servers and initiate failover, making it an efficient mechanism for creating a highly available load balancer configuration. Keepalived operates at OSI layer 4, the transport layer, where it utilizes TCP connections and evaluates incoming client requests.
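The VRRP failover described above is driven by a vrrp_instance block in Keepalived’s configuration. A minimal sketch for the active router; the interface name, router ID, and IP address are all placeholders:

```conf
# Hypothetical /etc/keepalived/keepalived.conf fragment for the active
# (MASTER) router; set state BACKUP and a lower priority on the passive one.
vrrp_instance VI_1 {
    state MASTER              # role of this router in the VRRP pair
    interface eth0            # NIC that carries the virtual IP
    virtual_router_id 51      # must match on both routers
    priority 100              # higher priority wins the MASTER election
    advert_int 1              # advertisement interval in seconds
    virtual_ipaddress {
        192.168.1.100         # floating IP that clients connect to
    }
}
```

If the active router fails, the passive router stops receiving VRRP advertisements and takes over the virtual IP, so clients keep reaching the same address.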
HAProxy is used for HTTP load balancing, making it ideal for websites, web apps, and other internet-based technologies. Operating at OSI layer 7, the application layer, HAProxy can handle extremely high volumes of incoming client requests, which are dispatched to two or more backend servers.
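An HAProxy setup of this kind is defined by a frontend that accepts client traffic and a backend listing the servers it dispatches to. A minimal sketch, with placeholder names and addresses:

```conf
# Hypothetical /etc/haproxy/haproxy.cfg fragment; backend names and
# server addresses are placeholders.
frontend http_in
    bind *:80
    mode http
    default_backend web_servers

backend web_servers
    mode http
    balance roundrobin                   # rotate requests across members
    server web1 192.168.1.11:80 check    # "check" enables health checks
    server web2 192.168.1.12:80 check
```

The roundrobin algorithm is one of several HAProxy offers; leastconn, for example, instead sends each request to the server with the fewest active connections.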
Notably, it is possible to configure HAProxy and Keepalived to work in tandem, allowing you to achieve a more complex, high-performance Linux load balancing configuration.
Linux load balancing options abound, and there is no one-size-fits-all solution. Third-party load balancer services such as those offered by Resonate can deliver exceptional performance that far exceeds many of the built-in solutions available for Linux operating systems. Resonate specializes in load balancing for exceptional speed, performance, reliability, availability, and user experience. Our cutting-edge technology can accelerate your website, network, software, mobile app, or other server-reliant technology beyond what you thought possible. Contact the Resonate team today; we look forward to discussing your goals and helping you find the perfect technology for your exact Linux load balancing needs.