
How Can I Configure a Load Balancer in Linux?

Technology that runs on Linux operating systems can benefit dramatically from the implementation of a load balancing system, with gains in overall performance, speed, reliability, availability, and user experience.

What’s more, Linux can be used in conjunction with multiple forms of load balancing technology, including Nginx, HAProxy, Apache, and Keepalived. This means that there are multiple options when it comes to configuration. 

But many IT professionals lack experience with Linux load balancer configuration and deployment, leaving them to wonder, “How can I configure a load balancer in Linux?” The process differs a bit depending on the exact technology and platform that you happen to be working with, but most IT professionals will find that configuring Linux load balancing is a relatively straightforward process.

What is Linux? And How Does Linux Load Balancing Work?

Developed by Linus Torvalds and released in September 1991, Linux is one of the most popular open-source operating systems on the planet. By 2027, it’s estimated that the global Linux market value will top an incredible $15.64 billion.

Nearly half of all developers opt to use Linux — the same Unix-like technology that powers all of the world’s 500 fastest supercomputers. It’s also estimated that Linux is running just under 40% of all websites (with an identifiable operating system) and approximately 85% of mobile devices worldwide. 

Major organizations such as Facebook, Amazon, McDonald’s, Google, NASA, and Dell use Linux operating systems to get the job done. In fact, SpaceX had leveraged Linux technology to launch a total of 65 missions as of late 2022, while Hollywood SFX developers use Linux to achieve 90% of all special effects that you see on the silver screen.

With stats such as these, the power of the Linux kernel (the core component at the heart of every Linux operating system) is clear, so it’s no wonder that an increasing number of people are turning to Linux operating systems to drive their technology forward. But there is always room for improvement, and that is where load balancing comes into play.

Linux load balancing can be used with any server-reliant technology, including web servers, network servers, and software systems. These technologies use different types of load balancers, including HTTP load balancers, network load balancers, and software load balancers. 

Configuring Apache Load Balancing in Linux

Apache (formally known as the Apache HTTP Server) is one of several options for a Linux load balancing configuration, and it remains one of the most widely deployed web servers on Linux operating systems. This open-source solution helps to boost performance on high-traffic websites, resulting in a marked improvement in speed, performance, and overall user experience.

An Apache HTTP load balancer is configured as a reverse proxy using the mod_proxy module. This configuration involves a central hub server (the actual load balancer) which processes incoming client requests and dispatches them to a server pool or cluster made up of at least two servers. You can also configure other related features in Apache, including failover nodes, hot spares, and hot standby members, as sketched below.
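
For illustration, here is a minimal sketch of such a configuration; the backend hostnames are placeholders, and the status=+H flag marks a member as a hot standby that only receives traffic when the regular members are unavailable.

<Proxy balancer://mypool>
    BalancerMember http://app-01.example.com
    BalancerMember http://app-02.example.com
    # Hot standby: only used if the members above are unavailable
    BalancerMember http://app-03.example.com status=+H
</Proxy>

ProxyPass "/app" "balancer://mypool/"
ProxyPassReverse "/app" "balancer://mypool/"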

Apache works in conjunction with a number of different Linux distributions, and the load balancer configuration process varies somewhat for each one. The following steps walk through a basic HTTP load balancer configuration on a CentOS 7 distribution; a similar walkthrough for an Ubuntu distribution appears later in this article.

  • STEP 1: Create and configure at least three virtual machines (VMs) to serve as the servers that will collectively make up the HTTP load balancing architecture. One VM acts as the hub server, intercepting and evaluating incoming client requests and dispatching them across a pool or cluster of servers. The two (or more) additional VMs make up that server cluster and actually process the client requests.
  • STEP 2: Install the Apache HTTP server using the “yum” command.

# yum install -y httpd

  • STEP 3: Start and enable httpd.service.

# systemctl start httpd.service

# systemctl enable httpd.service

Created symlink from /etc/systemd/system/multi-user.target.wants/httpd.service to /usr/lib/systemd/system/httpd.service.

  • STEP 4: Allow the HTTP service through the Linux firewall.

# firewall-cmd --permanent --add-service=http

success

# firewall-cmd --reload

success

  • STEP 5: Navigate to the Apache server URL in a browser to verify that the Apache server is up and running. You should see a default test page displayed (a command-line alternative follows).
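
If you prefer the command line, a quick, informal sanity check from the server itself is to request the headers with curl; the Server header in the response should identify Apache.

# curl -I http://localhost/
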
  • STEP 6: Verify from the command line that the mod_proxy modules needed for the HTTP load balancer and reverse proxy are available.

# httpd -M | grep proxy
 proxy_module (shared)
 proxy_ajp_module (shared)
 proxy_balancer_module (shared)
 proxy_connect_module (shared)
 proxy_express_module (shared)
 proxy_fcgi_module (shared)
 proxy_fdpass_module (shared)
 proxy_ftp_module (shared)
 proxy_http_module (shared)
 proxy_scgi_module (shared)
 proxy_wstunnel_module (shared)

  • STEP 7: Create a new configuration file in /etc/httpd/conf.d/.
  • STEP 8: Add the following reverse proxy configuration to that file.

<Proxy balancer://appset>
    BalancerMember http://web-01.example.com
    BalancerMember http://web-02.example.com
    ProxySet lbmethod=bytraffic
</Proxy>

ProxyPass "/app" "balancer://appset/"
ProxyPassReverse "/app" "balancer://appset/"

  • STEP 9: Restart httpd.service.

# systemctl restart httpd.service

  • STEP 10: Navigate to the Apache server URL in a browser to verify that the reverse proxy is up and running. The request should be forwarded to one of the backend servers established in the first step. Refresh the page and repeat until you have confirmed that all of the virtual machines are receiving incoming client requests (see the command-line alternative below).
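
As an alternative to refreshing a browser, a short curl loop can exercise the balancer from the command line. This is a sketch that assumes the placeholder hostname lb.example.com for the load balancer and that each backend serves a page identifying itself; note that with lbmethod=bytraffic, requests are distributed by bytes transferred rather than in strict rotation.

# for i in 1 2 3 4; do curl -s http://lb.example.com/app/; done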

Once this process is complete, it is recommended that you activate and configure the Apache HTTP server’s built-in Balancer Manager feature. It should be noted that this load balancer management interface has no authentication by default, so authentication should be configured to prevent unauthorized individuals from accessing it (a sketch of this follows below).
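
As a sketch of what that could look like, the following block (added to the same file in /etc/httpd/conf.d/) exposes the Balancer Manager under a chosen path and protects it with basic authentication; the /balancer-manager path and the password file location are assumptions you can adjust.

<Location "/balancer-manager">
    SetHandler balancer-manager
    AuthType Basic
    AuthName "Balancer Manager"
    # Create this file with: htpasswd -c /etc/httpd/.htpasswd admin
    AuthUserFile /etc/httpd/.htpasswd
    Require valid-user
</Location>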

Configuring Linux Load Balancing With Ubuntu

The process for configuring a load balancer in Linux using Ubuntu is quite similar. It involves enabling four Apache modules (typically with the a2enmod command; see the commands just after this list):

  • mod_proxy
  • mod_proxy_http
  • mod_proxy_balancer
  • mod_lbmethod_byrequests
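
On Ubuntu, these modules ship with the apache2 package and only need to be enabled. A minimal sketch of the commands, using the standard a2enmod helper, looks like this:

$ sudo a2enmod proxy proxy_http proxy_balancer lbmethod_byrequests

$ sudo systemctl restart apache2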

Once the four modules are enabled and active, install Flask and create two (or more) backend servers running on ports 8080 and 8081. These are the servers that will compose the server pool (a minimal example of the backend file follows these steps). Once they are running, run the “curl” command on each server to verify that it is operational using these steps:

STEP 1:  Run the server with this command: 

$ FLASK_APP=~/backend.py flask run --port=8080 >/dev/null 2>&1 &

STEP 2:  Enter the following “curl” command. If the servers are operational, you should see a standard “Hello World” response. 

$ curl http://127.0.0.1:8080/

Repeat the process for the other servers by swapping out “8080” for the appropriate port number. 
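
The contents of backend.py are not shown in the steps above; a minimal sketch, assuming nothing more than a Flask “Hello World” app, could look like this:

from flask import Flask

app = Flask(__name__)

@app.route("/")
def hello():
    # This response is what curl (and, later, the load balancer) returns
    return "Hello World!"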

Next, you configure the Apache HTTP load balancer by modifying the default configuration file in the following manner. 

STEP 3:  Access the file. 

$ sudo vi /etc/apache2/sites-available/000-default.conf

STEP 4:  Add these lines inside the “VirtualHost” block. Note that the two backend port numbers are referenced here; if you are using additional servers in your cluster, add them here as well.

<Proxy balancer://mycluster>
    BalancerMember http://127.0.0.1:8080
    BalancerMember http://127.0.0.1:8081
</Proxy>

ProxyPreserveHost On

ProxyPass / balancer://mycluster/
ProxyPassReverse / balancer://mycluster/

STEP 5:  Restart the Apache server so these changes are reflected. Use the following command. 

$ sudo service apache2 restart

This should complete the configuration process for a Linux load balancer with an Ubuntu distribution.
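
Optionally, you can confirm that the balancer is working by requesting the site root a few times; each request should be answered by one of the Flask backends with the same “Hello World” response (the localhost address assumes you are testing from the load balancer host itself).

$ curl http://localhost/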

Linux Load Balancing Using HAProxy and Keepalived

HAProxy and Keepalived are two additional options for load balancing on Linux.

Running on active and passive LVS routers, the Keepalived daemon uses the Virtual Router Redundancy Protocol (VRRP) to monitor servers and initiate failover, making it an efficient mechanism for a highly available load balancer configuration. Keepalived operates at OSI layer 4, the transport layer, where it works with TCP connections and distributes incoming client requests across real servers.
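
As a rough sketch rather than a production configuration, a basic VRRP instance in /etc/keepalived/keepalived.conf might look like the following; the interface name, router ID, priority, and virtual IP address are placeholders.

vrrp_instance VI_1 {
    state MASTER             # the passive router would use BACKUP
    interface eth0           # network interface that carries the virtual IP
    virtual_router_id 51
    priority 100             # the backup router gets a lower priority
    advert_int 1
    virtual_ipaddress {
        192.168.1.100        # floating IP that clients connect to
    }
}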

HAProxy is used for HTTP load balancing, making it ideal for websites, web apps, and other internet-based technology. Operating at OSI layer 7, the application layer, HAProxy can handle extremely high volumes of incoming client requests, which it dispatches across two or more backend servers.
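
A minimal HAProxy sketch for HTTP load balancing, with placeholder backend addresses, might look like the following in /etc/haproxy/haproxy.cfg:

defaults
    mode http
    timeout connect 5s
    timeout client 30s
    timeout server 30s

frontend http_in
    bind *:80
    default_backend web_servers

backend web_servers
    balance roundrobin
    # "check" enables periodic health checks on each backend server
    server web-01 192.168.1.11:80 check
    server web-02 192.168.1.12:80 check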

Notably, it is possible to configure HAProxy and Keepalived to work in tandem, with Keepalived providing failover between redundant HAProxy nodes, allowing you to achieve a more complex, high-performance, and highly available Linux load balancing configuration.

Linux load balancing options abound, but there is no one-size-fits-all solution. Third-party load balancer services such as those offered by Resonate can deliver exceptional performance that far exceeds many of the built-in solutions available for Linux operating systems. Resonate specializes in load balancing for exceptional speed, performance, reliability, availability, and user experience. Our cutting-edge technology can accelerate your website, network, software, mobile app, or other server-reliant technology to levels beyond what you thought possible. Contact the Resonate team today; we look forward to discussing your goals and helping you find the perfect technology for your exact Linux load balancing needs.
