
Load Balancing Using Apache's mod_proxy_balancer


Making sure that a corporate website serves user traffic efficiently is a key task for network administrators. While you can use round robin DNS to balance traffic loads by forwarding requests to multiple servers sequentially, the Apache HTTP Server lets you take advantage of a more elegant and intelligent solution in the form of mod_proxy_balancer, an add-in module that acts as a software load balancer and ensures that traffic is split across back-end servers or workers to reduce latencies and give users a better experience.

mod_proxy_balancer distributes requests to multiple worker processes running on back-end servers to let multiple resources service incoming traffic and processing. It ensures efficient utilization of the back-end workers to prevent any single worker from getting overloaded.

If you're ready to test the benefits of mod_proxy_balancer yourself, it's easy to try, but be aware that it will work only with Apache 2.1 and later. The default rpm install of Apache via yum for Red Hat or CentOS installs the module along with the web server. If you installed Apache HTTP Server from source, make sure you also have mod_proxy installed, since mod_proxy_balancer depends on it. To install mod_proxy_balancer from source, run make and make install in the module's source directory.
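Whether the module is available also depends on it being loaded in your Apache configuration. As an illustrative sketch (module file paths vary by distribution, and on Apache 2.4 the lbmethod implementations are split into separate modules), the relevant LoadModule lines in httpd.conf might look like:

```apache
# Hypothetical module paths -- adjust to match your installation
LoadModule proxy_module modules/mod_proxy.so
LoadModule proxy_http_module modules/mod_proxy_http.so
LoadModule proxy_balancer_module modules/mod_proxy_balancer.so
# On Apache 2.4 and later, each lbmethod is its own module, e.g.:
LoadModule lbmethod_byrequests_module modules/mod_lbmethod_byrequests.so
```

If these lines are present and uncommented, `httpd -M` should list the proxy and balancer modules.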

When you configure mod_proxy_balancer, you can choose among three load-balancing algorithms: Request Counting, Weighted Traffic Counting, and Pending Request Counting, which we'll discuss in detail in a moment. The best algorithm to use depends on the individual use case; if you are not sure which to try first, go with Pending Request Counting.

The module also supports session stickiness, meaning you can optionally ensure that all the requests from a particular IP address or in a particular session go to the same back-end server. The easiest way to achieve stickiness is with cookies, inserted either by the Apache web server or by the back-end servers.
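For the case where Apache itself inserts the cookie, the Apache documentation describes a pattern using mod_headers and a route name per worker. A minimal sketch (the balancer name and member addresses are placeholders):

```apache
# Set a ROUTEID cookie whenever the balancer picks a new worker
# (requires mod_headers to be loaded)
Header add Set-Cookie "ROUTEID=.%{BALANCER_WORKER_ROUTE}e; path=/" env=BALANCER_ROUTE_CHANGED

<Proxy balancer://myapp>
    BalancerMember http://192.0.2.10:80 route=1
    BalancerMember http://192.0.2.11:80 route=2
    ProxySet stickysession=ROUTEID
</Proxy>
```

Each worker is tagged with a route, and subsequent requests carrying the ROUTEID cookie are sent back to the worker with the matching route.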

A general configuration for load balancing defined in /etc/httpd/httpd.conf would look like this:

<Proxy balancer://A_name_signifying_your_app>
BalancerMember http://ip_address:port/ loadfactor=appropriate_load_factor # Balancer member 1
BalancerMember http://ip_address:port/ loadfactor=appropriate_load_factor # Balancer member 2
ProxySet lbmethod=the_Load_Balancing_algorithm
</Proxy>

You can specify anything for a name, but it's good to choose one that's significant. BalancerMember specifies a back-end worker's IP address and port number. A worker can be a back-end HTTP server or anything that can serve HTTP traffic. You can omit the port number if you use the web server's default port of 80. You can define as many BalancerMembers as you want; the optimal number depends on the capabilities of each server and the incoming traffic load. The loadfactor variable specifies the load that a back-end worker can take. Depending upon the algorithm, this can represent a number of requests or a number of bytes. lbmethod specifies the algorithm to be used for load balancing.
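Note that the Proxy block only defines the balancer; to actually route incoming traffic through it, you also map a URL path to the balancer with ProxyPass. A minimal sketch, reusing the placeholder balancer name from above:

```apache
# Forward all requests under / to the balancer, and rewrite
# back-end redirect headers so they point at the front end
ProxyPass / balancer://A_name_signifying_your_app/
ProxyPassReverse / balancer://A_name_signifying_your_app/
```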

Let's look at how to configure each of the three options.


Request Counting

With this algorithm, incoming requests are distributed among the back-end workers so that each back end receives a share of requests proportional to its loadfactor value in the configuration. For example, consider this Apache config snippet:

<Proxy balancer://myapp>
BalancerMember http://192.0.2.10:80 loadfactor=1 # Balancer member 1 (placeholder address)
BalancerMember http://192.0.2.11:80 loadfactor=3 # Balancer member 2 (placeholder address)
ProxySet lbmethod=byrequests
</Proxy>

In this example, one request out of every four will be sent to the first member, while three will be sent to the second. This might be an appropriate configuration for a site with two servers, one of which is more powerful than the other.

Weighted Traffic Counting Algorithm

The Weighted Traffic Counting algorithm is similar to the Request Counting algorithm, with a minor difference: Weighted Traffic Counting considers the number of bytes transferred instead of the number of requests. In the configuration example below, the number of bytes processed by the second member will be three times that of the first:

<Proxy balancer://myapp>
BalancerMember http://192.0.2.10:80 loadfactor=1 # Balancer member 1 (placeholder address)
BalancerMember http://192.0.2.11:80 loadfactor=3 # Balancer member 2 (placeholder address)
ProxySet lbmethod=bytraffic
</Proxy>

Pending Request Counting Algorithm

The Pending Request Counting algorithm is the latest and most sophisticated algorithm provided by Apache for load balancing. It is available from Apache 2.2.10 onward.

In this algorithm, the scheduler keeps track of the number of requests assigned to each back-end worker at any given time. Each new incoming request is sent to the back end that has the least number of pending requests – in other words, to the back-end worker that is least loaded. This helps keep the request queues even among the back-end workers, and each request generally goes to the worker that can process it the fastest.

If two workers are equally lightly loaded, the scheduler uses the Request Counting algorithm to break the tie.

<Proxy balancer://myapp>
BalancerMember http://192.0.2.10:80 # Balancer member 1 (placeholder address)
BalancerMember http://192.0.2.11:80 # Balancer member 2 (placeholder address)
ProxySet lbmethod=bybusyness
</Proxy>

Enable the Balancer Manager

Sometimes you may need to change your load balancing configuration, but that may not be easy to do without affecting the running server. For such situations, the Balancer Manager module provides a web interface for changing the status of back-end workers on the fly. You can use Balancer Manager to put a worker in offline mode or change its loadfactor. You must have mod_status installed in order to use Balancer Manager. A sample config, defined in /etc/httpd/httpd.conf, might look like this:

<Location /balancer-manager>
SetHandler balancer-manager
Order Deny,Allow
Deny from all
Allow from your.admin.ip.address
</Location>

Once you add directives like those above to httpd.conf (replacing your.admin.ip.address with the address you will administer from) and restart Apache, you can open the Balancer Manager by pointing a browser at http://your_server_address/balancer-manager.

Apache mod_proxy_balancer provides a cost-effective software solution for load balancing. While a hardware load balancer device can easily cost more than $1,000, software load balancers are cheap – or, in this case, free. Even though mod_proxy_balancer has a fairly limited feature set compared to those of hardware load balancers, which can provide more comprehensive statistics and handle larger volumes of traffic, it offers solid basic capabilities for small to midsize deployments.

This work is licensed under a Creative Commons Attribution 3.0 Unported License.

