Lead image: arsgera, 123RF

An IP-based load balancing solution

Exploring Piranha

Red Hat's Piranha load balancing software is based on the Linux Virtual Server concept. We show you how to configure and test your setup. By Khurram Shiraz

Load balancing is a major requirement for most web server farms, whose servers are expected to handle thousands of requests per second. Consequently, load balancing is no longer a luxury but an essential requirement for both performance and availability. Initially, hardware load balancers were the norm, but with the rising cost of hardware appliances and the growing maturity of software solutions, the trend now is toward software load balancers.

Linux Virtual Server

Linux has a major advantage over other operating systems in this regard, because most of its commercial distributions (e.g., Red Hat and openSUSE) already provide inherent load balancing capabilities for most network services, such as web, cache, mail, FTP, media, and VoIP. These inherent capabilities are based on Layer 4 switching, allowing Linux servers to constitute special kinds of clusters known as LVS clusters.

LVS stands for Linux Virtual Server [1], which is a highly scalable and highly available server built on a cluster of real servers, with the load balancer running on the Linux operating system.

The Linux Virtual Server Project implements Layer 4 switching in the Linux kernel, which allows TCP and UDP sessions to be load balanced between multiple real servers. This method provides a way to scale Internet services beyond a single host.

Note that the architecture of this virtual IP cluster is fully transparent to end users, who interact with it as if it were a single, high-performance real server. Different variants of LVS technology have been adopted by many Linux distributions, including Debian, Red Hat, and openSUSE.

Architecture of LVS Clusters

For transparency, scalability, availability, and manageability of the whole system, LVS clusters usually adopt a three-tier architecture, which is illustrated in Figure 1.

Figure 1: LVS cluster architecture.

This architecture consists of a load balancer at the front end, a pool of real servers behind it, and shared storage at the back end.

In this setup, a load balancer is considered the single point of entry of server cluster systems. It runs IP Virtual Server (IPVS) [2], which implements IP load balancing techniques inside the Linux kernel. With IPVS, all real servers are required to provide the same services and content; the load balancer forwards a new client request to a server according to the specified scheduling algorithms and the load of each server. No matter which server is selected, the client should get the same result.

The number of real servers is essentially unlimited and, for most network services such as web, in which client requests are typically not strongly correlated, near-linear scaling can be expected as real servers are added.

Shared storage can be database systems, network file systems, or distributed file systems. When the real servers have to write information dynamically to databases, you would typically deploy a highly available cluster (active-passive or active-active, such as the RHEL cluster suite, Oracle RAC, or DB2 parallel server) on this back-end layer.

When data to be written by real servers is static (e.g., web services), you could have either a shared filesystem over the network (e.g., NFS) for smaller implementations or a cluster filesystem (e.g., IBM GPFS or Linux GFS) for bigger implementations. In even smaller environments with fewer security requirements, this back-end layer can be skipped, and real servers can write directly to their local storage.

Layer 4 Switching

IPVS performs Layer 4 switching by multiplexing incoming TCP/IP connections and UDP/IP datagrams to real servers. Packets are received by the Linux load balancer, which decides which real server to forward each packet to. Once this decision is made, subsequent packets belonging to the same connection are sent to the same real server, so the integrity of the connection is maintained.

LVS has three ways of forwarding packets: network address translation (NAT), IP-IP encapsulation (tunneling), and direct routing.

On the Linux load balancer, a virtual service is defined by an IP address, port, and protocol, or by a firewall mark. Each virtual service is then assigned a scheduling algorithm that allocates incoming connections to the real servers. In LVS, the schedulers are implemented as separate kernel modules, so new schedulers can be added without modifying the core LVS code.

Many different scheduling algorithms are available to suit a variety of needs. The simplest, round robin and least connection, are also the most commonly used in LVS clusters.
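To make this concrete, the following sketch shows roughly how the same policy would be expressed with the ipvsadm tool directly (it uses the round-robin scheduler and the addresses that appear later in Listing 1; Piranha generates the equivalent kernel configuration for you):

# Define a virtual HTTP service with round-robin scheduling
ipvsadm -A -t 192.168.1.250:80 -s rr
# Attach two real servers via direct routing (-g), each with weight 1
ipvsadm -a -t 192.168.1.250:80 -r 192.168.1.240:80 -g -w 1
ipvsadm -a -t 192.168.1.250:80 -r 192.168.1.231:80 -g -w 1
# Display the resulting IPVS table
ipvsadm -L -n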

Configuration

Piranha [3] is a clustering product from Red Hat based on the LVS concept. It includes the IPVS kernel code, a cluster monitoring tool, and a web-based cluster configuration tool.

The Piranha monitoring tool performs two main functions: It exchanges heartbeats between the active and backup LVS routers so that the backup can take over if the active router fails, and it checks the services on the real servers so that a failed server is removed from the routing table until it recovers.

Configuring the Piranha infrastructure starts with a decision about your design approach. Although a single LVS router can be a good starting point, it is very important to start with a cluster of two LVS routers: One acts as the active LVS router, while the second remains passive until a failover occurs.

The whole Piranha infrastructure comprises two LVS routers (active/passive cluster configuration) and two web servers; this setup is shown in Figure 2.

Figure 2: The Piranha infrastructure.

It is also important to understand the entire infrastructure from the daemons' point of view, because unique daemons run on the LVS routers as well as on the real servers, constantly communicating with each other to guarantee proper load balancing and high availability. These components are shown logically in Figure 3.

Figure 3: LVS routers.

A basic requirement of a Piranha infrastructure is that all servers, including real servers and primary/secondary load balancers, should be able to ping each other by name.
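The simplest way to satisfy this requirement is a consistent /etc/hosts file on all four machines, along these lines (the addresses come from Listing 1; the router hostnames are hypothetical):

192.168.1.230   lvsrouter1   # primary LVS router
192.168.1.232   lvsrouter2   # backup LVS router
192.168.1.240   webserver1   # real server 1
192.168.1.231   webserver2   # real server 2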

Now, you can begin configuration with your primary Piranha server. This server will act as the primary, or active, LVS router, and the whole Piranha configuration must be done on this server. You can start with the installation of the two major RPMs for Piranha and LVS configuration (piranha-0.8.4-16.el5.i386.rpm and ipvsadm-1.24-10.i386.rpm) with

yum install piranha

as shown in Figure 4. On the backup LVS router, you can repeat the same installation procedure for both RPMs.

Figure 4: Installing with Yum.

For the LVS router to forward network packets properly to the real servers, each LVS router node must have IP forwarding turned on in the kernel. To do this, log in as root user on both LVS routers and then change the following line in /etc/sysctl.conf to have a value of 1:

net.ipv4.ip_forward = 1

The original value is usually 0. The changes take effect when you reboot the system.
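If you would rather not reboot, you can reload the file and apply the setting immediately:

# Reload /etc/sysctl.conf and verify that forwarding is on
sysctl -p
cat /proc/sys/net/ipv4/ip_forward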

You can start the piranha-gui service on the primary load-balancing server, whereas the pulse service must be restarted on both LVS routers. After you start the piranha-gui service on the primary LVS router, you can log in to the Piranha GUI through a web browser.
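For example, on the primary LVS router (the Piranha GUI listens on port 3636 by default, and piranha-passwd sets the password for the web login):

/usr/sbin/piranha-passwd      # set the GUI login password
service piranha-gui start
# then browse to http://<primary-lvs-router>:3636/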

To start, go to the Redundancy tab to choose a cluster of LVS routers. Next, go to the Global Settings tab. In this tab, the choice for Network Type is important. For most implementations, default direct routing works fine, but in some cases, NAT could also be a choice. For this configuration, the Direct Routing choice is sufficient (Figure 5).

Figure 5: Choosing the network type.

Next, make sure that all real servers (in this case, webserver1 and webserver2) can be resolved by name from the primary and secondary LVS routers, and add definitions of both real servers in the Virtual Servers tab shown in Figure 6.

Figure 6: Defining the real servers.

Both real web servers are defined with a weight value of 1.

Now you are almost done with the initial configuration. As a last step, you can review the lvs.cf file on your primary Piranha server, which should look something like Listing 1. (I selected round robin as the scheduling algorithm in Figure 6 to balance the load between the web servers.)

Listing 1: lvs.cf File

serial_no = 40
primary = 192.168.1.230
service = lvs
backup_active = 1
backup = 192.168.1.232
heartbeat = 1
heartbeat_port = 539
keepalive = 6
deadtime = 18
network = direct
debug_level = NONE
monitor_links = 1
syncdaemon = 1
virtual webs.test.com {
     active = 1
     address = 192.168.1.250 eth0:0
     vip_nmask = 255.255.255.0
     port = 80
     send = "GET / HTTP/1.0\r\n\r\n"
     expect = "HTTP"
     use_regex = 0
     load_monitor = rup
     scheduler = rr
     protocol = tcp
     timeout = 6
     reentry = 15
     quiesce_server = 1
     server webserver1 {
         address = 192.168.1.240
         active = 1
         weight = 1
     }
     server webserver2 {
         address = 192.168.1.231
         active = 1
         weight = 1
     }
}

You can then synchronize the lvs.cf file to the secondary Piranha server with the scp command and restart the pulse service on both servers (service pulse restart). Note that, at this stage, you may see your virtual services in "down" status; that is expected, because the real web servers have yet to be configured.
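A typical synchronization step looks like this (assuming the default lvs.cf location under /etc/sysconfig/ha and the backup router address from Listing 1):

# On the primary LVS router:
scp /etc/sysconfig/ha/lvs.cf root@192.168.1.232:/etc/sysconfig/ha/lvs.cf
# Then, on both routers:
service pulse restart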

Once you are done configuring the LVS routers, you can proceed to the configuration of the real servers. In this case, these are just normal web servers that serve critical PHP applications.

Because you'll put the same virtual IP address as an Ethernet interface alias on each real web server, it's important to configure arptables on both servers. That way, any ARP requests for the virtual IP are ignored entirely by the real servers, and any outgoing ARP packets that might otherwise contain the virtual IP are mangled to contain the real server's IP instead. To configure each real server to ignore ARP requests for the virtual IP address, perform the following steps on each real web server. To begin, install the arptables RPM on each real web server:

# rpm -i arptables_jf-0.0.8-8.i386.rpm
warning: arptables_jf-0.0.8-8.i386.rpm: Header V3 DSA signature: NOKEY, key ID 37017186

Then, execute the arptables command on both real web servers to drop and mangle ARP requests for virtual IPs, as shown in Figure 7.

Figure 7: Configuring arptables.
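The commands behind Figure 7 follow this pattern, shown here for webserver1 with the addresses from Listing 1 (on webserver2, use 192.168.1.231 as the source address of the mangle rule):

# Drop incoming ARP requests for the virtual IP
arptables -A IN -d 192.168.1.250 -j DROP
# Rewrite outgoing ARP traffic so it advertises the real server's own IP
arptables -A OUT -s 192.168.1.250 -j mangle --mangle-ip-s 192.168.1.240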

Once the arptables configuration has been completed on each real server, save the rules and make them persistent across reboots by typing the following commands on each real server:

service arptables_jf save
chkconfig --level 2345 arptables_jf on

Finally, put an IP alias of 192.168.1.250 on the Ethernet interface of both web servers. This can be done on a temporary basis with the ifconfig command, as in Figure 8.

Figure 8: IP aliasing with ifconfig.
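For example, assuming the alias goes on eth0 and uses the netmask from Listing 1 (an alias set this way does not survive a reboot):

ifconfig eth0:1 192.168.1.250 netmask 255.255.255.0 up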

Finally, configure the Apache servers on the real servers with a VirtualHost directive in the Apache configuration (httpd.conf on Red Hat systems). In the Piranha web interface, you should then see all green status: Your virtual server will be up, and both real servers will show up under STATUS (Figure 9).

Figure 9: Up status of virtual servers.
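A minimal virtual host definition might look like the following sketch (webs.test.com and the virtual IP come from Listing 1; the DocumentRoot is hypothetical):

<VirtualHost 192.168.1.250:80>
    ServerName webs.test.com
    DocumentRoot /var/www/html
</VirtualHost>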

Test Scenarios

Once you have the servers successfully up and running, you can begin testing. The first test is simply to load a web page through the virtual IP address.

As more and more page requests arrive at the IP load balancer, it automatically balances them between the two web servers in round-robin fashion, as shown in Figure 10.

Figure 10: Balancing requests between web servers.
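You can observe the round-robin behavior from any client machine with a simple loop (a sketch; it assumes each server's test page identifies which host served it):

# Ten requests to the virtual IP should alternate between the real servers
for i in $(seq 1 10); do
    curl -s http://192.168.1.250/ | grep -i hostname
done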

The next test is failover: When you shut down one web server, the IP load balancer starts serving all web page requests from the other, surviving web server (Figure 11).

Figure 11: Routing requests to the surviving web server.

Next, shut down the primary LVS router. The expectation was that the backup LVS router would take over and that I would retain access to the web servers through the secondary LVS router, and this is exactly the behavior I observed.

Note that when I used ruptime as the load monitor, I faced a lot of issues, including unavailability of the websites, when the primary LVS router was down; when I switched to simple rup, however, everything went fine, and I was able to access the web application through the web servers even with the primary LVS router down.

Summary

Because Piranha has been adopted as the IP load-balancing software for Red Hat Enterprise Linux, you can look forward to increased maturity and reliability in future Piranha versions. Although some performance issues still exist, this load-balancing software has a lot of potential and is well worth exploring by system administrators.