Concepts of Server Load Balancing
By Pankaj Sharma

Introduction to Load Balancing and Clustering with Liferay:
Liferay is one of the most popular portals in the world due to its impressive ease of use. Load balancing is a technique for distributing load across multiple systems. Server Load Balancing (SLB) can mean many things; for the purposes of this book, it is defined as a process and technology that distributes site traffic among several servers using a network-based device. This device intercepts traffic destined for a site and redirects that traffic to the various servers. The load-balancing process is completely transparent to the end user. Liferay is configured optimally for a multiple-server environment: if one server is not sufficient to serve the high traffic needs of your site, Liferay scales to the size you need. Figure 1-1 shows the simplest representation of SLB.

Apache Server Based Load Balancing:
Load balancing can be added quickly and easily to an existing multiple-Tomcat configuration in order to achieve high availability and build your own web cluster without burning your entire hosting budget. It works as an add-on to your current server solution, sharing the load between your web servers and using a simple script to replicate the data on each server.

A load balancer performs the following functions:

Intercepts network-based traffic (such as web traffic) destined for a site.

Splits the traffic into individual requests and decides which servers receive individual requests.

Maintains a watch on the available servers, ensuring that they are responding to traffic. If they are not, they are taken out of rotation.

Provides redundancy by employing more than one unit in a fail-over scenario.

Offers content-aware distribution, by doing things such as reading URLs, intercepting cookies, and XML parsing.
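The functions above can be sketched in code. The following is a minimal, illustrative round-robin balancer (the server addresses are hypothetical, and real devices perform health checks actively rather than via explicit `mark_down` calls); it shows how requests are split across servers and how a non-responding server is taken out of rotation:

```python
# Minimal sketch of a round-robin load balancer with a server rotation.
# Server addresses are hypothetical; real balancers probe health automatically.

class RoundRobinBalancer:
    def __init__(self, servers):
        self.servers = list(servers)  # servers currently in rotation
        self._next = 0

    def mark_down(self, server):
        """Take a non-responding server out of rotation."""
        if server in self.servers:
            self.servers.remove(server)

    def mark_up(self, server):
        """Put a recovered server back into rotation."""
        if server not in self.servers:
            self.servers.append(server)

    def pick(self):
        """Choose the next server for an incoming request."""
        if not self.servers:
            raise RuntimeError("no servers in rotation")
        server = self.servers[self._next % len(self.servers)]
        self._next += 1
        return server

lb = RoundRobinBalancer(["10.0.0.1", "10.0.0.2", "10.0.0.3"])
first_three = [lb.pick() for _ in range(3)]    # one request to each server
lb.mark_down("10.0.0.2")                       # health check failed
after_failure = [lb.pick() for _ in range(2)]  # only healthy servers chosen
```

The same rotation logic underlies the Apache-based setup described above; the balancer simply front-ends the Tomcat instances instead of generic servers.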
Figure 1-1 Apache server based Load Balancing Concept

The Concept of Architecture Load Balancing:
The objective here is that the solution should distribute the load among the servers in the cluster to provide the best possible response time to the end user. In a typical clustering solution, this involves a load-distribution algorithm — either simple round robin or a more sophisticated algorithm that distributes requests to the servers in the cluster by keeping track of the load and available resources on each server — as shown in Figure 1-2.

Figure 1-2 Apache server based Architecture Load Balancing

Types of Load Balancing:

DNS-Based Load Balancing:
Before SLB was a technology or a viable product, site administrators would (and sometimes still do) employ a load-balancing process known as DNS round robin. DNS round robin uses a function of DNS that allows more than one IP address to be associated with a hostname, distributing traffic more or less evenly to the listed IP addresses. Every DNS entry has what is known as an A record, which maps a hostname to an IP address (such as 208.185.43.202). Usually only one IP address is given for a hostname. Under ISC's DNS server, BIND 8, the DNS entry for www.vegan.net would look like this:

www    IN    A    208.185.43.202

For instance, say you had three web servers with IP addresses of 208.185.43.202, 208.185.43.203, and 208.185.43.204 that you wanted to share the load for the site www.vegan.net.
The configuration in the DNS server for the three IP addresses would look like this:

www    IN    A    208.185.43.202
www    IN    A    208.185.43.203
www    IN    A    208.185.43.204

You can check the effect using a DNS utility known as nslookup, which would show the following for www.vegan.net:

Server: ns1.vegan.net
Address: 198.143.25.15

Name: www.vegan.net
Addresses: 208.185.43.202, 208.185.43.203, 208.185.43.204

The end result is that the traffic destined for www.vegan.net is distributed among the three IP addresses listed, as shown in Figure 1-3.

Figure 1-3 Traffic distribution by DNS-based load balancing

Suppose we have four front-end servers, {FE1, FE2, FE3, FE4}. The user retrieves this list of servers from DNS and then randomly connects to one in the list. In our example, let's assume that the client connects to FE3. Upon connecting, the client presents its SIP URI, from which the front-end server generates a hash. From this hash, the server determines the location of the registrar assigned to that user.

Note: When a user is first enabled on (or moved to) a pool, a hash is generated to determine which front-end server is the primary registration database for the user, along with the order in which the remaining front-end servers will be attempted (as the backup registrar services).

For our example, the user's hash results in {FE4, FE2, FE1, FE3}. This is interpreted as the order in which clients will attempt to register. The client attempts to register with FE3, but because FE3 is not the primary registrar assigned to the user, it redirects the client to FE4 as the correct registrar to connect to. The client then successfully registers with FE4.

SLB has several benefits:

Flexibility: SLB allows servers to be added to or removed from a site at any time, and the effect is immediate. Among other advantages, this allows for the maintenance of any machine, even during peak hours, with little or no impact on the site.
A load balancer can also intelligently direct traffic using cookies, URL parsing, static and dynamic algorithms, and much more.

High availability: SLB can check the status of the available servers, take any non-responding servers out of rotation, and put them back in rotation when they are functioning again. This is automatic, requiring no intervention by an administrator. Also, the load balancers themselves usually come in a redundant configuration, employing more than one unit in case any one unit fails.

Scalability: Since SLB distributes load among many servers, all that is needed to increase the serving power of a site is to add more servers. This can be very economical, since many small- to medium-sized servers can be much less expensive than a few high-end servers. Also, when site load increases, servers can be brought up immediately to handle the increase in traffic. Load balancers started out as PC-based devices, and many still are, but load-balancing functions have now found their way into switches and routers as well.

Firewall Load Balancing:
Firewall load balancing balances traffic flows to one or more firewall farms. A firewall farm is a group of firewalls that are connected in parallel, with their "inside" (protected) and "outside" (unprotected) interfaces connected to common network segments. Firewall load balancing requires a load-balancing device (IOS SLB) to be connected to each side of the firewall farm. A firewall farm with "inside" and "outside" interfaces would therefore require two load-balancing devices, each making sure that traffic flows are directed toward the same firewall for the duration of the connection. Firewall load balancing is performed by computing a hash value of each new traffic flow (source and destination IP addresses and ports). This is called a route lookup.
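The route lookup just described can be sketched as follows. This is an illustration only: the firewall addresses are hypothetical, and `zlib.crc32` stands in for the real device's hash function, which the text does not specify.

```python
import zlib

# Hypothetical firewall farm; addresses are illustrative.
FIREWALL_FARM = ["192.168.1.1", "192.168.1.2", "192.168.1.3"]

def pick_firewall(src_ip, dst_ip, src_port, dst_port):
    """Hash the flow identifiers (source/destination IPs and ports)
    so that every packet of a given connection is directed to the
    same firewall in the farm for the duration of the connection."""
    key = f"{src_ip}:{src_port}->{dst_ip}:{dst_port}".encode()
    return FIREWALL_FARM[zlib.crc32(key) % len(FIREWALL_FARM)]

# The same flow always maps to the same firewall...
a = pick_firewall("10.0.0.5", "203.0.113.9", 40000, 443)
b = pick_firewall("10.0.0.5", "203.0.113.9", 40000, 443)
# ...while different flows may be spread across the farm.
c = pick_firewall("10.0.0.6", "203.0.113.9", 40001, 443)
```

Because the hash depends only on the flow's addresses and ports, no per-connection state is needed to keep a connection pinned to one firewall.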
The firewall load-balancing device then masquerades as the IP address for all firewalls in the firewall farm. Firewall load balancing can detect a firewall failure by monitoring probe activity. HSRP can be used to provide "stateless backup" redundancy for multiple firewall load-balancing devices: if one device fails, a redundant device can take over its function. Multiple firewall load-balancing devices can also use "stateful backup" for redundancy, in which backup devices keep state information dynamically and can take over immediately if a failure occurs.

Figure 1-4 Firewall Load-Balancing Concept

What About Load Balancing:
First, let's be clear on what "load balancing" is: a technique to distribute workload across resources. It is but one component in a high-availability cluster. In Liferay Portal's case, we are load balancing the volume of requests across multiple app servers, which may or may not be on physically separate hardware. Initially, this may seem sufficient, until you realize some of the components that the portal uses.

What Is mod_jk:
mod_jk is a replacement for the elderly mod_jserv. It is a completely new Tomcat-Apache plug-in that handles the communication between Tomcat and Apache. The mod_jk connector is an Apache HTTPD module that allows HTTPD to communicate with Apache Tomcat instances over the AJP protocol. The module is used in conjunction with Tomcat's AJP Connector component, as shown in Figure 1-5. With Apache's newer mod_proxy_ajp module, the configuration is much easier and more consistent:

ProxyPass /servlets ajp://tc.example.com:8089

This approach is easier when Apache needs to proxy both HTTP and AJP, and it leverages improvements in the proxy module.
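In a mod_jk-style cluster, sticky sessions are commonly implemented by Tomcat appending its jvmRoute name to the session ID, so the front end can route follow-up requests back to the worker that created the session. A minimal sketch of that routing decision (the worker names, addresses, and session IDs here are hypothetical):

```python
# Sketch of mod_jk-style sticky-session routing.
# Worker names, hosts, and session IDs are hypothetical.
WORKERS = {
    "worker1": "tc1.example.com:8009",
    "worker2": "tc2.example.com:8009",
}

def route_request(session_id, fallback="worker1"):
    """Tomcat appends its jvmRoute to the session ID
    (e.g. 'ABC123.worker2'); route back to that worker.
    Requests with no or unknown route get the fallback worker."""
    if session_id and "." in session_id:
        route = session_id.rsplit(".", 1)[1]
        if route in WORKERS:
            return route
    return fallback

sticky = route_request("ABC123.worker2")  # follows the session's route
fresh = route_request(None)               # new session: default worker
```

In a real deployment the fallback would itself be chosen by the balancer's distribution algorithm rather than a fixed default.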
Figure 1-5 Apache + mod_jk Balancing Concept

About Connectors:
Apache Tomcat uses Connector components to allow communication between a Tomcat instance and another party, such as a browser, server, or another Tomcat instance that is part of the same network.
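As an illustration, the AJP Connector that mod_jk communicates with is declared in Tomcat's conf/server.xml. The port and attribute values below are typical defaults, not values taken from this book's setup:

```xml
<!-- Illustrative AJP connector declaration in conf/server.xml -->
<Connector protocol="AJP/1.3" port="8009" redirectPort="8443" />
```

Each Tomcat worker in the cluster would expose such a connector, and the Apache front end would point its mod_jk workers (or ProxyPass targets) at these ports.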