How to optimize compute resources in a heterogeneous environment using weight/ratio-based load balancing

Unless you’re starting from scratch your data center is full of physical servers of various and sundry sizes, colors, shapes, and compute resources. And even if you’re starting from scratch and you have beautiful racks of everything the same, it’s not likely to stay that way if for no other reason than, well, hardware moves on at an astonishing rate these days. So you’ve almost certainly got (or will have) a physically heterogeneous environment in terms of hardware compute resources.

When you’re scaling up servers – whether solely to assure availability or for capacity – you will end up with instances running on different servers. Or at least you’d better if availability is part of the equation. Now, in a traditional environment that would cause potential issues, as one of the hallmarks of a highly available architecture is that if the primary server fails the secondary must be able to handle the load. All of it. In a virtualized environment that’s not necessarily the case, as you may be able to simply bring up two or three instances with less capacity to meet demand if you have the physical resources available.

Here’s the catch: your infrastructure needs to understand the capacity of each server (physical or virtual) in order to maximize resources available. Specifically, the load balancing solution – whether a traditional “load balancer” or part of an application delivery controller – must be able to distribute requests based on what resources are available on any given instance. That means if an instance is running on a physical server with fewer total resources available than another instance, the instance with fewer resources should be used less frequently.

It is the fact that data centers are heterogeneous, composed of myriad physical servers of varying capacity, that makes it important for the folks architecting cloud environments to understand what’s going on “under the hood.”


WEIGHTS and MEASURES

Not all servers are created equal, but if you’re consolidating and trying to eke out every last drop of CPU and RAM from your physical hardware to reduce capital expenditures, you might need to get more creative with how you’re distributing your application load.

Using weighted or ratio-based load balancing in a heterogeneous environment offers the convenience and simplicity of traditional, simple load balancing algorithms with an eye toward balancing the differences inherent in physical hardware. Or, potentially in emerging data center models, the differences in virtual instances. Because the limitations of physical hardware necessarily constrain virtual instances, particularly when more than one virtual container is running on the same hardware, it’s important for the solution that distributes requests among application instances to understand that some have more headroom than others, as it were.

In load balancing there are a few “industry standard” old standby algorithms. These algorithms are often also implemented by application server clustering solutions, and should be fairly familiar to network and application folks alike: round robin, least connections, and fastest response time.
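To make the distinction concrete, here’s a minimal sketch of the first two of those standby algorithms. The server names and connection counts are purely illustrative, not tied to any particular product:

```python
import itertools

# Hypothetical pool members; names are illustrative only.
servers = ["app1", "app2", "app3"]

# Round robin: cycle through members in order, ignoring current load.
rr = itertools.cycle(servers)
picks = [next(rr) for _ in range(6)]
# picks → ['app1', 'app2', 'app3', 'app1', 'app2', 'app3']

# Least connections: send the next request to the member with the
# fewest active connections right now (counts here are made up).
active = {"app1": 12, "app2": 3, "app3": 7}
least = min(active, key=active.get)
# least → 'app2'
```

Note that neither selection considers how much capacity each member actually has, which is exactly the gap a ratio-based approach fills.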

In most load balancers there is also a ratio-based algorithm in which “weights” are assigned to each member of a (pool|farm|cluster). Requests are distributed to each member based on that weight, which is really used more like a percentage ratio than anything else.
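A minimal sketch of that ratio-based distribution might look like the following. The pool, member names, and weights are hypothetical; in practice the weights would be derived from each instance’s relative CPU and RAM capacity:

```python
import itertools
from collections import Counter

# Hypothetical pool: weights express relative capacity (a 4:2:1 ratio).
pool = {"big-server": 4, "mid-server": 2, "small-server": 1}

def weighted_cycle(pool):
    """Yield pool members so that, over one full cycle, each member
    is selected in proportion to its assigned weight."""
    expanded = [member for member, weight in pool.items()
                for _ in range(weight)]
    return itertools.cycle(expanded)

lb = weighted_cycle(pool)
one_cycle = [next(lb) for _ in range(sum(pool.values()))]
counts = Counter(one_cycle)
# big-server receives 4 of every 7 requests; small-server only 1.
```

Real load balancers typically interleave the picks more smoothly rather than sending runs of consecutive requests to one member, but the effective ratio over time is the same.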

Using a ratio-based load balancing algorithm in a virtual or cloud environment, in which the hardware and/or virtual containers may have different resource ceilings, affords architects the ability to better distribute requests according to the capacity and health of each instance. Without taking these physical limitations into consideration it is easy to overwhelm one system while leaving another idle, which runs contrary to the concept behind cloud computing and on-demand data centers, in which every ounce of compute resources is used in order to meet capacity.

Obviously there are other (and perhaps better) ways to make decisions on distributing requests when the environment comprises instances of varying capacities and resources. A ratio-based load balancing algorithm is one of the simplest and easiest to implement, but it affords far better use of resources than an algorithm that does not take the physical constraints of disparate systems into consideration.

 
