Cloud Load Balancing Fu for Developers Helps Avoid Scaling Gotchas

If you don’t know how scaling services work in a cloud environment, you may not like the results

One of the benefits of cloud computing, and in particular IaaS (Infrastructure as a Service), is that the infrastructure is, well, a service. It’s abstracted, and that means you don’t need to know a lot about the nitty-gritty details of how it works. Right?

Well, mostly right.

While there’s no reason you should need to know how to configure, say, an F5 BIG-IP load balancing solution when deploying an application with GoGrid, you probably should understand the implications of using the provider’s API to scale with that load balancing solution. If you don’t, you may run into a “gotcha” that leaves you either scratching your head or reaching for your credit card. And don’t think you can sit back worry-free, oh Amazon Web Services customer, because these “gotchas” aren’t peculiar to GoGrid. It turns out AWS’s Elastic Load Balancing (ELB) comes with its own set of oddities and, ultimately, may lead many to the same conclusion cloud proponents have already reached: cloud is really meant to scale stateless applications.

Many of the “problems” developers are running into could be avoided by a combination of more control over the load balancing environment and a basic foundation in load balancing. Not just how load balancing works, which most already understand, but how load balancers work. The problems beginning to show themselves aren’t caused by how traffic is distributed across application instances, or by a misunderstanding of persistence (you may call it affinity or sticky sessions), but by the way load balancers are configured and interact with the nodes (servers) that make up the pools of resources (application instances) they manage.
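The distinction above, between distributing traffic across a pool and pinning a client to one member of it, can be sketched in a few lines of Python. This is a minimal illustration, not any vendor’s API; the `Pool` class and `choose_node` names are hypothetical:

```python
import itertools

class Pool:
    """A load balancer's pool: the set of nodes (application instances)
    behind a single virtual service."""
    def __init__(self, nodes):
        self.nodes = list(nodes)
        self._rr = itertools.cycle(self.nodes)   # round-robin distribution
        self._affinity = {}                      # client id -> node ("sticky" table)

    def choose_node(self, client_id, sticky=False):
        if sticky:
            # Persistence (affinity / sticky sessions): once a client has
            # been assigned a node, every subsequent request from that
            # client goes to the same node.
            if client_id not in self._affinity:
                self._affinity[client_id] = next(self._rr)
            return self._affinity[client_id]
        # Plain distribution: rotate through the pool with no session awareness.
        return next(self._rr)

pool = Pool(["app-1", "app-2", "app-3"])
first = pool.choose_node("10.0.0.5", sticky=True)
second = pool.choose_node("10.0.0.5", sticky=True)
assert first == second  # a sticky client always lands on the same instance
```

The gotchas the article describes live in everything this sketch leaves out: how nodes are added to and removed from the pool, how the affinity table survives (or doesn’t) when the pool changes, and how the provider’s API exposes any of that to you.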


Published May 06, 2010
Version 1.0
