Yesterday I blogged about how the lack of planning and scalable provisioning of any kind can cause major problems when traffic spikes hit. Today let's look at how companies that are planning for spikes are preparing, even if their preparations aren't complete.

Rich Miller had an excellent post-election blog post a few weeks ago on Data Center Knowledge about sites scaling up for election traffic. As he points out in a later post, traffic hit record levels through Akamai's CDN on election night. Some companies adequately planned for the burst; others didn't. Spike management isn't new, but we deal with massively larger amounts of traffic than we have in the past, and our traffic usage is different. An election that everyone is watching is an excellent case study for these new traffic patterns. Me, I was sitting in front of the TV on election night with my laptop open to MSNBC and Twitter, and CNN mobile on the iPhone (primarily b/c I enjoyed seeing all those 404 and 500 errors showing up on CNN mobile; I know, I'm evil :) ). And I'm guessing this was the norm for people who, like me, use the Internet as their primary news source. The company responses that Rich covers, from those that did plan for the election spike and anticipated this flood of traffic, are interesting to me on two fronts:

  1. The lack of the V-word: Surprisingly in this day and age, none of the companies interviewed said they were relying on any virtualization solutions to scale for their traffic. All the remedies involved physical servers and physical space in a data center or with a hosting company. But with all the hype (and b/c it's all I think about all day), I expected to see something about VMs or virtual storage as part of their spike management plans. On one hand, this is encouraging: yes, the world can still spin without VMware or Microsoft virtual platforms. On the other hand, though, the election should serve as a perfect use case for provisioning and scaling with tools like virtual machines. This election is the best example I can imagine for "elastic computing," and I'm surprised it wasn't first in responses from these companies. The ability to provision up and de-provision down as needed based on real-time, immediate traffic demand is the long-term bread-and-butter for virtual platforms; companies like BlueLock and Joyent know this today and have built virtual hosting solutions around provisioning scale for both infrastructure and applications. So why not use the virtual tools available today as part of your scaling and provisioning plans, rather than planning for a spike by pre-ordering batches of servers and waiting weeks for them to come online?
  2. Focus on the Apps: I have to say it warms my heart anytime someone mentions applications in the data center -- I'm a softie for those darned apps! :) All of the examples in his post were customers expecting an increased need for their application: a political blog, a CDN that hosts political websites, Twitter, etc. Their concern isn't with scaling core infrastructure (switches, routers, cables, trunks, etc.), it's with scaling the application platforms (servers, OSes, web servers, etc.); again, props to those hosting providers who have already built out virtual infrastructure platforms that allow VM and application scaling and provisioning as needed.
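To make the "provision up, de-provision down" idea concrete, here's a minimal sketch of the kind of threshold-based elastic provisioning logic a virtual hosting platform could run against real-time traffic. Everything here is hypothetical -- the function name, the per-instance capacity, and the thresholds are illustrative assumptions, not any vendor's actual API.

```python
import math

def desired_instances(requests_per_sec: float,
                      capacity_per_instance: float = 500.0,
                      min_instances: int = 2,
                      max_instances: int = 50,
                      headroom: float = 1.25) -> int:
    """Return how many VM instances to run for the current request rate.

    All parameters are illustrative: capacity_per_instance is how many
    requests/sec one VM can serve, and headroom over-provisions slightly
    so a sudden spike doesn't outrun the time new VMs take to boot.
    """
    needed = requests_per_sec * headroom / capacity_per_instance
    # Round up to whole instances, then clamp to the allowed fleet size.
    count = math.ceil(needed)
    return max(min_instances, min(max_instances, count))
```

On a quiet night at 100 requests/sec this keeps the fleet at the floor of 2 VMs; at an election-night 10,000 requests/sec it asks for 25; past the cap it stays pinned at 50. The point isn't the arithmetic -- it's that a control loop evaluating this every minute replaces weeks of pre-ordering physical servers.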

The phrase "old school" kept popping up in my head as I was reading the post. Are these companies sticking with what works, what's tried and true, by provisioning physical servers well in advance of the expected spike? Or does this show that virtual platforms are still in their infancy, and that companies that know how to plan for and manage massive amounts of application traffic don't yet trust virtual solutions? I'd lean somewhere in the middle: until virtual platforms and dynamic provisioning prove themselves, we'll continue to see dynamic provisioning in the VDC as more of a test case than a real-world use case.