Note: F5 BIG-IP v10 launched earlier this month with 120+ features focused on IT Agility and the Dynamic Data Center. While those features are critical for administrators, they're also relevant to management on up through the CxO, albeit in a different format (technical details vs. applied benefit). For the next few days I'm going to move beyond the typical geeky feature list and talk about how BIG-IP v10 enables IT Agility at the data center level, helping to move your data center from the monolithic, static building it's in today to a dynamic, mobile data center located on-premises, in the cloud, or in a hybrid of both.

F5 BIG-IP v10: IT Agility. Your Way.

The term dynamic data center has been a prevalent, forward-thinking component of IT for the past few years. Like most technology buzzwords that include "dynamic," the dynamic data center means different things to different people. Because the word dynamic has so many meanings in a technical context and rarely means the same thing to different audiences, the phrase often loses substance. On one end of the spectrum, the dynamic data center is nothing more than a marketing phrase used to make people think about how their data center is changing and evolving. It's usually not backed up with much substance beyond "Tomorrow your data center will be different!" with no discussion of how it will change. Using the dynamic data center as a generic term that applies to various business objectives is great for marketing, but it doesn't address the business and technology realities of how the data center is changing. On the other end of the spectrum, the dynamic data center is a completely new computing paradigm with shared yet secure resources providing a dynamic "just-in-time" (JIT) computing model. While the long-term goal is to achieve a true JIT dynamic data center, the reality today lies somewhere in the middle of the spectrum, creating a true data center movement: using readily available technologies to better align IT with the needs of the business.

The Traditional Data Center

In order for there to be a new data center model, there must be an "old" data center model to move away from. In reality, this is where most data centers begin: at the static, physical data center, the result of decades of enterprise computing evolution and built on assumptions about how the data center delivers applications to users. Client-server applications, for example, assumed a farm of physical servers sitting behind a firewall and communicating with users through a traditional load balancer. Physical data centers were built to focus on speeds and feeds, with plumbing designed to handle network congestion and traffic direction. This traditional model is still a tried-and-true architecture.

Historically, enterprises have built out these static, physical data centers from a central location, either by constructing a building themselves or by leasing rack space from a hosting provider. These physical data centers require special considerations, such as protection against natural disasters (earthquake, hurricane, and tornado proofing), safeguarding against massive power runs, and large HVAC systems to maintain a safe operating environment. These data centers also depend on large, single-purpose systems and servers to run applications; they are network-centric, with servers and applications built around network cables, racks, switches, and routers; and they are isolated, self-contained, single-function structures.

But those considerations apply only to the first, or primary, data center. A secondary, redundant data center must also be built or leased, providing a failover location for all application services should the primary become unavailable due to a human, system, or natural failure. In essence, every physical data center requires a geographically removed twin, just in case. Every part of the primary data center needs to be replicated at the secondary location and kept in constant synchronization with the primary.

While this bifurcated data center model works well for high availability, it's an extremely expensive architecture to build and maintain. Capital expenses to create and monitor parity between systems and facilities within each data center, along with operating expenses to manage each location and keep the two in sync, continually drain operating funds. Despite these challenges, the model has been the de facto architecture for redundant system and application distribution. When an application is mission critical, virtually no expense is too high, even if it means maintaining multiple geographically disparate data centers. However, this large capital outlay to build multiple, monolithic application delivery data centers can be an inefficient use of IT budget, consuming dollars for services that sit dormant for days, months, or even years. That high cost of entry for mission-critical applications and services is changing, and new computing models are attempting to do away with the barrier altogether.

Next Up: Optimizing Your Business with and for IT Agility.