Let Them Eat Cloud

The consensus, at least from the myriad surveys, studies, and research reports, seems to be that cloud as a model is the right answer; it’s just the location that’s problematic for most organizations.

Organizations aren’t ignoring reality; they know there are real benefits associated with cloud computing. But they aren’t yet – and may never be – willing to give up control. And there are good reasons to maintain that control, from security to accountability to agility. 

But the “people” still want the benefits of cloud, so the question is: how do we put the power of (cloud | elastic | on-demand) computing into the hands of the people who will benefit from it without requiring that they relocate to a new address “in the cloud”?

The problem is that all the cloud providers have the secret sauce to their efficient, on-demand infrastructures locked up in the palace. They certainly aren’t going to – and shouldn’t really be expected to – reveal those secrets. It’s part of their competitive advantage, after all.

Unlike the French back in 1789 – who decided that if the nobility wasn’t going to share their cake then, well, they’d just revolt, execute them, and take the cake themselves – there’s no way you can “force” cloud providers to hand over their secret sauce. You can revolt, of course, but such a revolution will be digital, not physical, and it’s not really going to change the way providers do business.


EAT THEIR CAKE AND HAVE IT TOO

The problem isn’t necessarily that enterprises don’t want to use the cloud at all. In fact, many organizations are using the cloud or see potential use for the cloud, just not for every application for which they are responsible. Some applications will invariably end up in the cloud while others remain tethered to the local data center for years to come due to integration or security concerns, or just the inherent difficulty in moving something like a COBOL application on IBM big iron into the cloud. Yes, such applications still exist. Yes, they still run businesses. Yes, I know it’s crazy but it works for them and trying to get them to “modernize” is like trying to convince your grandmother she needs an iPhone.

For applications that have been – or will be – moved to the cloud, that’s all there is to it: the benefits come with the move. But for those “left behind,” for which you’d really like the same benefits of an on-demand, elastic infrastructure so that you’re not wasting compute resources, you need a way to move from a fairly static network and application network infrastructure to something a whole lot more dynamic.

You’ve probably already invested in a virtualization technology. That’s the easy part. The harder part is implementing the automation and intelligent provisioning necessary to maximize utilization of compute resources across the data center, and managing the volatility that comes from moving resources around in a way that optimizes the data center. This is the “secret sauce” part.
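
To make that concrete, here’s a minimal sketch in Python of the kind of decision logic that secret sauce has to encode: watch utilization across the running instances of an application and decide when to spin another up or take one down. The thresholds, instance names, and readings below are hypothetical, and in practice this logic lives inside whatever management layer you end up using rather than in a standalone script.

```python
# Minimal sketch of utilization-driven scale decisions. The instance names,
# readings, and thresholds are hypothetical; in practice this logic lives in
# whatever management layer acts as the authoritative source.

from dataclasses import dataclass
from statistics import mean


@dataclass
class ScalePolicy:
    scale_up_at: float = 0.75    # aggregate utilization that triggers a new instance
    scale_down_at: float = 0.30  # aggregate utilization that allows reclaiming one
    min_instances: int = 2       # never shrink below this floor


def scale_decision(utilization_by_instance: dict, policy: ScalePolicy) -> str:
    """Return 'up', 'down', or 'hold' based on aggregate utilization."""
    if not utilization_by_instance:
        return "up"  # nothing running yet; provision the first instance
    aggregate = mean(utilization_by_instance.values())
    if aggregate >= policy.scale_up_at:
        return "up"
    if (aggregate <= policy.scale_down_at
            and len(utilization_by_instance) > policy.min_instances):
        return "down"
    return "hold"


# Example: three instances of the same application, two of them running hot.
readings = {"app-vm-01": 0.82, "app-vm-02": 0.71, "app-vm-03": 0.88}
print(scale_decision(readings, ScalePolicy()))  # -> up
```

The arithmetic is the easy part; the hard part is wiring decisions like this into the infrastructure so they actually take effect, which is what the steps below are about.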

What you still need to do:

  1. Normalize storage. One of the things we forget is that in an environment where applications can be deployed at will on any one of X physical servers, we either (a) need to keep a copy of the virtual image on every physical server or (b) need a consistent method of access from each physical server so the image can be loaded and executed. As (a) is a management nightmare and could, if you have enough applications, use far more disk space than is reasonable, you’ll want to go with (b). This means implementing a storage layer in your architecture that is normalized – that is, access from any physical machine is consistent. Storage/file virtualization is an excellent method of implementing that storage layer and providing that consistency, and it also happens to make more efficient use of your storage capacity.
  2. Delegate authority. If you aren’t going to be provisioning and de-provisioning manually then something – some system, some application, some device – needs to be the authoritative source of these “events”. This could be VMware, Microsoft, a custom application, a third-party solution, etc… Whatever has the ability to interrogate and direct action based on specific resource conditions across the infrastructure – servers, network, application network, security – is a good place to look for this authoritative source.
  3. Prepare the infrastructure. This may be more or less difficult depending on the level of integration that exists between the authoritative source and the infrastructure. The infrastructure needs to be prepared to provide feedback to, and take direction from, the source of authority in the virtual infrastructure. For example, if the authority “knows” that a particular application is nearing capacity, it may (if so configured) decide to spin up another instance. Doing so kicks off an entire chain of events that includes assignment of an IP address, activation of security policies, and recognition by the application delivery network that a new instance is available and should be included in future application routing decisions (a sketch of this chain of events follows the list).

    This is the “integration”, the “collaboration”, the “connectivity intelligence” we talk about with Infrastructure 2.0. Many of the moving parts are already capable of integrating – and already integrated – with virtual infrastructure management offerings, and they both give and take feedback from such an authoritative source when making decisions about routing, application switching, user access, etc… in real time. If the integration with the authoritative source you choose does not exist, then you have a few options:
    1. Build/acquire a different source of authority. One that most of the infrastructure does integrate with.
    2. Invest in a different infrastructure solution. One that does integrate with a wide variety of virtual infrastructure management systems and that is likely to continue to integrate with systems in the future. Consider new options, too. For example, HP ProCurve ONE Infrastructure is specifically designed for just such an environment. It may be time to invest in a new infrastructure, one that is capable of seeing you through the changes that are coming (and they are coming, do not doubt that) in the near and far future.
    3. Wait. Yes, this is always an option. If you explore what it would take and decide it’s too costly, the technology is too immature, or it’s just going to take too long right now you can always stop. There’s no time limit on migrating from a static architecture to a dynamic one. No one’s going to make fun of you because you decided to wait. It’s your data center, if it’s just not the right time then it’s not the right time. Period. Eventually the system management tools will exist that handle most of this for you, so perhaps waiting is the right option for your organization.
  4. Define policies. This means more than just the traditional network, security, and application performance policies that are a normal part of a deployment. It also means defining thresholds on compute resources and utilization, and determining at what point it is necessary to spin up – or down – application images. One of the things you’ll need to know is how long it takes to spin up an application instance. If it takes five minutes, you’ll need to tweak the provisioning policies to ensure that the “spin up” process starts before you run out of capacity, such that existing allocated resources can handle the load until that new instance is online (the sketch following this list factors this lead time into its trigger).

    This process is one of ongoing tweaking and modification. You’re unlikely to “get it perfect” the first time, and you’ll need to evaluate how well these policies execute on an ongoing basis until you – and business stakeholders – are satisfied with the results. This is the reason visibility is so important in a virtualized infrastructure: you need to be able to see and understand the flow of traffic and data, and how the execution of policies affects everything from availability to security to performance, in order to optimize them in a way that makes sense for your infrastructure, application, and business needs.
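
To tie the steps above together, here is a hedged sketch of that chain of events: a trigger that fires early enough to cover the spin-up lead time from step 4, and a handler that boots the image off the normalized storage layer from step 1, assigns an IP address, activates security policies, and registers the instance with the application delivery network. Every class and path here (Hypervisor, IPAM, SecurityPolicyEngine, ADCPool, /shared/images/legacy-app.img) is a hypothetical placeholder, not a real product API; the real calls come from whichever authoritative source and infrastructure components you choose.

```python
# Hypothetical sketch only: none of these classes map to a real vendor API.
# They stand in for whatever your authoritative source, IPAM, security, and
# application delivery components actually expose.

from dataclasses import dataclass
from itertools import count
from typing import List, Optional

SPIN_UP_SECONDS = 300  # e.g., five minutes to boot and warm a new instance


@dataclass
class Instance:
    name: str
    ip: Optional[str] = None


class Hypervisor:
    """Boots virtual images off the normalized (shared) storage layer."""
    _ids = count(1)

    def boot(self, image_path: str) -> Instance:
        return Instance(name=f"app-vm-{next(self._ids):02d}")


class IPAM:
    """Hands out IP addresses for new instances."""
    _hosts = count(10)

    def allocate(self, name: str) -> str:
        return f"10.0.0.{next(self._hosts)}"


class SecurityPolicyEngine:
    """Activates firewall/access policies for an instance."""

    def apply(self, instance: Instance) -> None:
        print(f"security policy applied to {instance.name} ({instance.ip})")


class ADCPool:
    """The application delivery network's pool of routable instances."""

    def __init__(self) -> None:
        self.members: List[Instance] = []

    def add_member(self, instance: Instance) -> None:
        self.members.append(instance)  # now eligible for routing decisions


def should_scale_up(utilization: float, growth_per_second: float,
                    capacity: float = 1.0) -> bool:
    """Trigger early: will load exceed capacity before a new instance is ready?"""
    projected = utilization + growth_per_second * SPIN_UP_SECONDS
    return projected >= capacity


def scale_up(image_path: str, hv: Hypervisor, ipam: IPAM,
             security: SecurityPolicyEngine, pool: ADCPool) -> Instance:
    """Walk the chain of events for bringing one new instance online."""
    instance = hv.boot(image_path)              # spin up from the shared image store
    instance.ip = ipam.allocate(instance.name)  # assign an IP address
    security.apply(instance)                    # activate security policies
    pool.add_member(instance)                   # include in future routing decisions
    return instance


# Usage: utilization is at 80% and climbing 0.1% per second; with a five-minute
# spin-up, load is projected to cross capacity before a new instance is ready.
if should_scale_up(utilization=0.80, growth_per_second=0.001):
    scale_up("/shared/images/legacy-app.img", Hypervisor(), IPAM(),
             SecurityPolicyEngine(), ADCPool())
```

In a real deployment this would be event-driven rather than a script, but the point stands: each link in the chain – storage, address, security, delivery – has to be able to take direction from the authority for its decisions to actually mean something.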

THE BEST OF BOTH WORLDS

What we are likely to see in the future is a hybrid model of computing; one in which organizations take advantage of cloud computing models both internally and externally as befits the needs of their organization and applications. The dynamic infrastructure revolution is about ensuring you have the means to support a cloud model internally such that you can make the decision on whether any given application should reside “out there” or “in here”. The dynamic infrastructure revolution is about realizing the benefits of a cloud computing model internally as well as externally, so you don’t have to sacrifice performance, or reliability, or security just to reduce costs. The dynamic infrastructure revolution is about a change in the way we view network and application network infrastructure by elevating infrastructure to a first-class citizen in the overall architecture; one that actively participates in the process of delivering applications and provides value through that collaboration.

The dynamic infrastructure revolution is about a change in the way we think about networks. Are they dumb pipes that make routing and security and application delivery decisions in a vacuum or are they intelligent, dynamic partners that provide real value to both applications and the people managing them?

The dynamic infrastructure revolution is not about removing the power of the cloud, it’s about giving that power to the people, too, so both can be leveraged in a way that maximizes the efficiency of all applications, modern and legacy, web and client-server, virtualized and physical.

 
