Sharing is core to a successful cloud implementation but not something every organization does well. How do you encourage business stakeholders to play well with others?


In most definitions of “cloud computing” there lies a central, key component: shared resources. It is the sharing of resources, in fact, through which many of the benefits of reduced operating expenses are supposed to be achieved. It is the sharing of resources – or perceived inability to share resources – that confounds some folks when discussing private cloud, although there are several ways in which sharing of resources can certainly be achieved even internally.

The problem for both public and private cloud, however, is not with the ability to share resources but the willingness of stakeholders to do so.

Many an enterprise application project includes in its budget its own hardware. This is not because it’s the way it’s always been done; this is because stakeholders are extremely jealous of their compute resources and fully cognizant of the potential impact of sharing those resources. When resources are shared, whether via a cloud architecture or not, you can rest assured that this sharing is the first recipient of the “finger of blame” when performance or availability issues arise.

Application stakeholders do not, in general, play well with others.


THERE’S ALWAYS TRUTH IN MYTHS AND LEGENDS

Even the most unbelievable myth or legend has its roots in truth. Some kernel of reality is present in every fairy tale, though you may have to dig fairly deep to find it. The same is true of sharing resources, particularly in a virtualized environment. While virtual machines are quite capable of restricting the total amount of resources available to any given application residing on a specific hardware platform, the contention for resources lies deeper, within the host operating system.

Virtualization is a layer of abstraction over the host operating system. It is the host operating system that controls access to and parcels out the hardware resources required by each guest operating system. When two or more applications running in virtual machines attempt to access memory or CPU resources, the host operating system determines which one goes first. At the deepest levels of the hardware there is no such thing as “sharing” a resource; an application either has access to it at any given moment or it doesn’t. A complex system of locks, paging, and scheduling algorithms controls those resources, and therefore the “myth” that one virtualized application may be responsible for the poor performance of another has some kernel of truth to it.
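That contention is, in fact, observable from inside a guest. Here’s a minimal sketch – assuming a Linux guest, since it reads /proc/stat – that samples the “steal” counter: time the hypervisor spent running someone else on the CPU your virtual machine expected to use. A climbing steal percentage is concrete evidence that a co-resident workload, not your application, is consuming the cycles.

```python
# Minimal sketch (Linux guests only): sample the "steal" counter from /proc/stat
# to estimate how much CPU time the hypervisor gave to co-resident workloads.

import time

def cpu_times():
    """Return (total_jiffies, steal_jiffies) from the aggregate 'cpu' line."""
    with open("/proc/stat") as f:
        values = [int(v) for v in f.readline().split()[1:]]  # drop the 'cpu' label
    steal = values[7] if len(values) > 7 else 0   # steal counter (kernel >= 2.6.11)
    return sum(values[:8]), steal                 # user..steal = total time

def steal_percent(interval=5.0):
    """Percentage of CPU time stolen by the hypervisor over the interval."""
    total1, steal1 = cpu_times()
    time.sleep(interval)
    total2, steal2 = cpu_times()
    delta_total = total2 - total1
    return 100.0 * (steal2 - steal1) / delta_total if delta_total else 0.0

if __name__ == "__main__":
    print(f"CPU steal over the last 5s: {steal_percent():.2f}%")
```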

Regardless of reality, however, the myth – or, like most fairy tale monsters, the mere fear that the myth might be true – will continue to drive stakeholders’ reluctance to share resources with other constituents. It may be the case that the cost of carrying the overhead of idle resources is justified by the ability – as they see it – to maintain application performance and thus meet service level agreements with their constituents. It may be the case that their risk management equations put more weight on the possibility that other applications will interfere with the performance of their own. It may be the case that they honestly believe they’ll need those extra resources to scale vertically in the future so they don’t have to scale horizontally and incur the capital expenses associated with such an endeavor.

Whatever the underlying reason, application stakeholders often do not play well with others and thus are highly reluctant to share resources. This makes implementing or using a cloud of any kind quite the challenge.


EFFICIENCY ASSURANCE

One of the ways you can alleviate the concerns of those who stand firmly against sharing their toys with others is to provide the means by which allocated resources are utilized in the most efficient manner possible. That means eliminating shared resources as an excuse for poor performance. Making virtual machines more efficient is the first line of defense; adding acceleration and optimization where necessary is the second.

According to Aberdeen Group’s “Application Performance Management” report from June 2008, “58% of organizations surveyed are unsatisfied with the performance of applications they are currently using.” Worse, “50% of organizations are reporting that issues with application performance are causing lost revenue opportunities.”

Aberdeen Group research further indicates a very real impact on application performance from virtualization: the data shows a degradation in application response times, not an improvement, in virtualized environments. That is unsurprising if you consider that virtualization adds another layer of infrastructure between the application and the end user. There will be an impact from that additional layer that needs to be compensated for elsewhere. The benefits of virtualization are such that even a negative impact on performance is unlikely to scare IT and business stakeholders away from the technology and, indeed, if they are considering a private cloud implementation down the road, one of the first logical steps is to virtualize the application infrastructure.

So what’s needed, then, is to compensate in the infrastructure – the network or the application network, or both – for the inevitable degradation in performance, however slight.

  1. Improve VM efficiency by offloading compute-intensive tasks to the application infrastructure. Reducing the amount of processing a virtual machine must perform to accomplish tasks like communication (TCP management) and transport layer security (SSL) – each of which can consume up to 30% of a virtual machine’s resources on its own – frees up resources that can be applied to the task at hand: application logic and execution.
  2. Add appropriate acceleration and optimization technologies to improve end-user response times and reduce congestion on the network. Compression, caching, and network-based optimizations reduce the amount of data being transferred and can improve application performance dramatically. Leverage existing infrastructure (application delivery controllers) capable of being extended to reduce the impact of point-solution sprawl.
  3. Ensure maximum visibility into the performance and resource consumption of applications and virtual machines. Monitor, report, tweak if necessary. Wash. Rinse. Repeat. Make sure you’ve got the tools in place to monitor and report, consistently, on resource utilization, response time, and capacity on a per-application basis. Use historical trending of this data to understand which application requires what resources and when, and schedule accordingly to maintain the best performance scenario possible at all times. A minimal sketch of this kind of per-application monitoring follows this list.
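As a rough illustration of that third point, here’s a minimal sketch of per-application monitoring in Python. It assumes the third-party psutil library is available (it is not part of the standard library) and uses process names as a stand-in for “applications”; a real deployment would map processes to applications however your environment defines them and feed the data into whatever reporting and trending tooling you already run.

```python
# Minimal sketch: sample CPU and memory per process with psutil, roll the numbers
# up by process name, and append timestamped rows to a CSV for historical trending.

import csv
import time
from collections import defaultdict

import psutil  # third-party: pip install psutil

def sample_by_app(interval=1.0):
    """Return {process_name: [cpu_percent, rss_bytes]} measured over one interval."""
    procs = list(psutil.process_iter(["name"]))
    for p in procs:
        try:
            p.cpu_percent(None)            # prime the per-process CPU counter
        except psutil.Error:
            pass
    time.sleep(interval)                    # measure over the interval
    usage = defaultdict(lambda: [0.0, 0])
    for p in procs:
        try:
            name = p.info["name"] or "unknown"
            usage[name][0] += p.cpu_percent(None)
            usage[name][1] += p.memory_info().rss
        except psutil.Error:                # process exited or access denied mid-sample
            continue
    return usage

def record(path="app_usage.csv"):
    """Append one row per application to a CSV: timestamp, name, cpu %, RSS bytes."""
    with open(path, "a", newline="") as f:
        writer = csv.writer(f)
        ts = time.strftime("%Y-%m-%d %H:%M:%S")
        for name, (cpu, rss) in sorted(sample_by_app().items()):
            writer.writerow([ts, name, round(cpu, 1), rss])

if __name__ == "__main__":
    record()
```

Run it from cron or any other scheduler at a regular interval and the resulting CSV becomes the historical trend data you need to see which application consumes what, and when.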

Using infrastructure components capable of improving the efficiency of the VMs will aid in compensating for the degradation and contention inherent in increasing virtual machine densities. Using infrastructure components that provide web application and network optimization and acceleration functions will further improve application performance.

The biggest challenge for private – and public – cloud implementations will be to dispel the fears of business stakeholders that sharing resources will result in a degradation of application performance or, potentially, downtime. By evaluating the options available to you to mitigate the very real impacts on performance from virtualization and sharing – and then implementing those options – you can assuage the concerns of business stakeholders and effectively remove them as a reason why you can’t move ahead with virtualization and cloud computing architectures.

 

