Technical Article
How do you get the benefits of shared resources in a private cloud?
August 17, 2009 by Lori MacVittie

I was recording a podcast last week on the subject of cloud with an emphasis on security, and of course we talked in general about cloud and definitions. During the discussion the subject of “private cloud” computing was raised, and one of the participants asked a very good question: some of the core benefits of cloud computing come from shared resources. In a private cloud, where does the sharing of resources come from?

I had to stop and think about that one for a second, because it’s not something I’d really thought about before. But it was a valid point; without sharing of resources, the reduction in operating costs is not as easily realized. Yet even in an enterprise data center there is a lot more sharing that could be going on than perhaps people realize.

SHARING in the ENTERPRISE

There are a plethora of ways in which sharing of resources can be accomplished in the enterprise. That’s because there are just as many divisions within an organization for which resources are often dedicated as there are outside the organization. Sometimes the separation is maintained only in the financial ledger, but just as frequently it manifests itself physically in the data center as dedicated resources: individual initiatives, departmental-level applications, lines of business, subsidiaries, and organizations absorbed – mostly – via mergers and acquisitions. Each of these “entities” can – and often does – have its own budget and thus dedicated resources.
Some physical resources in the data center are dedicated to specific projects, departments, or lines of business, and it is often the case that the stakeholders of applications deployed on those resources “do not play well with others”: they aren’t about to compromise the integrity and performance of their application by sharing perfectly good compute resources with other folks across the organization. Thus it is perfectly reasonable to believe there are quite a few “dedicated” resources in any large data center that could be shared across the organization. And given chargeback and project portfolio management methods, this results in savings in much the same manner as it would were the applications deployed to a public cloud.

But a good deal of compute resources also goes to waste in the data center due to constraints placed upon hardware utilization by organizational operating policies. Many organizations still limit the total utilization of resources on any given machine to somewhere between 60% and 80%. Beyond that, administrators get nervous and begin thinking about deploying a second machine from which resources can be utilized. This is often out of consideration for performance and a fear of over-provisioning that could result in the dreaded “d” word: downtime.

Cloud computing models, however, are supposed to ensure availability and scalability through on-demand provisioning of resources. Thus if a single instance of an application begins to perform poorly or approaches capacity limits, another instance should be provisioned. The models themselves assume full utilization of all compute resources across available hardware, which means those pesky utilization limits should disappear. Imagine if you had twenty or thirty servers all running at 60% utilization that were suddenly freed to run up to 90% (or higher)?
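The arithmetic behind that claim can be sketched in a few lines. This is a back-of-the-envelope illustration using the hypothetical numbers from the paragraph above (a 60% utilization cap raised to 90%), not a capacity-planning tool:

```python
def freed_capacity(num_servers: int, old_cap: float, new_cap: float) -> float:
    """Extra capacity reclaimed by raising the utilization cap,
    expressed in 'whole server' equivalents."""
    return num_servers * (new_cap - old_cap)

# 20-30 servers, each freed from a 60% cap to a 90% cap,
# reclaim 30% of a server apiece: 6-9 servers' worth in total.
for n in (20, 30):
    extra = freed_capacity(n, old_cap=0.60, new_cap=0.90)
    print(f"{n} servers: ~{extra:.0f} additional servers' worth of capacity")
```

In other words, the gain scales linearly with both the size of the server pool and the headroom you are willing to reclaim, which is why even a modest change in utilization policy frees a meaningful amount of shareable capacity.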
That’s like gaining 600–900% of a single server’s capacity back in the data center – 6 to 9 additional servers. The increase in utilization offers the ability to share resources that otherwise sat idle in the data center.

INCREASING VM DENSITY

If you need even more resources available to share across the organization, then it’s necessary to increase the density of virtual machines within the data center. Instead of a 5:1 VM-to-physical-server ratio you might want to try for 7:1 or 8:1. To do that, you’re going to have to tune those virtual servers and ensure they are as efficient as possible so you don’t compromise application availability or performance. Sounds harder than it is, trust me.

The same technology – unified application delivery – that offloads compute-intensive operations from physical servers can do the same for virtual machines, because what the solutions are really doing in the former case is optimizing the application, not the physical server. The offload techniques that provide such huge improvements in server efficiency come from optimizing applications and the network stack, both of which are not tied to the physical hardware but are peculiar to the operating system and/or the application or web server on which an application is deployed. By optimizing the heck out of those, the benefits of offload technologies can be applied to all servers, virtual or physical. That means lower utilization of resources on a per-virtual-machine basis, which allows an organization to increase the VM density in its data center and frees up resources across physical servers that can be “shared” by the entire organization.

CHANGE ATTITUDES AND ARCHITECTURES

The hardest thing about sharing resources in a private cloud implementation is going to be changing the attitudes of business stakeholders toward the sharing of resources.
IT will have to assure those stakeholders that the sharing of resources will not adversely affect the performance of the applications for which they are responsible. IT will also need to prove to business stakeholders that the resulting architecture may actually lower the cost of deploying new applications in the data center, because they’ll only be “paying” (at least on paper and in accounting ledgers) for what they actually use rather than what is available.

By sharing compute resources across all business entities in the data center, organizations can, in fact, realize the benefits of cloud computing models that come from the sharing of systems. It may take a bit more thought about which solutions are deployed as a foundation for that cloud computing model, but with the right solutions – ones that enable greater efficiencies and higher VM densities – the sharing of resources in a private cloud computing implementation can certainly be achieved.