Cloud computing management functionality and standards are right now laser-focused on virtual machines, and most APIs include the ability to stop, start, launch, etc., at that level of the infrastructure. This is because the application is still insulated by its virtualized environment. The “depth” of management and standards efforts today stops at the hard shell of the virtualization layer and leaves the soft, chewy application center alone. This means nothing is really all that different for developers. But it could, and some might argue should, be different.


The development of a web application for a cloud computing environment today is really no different than the development of an application destined for deployment in a traditional data center. If the developers or architects are network-savvy, they know they need to worry about a few environment-specific conditions like persistence and stateful load balancing, but other than that they don’t have to change how they develop the application.

That’s because when they complete the application and deploy it into a web-application server, the entire environment – OS, application server, and application – will be packaged up into a neat virtual image and shipped out. There’s nothing more they need to do. Nothing different than it was before cloud computing appeared on the scene.

The focus in cloud computing environments, as evinced by a perusal of APIs offered up to standards organizations by a variety of cloud computing providers – Sun, Yahoo! – and organizations like OCCI, stops at the virtualization layer. Beyond the virtual machine there is no mention of application resources, no mention of how those might be managed or provisioned or priced. It is the virtual machine layer at which the buck stops.

Virtual machines virtualize the operating system; a complete environment. They do not virtualize an application, nor even an application server environment. Indeed, one could successfully argue that web application servers have long virtualized applications through the automated provisioning and management of isolated, virtual instances of applications. At least enterprise-class web application servers have; the story is very different when you look at scripting-based languages like ASP, PHP, and Ruby and their deployment on web servers, where isolation is neither provided for nor considered.


It is likely at the web/application server tier that virtualization could make the biggest impact, and thus it is likely in the PaaS (Platform as a Service) market that we will see the greatest advances in the virtualization of applications.

Consider that rather than provisioning virtual machines you provision applications. I know, quite the concept, isn’t it? But at the core of what we’re trying to do isn’t that really what we want? To deploy an application into an environment? So let’s pretend that rather than moving around and provisioning and releasing virtual machine images we are actually working at the layer that’s most important to us: at the application layer.

Imagine a web/application server environment that acts much as we expect virtual machines to act today: it is the application server that is responsible for metering and billing of compute resources. Because the web/application server actually knows exactly what each application has consumed, providers would be able not only to claim a “pay for what you use” model but actually to implement one, rather than a “pay for how many virtual machines you use, regardless of how much compute power you actually consume” model.
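As a sketch of what per-application metering inside such a server could look like (all names here – `AppMeter`, the rate, the apps – are invented for illustration, not any real product’s API):

```python
# Illustrative sketch: per-application metering inside a hypothetical
# application server. Names and the billing model are assumptions.
from collections import defaultdict

class AppMeter:
    """Accumulates compute consumption per deployed application."""
    def __init__(self, rate_per_cpu_second):
        self.rate = rate_per_cpu_second
        self.usage = defaultdict(float)  # app name -> CPU-seconds consumed

    def record(self, app, cpu_seconds):
        # The server would call this after each request it dispatches,
        # so billing reflects actual consumption, not VM count.
        self.usage[app] += cpu_seconds

    def bill(self, app):
        # Bill strictly for what the application consumed.
        return round(self.usage[app] * self.rate, 2)
```

The point of the sketch is the granularity: the meter is keyed by application, not by virtual machine, so two applications sharing one server are billed for what each actually used.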

The web/application server performs many of the tasks we already associate with management of virtual machines: launch, stop, suspend, provision. Many web/application server platforms are already remotely manageable and provide APIs through which their management functions can be controlled. Web and application server platforms are well-suited to becoming the layer at which we manage compute resources and application management and would certainly provide much more granular control over the environment than do virtual machines.
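Those lifecycle verbs could be modeled at the application layer with something as simple as this sketch (the states and method names are assumptions for illustration, not any shipping management API):

```python
# Illustrative sketch: VM-style lifecycle verbs (launch, stop, suspend)
# applied to an application rather than a virtual machine image.
class ManagedApp:
    def __init__(self, name):
        self.name = name
        self.state = "provisioned"   # deployed but not yet running

    def launch(self):
        if self.state in ("provisioned", "stopped", "suspended"):
            self.state = "running"
        return self.state

    def suspend(self):
        if self.state == "running":
            self.state = "suspended"
        return self.state

    def stop(self):
        self.state = "stopped"
        return self.state
```

Nothing here is novel as code; the point is that the unit being launched, suspended, and stopped is the application itself, which is exactly the granularity the VM-centric APIs lack.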


If we were truly provisioning at the application layer through cloud computing enabled web and application servers, then developers might need to learn some new tricks.

For example, today there are environment/platform-specific methods of declaring web-service accessible functions. The [WebService] and [WebMethod] attributes declare to Microsoft environments that certain objects and methods will be web-service enabled. Similar methods are used in the latest versions of Java; for example, @WebService indicates in a JAX-WS 2.0 environment that a class will be service-enabled. The development environment interprets those directives and prepares the objects and methods for service-enablement, including providing the interfaces necessary for management via the application server.

Now, take that concept and apply it to virtualization. Imagine that in a development environment you know that a specific function/method/discrete block of application logic will be core to the application and heavily used. You anticipate that this block of code will become a bottleneck and that it would therefore be appropriate to scale it out. You preface the block of code with @virtualize or [virtualize] and go on with your coding. Optimally we’d like a profiling tool to be able to do this for us: to examine the code in a run-time scenario, determine where the most time and compute resources are spent, and automatically suggest which workloads are good candidates for virtualization.

When the application is packaged, the development environment recognizes those directives and prepares that block of code to be “virtualized”. The directives instruct the web/application server that the block of code can be virtualized, which in turn means it may be deployed as a discrete workload on any capable application instance.
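A minimal sketch of that directive-and-discovery idea, with Python decorator syntax standing in for the hypothetical @virtualize annotation (every name here – the decorator, the service class, the packager function – is invented for illustration):

```python
# Illustrative sketch: a @virtualize marker plus the discovery step a
# packager might perform. All names are hypothetical.
def virtualize(fn):
    fn._virtualizable = True   # flag read at packaging time
    return fn

class OrderService:
    @virtualize
    def price_quote(self, items):     # anticipated hot spot
        return sum(items)

    def audit_log(self, entry):       # ordinary method, stays with the app
        return "logged:" + entry

def find_virtualizable(cls):
    """What the packaging step would do: collect the methods flagged for
    deployment as discrete, independently scalable workloads."""
    return [name for name, member in vars(cls).items()
            if getattr(member, "_virtualizable", False)]
```

The packager inspects the class, sees that only `price_quote` carries the flag, and can therefore prepare that one block of logic – rather than the whole application – for separate deployment.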

At run-time the application server, which is able to monitor and manage compute resource utilization, determines that load is increasing much too quickly and that capacity must be increased. Today this is accomplished by launching complete virtual machine images on other resources; the entire application is duplicated and thus requires X compute resources (and the associated costs) every time an image is launched. But in our scenario the application server recognizes the virtualizable workloads and simply indicates that additional instances of the workload should be launched, and it uses mechanisms similar to RMI, CORBA, and EJB naming to ensure that application requests to that workload are properly directed.
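A toy registry in the spirit of that RMI/EJB-style naming might look like this (round-robin selection and all of the names are illustrative assumptions, not a real naming service):

```python
# Illustrative sketch: a naming registry the application server could
# consult to direct requests to launched workload instances.
from itertools import cycle

class WorkloadRegistry:
    def __init__(self):
        self._instances = {}   # workload name -> list of endpoints
        self._cursors = {}     # workload name -> round-robin iterator

    def register(self, workload, endpoint):
        # Called each time the server launches another instance
        # of a virtualizable workload.
        self._instances.setdefault(workload, []).append(endpoint)
        self._cursors[workload] = cycle(self._instances[workload])

    def lookup(self, workload):
        # Direct the next request across however many instances exist.
        return next(self._cursors[workload])
```

When load spikes, the server registers another endpoint for the hot workload and the lookup transparently starts spreading requests across it; the rest of the application is untouched.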

The instances of the workload require fewer compute resources than the entire application and thus should theoretically incur lower costs, which means the costs of scaling are reduced overall.
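With illustrative numbers the savings are easy to see. Assume (purely for the sake of arithmetic) that a full copy of the application needs 4 compute units while the hot workload alone needs 1:

```python
# Worked example with assumed numbers: scaling the whole application
# versus scaling only the virtualizable workload.
APP_UNITS = 4        # compute units per full application copy (assumed)
WORKLOAD_UNITS = 1   # compute units per workload instance (assumed)

def cost_whole_app(extra_copies, unit_price):
    # Today's model: every scale-out event duplicates the entire app.
    return extra_copies * APP_UNITS * unit_price

def cost_workload_only(extra_copies, unit_price):
    # Workload model: only the bottleneck is duplicated.
    return extra_copies * WORKLOAD_UNITS * unit_price
```

At, say, $0.10 per unit and three extra instances, duplicating the whole application costs four times what duplicating just the workload does, which is the entire economic argument for virtualizing at this granularity.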


PaaS (Platform as a Service) providers are uniquely positioned to lead this aspect of cloud computing’s evolution. Because the platform – the application development and deployment platform – is the focus of PaaS, and because PaaS providers like Microsoft (Azure) and Google (Google App Engine) completely control the application servers upon which applications are deployed, they are in a unique position to take virtualization to the next level.

PaaS providers already must manage and monitor at a level lower than IaaS (Infrastructure as a Service) providers because the interface between PaaS and its customers is the application, not the virtual image. Indeed, virtualization may not even be part of the underlying PaaS architecture; it does not need to be involved at all. The “virtualization” in a PaaS can (and some might argue should) come directly from the isolation and management provided by the development and deployment platforms.

Christofer Hoff makes a similar assertion in “Incomplete Thought: Virtual Machines Are the Problem, Not the Solution…”:

So these virtualization players are making acquisitions to prepare them for this next wave — the real emergence of Platform as a Service (PaaS).

Some like Microsoft with Azure are simply starting there.  Even SaaS vendors have gone down-stack and provided PaaS offerings to further allow for connectivity, integration and security in the place they think it belongs. [emphasis added]

In the case of VMware and their acquisition of SpringSource, that piece of bloat in the middle can be seen as simply going away; whatever you call it, it’s about disintermediating the OS completely and it seems to me that the entire notion of vApps addresses this very thing.  I’m sure there are a ton of other offerings that I simply didn’t get before that are going to make me go “AHA!” now.

“In the place they think it belongs.” Exactly. I don’t know that the OS will go away, and virtualization is certainly not going away as the use for it in testing and even production deployment of a full-scale infrastructure will be necessary for quite some time. For virtual appliances, virtualization is where it’s at and the management and standards folks understand that in the case of infrastructure at least, there’s more to managing the environment than just the virtual machine. They get that we have to be able to integrate, to collaborate, with the infrastructure solutions deployed in those virtual machines.

vApps: Ensuring seamless application movement and choice between clouds
  • VMware vSphere includes support for vApp, a logical entity comprising one or more virtual machines, which uses the industry standard Open Virtualization Format to specify and encapsulate all components of a multi-tier application as well as the operational policies and service levels associated with it.
  • Just like the UPC bar code contains all information about a product, the vApp gives application owners a standard way to describe operational policies for an application which the cloud OS can automatically interpret and execute.
  • vApps can comprise any applications running on any OS, and provide a mechanism for customers to move their applications between internal and external clouds while maintaining the same service levels.

It’s only when we get to the applications that everything falls apart and we lose control over that layer of the environment. While VMware’s vApp takes us a step closer, its primary goal is to control the operational environment in which applications run; it does not – and really cannot – descend into the internal gooey center of the application, where the real advances in application virtualization are sure to come in this continuously evolving application deployment paradigm.

Until then, nothing really changes for developers.  
