Leveraging Java EE and dynamic infrastructure to enable a shared resource, on-demand scalable infrastructure – without server virtualization

Many pundits and experts allude to architectures that are cloud-like in their ability to provide on-demand scalability but do not – I repeat do not – rely on virtualization, i.e. virtual machines. But rarely – if ever – is this possibility described. So everyone says it can be done, but no one wants to tell you how.

Maybe that’s because it appears, on the surface, not to be cloud. And perhaps there’s truth to that appearance. It is more pseudo-cloud than cloud – at least by most folks’ definition of cloud these days – and thus maybe you really can’t do cloud without virtualization. There’s also the fact that some virtualization is required – it’s just not virtualization in the way most people use the term today, i.e. equating it with VMware, Xen, or Hyper-V.

But it does leverage shared resources to provide on-demand scalability, and that’s really what we’re after with cloud in the long run, isn’t it?


THE JAVA EE BASED PSEUDO-CLOUD

One of the tenets of cloud is that scalability is achieved through the use of shared resources on-demand. Anyone who has deployed a Java EE environment knows that it is, above all else, a shared environment. The Java EE application server is essentially a big container, and it performs many of the same functions traditionally associated with virtualization platforms: it abstracts applications from the operating system, it receives requests via the network and hands them to the appropriate application, and so on. It’s not a perfectly analogous relationship, but the concept is close enough.

So you have a shared environment in which one or more applications might be deployed. The reason this is cloud-like is that just because an application is deployed in a given application server doesn’t mean it’s running all the time. In fact, it doesn’t even need to be loaded all the time, just deployed and ready to be “launched” when necessary.

In order to provide the Java EE “cloud” with mobility we employ a file virtualization solution to normalize file access across a shared, global namespace. Each application server instance accesses the same application resource packages from the normalized file system, thus reducing the storage requirements on the individual server platforms.

The application delivery controller (a.k.a. load balancer plus) virtualizes the applications to provide unified access to the applications regardless of which application server instance they may be launched on. The application delivery controller, assuming it is infrastructure 2.0 capable, is also responsible for the implementation of the “on-demand scalability” necessary to achieve cloud-like status.


THE SECRET SAUCE
The “secret sauce” in this architectural recipe is the ability to integrate the application delivery controller (hence the requirement that it be Infrastructure 2.0 capable) and the application server infrastructure. This integration is really a collaboration that enables a controlling management application to instruct the appropriate application server to launch a given application upon specified conditions – typically upon reaching a number of connections that, once surpassed, is known to cause degradation of performance or the complete depletion of available resources.
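The threshold logic described above can be sketched in a few lines. This is illustrative only – the class, thresholds, and the connections-per-instance heuristic are assumptions, not any vendor’s implementation – but it shows the shape of the decision the controlling management application has to make:

```java
// Sketch of the scaling decision the controlling management application
// might apply. All names and thresholds here are hypothetical.
public class ScaleDecider {
    public enum Action { LAUNCH, UNLOAD, NONE }

    private final int launchThreshold;   // connections per instance above which we launch another
    private final int unloadThreshold;   // connections per instance below which we retire one

    public ScaleDecider(int launchThreshold, int unloadThreshold) {
        this.launchThreshold = launchThreshold;
        this.unloadThreshold = unloadThreshold;
    }

    // Deciding on connections-per-instance means each launched instance
    // raises the aggregate capacity before the next launch is triggered.
    public Action decide(int currentConnections, int runningInstances) {
        if (runningInstances == 0) {
            return Action.LAUNCH;                      // nothing serving: launch on demand
        }
        int perInstance = currentConnections / runningInstances;
        if (perInstance > launchThreshold) {
            return Action.LAUNCH;                      // nearing known degradation point
        }
        if (runningInstances > 1 && perInstance < unloadThreshold) {
            return Action.UNLOAD;                      // idle capacity: release shared resources
        }
        return Action.NONE;
    }
}
```

In practice the launch threshold would be derived from load testing – the connection count known, per the text above, to cause degradation of performance once surpassed.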

Because the application delivery controller is mediating for the applications, it has a view of both the client-side and server-side environments, as well as the network. It knows how many connections are currently in use, how much bandwidth is being used, and even – when configured to do so – the current capacity of each of the application servers. And it knows this on a per-“network virtual server” basis, which generally corresponds to an application.

All this information can be retrieved by the controlling management application via the application delivery controller’s service-enabled control plane, a.k.a. API (either RESTful or SOAPy, as per the vendor’s implementation). The controlling management application uses this information to decide when (on-demand) to launch a new instance (or unload an instance) of an application on one of the application servers. Java EE application servers are essentially infrastructure 2.0 capable as well, providing several remote-control interfaces through which an application and its environment can be managed.
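One standard remote-control interface Java EE servers expose is JMX remoting. A minimal sketch of how the controlling management application might address a server follows; the host, port, and any deployer MBean names are assumptions, as they vary by vendor:

```java
import javax.management.remote.JMXServiceURL;

// Illustrative only: most Java EE application servers expose a remote JMX
// connector. The exact port and the MBeans used to start/stop applications
// are vendor-specific assumptions here.
public class AppServerControl {

    // Builds the conventional RMI-based JMX connector URL for a server.
    public static String jmxUrl(String host, int port) {
        return "service:jmx:rmi:///jndi/rmi://" + host + ":" + port + "/jmxrmi";
    }

    public static JMXServiceURL serviceUrl(String host, int port) throws Exception {
        return new JMXServiceURL(jmxUrl(host, port));
    }

    // In a live deployment the management application would then connect and
    // invoke the vendor's deployer MBean, along the lines of:
    //   JMXConnector c = JMXConnectorFactory.connect(serviceUrl(host, port), env);
    //   MBeanServerConnection mbsc = c.getMBeanServerConnection();
    //   mbsc.invoke(deployerMBeanName, "start", args, signature); // vendor-specific
}
```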

Once the controlling management application has successfully launched (or unloaded) the application in the appropriate application server, the application itself becomes part of the process. A few lines of code effectively instrument the application to register – or deregister, as the case may be – itself with the application delivery controller using the aforementioned control plane. Once the application is registered, it is put into rotation and the capacity of the application is immediately increased. On-demand, using otherwise idle resources, as required by the definition of “cloud.”
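Those “few lines of code” might look something like the following. The ADC’s endpoint path and JSON payload are hypothetical (real controllers each have their own control-plane API); in a Java EE application this would typically run from a ServletContextListener’s contextInitialized/contextDestroyed callbacks:

```java
import java.net.URI;
import java.net.http.HttpRequest;

// Sketch of application self-registration with the application delivery
// controller's REST control plane. The /api/pools/... path and the payload
// shape are assumptions for illustration, not a real product's API.
public class AdcRegistration {

    // Builds (but does not send) the registration request that adds this
    // instance to the ADC pool fronting the application.
    public static HttpRequest registerRequest(String adcHost, String pool,
                                              String memberHost, int memberPort) {
        String body = "{\"member\":\"" + memberHost + ":" + memberPort + "\"}";
        return HttpRequest.newBuilder()
                .uri(URI.create("https://" + adcHost + "/api/pools/" + pool + "/members"))
                .header("Content-Type", "application/json")
                .POST(HttpRequest.BodyPublishers.ofString(body))
                .build();
    }

    // Deregistration on unload would be a DELETE against the same member
    // resource, issued from contextDestroyed.
}
```

Sending the request (via java.net.http.HttpClient) is all that remains; once the ADC accepts it, the new instance is in rotation.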

Wash. Rinse. Repeat.


DYNAMIC INFRASTRUCTURE: THE ENABLING FACTOR

Dynamic infrastructure, such as an infrastructure 2.0 capable application delivery controller, is a necessary component of any successful on-demand architecture, whether “real cloud” or “pseudo cloud.” It is the ability of such infrastructure to interact and integrate with management and application infrastructure that enables the entire architecture to effect an on-demand scalable posture capable of utilizing shared resources – whether virtualized or not. Without a dynamic infrastructure this architecture would still be possible: one could manually perform the steps necessary to launch an application when and where necessary and then add it to the application delivery controller. But that would incur additional costs, and the human latency required to coordinate actions across multiple teams is, well, exceedingly variable – especially on the weekends.

Certainly the benefits of a pseudo-cloud are similar to, but not exactly the same as, those of a “real” cloud. You do get to take advantage of shared and quite possibly idle resources. You do get the operational efficiencies associated with automation of the provisioning and de-provisioning of application instances. And you also get the reduction in costs from leveraging a shared storage system. If business stakeholders are charged back only for what they use, then you’re further providing value in potentially reducing the physical hardware necessary to ensure resources are available for specific applications, much of which is often wasted by the over-provisioning inherent in traditional deployments. That reduces both CapEx and OpEx, which is yet another touted benefit desired by those exploring both public and private cloud.

This isn’t a simple task. The sharing of resources – particularly in controlling thresholds per application – is more difficult without virtualization a la VMware/Xen/Hyper-V. It’s not nearly as easy as just virtualizing the applications, and it requires a bit more planning in terms of where applications can be deployed. But the orchestration of the processes around enabling the on-demand capability is no more or less difficult in this pseudo-cloud implementation than it would be in a real-cloud scenario.

It can be done, and for some organizations unwilling for whatever their reasons to jump into virtualization, this is an option to realize many of the same benefits as a “real” cloud.
