Leveraging virtualization to create a specialized architecture can realize significant gains in performance and IT efficiency

With all the talk about “packaging up applications” in a virtual machine and shipping them off to the cloud, it almost sounds as if virtualization might lead us to a return to architecting monolithic applications. The idea of packaging up everything you need to run an application in a virtual container and relieving the worries about connectors and adapters and integration is certainly appealing.

But let’s take a step back from the virtualization craze as it relates to consolidation and examine how virtualization could be leveraged, along with application delivery, to architect more efficient, better-performing networks. Doing so could have long-term benefits in both capital and operating expenses while increasing the performance and capacity of applications.

The Impact of General Purpose Application Servers

One way to see how application infrastructure can evolve is to look at the evolution of general purpose hardware toward specialized hardware implementations. Specialized hardware in the form of ASICs and FPGAs, serving needs such as compression, cryptographic acceleration, and packet processing, evolved from the need to perform those functions faster and more efficiently. The development of such specialized hardware resulted in faster networks, faster applications, and more efficient processing in general. Routers and switches today are capable of high-speed, high-bandwidth processing because they are endowed with specialized hardware acceleration, not in spite of that attribute.

Web and application servers today are general purpose. They serve images, dynamic content, static content, video, and audio. Like their hardware counterparts, this means the core application is generalized and not specifically tuned for any given type of content, despite the gains in performance and efficiency that can be realized by tuning servers according to the unique characteristics of each type of content. Servers responsible for images, for example, generally benefit from shorter time-out configurations than servers responsible for dynamic content generation. Tuning servers based on specialization can improve performance (response time), capacity, and the general efficiency of the hardware and underlying software.
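To make that concrete, here is a purely illustrative sketch of what per-content-type tuning might look like. The content types, parameter names, and values are hypothetical and are not recommendations for any particular server product.

```python
# Hypothetical tuning profiles for specialized server instances.
# Parameter names and values are illustrative only; real numbers
# depend entirely on the workload and the server software in use.
TUNING_PROFILES = {
    "images": {
        "request_timeout_s": 5,     # static files return quickly, so fail fast
        "keepalive_timeout_s": 2,   # short-lived connections
        "worker_processes": 2,      # I/O-bound work needs few workers
    },
    "dynamic": {
        "request_timeout_s": 30,    # allow time for application logic and database calls
        "keepalive_timeout_s": 15,  # reuse connections across page interactions
        "worker_processes": 8,      # CPU-bound content generation
    },
    "video": {
        "request_timeout_s": 120,   # long-running streaming responses
        "keepalive_timeout_s": 60,
        "worker_processes": 4,
    },
}
```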

Packaging up applications together on a single, general purpose application or web server forgoes the potential benefits of specialization. Virtualization as an application packaging and delivery mechanism is neither efficient nor exceptionally performant.

But it can enable an architecture that is both.

Specialization through Virtualization

Virtualization gives us the means by which application and web servers can be specialized. By packaging up “specialized” servers, each tuned to serve specific types of content as efficiently and performantly as possible, we can realize many of the same gains achieved by specialized hardware in network and application delivery network solutions.

Rather than packaging up an application as a whole entity, decomposing it by content type and tuning servers to achieve maximum efficiency can yield better consumption of resources and faster execution of application logic, which can result in significant performance gains.

In a “traditional” layer 7 load balancing architecture, each pool (farm, cluster) of servers is designated by the application delivery controller as the source for a specific type of content. Each pool of servers is then tuned appropriately, and the intelligent routing capabilities of the application delivery controller are employed to ensure that requests for a specific type of content are routed to the correct pool of servers for processing. The requests could be routed based on host name, the file extension of the resource being requested, some cue in the URI path (/images/ vs /scripts/ vs /content/), or any unique variable in the actual application data that can be used to determine where the request should be routed.
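As a rough sketch of the decision logic involved, the routing might look like the following. The pool names, path prefixes, and extensions are hypothetical, and a real application delivery controller would express these rules in its own configuration or rules language rather than in application code.

```python
from urllib.parse import urlparse

# Hypothetical pools of specialized servers; in a traditional layer 7
# architecture each pool is a separate set of physical servers.
POOLS = {
    "images": ["img-srv-1", "img-srv-2"],
    "scripts": ["script-srv-1"],
    "content": ["app-srv-1", "app-srv-2", "app-srv-3"],
}

IMAGE_EXTENSIONS = (".png", ".jpg", ".jpeg", ".gif")

def select_pool(url: str) -> str:
    """Pick a pool based on cues in the URI path or file extension."""
    path = urlparse(url).path
    if path.startswith("/images/") or path.endswith(IMAGE_EXTENSIONS):
        return "images"
    if path.startswith("/scripts/") or path.endswith(".js"):
        return "scripts"
    return "content"  # default: dynamic content pool

print(select_pool("http://example.com/images/logo.png"))   # -> images
print(select_pool("http://example.com/scripts/app.js"))    # -> scripts
print(select_pool("http://example.com/account/profile"))   # -> content
```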

This approach, while providing for significant performance gains, can result in underutilized servers. It also requires additional server hardware on which to deploy each “type” of server, which can increase the overall cost to acquire and subsequently manage/maintain. That negates the benefits of consolidation through virtualization, so we need a better solution; one that maintains the gains from virtualization while increasing the efficiency and performance of the overall architecture.

We need a specialized layer 7 load balancing architecture.

In a "specialized” layer 7 load balancing architecture, we take advantage of the ability of a single hardware server to host multiple virtual instances of a server. Each virtual instance of a server is still tuned for specific types of content, but each instance does not require multiple specialized-l7 hardware servers on which to be deployed. A single hardware server, then, can support specialized virtual instances of many applications, thus decreasing the investment necessary to architect a specialized infrastructure. This ensures that hardware resources are not underutilized while maximizing performance and efficiency of the overall architecture and infrastructure.

Using virtualization and layer 7 load balancing capabilities, then, it is possible to re-architect an application to perform better while still achieving consolidation goals and overall increased IT efficiency.

Re-architecting without Re-writing

One concern with re-architecting application infrastructure is that it will require significant rework on the part of developers to adapt the applications for the new architecture.

One of the beautiful things about application delivery controllers and layer 7 load balancing in general is that such changes to the underlying application architecture can be made (generally speaking) with no changes to the application.

By taking advantage of layer 7 switching – the inherent ability of application delivery controllers to recognize application-specific data and make routing decisions based upon that data – content-based routing decisions can be made on existing URLs. Assuming file extensions or file system hierarchies can be used to determine which specialized, virtual instance of a web or application server should receive a request, the application need not be changed in any way. The information carried in the URL will be enough for the application delivery controller to make an intelligent routing decision.

All that is necessary is to tune each specialized instance of the web or application server and deploy it. The application delivery controller will do the rest of the work.

 

Virtualization offers many benefits, not the least of which is the ability to specialize web and application servers in such a way as to improve performance while maintaining the efficiency benefits realized through consolidation. So while you’re consolidating and packaging up applications in virtual containers, consider that this disruption is an opportune moment to re-architect your application delivery network and infrastructure in such a way as to achieve even more gains in performance and efficiency in the data center.

