We are living in the era of extreme programmability (i.e., software-defined everything). Building on the success of compute virtualization and programmable access to infrastructure services, the availability of commercial and open source cloud software platforms makes this an exciting time for IT. After over a decade of infrastructure virtualization rollouts, Enterprise IT is now entering the Datacenter Transformation phase. In this article we’ll briefly look at the history and then fast-forward to this age of cloud implementations.

Looking back, a datacenter rollout followed a Procure-Power-Provision approach for deploying infrastructure resources (network, storage, compute). This is sometimes referred to as the Traditional Datacenter approach, and it remains viable for certain business needs even today. Within the last five years, some of the largest datacenter builders were still deploying resources in this manner. It is sometimes considered inefficient; however, as long as the IT processes are well documented and operational procedures are defined, it is still a reasonable approach. If the business does not require a dynamically scaling datacenter and capacity consumption can be predicted well enough in advance to satisfy future needs, this approach works.

Next came the approach of dynamically creating resources on top of virtualized infrastructure. This is sometimes referred to as the Virtualized Datacenter approach. With this approach, the underlying hardware components were still procured and provisioned in the traditional manner; however, resources could now be carved out dynamically with the help of virtualization (for example, using a hypervisor for compute resources). Virtualization technology is available for compute, storage, and most recently network resources as well. By isolating the carved-out resources, boundaries of resource consumption were enforced as if each resource were a whole unit in and of itself. Virtualization abstracted the underlying hardware so that each resource appeared to be a dedicated one. This is literally true of virtual compute instances running a full operating system on top of a hypervisor. The definition of a dedicated resource differs in the storage and networking worlds, but the principles of virtualization still apply. IT departments no longer debate the need for compute or storage virtualization – the business benefits are well understood and proven, and it is a matter of implementation, not evaluation.

The third wave is the cloud approach, also referred to as cloud computing or utility computing. Let's call this the Cloud Datacenter approach. It builds on top of virtualization and dynamic provisioning to enable self-service consumption of infrastructure resources. The cloud approach changed how resources were consumed, not so much how they were provisioned for that consumption. Resources are programmatically available through a multi-tenant cloud software platform, so each user who provisions a resource gets complete ownership of it, with accounting of its usage. Instead of waiting for IT to provision resources, users can now get them on demand. IT still has to plan for capacity, provision the infrastructure, roll out or update the cloud software platform, offer a self-service portal, and maintain SLAs on resource availability. What changed was the end-user experience of procuring and consuming the resources. While some expect this to enable departmental allocation of resource consumption, the first-order benefit is still that of enabling self-service consumption of infrastructure resources.

The cloud datacenter approach can be further explained using the seven principles described below. Any IT organization defining what a cloud datacenter implementation translates into, and setting the success criteria for a phased cloud datacenter rollout, can use these principles. Here they are:

  1. Just-in-time provisioning
  2. Instant deployment
  3. Intelligent operations
  4. Complete visibility
  5. Real-time recovery
  6. Improved utilization
  7. Pay-per-use


Cloud Datacenter Principles

  • Just-in-time provisioning – By front-ending the virtualized infrastructure with a layer of software that provides APIs for provisioning resources, the user can invoke an API and bring a resource into existence. A classic example is using the OpenStack Nova API or the AWS EC2 API to start a virtual machine running the desired operating system. From the end user's perspective, it does not matter who the provider is; the important capability is being able to create a resource via an API.
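The pattern can be sketched with a minimal, hypothetical provisioning API – the `ComputeAPI` class and its method names below are illustrative, not part of OpenStack, AWS, or any real SDK:

```python
import uuid

class ComputeAPI:
    """Hypothetical cloud compute API: a resource exists only once it is requested."""

    def __init__(self):
        self.instances = {}

    def run_instance(self, image, flavor):
        # Provision a new virtual machine on demand and return its handle.
        instance_id = str(uuid.uuid4())
        self.instances[instance_id] = {"image": image, "flavor": flavor, "state": "running"}
        return instance_id

    def describe_instance(self, instance_id):
        return self.instances[instance_id]

api = ComputeAPI()
vm = api.run_instance(image="ubuntu-22.04", flavor="m1.small")
print(api.describe_instance(vm)["state"])  # the resource is live the moment the call returns
```

Real platforms follow the same shape: a single authenticated API call (e.g. Nova's server create or EC2's RunInstances) returns a handle to a freshly provisioned resource.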


  • Instant deployment – By virtue of having created a resource on demand, the user can instantly begin consuming it in their application. Using another layer of abstraction that automatically assimilates newly created resources into existing infrastructure capacity, the user can seamlessly scale the application in response to its usage. This allows the user to maintain high availability as well as keep the application current with the latest changes.
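The scale-in-response-to-usage idea reduces to a small control loop. This sketch assumes a hypothetical pool abstraction; the sizing rule (ceiling division of load by per-instance capacity) is one common, illustrative policy, not a prescribed one:

```python
def desired_capacity(current_load, per_instance_capacity):
    """Instances needed to serve the load (ceiling division, minimum of one)."""
    return max(1, -(-current_load // per_instance_capacity))

class Pool:
    """Hypothetical pool that assimilates new instances into serving capacity."""

    def __init__(self):
        self.instances = []

    def scale_to(self, n):
        while len(self.instances) < n:
            self.instances.append(f"vm-{len(self.instances)}")  # create on demand
        del self.instances[n:]  # recycle surplus capacity when load drops

pool = Pool()
pool.scale_to(desired_capacity(current_load=450, per_instance_capacity=100))
print(len(pool.instances))  # 5 instances to serve 450 units of load
```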


  • Intelligent operations – IT administrators gain programmatic access to infrastructure services, just as end users do. Furthermore, since they operate the cloud software platform, they have greater control over the operation of those services. As users consume resources on demand, IT administrators can easily monitor all virtualized resources, discover all access paths, and – since everything is available programmatically – troubleshoot issues quickly. Taking it a step further, IT administrators can automate many operational tasks using APIs and efficiently manage the infrastructure services to maintain SLAs without causing application downtime or latency issues.


  • Complete visibility – By front-ending the virtualized infrastructure with a cloud software platform, IT administrators can now log and monitor every action and every transaction touching the underlying physical and virtual infrastructure. This generates massive amounts of data, but the good news is that all of it is now available: it can be processed on Big Data platforms and further analyzed with analytics tools. Since it is all software, no IT administrator can complain about a lack of data on infrastructure provisioning and consumption.
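Because every action passes through the software layer, capturing it is as simple as wrapping the API entry points. A minimal sketch, assuming a hypothetical `create_volume` call and an in-memory audit trail (real platforms emit these records to a message bus or log pipeline):

```python
import functools
import time

audit_log = []

def audited(fn):
    """Record every infrastructure action with its arguments and a timestamp."""
    @functools.wraps(fn)
    def wrapper(*args, **kwargs):
        audit_log.append({"action": fn.__name__, "args": dict(kwargs), "ts": time.time()})
        return fn(*args, **kwargs)
    return wrapper

@audited
def create_volume(size_gb):
    # Hypothetical storage-provisioning call; the decorator sees it before it runs.
    return {"size_gb": size_gb, "state": "available"}

create_volume(size_gb=50)
create_volume(size_gb=200)
print(len(audit_log))  # every transaction is captured for later analysis
```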


  • Real-time recovery – Building on the intelligence gathered from infrastructure data, IT administrators can quickly analyze the root cause of failures and implement remediation procedures. Using automation coupled with Big Data analysis, IT administrators can further improve recovery times, either by pre-provisioning additional capacity ahead of a failure event (having analyzed the patterns in advance) or by rapidly deploying spare capacity on demand. Pre-provisioning minimizes or eliminates the impact of failures, while rapid deployment reduces the time to recover from them.
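The pre-provisioned-spares variant can be sketched as a small remediation loop. `FleetManager` and its `heal` method are hypothetical names; the health map stands in for whatever monitoring feed a real platform would supply:

```python
class FleetManager:
    """Hypothetical automation: swap failed instances for pre-provisioned spares."""

    def __init__(self, active, spares):
        self.active = list(active)
        self.spares = list(spares)

    def heal(self, health):
        # `health` maps instance name -> True/False, as reported by monitoring.
        failed = [vm for vm in self.active if not health.get(vm, False)]
        for vm in failed:
            self.active.remove(vm)  # retire the failed instance
            if self.spares:
                self.active.append(self.spares.pop())  # promote a warm spare

fleet = FleetManager(active=["vm-1", "vm-2"], spares=["spare-1"])
fleet.heal({"vm-1": True, "vm-2": False})
print(fleet.active)  # ['vm-1', 'spare-1']
```

Because the spare was provisioned ahead of the failure, recovery is a list swap rather than a boot cycle.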


  • Improved utilization – Operating infrastructure resources at optimum capacity, without risking the SLAs, is every IT owner's goal. Using virtualization, IT administrators were already able to better leverage the underlying hardware. By deploying a cloud software platform on top of virtualization, they can manage resource allocation at an even more granular level. By carving out smaller units of resources, geographically co-located near the application's users when necessary, an IT administrator can match capacity to the application's resource consumption. IT does not have to deploy a large virtual machine when a medium-sized one will suffice. Resources can also be recycled after use, freeing up underlying capacity for reallocation.


  • Pay-per-use – End users can now pay for the exact amount of resources their application consumed. By doing so, they avoid paying for capacity they did not use, while having the confidence that additional capacity will be available when they need it. Underlying cloud software platforms support granular metering of resource usage and integrate with billing systems to manage entitlement of resource consumption by users and payment processing (direct and invoicing).
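At its core, metering is just multiplying measured usage by a rate. A minimal sketch – the flavors and hourly prices below are made-up numbers for illustration, not any provider's price list:

```python
from datetime import datetime, timedelta

RATE_PER_HOUR = {"m1.small": 0.05, "m1.large": 0.20}  # illustrative prices only

def usage_charge(flavor, started, stopped):
    """Bill only for the hours a resource was actually running."""
    hours = (stopped - started).total_seconds() / 3600
    return round(hours * RATE_PER_HOUR[flavor], 2)

start = datetime(2014, 6, 1, 9, 0)
charge = usage_charge("m1.small", start, start + timedelta(hours=8))
print(charge)  # 0.4 — eight hours of a small instance, and nothing more
```

A billing system aggregates records like this per user per billing period, which is also where entitlement checks and invoicing hook in.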


Depending on the cloud computing approach an enterprise takes, some or all of these principles hold true.

  • In the case of a private cloud rollout, IT's primary role becomes that of a cloud provider to internal teams, operating the cloud datacenter along the seven principles. IT teams have full control over their private cloud datacenter.
  • In the case of public cloud adoption, the IT team becomes the governance and planning team, using the seven principles to monitor consumption and measure the SLAs of a cloud datacenter provided by someone else. IT teams have less control over the public cloud datacenter.
  • In the case of hybrid cloud, the IT team acts as a combination of the public and private roles described above. IT teams have mixed control over the cloud datacenters.


F5 is committed to helping customers roll out traditional, virtualized, and cloud datacenters by providing platforms that support all of these approaches. F5 continues to offer solutions that include purpose-built high-performance appliances, multi-tenant virtualized chassis platforms, and virtualized software platforms that run on top of a hypervisor – giving our customers all the flexibility they need to deliver their applications. Customers can deploy F5 application delivery and management solutions in private, public, or hybrid cloud software platforms. By providing solutions that implement the seven principles for cloud datacenters described in this article, F5 has remained relevant in this age of cloud computing and will continue to innovate on behalf of our customers. Please fill out this web-based form to request contact from F5 and learn about all the options you have to prepare for the cloud datacenter transformation.