The Database Tier is Not Elastic

It is the database tier and its unique characteristics that ultimately determine where an application will be deployed.

Cloud computing is mostly about “elasticity”: the expansion and contraction of resources based on demand. It is the contraction of resources that is oftentimes forgotten, but without it, cloud computing and highly dynamic, virtualized infrastructures are little more than seamless capacity growth engines. For the web and application architectural tiers, the contraction of resources is as much a requirement for realizing the benefits of shared, dynamic capacity as the ability to rapidly expand. But in the database tier, the application data layer, contraction is more a contradiction than anything else.

WHAT COMES UP USUALLY COMES DOWN

Elasticity in applications is a good thing. It is important to the overall success of cloud computing and dynamic infrastructure initiatives to remember that “what comes up, must come down” – especially in relation to provisioned compute resources. Applications should expand their resource consumption to meet demand, but when demand wanes, so too should their consumption. By spreading compute resources dynamically among the various applications that need them, based on demand, we achieve peak efficiency and make the most of our capital expenditures. Such architectural approaches allow us to allocate “temporary” compute resources from cloud computing environments external to the organization when they are needed, and release them when they are not.
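To make the expand-and-release cycle concrete, here is a minimal sketch of the kind of demand-driven scaling decision the web and application tiers can tolerate. Everything in it – the thresholds, the load measure, and the provisioning hooks – is a hypothetical placeholder rather than any particular provider’s API; the point is simply that capacity gets added and, just as importantly, handed back.

```python
# A minimal sketch of demand-driven elasticity for a stateless web/app tier.
# The thresholds and provisioning hooks are hypothetical stand-ins for whatever
# monitoring and provisioning API a given environment actually exposes.

SCALE_OUT_THRESHOLD = 0.75   # per-instance utilization that triggers expansion
SCALE_IN_THRESHOLD = 0.30    # per-instance utilization that triggers contraction
MIN_INSTANCES = 2            # never contract below a baseline footprint


def provision_instance():
    """Hypothetical hook: ask the cloud/virtualization layer for another instance."""
    return {"id": object()}


def release_instance(instance):
    """Hypothetical hook: hand the instance's resources back to the shared pool."""
    pass


def adjust_capacity(instances, aggregate_load):
    """Expand when demand rises and, just as importantly, contract when it wanes."""
    per_instance_load = aggregate_load / len(instances)
    if per_instance_load > SCALE_OUT_THRESHOLD:
        instances.append(provision_instance())
    elif per_instance_load < SCALE_IN_THRESHOLD and len(instances) > MIN_INSTANCES:
        release_instance(instances.pop())
    return instances
```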

This is all well and good, except when we’re talking about the database.

Databases employ a number of techniques to improve their performance, and most of them involve complex caching and pooling strategies that make use of lots and lots of RAM. At the database tier, RAM usage may increase, but it rarely decreases. It’s a different kind of workload from that of web and application servers, which can easily be scaled out using parallel processing strategies. Many, many copies of the same code can execute in isolated chunks around the data center because they do not need access to a centralized store of information about all the sessions occurring at the same time. In order to maintain consistency, databases use indexes, locks, and other computational techniques to manage access to data, especially in the case of modification. This means that even though the code to perform such tasks can ostensibly be executed on multiple copies of a database, the data required to ensure consistent operations is contained in a single, contiguous data structure. That data cannot be easily transferred or replicated in real time to other copies. There is a single data overlord that must maintain a holistic view of the data and therefore must (today) run on a single machine (virtual or iron).

That means all access is through a single gateway, and scaling that gateway is generally only possible through the expansion of resources available to the database application. Scale up is the traditional strategy, and until we learn how to share memory blocks across the network in a way that assures consistency, we can either accept that eventual consistency is good enough or accept that there will be one ginormous system that continually expands along with data growth.

YOU CAN SCALE OUT READ but NOT WRITE

It is the unique characteristics of data that result in a quirky architecture that allows us to scale out read but only scale up write. This makes the database tier a lot more complex than perhaps it once was. In the past, a single ginormous server housed a database and it was the only path to data. Today, however, the need for better performance and support for hyperscaling of applications has led to a functional partitioning scheme that separates reads from writes and assumes that eventual consistency is better than non-availability.
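In application terms, that functional partitioning usually shows up as read/write routing: all modifications funnel to the single writable primary (the “gateway” discussed above), while reads are spread across replicas that may lag slightly behind. The sketch below is a hypothetical illustration – the hostnames and the `connect()` helper stand in for a real driver and topology – not a recipe for any specific DBMS.

```python
import random

# Sketch of scale-out-read / scale-up-write routing. The endpoints and the
# connect() helper are hypothetical placeholders for a real driver and topology.

PRIMARY = "db-primary.example.internal"        # the one node that accepts writes
REPLICAS = [
    "db-replica-1.example.internal",           # read-only copies that may lag
    "db-replica-2.example.internal",           # slightly behind the primary
]


def connect(host):
    """Hypothetical stand-in for a real database driver's connection call."""
    return host


def route(statement):
    """Send writes to the single primary; spread reads across the replicas."""
    if statement.lstrip().upper().startswith(("INSERT", "UPDATE", "DELETE")):
        return connect(PRIMARY)                # all modification funnels through one node
    return connect(random.choice(REPLICAS))    # reads accept eventual consistency
```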

This does not mean it’s impossible to put a database into an external cloud computing environment. It just means that it’s going to run 24x7, and scalability cannot necessarily be achieved by scaling out – the traditional means by which a cloud computing environment enables scale. It means that scaling up will require migration if you haven’t adjusted for future growth to begin with, and that there may be, depending on the cloud computing environment you choose, an upper bound to your data growth. If you’ve only got X amount of disk and memory available, at some point your database will hit that upper bound and will either begin to drag down performance or availability, or simply be unable to continue growing.

Or you’ll need to consider the use of distributed database systems, which can scale out by distributing data across multiple database nodes (local or remote) using either replication or duplication. When used over a LAN – low latency, high performing, high bandwidth – the replication and/or duplication required for the master database to manage and maintain its minion databases can be successful. One would assume, then, that the use of distributed database systems in a cloud computing environment would be the appropriate marriage of the two architectural approaches to scalability. However, most enterprise applications that exist today – both developed in-house and packaged – do not take advantage of such technology, and there exists no standardized means by which a traditional DBMS can be morphed into a DDBMS.
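For illustration, the core idea a DDBMS (or a hand-rolled sharding layer) relies on is a deterministic mapping of keys to nodes, so that no single machine has to hold, or lock, the entire data set. The sketch below assumes hypothetical node names and deliberately ignores the hard parts – rebalancing, replication, and cross-shard queries – which is precisely the work most traditional DBMS-backed applications were never written to do.

```python
import hashlib

# Sketch of deterministic key-to-node distribution, the basic mechanism behind
# a DDBMS or sharded data tier. Node names are hypothetical; rebalancing,
# replication, and cross-shard queries are deliberately out of scope.

NODES = ["db-node-a", "db-node-b", "db-node-c"]


def node_for(key):
    """Hash the key so the same row always lands on the same node."""
    digest = hashlib.md5(key.encode("utf-8")).hexdigest()
    return NODES[int(digest, 16) % len(NODES)]


# Example: customer rows fan out across nodes instead of one data "overlord".
for customer_id in ("1001", "1002", "1003"):
    print(customer_id, "->", node_for(customer_id))
```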

Additionally, the replication/duplication of database systems over a WAN – high latency, lower performing, low bandwidth – is problematic for maintaining consistency. Which often means a closed-system, LAN-connected-only approach to application architecture is the only feasible option.

Which puts us right back where we were – with the database tier being upward-bound only, not elastic, and potentially outgrowing the ability of a provider to offer an appropriate level of compute resources to maintain performance and capacity, effectively limiting data growth.

Which is not a good thing, because limiting data growth means limiting business growth.

DATA GROWTH is AN INDICATOR of BUSINESS SUCCESS

It is almost universally true that the growth of data is an indicator of business success. As business grows, so does the customer data. As business grows, so does the user-generated content. As business grows, so do the financial and employee records and e-mail. And, of course, the gigabytes of PowerPoint presentations and standard operating procedure documents that grow, morph, and are ultimately discarded – but maintained for posterity and future reference – grow along with the business.

Data grows, it doesn’t shrink. There is nothing that so accurately lives up to the “pack rat” mentality as a business. And much of it is stored in databases, which live in the data tier and are increasingly web (and mobile client) enabled.

So when we talk about elastic applications we’re really talking just about the applications, not necessarily the data tier. Unless you have employed a sharded architectural approach to enable long-term growth, you have “THE database” and it’s going to grow and grow and grow and never shrink. It isn’t elastic; the parts of an application that are elastic are the applications that access THE database.

It is this “nut” that needs to be cracked for cloud computing to truly become “the” standard for data center architectures. Until we either see DDBMS become the standard for database systems or figure out how to really share compute resources across the LAN such that RAM from multiple machines appears to be a contiguous, locally accessible chunk of memory, the database tier will be the limiting – and deciding – factor in determining how an application is architected and where it might end up residing.
