Steve (apparently yes, we are on a first-name basis) offers up his thoughts on developing APIs for the cloud in “A Cloud Tools Manifesto.” While the inclusion of the word “manifesto” in the title raised quite a stir (“Manifestogate” is still fresh in the minds of many cloud-oriented people), what really caught my eye was his inclusion of a “mock endpoint,” primarily for testing API-based integration and development. This is something that’s increasingly important not just to cloud but to Web 2.0 and social networking sites that provide APIs via which other sites and client applications can access the functionality and data unique to that application.

A “mock endpoint” for testing gives developers a way to test their client/integration code without affecting the actual system. This is very similar to the synthetic transactions used by performance and availability management systems in situations where generating a “real” transaction would be detrimental to the application.

In any case, the reason this caught my eye is that for many sites, especially those that are heavily used, this is an excellent use for network-side scripting. If you already have an application delivery controller/load balancer, it’s likely those APIs are being accessed through such an intermediary for scalability and high availability, so it makes sense to take advantage of its capacity for handling connections and throughput when providing a testing interface for APIs: it reduces the strain on the real system while still allowing developers to build and test applications that work with the system. It has the added benefit of giving developers some experience with what can and cannot be achieved in a network-side scripting environment, and perhaps of finding new ways to leverage such platforms to improve the performance or efficiency of their own platforms.

Steve provides a list of properties he believes a mock endpoint should exhibit:

Provide a mock endpoint that:

  • Has the same API and error responses as the production endpoint
  • Simulates the allocation/release of VMs and other assets, validates all requests
  • Can be set up by a caller to fail for the next request from a specific account, with a specific failure.
  • Is free to use to everyone with an account.
  • Can be used by test accounts whose authentication details aren't required to be kept a secret. This would let us embed the tests in open source releases, run on hudson, etc.
  • If the mock endpoint can be redistributed as a program, a library or a VM Image, provide a means of downloading or hosting it for independent testing.

A couple of Steve’s requirements are not necessarily possible or applicable in a network-side scripting implementation of a mock endpoint. While simulating the allocation/release (provisioning/deprovisioning) of VMs and other assets is technically possible using network-side scripting, it’s probably going further than we really want to go. The redistribution requirement, of course, is not applicable, but the “hosting it for independent testing” portion is easily fulfilled.

Looking at what’s left – including the validation of all requests – network-side scripting can certainly fulfill these requirements and provide a highly available, easily scalable mock endpoint for just about any API, whether that’s for cloud, Web 2.0, or anything else. This is particularly relevant for RESTful APIs, which are transported via HTTP and based primarily on URI differentiation, both of which are highly germane to application delivery solutions as they are purpose-built for inspecting, manipulating, and directing HTTP-based requests.

Simple (seriously, VERY simple) example using iRules
when HTTP_REQUEST {
   switch -glob [string tolower [HTTP::uri]] {
      "/api/api_call" {
         # note: you could base the decision whether to send the API call to the application
         # on a value in the URI or on a parameter included or just about anything
         HTTP::respond 200 content "put your mock data here"
      }
      default {
         # Malformed request?
         HTTP::respond 200 content "put your mock error response here"
      }
   }
}

Basically, the concept is to (1) determine the API call being invoked by examining the appropriate portion of the URI and (2) return canned responses based on the API call. The example above is very simplified, but consider that a network-side scripting platform offers you the ability to, well, script functionality. You can easily insert parameters or POST data into the response, create and insert random data, etc. There are some limitations, of course – for example, you can’t make external calls to services to return test data – but if you’re just trying to mock up an endpoint you probably don’t need that level of integration anyway.
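To illustrate the kind of dynamic response described above, here’s a minimal sketch that echoes a request parameter back into the canned response. The /api/instances path, the id parameter, and the JSON shape are all hypothetical – not part of Steve’s API – and exist only to show the technique:

```tcl
when HTTP_REQUEST {
   if { [HTTP::path] starts_with "/api/instances" } {
      # pull a (hypothetical) "id" query parameter out of the request URI
      set id [URI::query [HTTP::uri] "id"]
      # echo it back inside an otherwise canned JSON payload
      HTTP::respond 200 content "{\"id\": \"$id\", \"state\": \"running\"}" \
         "Content-Type" "application/json"
   }
}
```

A GET to /api/instances?id=42 would get back a mock payload containing the caller’s own id, which is often enough to make client-side integration tests feel “real.”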

You can base the decision on more than just the URI, of course; there’s a veritable cornucopia of options with network-side scripting that, while not nearly as robust as a full server-side scripting environment, is quite robust. That means the requirement to “be set up by a caller to fail for the next request from a specific account, with a specific failure” should be elementary using persistence-based mechanisms such as cookie handling.
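One way that cookie-based failure injection might be sketched out: a caller primes the next failure via a control URI, and the requested status code rides back in on a cookie. The /mock/fail control path and the mock_fail cookie name are assumptions for illustration, not anything Steve specified:

```tcl
when HTTP_REQUEST {
   # control URI: e.g. GET /mock/fail?code=503 arms the next failure
   if { [HTTP::path] eq "/mock/fail" } {
      set code [URI::query [HTTP::uri] "code"]
      HTTP::respond 200 content "next request will fail with $code" \
         "Set-Cookie" "mock_fail=$code"
      return
   }
   # if the failure cookie is present, return the requested error once
   if { [HTTP::cookie exists "mock_fail"] } {
      set code [HTTP::cookie "mock_fail"]
      HTTP::respond $code content "simulated failure" \
         "Set-Cookie" "mock_fail=deleted; Expires=Thu, 01 Jan 1970 00:00:00 GMT"
      return
   }
   HTTP::respond 200 content "put your mock data here"
}
```

Because the armed failure is carried in the caller’s own cookie, it is naturally scoped per client – each test account can request its own failure without stepping on anyone else’s.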


NETWORK-SIDE ENDPOINTS NOT JUST FOR TESTING


In fact, why stop at just building the mock API in network-side scripting? If you proxy it all through network-side scripting there are a number of benefits to your application:

  1. Makes it easier to scale up. You can distribute requests based on the API call being invoked, i.e. calls that read the database go here while calls that write the database go there. This makes it easier to scale up if specific API calls are causing undue stress on servers, and to better utilize the resources you have available.
  2. Lets you handle versioning/deprecation more elegantly. Because you have a chance to examine the calls before they are sent on to the application, you can segment by version or degrade support for obsolete API calls more elegantly, without burdening the application with calls it can’t/won’t execute anyway.
  3. Makes it easier to integrate third-party applications. Whether it’s right up front or later on, proxying API calls via network-side scripting lets you switch out authentication methods and take advantage of other sources of functionality without needing to make massive changes within the application. Using network-side scripting early in the API development process lets you determine how best to include the functionality available on the application delivery platform in your overall application architecture strategy. It gives you an excuse to test out what you can and cannot do with such platforms and to leverage a very strategic point of control in the exchange of data between clients and your application.
  4. Allows for more accurate throttling. Because the network-side scripting platform is likely to have higher capacity limits, it can handle higher volumes without timing out while accurately throttling requests based on user, IP address, application, type of call – anything. The context awareness of advanced application delivery platforms enabled with network-side scripting makes it possible to perform API throttling based on a variety of variables, including the specific API calls being invoked.
  5. You can add new functionality even before the implementation is finished. Because you can “mock up” the endpoint, you can have partners, customers, users, etc. start integrating new functionality through new APIs before the actual implementation is finished. This accelerates time to market.
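The API-based request distribution in point 1 might be sketched like this; the path patterns and pool names (api_read_pool, api_write_pool) are assumptions for illustration and would map to whatever your deployment defines:

```tcl
when HTTP_REQUEST {
   # segment traffic by API call: read-heavy calls to one pool,
   # write calls to another, everything else to the default
   switch -glob [string tolower [HTTP::path]] {
      "/api/query*" -
      "/api/list*" {
         pool api_read_pool
      }
      "/api/create*" -
      "/api/update*" {
         pool api_write_pool
      }
      default {
         pool api_default_pool
      }
   }
}
```

The same switch structure is also the natural place to implement point 2: a version prefix like /api/v1/ can be matched and routed (or refused) before the application ever sees it.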

Network-side scripting is an elegant way to augment or implement mock endpoints – and even production-quality APIs – for any application, but particularly for Web 2.0 and RESTful APIs. Using network-side scripting allows you to employ additional functionality delivered by advanced application delivery platforms that may not be easily accessible in server-side environments, such as user location, network conditions, and other infrastructure- and network-focused data (context!). It also empowers applications with acceleration and optimization capabilities on a per-API-call basis, such as applying compression or encryption to specific API calls but not others, based on the size or sensitivity of the data being exchanged.
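The per-API-call compression idea could look something like this sketch; the /api/export path is hypothetical, and the COMPRESS commands assume an HTTP compression profile is enabled on the virtual server:

```tcl
when HTTP_REQUEST {
   # compress only the (hypothetical) bulk-export call, which returns
   # large payloads; skip compression for small, chatty calls where
   # the CPU cost outweighs the bandwidth savings
   if { [HTTP::path] starts_with "/api/export" } {
      COMPRESS::enable
   } else {
      COMPRESS::disable
   }
}
```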
