What is a Programmable Data Path?

#SDN #devops It's the "do not disturb" sign on the hotel door....

 

I admit it. I'm guilty of using self-referential definitions when discussing the topic of programmable networks. Somehow, while trying to define what a programmable network is, I end up using the term "programmable".

Because that seems fairly obvious, right?

Only it's not.

The concept is not a new one. Routers, switches, load balancers and a variety of other network-hosted intermediate devices have been modifying inbound and outbound traffic for myriad purposes for some time. Data flows into the device, where it's transformed, modified, or otherwise manipulated, and then it's sent on to the next hop in the network (or service in the chain, whichever you prefer). The same thing occurs on the outbound path: data is received, inspected, transformed if necessary, and then sent on its merry way back to the end-user.

What the notion of a programmable data path adds, however, is extensibility by the operator. It's the ability for you to implement the logic that performs said modification or transformation. Implementing that logic requires the ability to construct a program, hence the term programmable.

Rather than being presented with a checkbox or radio button and perhaps a few configurable parameters, you write logic that executes "in the network" and interacts directly with the data plane.

An Example: API Metering

For example, let's say you want to implement an API metering solution. Sure, you could let the application developers do that. After all, they're the ultimate endpoint. But the load balancing solution (which you know you'll have, because you need to scale and, well, load balancing) is architecturally a better option as the penultimate endpoint. That's because part of the point of limiting API service to X calls per minute/hour/day is to maintain availability; if a request gets all the way to the API service, it's already consuming resources and defeating the purpose of saying "no" to the end-user.

Seriously, it's like waiting until housekeeping is in the room to tell them you'd rather not be disturbed. We don't roll like that; we put a sign on the door (the load balancing service, if you will) to let them know they can skip your room. That's the architectural principle behind offloading to application delivery services, especially for requests that may end up being denied for security or capacity reasons. Why stress the compute if you don't have to?

The problem is that it's not as though the load balancing service keeps track, in its own memory, of how many API calls user X has made thus far. That's usually stored elsewhere, such as in a database. Generally speaking, network-hosted services do not communicate with databases. But when we start talking about programmability, that's exactly what we want to happen. The load balancing service should be able to execute logic that a) queries the database and b) determines whether or not a given API request should be fulfilled based on the usage history for that user (the API key, usually, but there's a relationship there, so let's run with it).
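To make that concrete, here's a minimal sketch of what such metering logic might look like, written in Python purely for illustration. The limit, the window, and the in-memory usage store are all hypothetical stand-ins; a real deployment would query whatever database or cache actually holds per-key usage counts.

# Minimal sketch of the metering decision. The limit, window, and usage
# store below are hypothetical; a real deployment would query whatever
# database or cache actually holds per-key usage counts.
import time
from typing import Dict, Tuple

RATE_LIMIT = 1000        # allowed calls per window (made-up number)
WINDOW_SECONDS = 3600    # one-hour window

# Stand-in for an external usage store (e.g., a cache or database).
usage_store: Dict[str, Tuple[int, float]] = {}

def should_fulfill(api_key: str) -> bool:
    """Return True if this API key is still under quota for the current window."""
    now = time.time()
    count, window_start = usage_store.get(api_key, (0, now))

    # Start a fresh window once the old one has expired.
    if now - window_start >= WINDOW_SECONDS:
        count, window_start = 0, now

    if count >= RATE_LIMIT:
        return False  # over quota: say "no" at the load balancer

    usage_store[api_key] = (count + 1, window_start)
    return True

The design point worth noting is that the decision is made entirely at the load balancing tier; the API service never sees a request that's already over quota.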

A network-hosted service with a programmable data path would allow you to write the code necessary to implement this kind of API metering logic. It would allow you to assign that logic to a particular endpoint (a virtual server or virtual IP address in the vernacular) and have it execute every time it's triggered. The trigger is some network "event", such as an HTTP request or response.
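As a rough illustration of that assign-and-trigger model, the sketch below (again in Python, with an invented event and registration API; real platforms each expose their own, F5 iRules firing Tcl on events like HTTP_REQUEST, for instance) wires the should_fulfill() function from the previous sketch to an HTTP request event on a hypothetical virtual server.

# Hypothetical wiring of metering logic to a virtual endpoint. The event
# names and registration API below are invented for illustration only.
from dataclasses import dataclass, field
from typing import Callable, Dict, List

@dataclass
class HttpRequest:
    api_key: str
    path: str

@dataclass
class VirtualServer:
    """Stand-in for a virtual server / VIP with event-driven hooks."""
    name: str
    handlers: Dict[str, List[Callable[[HttpRequest], bool]]] = field(default_factory=dict)

    def on(self, event: str, handler: Callable[[HttpRequest], bool]) -> None:
        """Register a handler to run whenever the named event fires."""
        self.handlers.setdefault(event, []).append(handler)

    def handle(self, event: str, request: HttpRequest) -> bool:
        # All registered handlers run when the event fires; if any returns
        # False, the request is rejected before reaching the API service.
        return all(h(request) for h in self.handlers.get(event, []))

# Attach the metering logic (should_fulfill from the earlier sketch,
# assumed to be in scope) to the HTTP request event on a virtual server.
api_vip = VirtualServer(name="api.example.com")
api_vip.on("http_request", lambda req: should_fulfill(req.api_key))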

In this way, you've extended the capability of "the network" by imbuing it with intelligence through the ability to execute application logic. It's programmable, changeable, adaptable.

The Performance Factor

The key, of course, is that while this is certainly preferable to more traditional architectures in which the application manages such functionality, there remains a concern over performance. One of the reasons we generally don't see this kind of functionality "in the network" is that it adds latency to every request, and latency is a no-no. Oh, it exists, we all know that, but the goal is to limit its impact as much as possible to keep performance within an acceptable range for the end-user.

Advancements in circuitry and processors are changing that, and we've come to the point where achieving this kind of agility in the network is not only possible, it's possible without compromising on performance.

And that means there are exciting times ahead as we start exploring the different ways in which we can take advantage of a programmable network.


 

Published Mar 22, 2013
Version 1.0
