I'm sticking with my VM/Roadwork analogy; it seems to hold up rather well.  :)

I've been working a good bit lately on VDI architecture in the data center. Not so much the implementation of things like VMware View and building brokers and such, but one level up, on topics like application traffic management, universal access, and user policies. Fun stuff, but I do wish I could add a few more hours to the work day so I could spend more time on these ideas - I still haven't given ThinApp the time it deserves (don't tell the dog: he expects any extra hours in the day to be dutifully turned into trips to the dog/snow park).

One of the ideas that's been picking at me lately is the migration of client data from the user network into the data center with VDI. I telecommute and have a home computing environment that keeps my F5 VPN connection (to BIG-IP SAM, Secure Access Manager, no less ;) back to corporate sandboxed: my internal work data (email, intranet) flows over my SSL VPN, and the rest of my work-related traffic (reading the internet, blogging, tweeting) goes over my direct connection. There is no sharing of data - I route only the necessary work data through the VPN. I can do this because I have control over my local work computing environment (along with F5 IT, of course, who enable this via my SSL VPN access policy) - my application traffic is sourced from my home office, so I can control how it leaves my home office. My networking and routing environment is local. Contrast this model with the traditional VDI model (granted, this example is a bit extreme and assumes that all of my work-tethered network traffic is pushed through my remote desktop).
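The split-tunnel policy above boils down to a simple per-destination decision: corporate prefixes ride the VPN, everything else goes out the direct connection. Here's a toy sketch of that decision logic; the prefixes are hypothetical placeholders, not anything from my actual access policy.

```python
import ipaddress

# Hypothetical corporate prefixes that should ride the SSL VPN;
# everything else goes out the direct broadband connection.
CORP_NETS = [ipaddress.ip_network("10.0.0.0/8"),
             ipaddress.ip_network("172.16.0.0/12")]

def next_hop(dst: str) -> str:
    """Decide which path a destination takes under split tunneling."""
    addr = ipaddress.ip_address(dst)
    if any(addr in net for net in CORP_NETS):
        return "vpn"      # internal work data (email, intranet)
    return "direct"       # everything else (web, blogging, tweeting)

print(next_hop("10.1.2.3"))     # -> vpn
print(next_hop("142.250.1.1"))  # -> direct
```

The point of the sketch: the routing decision lives at my edge, on my hardware. In the VDI model that follows, there's no such decision to make - everything is already inside the data center.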

This type of distributed routing architecture changes, however, when users move to a fully remote and distributed VDI implementation where we don't own our desktop networking. My "local" desktop is now hosted in a data center alongside hundreds of other "local" desktops as well as non-desktop applications. The only work traffic that comes and goes in my home office is desktop/GUI transport and encapsulation (insert your choice of RDP, ICA, or some future transport protocol here) funneling everything back to the corporate data center. All my user-spawned, work-related traffic - the stuff I normally spawn from my local desktop, such as browsing my Google Reader feeds - is now created in a remote data center and sourced out of that data center. The application network request doesn't come from my home office. This is good (in theory) for my local broadband connection, because I'm not using it for anything beyond my VDI client. It's bad, though, for the network connection in and out of the data center that's hosting my desktop: now all my user traffic is sharing ports, switches, and pipes with everyone else's desktop traffic. And it's bad for my user experience, because my source traffic is now competing with everyone else's (just like when I was in the office, which is one of the reasons I benefit from working remotely today) and is then re-packaged and tunneled down to me, adding processing and delivery time to my experience. I get hit waiting for my requests to leave and return to the data center, and again waiting for the response to be remotely rendered and packaged for delivery over VDI.

The best example of this is Web 2.0 traffic, sites like Google Maps. A quick map request for Alexandria, Egypt consumes ~1 MB of HTTP traffic from desktop request to desktop response. Today, for me, that's 1 MB of traffic on the segmented, non-VPN'd portion of my broadband connection, and it's easy: it takes ~5 seconds to pull up Maps, make the request, and get back a satellite picture of Alexandria. My employer doesn't have to manage or deal with that traffic. But when I'm using a remote desktop, that's 1 MB of traffic round-trip from my VDI image in a data center, going out a link shared with the rest of my co-workers' VDI images and most likely some other non-VDI traffic, and half of it (the response) is then packaged for delivery over RDP/ICA in a GUI. I'm not generating that request from my home connection - I'm generating it on and over work's network resources from my VDI image.
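To see how this adds up at the data center edge, here's a back-of-envelope sketch. The ~1 MB per map lookup comes from the example above; the user count, lookup rate, and display-overhead fraction are purely illustrative assumptions, not measurements.

```python
# Back-of-envelope: aggregate data-center load when user web traffic
# originates from hosted desktops instead of home connections.
# MAP_REQUEST_MB is from the Google Maps example; everything else
# below is an assumed, illustrative figure.

MAP_REQUEST_MB = 1.0      # ~1 MB of HTTP per map lookup
USERS = 500               # assumed number of concurrent VDI desktops
LOOKUPS_PER_HOUR = 4      # assumed per-user lookup rate
DISPLAY_OVERHEAD = 0.5    # assumed fraction re-sent as RDP/ICA display updates

# Web traffic that now enters and leaves the data center on shared links
http_mb = MAP_REQUEST_MB * USERS * LOOKUPS_PER_HOUR
# Plus the rendered responses pushed back down to the clients
display_mb = http_mb * DISPLAY_OVERHEAD

print(f"HTTP in/out of DC:  {http_mb:.0f} MB/hour")   # -> 2000 MB/hour
print(f"Display to clients: {display_mb:.0f} MB/hour")  # -> 1000 MB/hour
```

Even with these modest assumed numbers, traffic that used to be spread across hundreds of home broadband connections is now concentrated on the data center's shared links - and a chunk of it is paid for twice, once as HTTP and again as remoted display.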

So at what point does the benefit of having managed desktops in the data center (and believe me, I think there are huge benefits to this model on the management and security sides) outweigh the cost and limitations of moving all of this user-based traffic into the data center? What impact does all this VDI client traffic impose on the data center network? There are network-level solutions for traffic management and optimization, such as BIG-IP LTM, that can optimize RDP/ICA to the user on one side and HTTP on the other. But what about traffic outbound from the data center, and - more importantly for the best user experience and for security - correlating the client-side traffic with the data-center-side user traffic?

I'm really looking forward to what VDI looks like a few years in the future, when managing remote desktops is no longer hampered by having to discretely manage user traffic in the data center. We'll get there, as long as we keep thinking about how all these new virtual technologies impact the network. Think about the infrastructure first and we'll reap the benefits.