A lot has changed with the release of Horizon with View in June 2014. Aside from support for RDS hosted desktops and published applications using PCoIP, there is also a new feature called Cloud Pod Architecture (CPA). CPA enables desktop entitlements that span multiple View pods within or across multiple data centers.

F5’s Local Traffic Manager (LTM), Access Policy Manager (APM), and Global Traffic Manager (GTM) solution has been able to address this challenge for some time. From a 30,000-foot view, here is how today’s integrated VMware/F5 solution works when detecting an existing session without Cloud Pod Architecture:

  • GTM gets you to a data center based on source IP, geo, least connections, etc.
  • You then land in one of two typical configurations:
    • LTM load balances you between View Security Servers (external connections)
    • LTM load balances you between View Connection Servers (internal connections)
  • You authenticate…
  • APM can detect a user’s existing session across multiple View pods, and send you to that data center to reconnect to an existing desktop
  • You are reconnected to your session!
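
To make the LTM step above concrete, here is a minimal sketch of what a pool and virtual server for internal Connection Server traffic might look like from the tmsh command line. Everything here is hypothetical – the object names, addresses, and ports are placeholders, and exact syntax varies by TMOS version:

```shell
# Hypothetical example: an LTM pool and virtual server fronting two
# View Connection Servers (internal connections). Names and addresses
# are placeholders, not a reference configuration.

# Health-monitored pool of Connection Servers
tmsh create ltm pool view_cs_pool \
    members add { 10.10.0.11:443 10.10.0.12:443 } \
    monitor https

# Virtual server that load balances client connections across the pool
tmsh create ltm virtual view_cs_vs \
    destination 10.10.0.100:443 \
    ip-protocol tcp \
    pool view_cs_pool \
    profiles add { tcp clientssl } \
    source-address-translation { type automap }
```

An equivalent pair of objects would front the Security Servers for external connections.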

With the introduction of Cloud Pod Architecture, how does this impact the F5 solution? What’s different? What value-add does F5 provide in this updated environment?

The beauty of the VMware/F5 relationship is that the solutions COMPLEMENT each other very well. But a word to the wise: what you need (versus what you want) should be driven by your organization’s business and technical requirements, weighed in concert with the View/F5 solution capabilities.

Cloud Pod 101

So, let’s take a quick look at what Cloud Pod Architecture is and how it works. I’m not going to reinvent the wheel explaining this, as Narasimha Krishnakumar (Director of Product Management – EUC @ VMware) does a spot-on job of explaining it – check out this link for more info:

http://blogs.vmware.com/euc/2014/04/vmware-horizon-6-cloud-pod-architecture.html

Basically, you can federate multiple “independent” View pods and bring together pools from each View Pod so they appear as a “single” global pool (the official term is Global Entitlement). If a user connects to one View Pod and their desktop resides in another, the View Pod they connect to authenticates and brokers the connection on behalf of the other – and BAM! you are connected to your desktop.

This graphic - courtesy of VMware’s EUC Technical Enablement team - is the picture that’s worth a thousand words:

Let’s walk through the flow of a connection to a Cloud Pod-enabled desktop pool:

  1. The user connects to a single-namespace URL managed by a load balancer, or directly to a View Connection Server, and logs into View using the appropriate credentials.
  2. The View Connection Servers search the Global AD LDS (where the CPA pool information is stored) and the local View Pod’s AD LDS.
  3. The View Connection Server then checks the state of the desktop using the VIPA protocol and enumerates the desktops in the client.
  4. The user chooses a desktop.
  5. If the chosen desktop pool is CPA-enabled and the desktop is in the other View Pod (in this case, the other data center), the connection is made from the client to the desktop in the remote location.

Even though the desktop is in NYC (in this example), the user connected to the London Connection Servers – these brokers authenticated the user on behalf of NYC, so the user never passes through the brokers in NYC. This same traffic flow would also apply if there were Security Servers – the connection to the NYC data center would be proxied through the Security Servers in London.

So, does this remove the need for the F5 Username Persistence solution or the need for load balancing in general?

Well, the honest answer is “it depends”. You still need to load balance between Security Servers and Connection Servers for system resiliency and scalability. As to whether CPA adequately replaces F5’s username persistence solution, you need to do some homework to determine the best approach. Here are some key points to help you determine what you’ll need for load balancing/connection management and session persistence when using F5’s APM and/or View’s Cloud Pod Architecture (CPA):

  • You STILL need to route the initial connection to the appropriate data center (in a multiple data center model). CPA doesn’t get the connection to the data center. F5’s Global Traffic Manager (GTM) module is the method used to make this happen.
  • You STILL need to load balance connections between a View Pod’s Connection Servers and Security Servers. CPA doesn’t do this either. F5’s Local Traffic Manager (LTM) is the best choice for intelligent load management and monitoring of Connection/Security Server resources.
  • Cloud Pod Architecture supports RDS hosted desktops and traditional hosted desktops – HTML Access desktops and RDS hosted applications (App Remoting) are not currently supported.
  • Although Cloud Pod Architecture can broker access and proxy the connection to a desktop in another pod, the final network connection between the client and the desktop (or Security Server, if external) may not take an optimal path. The connection may cross an internal link that is bandwidth-constrained or high in latency.

If we use the picture above as an example, the user is accessing their desktop in the NYC Pod through the London Pod. The data therefore flows over the internal link – which must handle latency-sensitive PCoIP traffic on top of the other inter-data center traffic it already carries.
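
The GTM piece – landing the initial connection in the right data center in the first place – could be sketched along these lines in tmsh. Again, this is a hedged illustration, not a reference configuration: the server, virtual server, and wide IP names are made up, and GTM object syntax differs across TMOS versions:

```shell
# Hypothetical example: a GTM pool whose members are the LTM virtual
# servers fronting View in each data center, plus a wide IP that picks
# a data center by topology (e.g., client geography).

# Pool of data center entry points (server:virtual-server references)
tmsh create gtm pool view_dc_pool \
    members add { london-bigip:view_vs nyc-bigip:view_vs }

# Wide IP for the single-namespace URL; topology load balancing steers
# clients to the closest/most appropriate data center
tmsh create gtm wideip view.example.com \
    pool-lb-mode topology \
    pools add { view_dc_pool }
```

Other load balancing modes (round robin, global availability, and so on) can be substituted for topology, depending on how you want initial connections distributed.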

How does F5's Username Persistence solution complement View's Cloud Pod Architecture?

F5’s username and session persistence solution can address many of the previously mentioned challenges through the use of GTM, LTM, and/or APM. Here's some guidance that will help you choose the right path:

  • Leverage F5’s Username/Session Persistence to address these requirements:
    • Ability to detect and reconnect to existing RDS hosted application sessions - F5’s APM can detect existing sessions and route users to that existing data center or View Pod.
    • Requirement to reconnect to HTML-based desktops across multiple View Pods or data centers. Username and session persistence works with HTML Desktops.
    • Provide an option to route the user’s View desktop/application connection across the most optimal connection, rather than traversing an internal or constrained/latent network connection.
  • Use APM’s integrated PCoIP Proxy feature to keep access simple and secure.
    • It’s a secure and scalable alternative that removes the need for Security Servers in the DMZ.
    • Works OUT OF THE BOX with View’s Cloud Pod Architecture.
    • Leverages an existing F5 BIG-IP device already in the DMZ, enhancing its functionality and your existing investment.
    • Ability to provide multiple, unique instances of PCoIP Proxy Servers for different access scenarios, all running on a single appliance.

Well, that wraps up this blog post. Our next blog post will focus on understanding and implementing F5’s PCoIP Proxy feature – we’ll cover how it works, when to use it, and how to integrate it with View.

You can also send any topics or ideas to vmwarepartnership@f5.com.

Until next time...