Forum Discussion

Geoff_Duke_2020's avatar
Geoff_Duke_2020
Nimbostratus
Jun 11, 2015

Exchange 2013 load balancing per preferred architecture

I'm new-ish to Exchange and to the F5 LTM platform, and I'm trying to get a handle on the best way to implement a load-balancing configuration that aligns with Microsoft's Exchange Preferred Architecture and their recommendations regarding load balancing.

 

If I understand correctly, the preference is for Layer 7, no session affinity, and per-protocol availability. They want the availability of services on the load balancer to closely match the availability of services on the Exchange server itself, since the Exchange Managed Availability service monitors and responds to service issues.

 

The F5 Exchange 2013 Deployment Guide appears to use a dedicated user account to perform actual connections to OWA in order to check availability, rather than leveraging the /healthcheck.htm URL as recommended by Microsoft.

 

My questions:

 

  1. Has anyone in the community here configured their LTM to monitor Exchange service availability using the healthcheck.htm URL?

     

  2. Have you encountered any problems with Kerberos when using SSL offloading?

     

  3. Do you use Layer 4 instead? How do you do nPath routing with two sites and separate VLANs for each? (My two data centers are a few miles apart with a 20 Gb connection between them, so I'm planning to have both sites active.)

     

My team and I aren't particularly enthusiastic about iApps and templates (and $$$) for a config that doesn't align with Microsoft's recommendations.

 

Any suggestions, or pointers to docs and sample configs, would be most appreciated.

 

6 Replies

  • Dayne_Miller_19's avatar
    Dayne_Miller_19
    Historic F5 Account

    Hi Geoff-

     

I'm one of the F5 engineers who create and validate our Exchange guidance and iApp.

     

    To answer your questions:

     

    1. Our monitors (as configured by the iApp) have two modes that you can choose between: Simple and Advanced. Simple uses the healthcheck.htm URIs for each service you're deploying; Advanced substitutes monitors that do actual logins using real accounts that you provide (for Autodiscover and ActiveSync, we actually use both the full-login monitors and the healthcheck.htm monitors when in Advanced mode). We don't find Microsoft's Managed Availability to be very effective. For instance, a CAS server can have no access to a Mailbox server (for any number of reasons) and still report healthy services. You can demonstrate this easily enough by turning off all Mailbox servers in your environment; you'll be able to successfully connect to the healthcheck.htm URI all day long. The flip side is that full-login monitors, which do take into account the ability to connect all the way through to the Mailbox servers on each protocol, can also erroneously mark CAS servers down if the mailbox database associated with the monitored account is down, or if the account is locked. That's one reason we suggest using two separate monitors with different accounts. (A tmsh sketch of a Simple-style monitor follows this list.)
    2. No problems have been reported to us when using Kerberos with SSL offload.
    3. I'm not sure I've ever heard of a customer using L4 and nPath routing for Exchange. You lose almost all of the features by which BIG-IP LTM adds value: per-application monitoring, SSL Offload, content caching, content compression, OneConnect (TCP multiplexing), etc. Although Microsoft shows L4 as a supported configuration, it's not ideal for all the reasons they call out, and they don't mention nPath in any document I've seen for Exchange. You also don't have the option in the future of adding things like pre-authentication (via BIG-IP's APM module).
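 

    For reference, a Simple-style monitor is essentially just an HTTPS monitor aimed at a service's healthcheck.htm URI. Here's a minimal sketch in tmsh config terms; the object name, Host header, and timing values are placeholders, not literally what the iApp emits:

 

        # Minimal sketch of a Simple-style OWA monitor; the name and
        # Host header below are hypothetical.
        ltm monitor https /Common/owa_healthcheck_monitor {
            defaults-from /Common/https
            interval 30
            timeout 91
            send "GET /owa/healthcheck.htm HTTP/1.1\r\nHost: mail.example.com\r\nConnection: Close\r\n\r\n"
            recv "200 OK"
        }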

    I'm curious about your objections to iApp templates "and $$$"; iApps are an integral part of BIG-IP version 11.0 and later, and don't add to cost in any way. Actually, since they simplify both initial setup and application lifecycle management, while allowing those who may not be familiar with BIG-IP to set up and maintain otherwise-complex environments, they can be seen as a cost-reduction tool in the vast majority of cases. That has been, and remains, one of our primary goals. If there's a specific feature or set of features that you believe the Exchange iApp should include, please feel free to respond to this thread or send your request to 'solutionsfeedback@f5.com'.

     

    I hope that information was helpful.

     

    -Dayne

     

  • Thanks for your quick reply, Dayne.

     

    Regarding question 1: given that the Client Access Server proxies traffic to the mailbox server that contains the active copy of a user's mailbox, I would want to preserve the availability of the protocols that are part of the CAS role, which would potentially handle any user's requests, and let Exchange (and my monitoring solution) deal with the availability of mailbox databases. I wouldn't want to take a whole server, or even a protocol service, out of availability because a test user's database was offline. My implementation will involve over two hundred mailbox databases.

     

    2: OK, that's good to know.

     

    3: The discussion we're having concerns the complexities of SSL offload, SNAT, header manipulation, etc., as compared to the relative simplicity of nPath. Since Microsoft describes a methodology for preserving per-protocol availability while still leveraging Layer 4 load balancing, that's an attractive route. If there are limitations other than compatibility with additional BIG-IP products, I'd love to know about them soon.

     

    4: I acknowledge your point that a configuration tool can save time and money. Our concern is with tools and wizards that hide complexity, especially when we need to troubleshoot something and find we don't understand how it works.

     

    Clearly, I have a lot more reading and testing to do.

     

    Again, thanks for your response, Dayne. It's very helpful.

     

  • Dayne_Miller_19's avatar
    Dayne_Miller_19
    Historic F5 Account

    Hi again, Geoff.

     

    Unfortunately, there is NOT a way to preserve per-protocol availability with Layer 4 load balancing. The single Layer 4 method described by Microsoft explicitly states:

     

    As long as the OWA health probe response is healthy, the load balancer will keep the target CAS in the load balancing pool. However, if the OWA health probe fails for any reason, then the load balancer will remove the target CAS from the load balancing pool for all requests associated with that particular namespace. In other words, in this example, health from the perspective of the load balancer, is per-server, not per-protocol, for the given namespace. This means that if the health probe fails, all client requests will have to be directed to another server, regardless of protocol.

     

    nPath (aka Direct Server Return) is actually pretty complicated: it requires a non-standard topology and changes to the target servers themselves, and it results in a configuration that's really hard to troubleshoot (all traffic is encrypted, the request path is different from the return path, etc.).
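 

    To give a sense of the moving parts, here's a bare-bones tmsh sketch of an nPath-style virtual server; all names and addresses are hypothetical, and each server would additionally need the virtual address bound to a loopback adapter with ARP suppressed:

 

        # Bare-bones nPath (Direct Server Return) sketch; every name and
        # address here is hypothetical. Because return traffic bypasses
        # BIG-IP entirely, there's no SNAT, no SSL offload, and no L7
        # visibility on responses.
        ltm profile fastl4 /Common/npath_fastl4 {
            defaults-from /Common/fastL4
            loose-close enabled
        }
        ltm pool /Common/exchange_npath_pool {
            members {
                /Common/10.0.2.11:443 {
                    address 10.0.2.11
                }
                /Common/10.0.2.12:443 {
                    address 10.0.2.12
                }
            }
            monitor /Common/gateway_icmp
        }
        ltm virtual /Common/exchange_npath_vs {
            destination /Common/10.0.1.100:443
            ip-protocol tcp
            profiles {
                /Common/npath_fastl4 { }
            }
            pool /Common/exchange_npath_pool
            translate-address disabled
            translate-port disabled
        }
        # Each pool member must also answer for 10.0.1.100 on a loopback
        # interface (with ARP disabled) to accept traffic sent to the VIP.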

     

    iApps are designed to aid in both initial configuration and lifecycle management, but you still have full visibility into all created objects and configuration parameters. Actually, with Components View, you have a better view of the items that directly relate to your application than if you were to configure objects manually.

     

    Assuming your Exchange environment is already configured and you already have your certificate and key loaded onto BIG-IP, you can quite literally complete the Exchange iApp in less than 2 minutes, resulting in a well-vetted and reproducible deployment. We have thousands of customers that have used the Exchange iApp for large deployments.

     

    We do acknowledge that the monitoring solution is less than ideal, but we offer more flexibility and more options than anyone else (including Microsoft). The architecture of Exchange 2013 (and 2016) is such that CAS is mostly a "dumb" proxy, and Managed Availability doesn't accurately reflect the state of the CAS server in terms of end-to-end availability or even per-protocol health in most cases, making it tough to determine if a server is actually appropriate as a target for traffic. We're considering other options for determining accurate health in the future, but don't have a good solution yet. However, no one else has anything better, or even as good in most cases ;)

     

  • Our plan is to create a VIP per protocol, and a separate pool for each, with monitors against the protocol-specific healthcheck.htm file - roughly like the sketch below. Seems like it should work.
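 

    In rough tmsh terms, something like the following, where the names, Host header, and addresses are hypothetical (OWA and ActiveSync shown; the other protocols would follow the same pattern):

 

        # One healthcheck.htm monitor, pool, and virtual server per protocol.
        # All names, the Host header, and addresses are hypothetical.
        ltm monitor https /Common/owa_hc {
            defaults-from /Common/https
            send "GET /owa/healthcheck.htm HTTP/1.1\r\nHost: mail.example.com\r\nConnection: Close\r\n\r\n"
            recv "200 OK"
        }
        ltm monitor https /Common/eas_hc {
            defaults-from /Common/https
            send "GET /Microsoft-Server-ActiveSync/healthcheck.htm HTTP/1.1\r\nHost: mail.example.com\r\nConnection: Close\r\n\r\n"
            recv "200 OK"
        }
        ltm pool /Common/owa_pool {
            members {
                /Common/10.0.2.11:443 {
                    address 10.0.2.11
                }
            }
            monitor /Common/owa_hc
        }
        ltm pool /Common/eas_pool {
            members {
                /Common/10.0.2.11:443 {
                    address 10.0.2.11
                }
            }
            monitor /Common/eas_hc
        }
        ltm virtual /Common/owa_vs {
            destination /Common/10.0.1.101:443
            ip-protocol tcp
            pool /Common/owa_pool
        }
        ltm virtual /Common/eas_vs {
            destination /Common/10.0.1.102:443
            ip-protocol tcp
            pool /Common/eas_pool
        }

 

    That way a server whose OWA probe fails only drops out of the OWA pool, and the other protocols keep sending to it.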

     

    • benjamin_gate's avatar
      benjamin_gate
      Altostratus

      Hi Geoff, I've deployed Exchange 2010 with 4 nodes using the f5.microsoft_exchange_2010_2013_cas.v1.6.2 iApp and it works a treat. I'm not using SSL offloading; I opted for the additional security, as I have dedicated ESXi hosts for each half of the cluster. I am having one issue: I'm currently working with F5 support on an intermittent SSO problem that randomly affects about 4 of our 4,500 users per week - very strange. Apart from that, the system is rock solid, and it has easily achieved the targeted 99.999% availability in the two years it's been in operation.

  • Welsh's avatar
    Welsh
    Nimbostratus

    While Exchange 2013 offers a wide variety of architectural choices for on-premises deployments, the Preferred Architecture is Microsoft's most scrutinized one ever. While there are other supported deployment architectures, they are not recommended.
