The Disadvantages of DSR (Direct Server Return)

I read a very nice blog post yesterday discussing some of the traditional pros and cons of load-balancing configurations. The author comes to the conclusion that if you can use direct server return, you should.

I agree with the author's list of pros and cons; DSR is the least intrusive method of deploying a load-balancer in terms of network configuration. But there are quite a few disadvantages missing from the author's list.

Author's List of Disadvantages of DSR

The disadvantages of Direct Routing are:

  • Backend server must respond to both its own IP (for health checks) and the virtual IP (for load balanced traffic)
  • Port translation or cookie insertion cannot be implemented.
  • The backend server must not reply to ARP requests for the VIP (otherwise it will steal all the traffic from the load balancer)
  • Prior to Windows Server 2008, some odd routing behavior could occur.
  • In some situations either the application or the operating system cannot be modified to utilise Direct Routing.
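
On Linux, the first and third items are typically handled by putting the VIP on the loopback interface and suppressing ARP for it. The following is a minimal sketch, assuming a layer-2 DSR deployment and an example VIP of 192.0.2.10; exact steps vary by distribution and load balancer:

```shell
# Assign the VIP (example address) to the loopback interface with a /32
# mask so the server accepts traffic addressed to the VIP without
# routing or advertising it.
ip addr add 192.0.2.10/32 dev lo

# Suppress ARP replies for the VIP so the backend doesn't "steal"
# traffic from the load balancer:
#   arp_ignore=1   - only answer ARP if the target IP is configured on
#                    the interface the request arrived on
#   arp_announce=2 - always use the best local source address when
#                    sending ARP requests
sysctl -w net.ipv4.conf.all.arp_ignore=1
sysctl -w net.ipv4.conf.all.arp_announce=2
```

With this in place the server still answers health checks on its own IP, while load-balanced traffic for the VIP is accepted on loopback and responses go directly back to the client.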

Some additional disadvantages:

  1. Protocol sanitization can't be performed.
    This means vulnerabilities introduced through manipulation or lax enforcement of RFCs and protocol specifications can't be addressed.
  2. Application acceleration can't be applied.
    Even the simplest of acceleration techniques, e.g. compression, can't be applied because the traffic is bypassing the load-balancer (a.k.a. application delivery controller).
  3. Implementing caching solutions becomes more complex.
    The asymmetric routing that makes DSR so easy to implement requires that caching solutions be deployed elsewhere, such as via WCCP on the router. This requires additional configuration and changes to the routing infrastructure, and it introduces another point of failure as well as an additional hop, increasing latency.
  4. Error/Exception/SOAP fault handling can't be implemented.
    In order to address failures in applications, such as missing files (404) and SOAP Faults (500), the load-balancer must inspect outbound messages. In a DSR configuration this ability is lost, which means errors are passed directly back to the user with no opportunity to retry the request, write a log entry, or notify an administrator.
  5. Data Leak Prevention can't be accomplished.
    Without the ability to inspect outbound messages, you can't prevent sensitive data (SSN, credit card numbers) from leaving the building.
  6. Connection Optimization functionality is lost.
    TCP multiplexing can't be accomplished in a DSR configuration because it relies on separating client connections from server connections. This reduces the efficiency of your servers and minimizes the value added to your network by a load balancer.

There are more disadvantages than you're likely willing to read, so I'll stop there. Suffice it to say that the problem with the suggestion to use DSR whenever possible is this: if you're an application-aware network administrator, you know that most of the time DSR isn't the right solution, because it restricts the ability of the load-balancer (application delivery controller) to perform additional functions that improve the security, performance, and availability of the applications it is delivering.

DSR is well-suited, and always has been, to UDP-based streaming applications such as audio and video delivered via RTSP. However, in the increasingly sensitive environment that is application infrastructure, it is necessary to do more than just "load balancing" to improve the performance and reliability of applications. Additional application delivery techniques are an integral component to a well-performing, efficient application infrastructure.

DSR may be easier to implement and, in some cases, may be the right solution. But in most cases, it's going to leave you simply serving applications, instead of delivering them.

Just because you can, doesn't mean you should.

Published Jul 03, 2008
Version 1.0

4 Comments

  • @Paul,

    I assume by transparent proxy you mean that traffic flows through the load balancer (which acts as a forwarding switch) even though it is deployed in a DSR configuration.

    This would be similar to an L4 or L7 delayed binding scenario. Once the connection to the server is made, it simply forwards all requests/responses appropriately, but does not inspect the packets as they flow through the device. Because it is essentially acting as a bridging switch after the initial connection, all of the disadvantages apply.

    If it's acting as a true (full) proxy, then you can avoid the disadvantages, but you can't deploy the load balancer in a DSR configuration because the connections are coming from the load balancer (it becomes the client as far as the server is concerned) and must return to the load balancer.

    So I guess the short answer is "no", I'm not seeing a way to deploy DSR without incurring the disadvantages. :-)
  • @snofrar

    Can you define what you mean by "PIP" in this context? The term is frequently used in a variety of ways, and we're trying to make sure we understand how you're using it so we can answer your question correctly.

    Thanks!

  • @dave

    Your assertion that "load balancer can't use load-adaptive feedback balancing algorithms because it can't deduce origin state other than by heuristics" is incorrect. Modern load balancers are capable of integrating with servers and applications and subsequently leveraging their state in dynamic load balancing algorithms.

    As for compression on the origin server, it's inefficient and uses excessive CPU cycles. See this post and the included references re: CPU utilization, RAM, and impact on the origin server.

    Lori

  • Here is an interesting article about L3DSR:

    http://www.nanog.org/meetings/nanog51/presentations/Monday/NANOG51.Talk45.nanog51-Schaumann.pdf