Rate Pace To Reduce Packet Loss

Rate Pace is a TCP Express™ feature that you should be using. Most TCP profile options have difficult tradeoffs. When we introduced Rate Pace in F5® TMOS® 11.5.0, it was unquestionably better for throughput but had difficult implications for CPU efficiency in some deployments. We've whittled away at those efficiency issues, and today Rate Pace is an essentially free improvement in application throughput.

TCP's various windows sometimes allow senders to transmit a burst of packets all at once. This burst traditionally goes out at the line rate of the local interface. If that line rate exceeds the throughput of the bottleneck router, packets accumulate in the router's buffer and may even overflow it, causing drops. Packet losses then cause TCP to throttle back its bandwidth estimate of the path, with bad implications for performance.
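
To make that arithmetic concrete, here is a minimal back-of-the-envelope sketch in Python. The line rate and burst size are illustrative assumptions, not figures from the test below; only the 54 Mbps bottleneck and 64 KB buffer match the scenario described later.

```python
LINE_RATE_BPS = 1_000_000_000     # sender's local interface: 1 Gbps (assumption)
BOTTLENECK_BPS = 54_000_000       # bottleneck link: 54 Mbps
BUFFER_BYTES = 64 * 1024          # 64 KB router buffer
BURST_BYTES = 128 * 1024          # a 128 KB burst allowed by the TCP windows (assumption)

# While the burst arrives at line rate, the router drains at the bottleneck
# rate, so the queue grows for the duration of the burst.
burst_duration = BURST_BYTES * 8 / LINE_RATE_BPS      # seconds to emit the burst
drained_bytes = BOTTLENECK_BPS / 8 * burst_duration   # bytes forwarded meanwhile
queue_growth = BURST_BYTES - drained_bytes

print(f"queue grows by ~{queue_growth / 1024:.1f} KB during the burst")
if queue_growth > BUFFER_BYTES:
    print("that overflows the 64 KB buffer -> packet loss")
else:
    print("that fits in the buffer this time")
```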

The solution is obvious. TCP should space packet transmissions out in accordance with the bottleneck bandwidth. This allows the bottleneck router to deal with each packet before we send another one. 
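
If the bottleneck bandwidth were known exactly, the right spacing is simply the time the bottleneck needs to forward one full-size packet. A quick sketch of that calculation (the 1460-byte segment size is an assumption; the 54 Mbps figure matches the test below):

```python
MSS = 1460                    # bytes per full-size segment (typical Ethernet MSS)
BOTTLENECK_BPS = 54_000_000   # bottleneck bandwidth in bits per second

# Send one full-size packet per "drain time" of the bottleneck, so the router
# finishes forwarding each packet before the next one arrives.
gap_seconds = (MSS * 8) / BOTTLENECK_BPS
print(f"pace one packet every {gap_seconds * 1e6:.0f} microseconds")
# -> roughly 216 microseconds at 54 Mbps
```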

And indeed, a quick test shows some benefit. The chart below shows throughput of a 50 MB download over a 54 Mbps, 10 ms Round Trip Time (RTT) WAN with a 64 KB router buffer, for two congestion controls. Rate Pace benefits Highspeed congestion control significantly. Woodside, which is less prone to fill queues to exhaustion, only benefits a bit.
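
For a sense of scale, the 64 KB buffer is roughly one bandwidth-delay product (BDP) for a 54 Mbps, 10 ms path:

```python
# Bandwidth-delay product of the test path vs. the router buffer size.
BANDWIDTH_BPS = 54_000_000   # 54 Mbps bottleneck
RTT_SECONDS = 0.010          # 10 ms round-trip time
BUFFER_BYTES = 64 * 1024     # 64 KB router buffer

bdp_bytes = BANDWIDTH_BPS * RTT_SECONDS / 8
print(f"BDP ~ {bdp_bytes / 1024:.1f} KB vs. a {BUFFER_BYTES // 1024} KB buffer")
# -> BDP ~ 65.9 KB, about the same size as the 64 KB buffer
```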

The hard part is figuring out what that bottleneck bandwidth actually is. How exactly TMOS does this depends on whether the profile has a setting for rate-pace-max-rate, which we introduced in TMOS 12.0.

If there is no configured maximum rate, rate pacing is not in effect at all until there is a packet loss, which indicates TCP has reached the maximum bandwidth of the path. At that point, TCP limits the overall rate to the congestion window (cwnd) divided by the RTT. Assuming there is data available at the BIG-IP, TCP will send out a full-size (Maximum Segment Size, or MSS) packet every (RTT * MSS / cwnd) milliseconds.
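
As a worked example of that calculation (the congestion window here is an arbitrary assumption; the 10 ms RTT matches the test above):

```python
# Pacing without rate-pace-max-rate: rate = cwnd / RTT, applied after the first
# packet loss. The gap between full-size packets is MSS / rate = RTT * MSS / cwnd.
MSS = 1460             # bytes per full-size segment
CWND = 64 * MSS        # congestion window of 64 segments (illustrative)
RTT = 0.010            # 10 ms round-trip time, in seconds

pace_rate = CWND / RTT        # bytes per second
gap = RTT * MSS / CWND        # seconds between full-size packets

print(f"pace at ~{pace_rate * 8 / 1e6:.1f} Mbps, one packet every {gap * 1e6:.0f} microseconds")
# -> ~74.8 Mbps, one packet roughly every 156 microseconds with these numbers
```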

If the profile configures rate-pace-max-rate, we expect that the user has set it to reflect knowledge of the bottleneck bandwidth in the path. That rate limit is in effect from the beginning of the connection, even before any packet loss. After a packet loss, TMOS uses the cwnd/RTT calculation above, but the pacing rate cannot exceed the configured maximum.
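
Putting both cases together, the effective pacing rate can be sketched as a small function. This is only an illustration of the rule as described above, not BIG-IP source code; the function name and arguments are hypothetical.

```python
def pace_rate_bps(cwnd_bytes, rtt_seconds, max_rate_bps=None, loss_seen=False):
    """Sketch of the pacing-rate rule described above (not actual BIG-IP code)."""
    if max_rate_bps is None:
        # No rate-pace-max-rate configured: pacing starts only after a loss.
        if not loss_seen:
            return None                      # no pacing yet
        return cwnd_bytes * 8 / rtt_seconds  # cwnd / RTT
    if not loss_seen:
        # Configured maximum applies from the start of the connection.
        return max_rate_bps
    # After a loss: cwnd / RTT, capped at the configured maximum rate.
    return min(cwnd_bytes * 8 / rtt_seconds, max_rate_bps)

# Example: cwnd/RTT works out to ~74.8 Mbps, but a 54 Mbps maximum caps it.
print(pace_rate_bps(93_440, 0.010, max_rate_bps=54_000_000, loss_seen=True))
# -> 54000000
```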

Rate Pace is not enabled by default in our built-in TCP profiles. That's one of several reasons that mptcp-mobile-optimized, which does enable it, is generally the highest-performing of those profiles. It's also a reason we're planning a refresh of our built-in profiles. Until then, you can boost your performance simply by turning it on in the profiles you use.

Published Jun 03, 2016

4 Comments

  • zipzip_65424: Hi Mike,

    Is it possible in the future to consider "rate-pace-max-rate" to only be applied when congestion is noticed, rather than all the time?

    We see specific sites being congested in peak periods, and would prefer to maximise the bandwidth at any other time.

    Thanks, Nick

  • "Congestion" isn't a binary state; in fact, a large-enough TCP flow will self-congest its length with most congestion control algorithms.

     

    However, the older rate pace functionality (without rate-pace-max-rate) does what you describe. It uses the first packet loss (i.e. congestion event) to estimate the bandwidth and pace accordingly.

     

    The purpose of rate-pace-max-rate is to avoid nonsense sending rates that exceed the known bottleneck bandwidth. For instance, there is no point in sending at 1Gbps if all of the flows are going over 4G.

     

  • Is there an ETA on the TCP profile refresh? I'm guessing it'll be included in V13?

  • I can't promise any timelines, but we plan to turn our profiles into something that evolves over time, rather than remaining static. The first iteration will occur as soon as we get it through our various approval processes.