Last summer I introduced TCP Early Retransmit and Tail Loss Probe, two new features intended to shave as much as a few hundred milliseconds off some TCP connections. That may not sound like a lot, but there are plenty of market anecdotes about how delay leads to reduced business: one large company found that each 100ms of delay cost them 1% of revenue. Time is money, and at internet scale a little time is a lot of money.

In TMOS® version 12.0.0, F5® rolled out support for TCP Fast Open, a new standard that can save the time TCP usually spends in connection setup. Connection setup usually takes one round trip, which can be hundreds of milliseconds in many networks.

How It Works

A Normal 3-Way Handshake

As many of you know, TCP starts connections with a three-way handshake. The client sends a SYN with some client-side initial values. The server responds with a SYN-ACK that acknowledges those values and provides the server-side counterparts. Finally, the client acknowledges the SYN-ACK, and both sides are ready for data transfer.
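To make the cost concrete, here is a minimal Python sketch (the loopback echo setup is mine, not from the article) showing that a standard socket can't exchange application data until connect() has completed the full handshake:

```python
import socket
import threading

# A plain loopback connection: connect() must finish the full
# SYN / SYN-ACK / ACK exchange before any application data can flow.
srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
srv.bind(("127.0.0.1", 0))   # port 0: let the OS pick a free port
srv.listen(1)

def serve():
    conn, _ = srv.accept()          # handshake has completed by now
    conn.sendall(conn.recv(1024))   # echo the client's first request
    conn.close()

threading.Thread(target=serve, daemon=True).start()

cli = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
cli.connect(srv.getsockname())      # one full round trip spent here
cli.sendall(b"GET / HTTP/1.0\r\n\r\n")  # data only after the handshake
echoed = cli.recv(1024)
print(echoed)
```

On a LAN that round trip is negligible; on a mobile or long-haul path it is the hundreds of milliseconds this article is about.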

In the original TCP specifications from the early 1980s, there is no prohibition on including data with the SYN. In practice, however, servers don't accept it, because processing data from an unverified source address leaves them vulnerable to the "SYN flood" Denial of Service attack. Since sending a SYN commits no client-side state, an attacker or botnet can generate large numbers of SYNs with spoofed addresses, and a naive server would allocate connection state for each of them, potentially consuming all of its resources. Server-side implementations, including F5's TMOS, therefore allocate little or no memory to a connection until the client has committed its own resources by acknowledging the SYN-ACK.

Fast Open gets around this problem by allowing the client to request a unique, encrypted "Fast Open cookie" from the server during a connection. In subsequent connections with that server, the client can attach data to a SYN along with the cookie, which proves that the source address is valid. If the data is a complete HTTP request, or the first packet of an application-layer handshake, the server can get started on the next step one round trip earlier. That can be a big difference in short connections.
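On Linux, the socket API exposes this mechanism directly. The sketch below is illustrative rather than definitive: the function names are mine, the constants and socket options are the standard Linux ones (kernel 3.7+ on the server side), and actually carrying data in the SYN also requires the client bit of the net.ipv4.tcp_fastopen sysctl, which recent kernels set by default.

```python
import socket

# Linux-specific constants; recent Python builds expose them on the
# socket module, so fall back to the raw values (from linux/tcp.h and
# linux/socket.h) if they're missing.
TCP_FASTOPEN = getattr(socket, "TCP_FASTOPEN", 23)
MSG_FASTOPEN = getattr(socket, "MSG_FASTOPEN", 0x20000000)

def make_tfo_server(port=0, qlen=16):
    """Listening socket that hands out Fast Open cookies.

    qlen caps the queue of pending Fast Open connections -- the same
    kind of resource limit discussed later in this article."""
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    srv.bind(("127.0.0.1", port))
    # Enable Fast Open on the listener; the value is the TFO queue depth.
    srv.setsockopt(socket.IPPROTO_TCP, TCP_FASTOPEN, qlen)
    srv.listen(8)
    return srv

def tfo_send(addr, payload):
    """Client side: sendto() with MSG_FASTOPEN performs an implicit
    connect. The first contact earns a cookie (and falls back to a
    normal handshake); later contacts carry the payload in the SYN."""
    cli = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    cli.sendto(payload, MSG_FASTOPEN, addr)
    return cli
```

Note the server's explicit queue limit: even at the API level, Fast Open is designed so that unverified-but-cookied connections can't consume unbounded resources.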

Earlier this year, Lori MacVittie discussed Fast Open in depth here on DevCentral.

Does Anyone Use It?

Google drove adoption of the Fast Open standard, so they've been sure to support it in Chrome browsers running on Linux, ChromeOS, or Android. Client-side support is enabled by default in recent Linux kernels. Furthermore, Apple announced that iOS 9 and OS X 10.11 will support Fast Open, though not by default. So it's coming, especially for mobile users.

Security and Resource Considerations

Generating, encrypting, and decrypting cookies, and starting a full TCP connection when a SYN arrives, will inevitably consume additional system resources. While this penalty is not huge, we place a limit on the number of simultaneous Fast Open connections to keep it from degrading your system. Furthermore, Fast Open is disabled by default in all of our TCP profiles, so you have to make the conscious decision to turn it on.

Even so, if your applications fit the profile, I encourage you to turn this option on. If connections are short enough that saving one round trip makes an appreciable difference, and the first client packet is enough to make something happen on the server side, then Fast Open can make a real difference to your user experience and your bottom line.
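If you decide to try it on a BIG-IP, the change is a TCP profile setting. The tmsh commands below are a sketch only: the fast-open option reflects my reading of the v12.0 tcp profile, and the profile and virtual server names are placeholders, so verify the exact syntax against your version's documentation.

```
# Sketch: derive a TCP profile with Fast Open enabled and attach it
# to a virtual server ("tcp-tfo" and "my_virtual" are placeholders).
tmsh create ltm profile tcp tcp-tfo defaults-from tcp fast-open enabled
tmsh modify ltm virtual my_virtual profiles add { tcp-tfo }
```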