Forum Discussion

DFresh_4130_150
Nimbostratus
Jun 07, 2017

Slow uploads through BIG-IP

We have a web app behind an F5 BIG-IP running version 11.6.0. The app stores large binary files, so it regularly handles large uploads and downloads. During initial setup a while ago I was able to get download speeds up to around 30 Mbps, which was acceptable. We did this by updating the TCP profiles, and found that the Memory Management settings correlated directly with significant improvements. Below is what we're using for both the client and server TCP profiles, with tcp-lan-optimized and tcp-wan-optimized as the parents. We also have no iRules applied to this VIP.

Proxy Buffer High: 2516544
Proxy Buffer Low: 1258272
Receive Window: 1179630
Send Buffer: 1179630
Nagle: Disabled
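
For reference, these settings can be applied with tmsh along these lines (the profile names below are placeholders, not our actual ones):

```shell
# Sketch: custom client/server TCP profiles carrying the settings above.
# Profile names (tcp-wan-uploads / tcp-lan-uploads) are placeholders.
tmsh create ltm profile tcp tcp-wan-uploads defaults-from tcp-wan-optimized \
    proxy-buffer-high 2516544 proxy-buffer-low 1258272 \
    recv-window-size 1179630 send-buffer-size 1179630 nagle disabled
tmsh create ltm profile tcp tcp-lan-uploads defaults-from tcp-lan-optimized \
    proxy-buffer-high 2516544 proxy-buffer-low 1258272 \
    recv-window-size 1179630 send-buffer-size 1179630 nagle disabled
tmsh save sys config
```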

The receive window and send buffer are the settings that correlate directly with the increases in download speed. I've been trying to increase our upload speeds, but can't seem to find any setting that makes a difference. If I upload directly from the node the app runs on, I get 30 Mbps+ upload speeds, but my desktop running curl averages around 975k. Output from curl for an upload and a download is below. Any suggestions on what to try next or how to troubleshoot?

C:\Temp>curl -u user -T file.mp4 -X PUT https://app.example.com/abc/test-generic/ > nul
Enter host password for user 'user':
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
  5  616M    0     0    5 34.7M      0   975k  0:10:46  0:00:36  0:10:10 1025k^C

C:\Temp>curl -O https://app.example.com/abc/test-generic/file.rpm > nul
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100  117M  100  117M    0     0  22.1M      0  0:00:05  0:00:05 --:--:-- 26.1M

6 Replies

  • The first thing is to understand at which layer the delay occurs. Is it because of interface errors, TCP settings, the HTTP profile, etc.?

    Interface errors are easy to check in the interface status:

    tmsh show net interface

    For TCP, you need to take a tcpdump on the F5 and look for transfer pauses, zero window sizes, etc.

    The HTTP profile is easy to rule out: just remove the profile from the virtual server, along with any Client SSL or Server SSL profiles. You could also change the virtual server to a Forwarding (IP) type, in case you are using the default Standard type.

    After you narrow down the problem, it's easier to suggest what the cause could be.

    For tcpdump, check this solution:

    https://support.f5.com/csp/article/K411
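
    A capture along the lines of K411 might look like this (the client IP here is a placeholder; adjust the port to match the virtual server):

```shell
# Sketch: capture on all VLANs (0.0) with full packets (-s0), writing to a
# file for offline analysis. 203.0.113.10 is a placeholder client IP, and
# the :nnn suffix is BIG-IP's high-detail capture modifier.
tcpdump -nni 0.0:nnn -s0 -w /var/tmp/slow-upload.pcap host 203.0.113.10 and port 443
```

    In Wireshark, the display filters tcp.analysis.zero_window and tcp.analysis.window_full will surface the stalls quickly.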

  • It may be that the proxy buffers and receive window are actually too big. I'd recommend setting the proxy buffers back to 131072, but do increase the receive window to 256 or 512 KB. A send buffer of 1 MB is probably fine.

    For the server-side profile, you can generally use tcp-lan-optimized, unless high latency, loss, or congestion is expected between the BIG-IP and the backend servers.

    Here's my blog post. I've been able to achieve 100 Mbps uploads and 250 Mbps downloads.
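
    In tmsh, that suggestion would look roughly like this (the profile name is a placeholder for your client-side profile):

```shell
# Sketch of the suggested client-side values: proxy buffers back near the
# 131072 default, receive window raised to 512 KB, send buffer left at 1 MB.
# 'tcp-wan-uploads' is a placeholder profile name; proxy-buffer-low is kept
# below proxy-buffer-high.
tmsh modify ltm profile tcp tcp-wan-uploads \
    proxy-buffer-high 131072 proxy-buffer-low 98304 \
    recv-window-size 524288 send-buffer-size 1048576
```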

     

  • I think the F5 is buffering the entire HTTP POST request before it starts transferring data to the pool member, which in turn increases the delay.