Forum Discussion

Type11_8030
Nimbostratus
Mar 23, 2011

HTTP file download is 50% slower through F5 than direct to Apache server

We have been seeing some slowness when serving large files from four Sun 5120s running Apache behind an F5 BIG-IP 3600.

For a test we have isolated a setup with one 5120, one F5 3600, and one client machine running ApacheBench (ab).

If the client machine runs ab to fetch a 1 GB binary file with both machines connected to a gigabit Extreme switch, the transfer rate is:

Transfer rate: 112893.68 [Kbytes/sec] received

(Test 1 below)

If the client machine runs ab to fetch the same 1 GB binary file but connects directly to the F5, and the F5 connects to the Extreme switch that has the 5120 on it, the speed is noticeably lower:

Transfer rate: 80984.89 [Kbytes/sec] received

(Test 2 below)

On the F5 it is a very simple setup: one virtual server on port 80 with an HTTP profile, and one pool with a single member, the Apache server machine listening on port 8080 (so there is a port translation).
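
For reference, a configuration along those lines looks roughly like the following from the BIG-IP command line. This is a minimal sketch rather than the actual config: the object names and addresses are made up, and it assumes a TMOS version where tmsh is available.

# Hypothetical names and addresses, illustrating the one-virtual-server / one-pool / port-translation layout described above
tmsh create ltm pool apache_pool members add { 10.0.0.10:8080 }
tmsh create ltm virtual vs_downloads destination 10.0.0.100:80 ip-protocol tcp profiles add { tcp http } pool apache_pool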

I have looked for anything that might be throttling but can't find anything, and the CPU load is negligible.

Any ideas on what might be causing the discrepancy in speeds?

Thanks in advance.

Test 1

[loadgen@ca120lablg01 ~]$ ab -c 2 -n 2 http://10.77.199.145/firmware/196208_big_file.dat
This is ApacheBench, Version 2.0.40-dev <$Revision: 1.146 $> apache-2.0
Copyright 1996 Adam Twiss, Zeus Technology Ltd, http://www.zeustech.net/
Copyright 2006 The Apache Software Foundation, http://www.apache.org/

Benchmarking 10.77.199.145 (be patient).....done

Server Software:        Apache/2.2.14
Server Hostname:        10.77.199.145
Server Port:            80

Document Path:          /firmware/196208_big_file.dat
Document Length:        1048576000 bytes

Concurrency Level:      2
Time taken for tests:   18.140962 seconds
Complete requests:      2
Failed requests:        0
Write errors:           0
Total transferred:      2097152530 bytes
HTML transferred:       2097152000 bytes
Requests per second:    0.11 [#/sec] (mean)
Time per request:       18140.963 [ms] (mean)
Time per request:       9070.481 [ms] (mean, across all concurrent requests)
Transfer rate:          112893.68 [Kbytes/sec] received

Connection Times (ms)
              min  mean[+/-sd] median   max
Connect:        1     5   5.7      9       9
Processing: 17847 17993 206.5  18139   18139
Waiting:        8    10   3.6     13      13
Total:      17856 17998 200.8  18140   18140

Percentage of the requests served within a certain time (ms)
  50%  18140
  66%  18140
  75%  18140
  80%  18140
  90%  18140
  95%  18140
  98%  18140
  99%  18140
 100%  18140 (longest request)

Test 2

[loadgen@ca120lablg01 ~]$ ab -c 2 -n 2 http://192.168.128.33:8080/firmware/196208_big_file.dat
This is ApacheBench, Version 2.0.40-dev <$Revision: 1.146 $> apache-2.0
Copyright 1996 Adam Twiss, Zeus Technology Ltd, http://www.zeustech.net/
Copyright 2006 The Apache Software Foundation, http://www.apache.org/

Benchmarking 192.168.128.33 (be patient).....done

Server Software:        Apache/2.2.14
Server Hostname:        192.168.128.33
Server Port:            8080

Document Path:          /firmware/196208_big_file.dat
Document Length:        1048576000 bytes

Concurrency Level:      2
Time taken for tests:   25.445323 seconds
Complete requests:      2
Failed requests:        0
Write errors:           0
Total transferred:      2097152530 bytes
HTML transferred:       2097152000 bytes
Requests per second:    0.08 [#/sec] (mean)
Time per request:       25445.324 [ms] (mean)
Time per request:       12722.662 [ms] (mean, across all concurrent requests)
Transfer rate:          80486.30 [Kbytes/sec] received

Connection Times (ms)
              min  mean[+/-sd] median   max
Connect:       13    13   0.0     13      13
Processing: 25122 25277 219.2  25432   25432
Waiting:        1     6   7.8     12      12
Total:      25135 25290 219.2  25445   25445

Percentage of the requests served within a certain time (ms)
  50%  25445
  66%  25445
  75%  25445
  80%  25445
  90%  25445
  95%  25445
  98%  25445
  99%  25445
 100%  25445 (longest request)

6 Replies

  • Hi Brendon,

    Can you capture a tcpdump of the direct connection to the server and compare it with the slower transfer via LTM? I would guess this is an issue at layer 4 that might require tweaking the TCP profile. You could try comparing the default TCP profile with the LAN-optimized profile for the clientside and serverside connections. (Rough capture and profile commands are sketched after the replies.)

    Aaron
  • Aaron,

    First off, thanks so much for the quick reply. I did get captures the other day and looked for anything that jumped out at me, but I didn't find anything; then again, I must admit I am not a Wireshark expert. Any pointers on what I should look for in the captures? I will try switching the profiles around like you suggested; that makes sense. One thing I was wondering: could an MTU mismatch or some sort of chunking mismatch cause this? Anyway, thanks again.

  • Brendon - It could very well be Nagle's algorithm that's causing the problem. This comes up occasionally, and usually switching the client/server TCP profiles from "tcp" to "tcp-lan-optimized" will fix things (see the profile-change sketch after the replies).
  • Yeah, you nailed it on the head. Changing the protocol profile fixed the slowdown. I will go through and diff the profiles to try to find the main culprits, but this helped A LOT. I really appreciate your help on this; thanks again.
  • Brendon - everyone here is very helpful. Keep coming back! :)

    I'm actually happy when users encounter this situation because it helps them understand how flexible these settings can be. If the application had worked as expected right away, users might never touch the LTM settings again. Now that you've encountered this, you've learned more about the settings and can hopefully customize them for your own deployments.
  • It would be useful if you could find which setting(s) made the difference and report back on what you found (one way to compare the two profiles is sketched below).

    Thanks, Aaron
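
Note on the captures Aaron suggests: commands along these lines would record both paths for comparison in Wireshark. This is a sketch only; the interface name and output file paths are placeholders, while the addresses and ports come from the tests above.

# Direct path to the Apache server (Test 1 address and port); run on the client, interface name is a placeholder
tcpdump -i eth0 -s 0 -w direct.pcap host 10.77.199.145 and tcp port 80

# Path through the LTM virtual server (Test 2 address and port)
tcpdump -i eth0 -s 0 -w via_ltm.pcap host 192.168.128.33 and tcp port 8080

# The same capture taken on the BIG-IP itself; interface 0.0 spans all interfaces
tcpdump -ni 0.0 -s 0 -w /var/tmp/via_ltm_bigip.pcap host 192.168.128.33 and tcp port 8080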
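
The change Brendon describes, moving the virtual server from the default tcp profile to tcp-lan-optimized, can be made in tmsh roughly as follows. A sketch, reusing the placeholder virtual server name vs_downloads from the earlier example; replace-all-with rebuilds the whole profile list, so the http profile is listed again alongside the new TCP profile, which then applies to both the clientside and serverside connections.

# Swap the default tcp profile for tcp-lan-optimized on the virtual server, keeping the http profile
tmsh modify ltm virtual vs_downloads profiles replace-all-with { tcp-lan-optimized http }
tmsh save sys config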
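
For the follow-up Aaron asks for, the two TCP profiles can be dumped with all of their properties and compared on the BIG-IP. Which settings actually differ depends on the TMOS version; typical suspects include Nagle behaviour and the proxy and send buffer sizes, but check the diff output itself rather than relying on that list.

# Dump both profiles with every property, including inherited defaults, then compare them
tmsh list ltm profile tcp tcp all-properties > /var/tmp/tcp-default.txt
tmsh list ltm profile tcp tcp-lan-optimized all-properties > /var/tmp/tcp-lan.txt
diff /var/tmp/tcp-default.txt /var/tmp/tcp-lan.txt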