Forum Discussion

huudat_20099
Nimbostratus
Oct 29, 2007

Performance of LTM3400

Hi experts,

I'm a new F5 user, and I'm configuring a BIG-IP 3400 LTM. I know the 3400 LTM has 1 GB of RAM. The 3400 LTM is currently load balancing two real servers, with about 500 current connections per server, and the Performance page shows CPU usage of about 10% and TMM CPU usage of about 20%.

However, if I SSH in and run the top command, I see CPU usage at 98% and total RAM of 480 MB, with about 450 MB in use. I don't understand why. Please explain the difference between these two views. Many thanks.

9 Replies

  • Hi there,

     

     

    There are a few solutions on AskF5 which detail this:

     

     

    SOL3242: Traffic Management Microkernel (TMM) CPU and RAM usage

     

    https://support.f5.com/kb/en-us/solutions/public/3000/200/sol3242.html

     

     

    SOL3572: CPU usage is significantly higher after upgrading to BIG-IP version 9.0 or later

     

    https://support.f5.com/kb/en-us/solutions/public/3000/500/sol3572.html

     

     

    Reply if you have any questions on this info.

     

     

    Aaron
  • Links don't work in this forum?

     

     

    http://example.com

     

     

    Aaron
  • Hi Aaron,

     

     

    Many thanks for your help. I'm wondering about one thing: the LTM 3400 has 1 GB of RAM, but the performance graph shows only about 400 MB, and I don't know why. Also, is there a way to display the RAM capacity and CPU speed on the LTM 3400? Please help me.

     

     

    thanks.
  • Hi,

     

     

    The GUI's performance graphs show how much memory has been allocated to the host and what's actually in use by TMM. They don't show how much memory has been allocated to TMM or what's actually in use by the host. Clear as mud? :)

     

     

    You can use the command 'b global' to see the amount of physical memory installed in a unit. This is described in more detail in SOL6568 (https://support.f5.com/kb/en-us/solutions/public/6000/500/sol6568.html).

     

     

    SOL6583 (https://support.f5.com/kb/en-us/solutions/public/6000/500/sol6583.html?sr=1) describes how memory is allocated between TMM and the host.

     

     

    The CPU graph on the performance page should be accurate for both TMM and the host. You can use the top command to view CPU and memory usage of host processes.
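
    For example, from the BIG-IP command line you could run something along these lines (treat it as a sketch; the output and exact fields vary by version):

        # show platform information, including the amount of physical memory installed
        b global

        # one-shot snapshot of host CPU and memory usage per process
        top -b -n 1 | head -20

    Both commands are read-only, so they're safe to run on a unit that is passing traffic.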

     

     

    Aaron
  • Hi Aaron,

     

     

    I'm using an LTM 3400 for a demo for my customer. One domain is load balanced across two real servers; one server had about 20 Mbps of throughput and the other about 40 Mbps, and each server hosts multiple domains. My customer used a software tool to measure download speed, load time, and download time before and after putting the demo domain behind the LTM 3400, and got the results below:

    - Before the LTM 3400: average download speed 2.69 Mbps; average load time 0.041 s; average download time 0.280 s.

    - After the LTM 3400: average download speed 2.16 Mbps; average load time 0.055 s; average download time 0.354 s.

    I don't understand why the average download speed was lower and the average load time higher with the LTM 3400 in the path than without it.

    Is this caused by latency added by the LTM 3400? If so, how should we use the LTM 3400 to get the best results?

    Thanks for your help.
  • Hi,

     

     

    What protocol of traffic is being sent? I assume it's TCP-based.

     

     

    I'd first check to make sure the interfaces are set to full duplex, and then take a look at the virtual server type and TCP profile options.

     

     

    You can view/set the interface settings in the GUI under Network >> Interfaces or from the CLI using 'b interface'. Run 'b interface help' for examples.
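
    As a concrete sketch (sub-commands differ a bit between versions, so 'b interface help' on your unit is the authoritative reference):

        # show interface status, including media speed and duplex
        b interface show

        # list the available interface sub-commands and examples
        b interface help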

     

     

    If you don't need to inspect layer 7, you could change from a standard TCP virtual server to a Performance (L4) VIP with a FastL4 profile. This uses the ASIC for processing the packets and should provide better performance compared with a standard TCP VIP.

     

     

    You can also tune the TCP profile options for the environment you're testing in. In 9.4.x, F5 added new stock TCP profiles for LAN and WAN environments. You can manually configure these in pre-9.4 versions.

     

     

    Here are some related AskF5 solutions:

     

     

    SOL7612: Configuring the media speed and duplex settings for network interfaces

     

    https://support.f5.com/kb/en-us/solutions/public/7000/600/sol7612.html

     

     

    SOL5017: Overview of virtual server types

     

    https://support.f5.com/kb/en-us/solutions/public/5000/000/sol5017.html

     

     

    SOL8082: Overview of TCP connection set-up for BIG-IP LTM virtual server types

     

    https://support.f5.com/kb/en-us/solutions/public/8000/000/sol8082.html

     

     

    SOL7559: Overview of the TCP profile

     

    https://support.f5.com/kb/en-us/solutions/public/7000/500/sol7559.html?sr=1

     

     

    Advanced Design/Configuration CodeShare: LAN/WAN optimized profiles

     

    http://devcentral.f5.com/wiki/default.aspx/AdvDesignConfig.CodeShare

     

     

    SOL4812: Reasons why interactive traffic, such as Telnet and SSH, is slower when passed through BIG-IP

     

    https://support.f5.com/kb/en-us/solutions/public/4000/800/sol4812.html

     

     

    SOL7399: Configuring a LAN-optimized TCP profile

     

    https://support.f5.com/kb/en-us/solutions/public/7000/300/sol7399.html

     

     

    SOL7402: Configuring a WAN-optimized TCP profile

     

    https://support.f5.com/kb/en-us/solutions/public/7000/400/sol7402.html

     

     

    How's that for a reading list? :)

     

     

    I'm sure others here have suggestions as well for improving performance through the BIG-IP.

     

     

    Aaron
  • Paul_Szabo_9016
    Historic F5 Account
    Dat:

     

     

    Any time you run a single TCP stream and add any latency to the system, you should expect a difference in throughput.

     

     

    For example, if the server sends 8 packets before waiting for an acknowledgment and you increase the latency, your throughput will drop, because the number of times per second the server can send those 8 packets drops.
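
    To put rough numbers on that (assuming roughly 1460-byte TCP payloads, purely for illustration): 8 packets per round trip is about 11.7 KB per RTT. At a 1 ms RTT that caps a single connection at roughly 11.7 MB/s, or about 93 Mbps; double the RTT to 2 ms and the ceiling drops to roughly half that, even though no link is anywhere near saturated.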

     

     

    Instead of calling this "performance" or "speed", which can mean a lot of things, I prefer to call this "single connection throughput".

     

     

    It's very hard to measure single connection throughput in a repeatable fashion, and get the same result in the real world.

     

     

    In a performance lab you can get latency down to a bare minimum, but I just measured the ping time to a host 3 floors away, through a multitude of high-end switches, at 1100 µs, versus 153 µs to a box in the same rack. Most traffic management devices aren't going to add that much latency to your packets. So in the real world, your physical infrastructure will overwhelm any effect you might see in a performance lab when you add a traffic management device to the system.

     

     

    What kind of single connection throughput you get depends on the TCP stack implementations (window scaling enabled?), latency, and network congestion.
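
    If the real servers happen to run Linux, for example, a quick way to check whether window scaling is enabled on them is (just an illustration; the exact knob depends on the server OS):

        # a value of 1 means TCP window scaling is enabled
        sysctl net.ipv4.tcp_window_scaling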

     

     

    As hoolio suggested, you could try switching to the WAN-optimized TCP profile on the BIG-IP and see if that helps. Or you can cut latency through the BIG-IP by using the PVA ASIC, which adds latency in the low tens of microseconds.

     

     

    So why does single-connection throughput matter for this application?

     

     

    Paul

     

     

     

     

  • Mike_Lowell_108
    Historic F5 Account
    My $0.02:

     

    1) Why does the server perform at only 2.69Mbps to begin with? Seems pretty slow, even for dynamic content.

     

    2) As for BIG-IP changing the performance, I'd want to highlight a few things:

     

    - Was the first test directly to the server on the same LAN segment? I've observed that some TCP stacks behave differently when communicating with hosts on the same LAN segment versus going through their default gateway, i.e. the server may be changing its behavior and causing the change in performance, rather than BIG-IP directly.

     

    - BIG-IP's default settings are probably best suited for a reasonably fast WAN. To get ideal performance on a LAN, using FastL4 or a modified TCP profile may be required. To get ideal performance on a not-so-nice WAN, modifying the TCP profile may also be necessary.

     

    - If the customer's test simulates the expected use case for the deployed product (i.e. appropriate client bandwidth, representative application access pattern, etc.), it probably makes sense to do additional testing and figure out what's going on. However, if this test isn't representative of the expected use case (latency, bandwidth, access pattern, number of concurrent users, etc.), then I'd probably look past this one test for the time being. It's not uncommon for performance to vary slightly in a small, isolated test, but it would be a big surprise if BIG-IP negatively impacted performance in a real deployment.
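
    If you do want a simple, repeatable throughput check of your own, something along these lines works from any client that has curl (the URL is just a placeholder for one of the objects you are testing):

        # print total transfer time and average download speed (bytes/sec) for each of 5 runs
        for i in 1 2 3 4 5; do
            curl -s -o /dev/null -w '%{time_total}s  %{speed_download} B/s\n' http://virtual-server.example.com/testfile
        done

    Running the same loop against a real server directly and then against the virtual server gives a like-for-like single-connection comparison.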

     

     

    Good luck!
  • We had a similar problem. We do large downloads and push about 1.5 Gbps through our 6800s. Make sure you're using a Performance HTTP profile, not a Standard one.

    The Standard profile inspects both inbound and outbound packets, whereas Performance HTTP only inspects inbound traffic and doesn't really care what goes out. Depending on which iRules you use, FastHTTP might not work with some of them.

    If you run FastL4, you'll notice almost no loss of speed. With Performance (Layer 4) to a single GigE server we hit 105 MB/s, but with Performance HTTP we only got 40 MB/s at best.