About 100% tmm cpu (50% in dual CPU) usage for BIG-IP 6400 LTM


We use two F5 BIG-IP 6400 LTMs in Active-Standby mode. I've found that tmm uses 50% of total CPU on the 6400 (in fact, 100% of cpu01). First question: is this normal behavior? Both the active and the standby LTM show 100% on a single CPU.

Second question: how many requests/sec can a 6400 LTM handle?


6 Answer(s):

Which LTM version are you running? Are you seeing 100% CPU usage by TMM from the top utility?

Take a look at SOL3242 for details on ways to accurately measure TMM CPU usage. tmstat is a handy utility you can use to get accurate TMM CPU and memory metrics from the command line.
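If you'd rather script it than eyeball top, something along these lines pulls tmm's %CPU out of a top snapshot. This is just a sketch, assuming the classic top column layout shown later in this thread (where %CPU is the 4th field from the end); adjust the field offset if your top version prints different columns:

```shell
# One process line as printed by top on the 6400 (old-style layout).
# On a live box you would pipe "top -b -n 1" through the same awk
# instead of echoing a saved line.
line='1763 root RT 0 3333M 3.3G 3328M R 50.0 84.1 2677h 1 tmm'

# %CPU is the 4th field from the end in this layout.
echo "$line" | awk '$NF == "tmm" { print "tmm CPU%:", $(NF-4) }'
```

Keep in mind the caveat from SOL3242 either way: top's number for tmm is not a reliable measure of real TMM load, since TMM polls even when idle.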

This post has some related info as well:

Performance question

Okay, we're running "BIG-IP 9.4.7 Build 330.0 Hotfix HF2". According to the document, shouldn't it max out at 90% CPU time rather than 100%?

CPU states:  cpu     user   nice  system   irq  softirq  iowait    idle
             total   51.7%  0.0%    0.4%  0.0%     0.0%    0.0%   47.8%
             cpu00    3.9%  0.0%    0.0%  0.0%     0.0%    0.0%   96.0%
             cpu01  100.0%  0.0%    0.0%  0.0%     0.0%    0.0%    0.0%
Mem:  4053936k av, 4014464k used,   39472k free,  0k shrd,  132856k buff
                    428896k actv,   35104k in_d,  62372k in_c
Swap: 6352892k av,   21532k used, 6331360k free            205000k cached

  PID USER  PRI NI  SIZE  RSS SHARE STAT %CPU %MEM  TIME CPU COMMAND
 1763 root   RT  0 3333M 3.3G 3328M R    50.0 84.1 2677h   1 tmm
 6710 root   24  0  1072 1072   772 R     0.9  0.0  0:00   0 top
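For what it's worth, the 51.7% "total" user figure is just the two CPUs averaged, which is also why tmm shows up as roughly 50% overall:

```shell
# The aggregate "total" line averages the per-CPU user times:
# cpu00 user 3.9%, cpu01 user 100.0% -> roughly the 51.7% shown above
# (the small gap is nice/system time and sampling jitter).
awk 'BEGIN { printf "average user%%: %.2f\n", (3.9 + 100.0) / 2 }'
```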

The behavior you are describing is normal. The 6400 and 6800 platforms running v9.x use a single TMM process that consumes one CPU at 100%. The second CPU handles all other functions (health checks, administration, etc.). CMP (Clustered Multi-Processing) is explained in SOL7751. The solution Aaron referenced, SOL3242, does mention this behavior on the 6400/6800 in the v9.4 - v9.4.1 section, just not very prominently. Here is the relevant text from SOL3242:

"Although they are multi-processor platforms, BIG-IP 6400 and 6800 do not support CMP in these versions. They only run one TMM instance and process traffic as noted for BIG-IP versions 9.0 through 9.3.1 above."

In v10.x this behavior changed on the 6400/6800 so that it matches the other platforms: starting with v10.0.0 you will see two TMMs running, each locked to a CPU and each able to use a maximum of 90% of its CPU. SOL9763 describes the performance changes that come with this change in behavior.
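As a back-of-envelope comparison of the data-plane ceiling before and after that change (assuming the 90%-per-TMM cap described above, and ignoring everything else that differs between versions):

```shell
# v9.x on the 6400/6800: one TMM, up to 100% of a single core.
# v10.x: two TMMs, one pinned per core, each capped at 90% of its core.
awk 'BEGIN {
  printf "v9 TMM ceiling:  %d%% of one core\n", 1 * 100
  printf "v10 TMM ceiling: %d%% across two cores\n", 2 * 90
}'
```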

Yep, tmstat or the performance graphs are the most accurate way to get the TMM CPU usage on a 6400.

Okay, thanks. Now I need to find another reason why our 6400 sometimes reports health check failures.

Monitoring is done by the bigd process, which runs on the host CPU (CPU0), so it shouldn't be affected by CPU1 saturation even if TMM were actually running hot (which it probably isn't).
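If you want to double-check which CPU each process is landing on, here is a generic Linux sketch, not an F5-specific tool. Note the PSR column only shows the last CPU the process ran on, not a hard binding:

```shell
# Show the last CPU each process ran on (PSR column); on a 6400 you'd
# expect bigd on CPU 0 and tmm pinned to CPU 1.
ps -eo pid,psr,comm | awk 'NR == 1 || $3 == "bigd" || $3 == "tmm"'
```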

Is the problem with all monitors and pools or just specific pool members? Does it appear to be load related (i.e., does it occur throughout the day or only during high-traffic periods)? What configuration do you have for the monitors/pools that are being marked down?
