Forum Discussion

O2_Support_6853
May 12, 2010

Rate-Shaping bursty traffic

Currently we have an issue in our network with the following architecture:

6 servers ----------> F5 ver 10.1.0 -------------------> Cisco ASA ----------------------> Net B

All 6 servers need to send 200 messages to clients on Net B at the exact same time through the F5. The message size is 100 bytes and all of the messages are scheduled within 1 millisecond. There is then no traffic for around 3 seconds, after which the same messages must be sent again, and so on. Note: the F5 has Gig interfaces and the traffic is sent over already established TCP connections.

The calculation is as follows: 200 messages x 100 bytes x 6 servers = 120,000 bytes in 1 millisecond, or roughly 960 Megabits per second while the burst lasts. This is fine for the F5's Gig interfaces; however, the Cisco ASA does not appear to have sufficient queue sizes and so tail-drops the messages beyond roughly 50 Megabits per second.

We would like to implement Rate Shaping to resolve this issue. My query is: if we use sfq on the F5 and set

Base Rate = 50Mbps
Ceiling Rate = 50Mbps
Burst Size = 0

will the F5 keep the extra burst in a queue and release it from the queue over the following second, or will it discard the traffic? The docs are lacking in info on this matter.

(Please note: TCP retransmission should recover the dropped messages, but the applications are failing after the retransmits, so we wish to resolve this without relying on retransmission.)

Thanks in advance.
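For anyone sanity-checking those numbers, here is a quick back-of-the-envelope sketch in plain Python. The burst size, the 50 Mbps rate and the 3-second gap come from the post above; the "ideal shaper" model (unbounded queue, constant drain) is only an illustration, not a statement about how the F5 implements Rate Shaping.

```python
# Back-of-the-envelope check: could an ideal 50 Mbps shaper absorb the burst?
# "Ideal shaper" = unbounded queue drained at a constant rate (an assumption
# for illustration, not the F5's actual behavior).

BURST_BYTES = 200 * 100 * 6        # 200 msgs x 100 bytes x 6 servers = 120,000 B
BURST_WINDOW_S = 0.001             # the whole burst arrives within 1 ms
SHAPE_RATE_BPS = 50_000_000        # Base Rate = Ceiling Rate = 50 Mbps
IDLE_GAP_S = 3.0                   # quiet period before the next burst

# Instantaneous offered rate while the burst is arriving
offered_bps = BURST_BYTES * 8 / BURST_WINDOW_S            # ~960 Mbps

# Bytes the shaper can release during that same millisecond
sent_during_burst = SHAPE_RATE_BPS / 8 * BURST_WINDOW_S   # 6,250 bytes

# Backlog an ideal shaper would have to queue, and how long it takes to drain
backlog_bytes = BURST_BYTES - sent_during_burst           # 113,750 bytes
drain_time_s = backlog_bytes * 8 / SHAPE_RATE_BPS         # ~0.018 s

print(f"offered rate during burst : {offered_bps / 1e6:.0f} Mbps")
print(f"backlog to be queued      : {backlog_bytes:.0f} bytes")
print(f"time to drain at 50 Mbps  : {drain_time_s * 1e3:.1f} ms")
print(f"fits in the 3 s idle gap  : {drain_time_s < IDLE_GAP_S}")
```

So if the excess really is queued rather than dropped, the backlog only needs to be held for about 18 ms, which is tiny compared with the 3-second gap; whether the F5 actually queues it is exactly the question in this thread.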

4 Replies

  • Hi Hamish, thanks for the reply. I spoke to an SE at F5 and he stated that if you exceed the limit specified by the burst/ceiling rate it will discard the traffic as opposed to queueing it, so technically that is policing, not shaping. That worries me a little, as it seems to imply there is no queue!

    I would, however, like to test this in the lab, as the spikes occur within a millisecond or two and then no spikes occur again for a number of seconds, so the link is fairly idle for the rest of the period. I will reply once it has been tested.
  • Hamish
    That's worrying... because although the solution notes do say that if there's too much traffic it eventually does get dropped, they also say the dropping happens when the queue is full... which to me implies that there is at least SOME queuing...

    It would be nice to get a definitive answer... Is Spark or anyone around?

    H
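    To make the shaping-versus-policing distinction concrete, here is a toy model in plain Python. It is purely illustrative: a fixed-rate link drained in 1 ms slices, fed with the 50 Mbps figure and the burst from the original post. It says nothing about how TMOS actually implements Rate Shaping.

```python
# Toy comparison of policing (discard the excess) vs shaping (queue the excess).
# Fixed-rate link modelled in 1 ms slices. Illustrative only; NOT a description
# of the F5's Rate Shaping implementation, just the textbook difference.

RATE_BPS = 50_000_000                    # 50 Mbps, as in the original post
PKT_BYTES = 100
STEP_S = 0.001                           # 1 ms time slices
PER_STEP_BYTES = RATE_BPS / 8 * STEP_S   # 6,250 bytes may leave per slice

# Offered load: 1,200 x 100-byte messages in the first millisecond, then idle.
arrivals = [1200] + [0] * 49             # packets arriving per slice (50 ms shown)

def run(mode):
    queue = sent = dropped = 0
    for pkts in arrivals:
        backlog = queue + pkts                            # shaper carries old backlog
        tx = min(backlog, int(PER_STEP_BYTES // PKT_BYTES))
        sent += tx
        excess = backlog - tx
        if mode == "shape":
            queue = excess                                # hold excess for later slices
        else:                                             # "police"
            dropped += excess                             # discard excess immediately
            queue = 0
    return {"sent": sent, "dropped": dropped, "still_queued": queue}

print("policer:", run("police"))   # ~62 packets sent, ~1,138 discarded
print("shaper :", run("shape"))    # all 1,200 sent, spread over ~20 ms
```

    If the SE's description is accurate, the box behaves like the "police" branch once the allowance is exceeded; if the solution notes are accurate, it behaves like the "shape" branch until its own queue fills.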
  • I'd just like to add this too; I hope somebody can help me check it.

    If we configure a rate-shaping class set to:

    Base Rate: 160 Mbps
    Ceiling Rate: 160 Mbps
    Burst Size: 0
    Queue Method: pfifo

    1. Will the F5 buffer the excess traffic in the first and second milliseconds that exceeds the Ceiling Rate, and transmit it in the following 2 milliseconds where the link is idle and has capacity? These are TCP streams, all terminating at different IP addresses, and the TCP sessions are already established. Can you also confirm how much traffic may be pushed towards the F5 before it too starts to tail-drop/WRED, as I see no reference to buffer/queue sizes in the documents? (A rough sketch of the arithmetic follows below.)

    2. I have seen that the SFQ queueing discipline has a queue depth of 128 packets by default in the Linux kernel, and that this queue is split between 128 streams (so only 1 packet of buffer per stream if you are using all of them). Could you confirm that this is the setting compiled into the F5's Linux kernel?

    3. What is the granularity at which the F5 calculates the current system bps? Is it based on milliseconds, or averaged over seconds?

    4. What is the packet queue depth when pfifo is used, and will we see it reflected in the ifconfig output as the txqueuelen parameter (as this is normal tc behavior)? Is it possible to modify the pfifo queue depth so it can buffer the traffic mentioned above, i.e. the 80 packets per millisecond?
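    For questions 1 and 4, here is a rough sketch of the arithmetic, assuming the original burst from this thread (1,200 x 100-byte messages within 1 ms) and assuming a pfifo limit of 1,000 packets (the common Linux txqueuelen default). Whether TMOS actually uses that depth is exactly what question 4 asks, so treat both values purely as assumptions.

```python
# Rough sketch for questions 1 and 4: how much of the burst an ideal 160 Mbps
# shaper with a pfifo queue would have to hold, and whether an ASSUMED
# 1,000-packet pfifo limit (the common Linux txqueuelen default, not a
# confirmed TMOS value) would be deep enough.

CEILING_BPS = 160_000_000           # Ceiling Rate from the class above
PKT_BYTES = 100
BURST_PKTS = 1200                   # the thread's burst: 1,200 x 100 B in 1 ms
BURST_WINDOW_S = 0.001
PFIFO_LIMIT_PKTS = 1000             # ASSUMED queue depth (Linux default)

# Packets the shaper can release while the burst is still arriving
tx_during_burst = int(CEILING_BPS / 8 * BURST_WINDOW_S // PKT_BYTES)   # 200 pkts

# Peak backlog an ideal shaper would need to queue, and time to drain it
peak_backlog = max(0, BURST_PKTS - tx_during_burst)                    # 1,000 pkts
drain_time_ms = peak_backlog * PKT_BYTES * 8 / CEILING_BPS * 1000      # ~5 ms

print(f"released during the 1 ms burst : {tx_during_burst} packets")
print(f"peak backlog to queue          : {peak_backlog} packets")
print(f"fits in assumed pfifo limit    : {peak_backlog <= PFIFO_LIMIT_PKTS}")
print(f"time to drain at 160 Mbps      : {drain_time_ms:.1f} ms")
```

    On those assumed numbers the peak backlog sits right at the 1,000-packet mark, which is why the actual pfifo depth on the box matters: a shorter queue would tail-drop part of the burst even though the 160 Mbps ceiling could clear it in about 5 ms.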

    Thanks!

    Raj
  • Hi everyone!

    I am new to the forum and what I am asking may be something very simple, but I couldn't find a sample solution anywhere on DevCentral, nor could I figure it out myself from the LTM docs.

    We need to implement simple throughput shaping, i.e. limit each user's bandwidth to a specific value, where a user is identified by client IP.

    A Rate Class is the first thing to consider, but based on what I understood from the manuals, the rating is applied to each individual connection (or pseudo-connection in the UDP case).

    Thus, if the same user opens multiple simultaneous connections, her total bandwidth usage is still not limited. Limiting the number of connections per client IP might mitigate this, but it won't be exactly what we need, as some active connections might be slow and others fast, so total usage may vary significantly depending on the content being accessed.

    Thanks in advance!
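    A quick numeric illustration of the concern, in plain Python with hypothetical numbers; the only thing taken from the manuals' description above is that the rating is applied per connection.

```python
# Why a per-connection Rate Class does not bound per-client-IP bandwidth:
# with a ceiling of R per connection, the only aggregate guarantee for one
# client IP is N x R, which grows with the number of connections N.
# All numbers here are hypothetical.

PER_CONN_CAP_MBPS = 2.0          # example per-connection rate class ceiling
DESIRED_PER_IP_CAP_MBPS = 5.0    # what we actually want to enforce per client IP

# One client IP opening progressively more simultaneous connections,
# each running at 1.5 Mbps (i.e. under its per-connection ceiling).
for n_conns in (1, 2, 4, 8):
    aggregate = sum(min(PER_CONN_CAP_MBPS, 1.5) for _ in range(n_conns))
    print(f"{n_conns} connection(s) -> aggregate {aggregate:.1f} Mbps, "
          f"guaranteed bound {n_conns * PER_CONN_CAP_MBPS:.1f} Mbps, "
          f"exceeds desired {DESIRED_PER_IP_CAP_MBPS:.1f} Mbps cap: "
          f"{aggregate > DESIRED_PER_IP_CAP_MBPS}")
```

    In other words, the per-connection ceiling bounds each flow individually, but the per-IP aggregate is bounded only by (number of connections) x (per-connection ceiling), which is what we would like to avoid.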