Forum Discussion

Charles_Harris
Jan 17, 2007

Host-based LB'ing across pool members.

Hi, I'm trying to write an iRule to ensure that one of our largest client applications gets evenly distributed across our services pool. We currently have 4 nodes in the pool with round-robin LB in place.

The problem with this is that if another client requests a connection at the same time as our 'biggest hitter' client, the large client can end up with multiple connections to only a single member of the pool instead of being properly distributed across all of the available members.

The backend and client are both WebLogic-based, so the other standard LB profiles do not provide much assistance.

Any ideas? Tips or pointers warmly received!

 

 

Currently:
--------
Server1 |-----(Big Hitter Client)-----|Member1
Server2 |-----(Big Hitter Client)-----|Member2
Server3 |-----(Big Hitter Client)-----|Member1
Server4 |-----(Big Hitter Client)-----|Member1
--------
Others  |-----------------------------|Members1,2,3,4
--------

Desired:
--------
Server1 |-----(Big Hitter Client)-----|Member4
Server2 |-----(Big Hitter Client)-----|Member3
Server3 |-----(Big Hitter Client)-----|Member2
Server4 |-----(Big Hitter Client)-----|Member1
--------
Others  |-----------------------------|Members1,2,3,4
--------

 

 

Sorry for the nasty pic.

 

9 Replies

  • Blatant bump for interest....

    I've looked at a few of the LTM iRules here, but I'm still stuck...

    Any ideas?

    Cheers,

    -=ChaZ=-

     

  • How are your applications configured on the LTM? Virtual for each application, or do you switch all applications via a single virtual? Do the requests to your LTM come directly from the clients or is there a web tier proxying these requests?
  • Hi,

    The applications we host are all configured with virtual addresses, but the clients accessing them are not. All clients connect to the service via the VIP, which does least-observed-connections LB to the 4 backends through one pool.

    The client connections are all internal; there is no proxying involved.

    I'm trying to make sure that a single client (with multiple IPs) is spread evenly across the available nodes, for performance reasons.

    At the moment the LB can end up distributing all of the desired client's connections to one or two of the nodes, due to other client requests occurring at the same time, rather than evenly across all four.

    Ultimately I end up with a disproportionate load on the two nodes that my target client is connected to, as they are heavy users of the application...

    Cheers,

    -=ChaZ=-
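    As a rough illustration of that goal (not the snippet that was actually posted later in this thread, which isn't shown here), a minimal v9-style iRule sketch that maps each of a known client's source addresses onto a different pool member. The data-group name big_hitter_ips, the pool name app_pool, and the member addresses/port are all placeholders for this example:

        when CLIENT_ACCEPTED {
            # only the known heavy-hitter source addresses are steered;
            # everyone else falls through to the virtual's default pool/LB
            if { [matchclass [IP::client_addr] equals $::big_hitter_ips] } {
                # map the last octet of the source address to a member
                # index (0-3) so each of the client's IPs consistently
                # lands on a different pool member
                set idx [expr {[getfield [IP::client_addr] "." 4] % 4}]
                switch $idx {
                    0 { pool app_pool member 10.10.10.11 7001 }
                    1 { pool app_pool member 10.10.10.12 7001 }
                    2 { pool app_pool member 10.10.10.13 7001 }
                    3 { pool app_pool member 10.10.10.14 7001 }
                }
            }
        }

    Forcing a specific member this way bypasses whatever the default LB method would have chosen, so it only makes sense for the connections matched by the data group.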

     

     

     

  • Is there anything unique in the client request (heavy hitter and otherwise) that would allow you to capture that for decision making?
  • Unfortunately not that I know of; there may be something embedded in the T3 protocol, but nothing I've seen (it's a binary connection type)...

    The only way we can identify the client is by host IP address, unless there is a way to look inside the T3 proto?

    Cheers!

    -=ChaZ=-

     

  • The details of the T3 protocol have never been released, to my knowledge. Unless you can build classes of who belongs where, I don't know that I can help you. You might consider a separate VIP for your heavy hitters and use a different load balancing method for them.
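    To make that suggestion concrete, a minimal sketch of the class-plus-dedicated-pool idea. Here big_hitter_ips would be an address data group listing the heavy-hitter client IPs, and bighitter_pool a second pool containing the same four members but configured with its own LB method; these names (and app_pool) are invented for the example:

        when CLIENT_ACCEPTED {
            if { [matchclass [IP::client_addr] equals $::big_hitter_ips] } {
                # heavy hitters get their own pool, which can be configured
                # with a different LB method (e.g. Round Robin)
                pool bighitter_pool
            } else {
                # all other clients keep the general pool and default LB
                pool app_pool
            }
        }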
  • Hi all,

    Thanks jsabe for the code snip, it looks good; I'll try it out and see how we get on. I have two slight concerns though: each 'big hitter' client will have multiple connections from each IP, so I'm not sure what the client-to-backend split will look like, and what happens when one of the nodes is unavailable? We use iControl to offline members of the pool in the event of application issues or failures.

    Thanks again, I'll let you know how we get on!

    -=ChaZ=-
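    On the 'what if a node is offlined' concern, one rough way to guard the dedicated-pool approach sketched above is to check the pool's health before steering the connection, and to reselect if the chosen member refuses the connection. The pool and data-group names are the same placeholders as before:

        when CLIENT_ACCEPTED {
            if { [matchclass [IP::client_addr] equals $::big_hitter_ips] } {
                if { [active_members bighitter_pool] > 0 } {
                    # dedicated pool still has members up - use it
                    pool bighitter_pool
                } else {
                    # every dedicated member is down/offlined - fall back
                    # to the general pool rather than dropping the client
                    pool app_pool
                }
            }
        }

        when LB_FAILED {
            # if the selected member refuses the connection, let the LTM
            # pick another member from the same pool
            LB::reselect
        }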
  • Hi again!

    With some minimal tweaking, the iRule above seems to be working a charm. Quick question though.....

    Our default LB is least observed connections; when this rule is applied to the VIP with that configuration we see weird jumping-around behavior. I assume this is because the LB-selected server for the incoming connection is moving about as per the initial LB method.

    Is there a way to get the iRule to change the LB to round robin for only the selected big-hitter list?

    Thanks again for all the help!

    -=ChaZ=-
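    As far as I know an iRule can't switch the virtual's LB method per connection; the usual answer is the dedicated Round Robin pool sketched earlier. If a second pool isn't an option, a rough alternative is to rotate the member selection inside the rule itself. The counter variable, data-group name, pool name, and member addresses/port below are all placeholders:

        when RULE_INIT {
            # shared counter used to hand out members in turn
            set ::bh_next 0
        }

        when CLIENT_ACCEPTED {
            if { [matchclass [IP::client_addr] equals $::big_hitter_ips] } {
                # do-it-yourself round robin applied only to the matched
                # big-hitter addresses; other clients keep the default LB
                set ::bh_next [expr {($::bh_next + 1) % 4}]
                switch $::bh_next {
                    0 { pool app_pool member 10.10.10.11 7001 }
                    1 { pool app_pool member 10.10.10.12 7001 }
                    2 { pool app_pool member 10.10.10.13 7001 }
                    3 { pool app_pool member 10.10.10.14 7001 }
                }
            }
        }

    Because the counter is only touched inside the matchclass branch, connections from all other clients continue to follow the virtual's least-observed-connections behaviour untouched.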