Forum Discussion

Narendra_26827
Nimbostratus
Apr 17, 2012

Universal Persistence Issue

Hello,

We have a custom iRule in which we do universal persistence on a value taken from an HTTP header.

The iRule is:

when HTTP_REQUEST {
    # Select the pool by URI prefix and persist on the orgId header value for 300 seconds
    switch -glob [string tolower [HTTP::uri]] {
        "/api/channel*" {
            pool "umps-gus1-chan-pool"
            persist uie [HTTP::header "orgId"] 300
        }
        "/api/presence*" {
            pool "umps-gus1-prsn-pool"
            persist uie [HTTP::header "orgId"] 300
        }
        default {
            pool "umps-gus1-nginx-pool"
        }
    }
}

In theory, a persistence record should be maintained based on the value of the orgId HTTP header, with a timeout of 300 seconds.
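
One way to see whether a record actually exists at request time is to look it up from the iRule itself; a minimal diagnostic sketch (not part of the rule above) using persist lookup uie:

when HTTP_REQUEST {
    # Diagnostic only: check whether a universal persistence record
    # already exists for the incoming orgId value.
    set org_id [HTTP::header "orgId"]
    set record [persist lookup uie $org_id]
    if { $record eq "" } {
        log local0. "No persistence record yet for orgId=$org_id"
    } else {
        log local0. "Existing persistence record for orgId=$org_id: $record"
    }
}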

But we are seeing that about 25% of the requests are not sticking to one node; they are getting load balanced to other nodes.

Is there anything we are doing wrong here? Our intention is for each orgId value's persistence record to stick to a single node in the above pools (chan and prsn).
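
One thing we have not ruled out yet (just an assumption at this point) is requests arriving without an orgId header, in which case persist uie would be given an empty key. A minimal sketch of the channel case that persists only when the header has a value and logs when it does not:

when HTTP_REQUEST {
    if { [string tolower [HTTP::uri]] starts_with "/api/channel" } {
        pool "umps-gus1-chan-pool"
        set org_id [HTTP::header "orgId"]
        if { $org_id ne "" } {
            persist uie $org_id 300
        } else {
            # Requests with no orgId header cannot be persisted on it;
            # log them so they can be compared against the ~25% figure.
            log local0. "[IP::client_addr]: /api/channel request without orgId header"
        }
    }
}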

Can anybody help?

Thanks.

Narendra

3 Replies

  • Hi Narendra,

    Can you try this with persistence set to none for the last pool and additional debug logging? Can you also add a OneConnect profile to the virtual server? If you're using SNAT, you can use the default OneConnect profile with a /0 source mask. If you're not doing server-side source address translation, create a custom OneConnect profile with a /32 source mask and add that to the virtual server.

    
    when HTTP_REQUEST {
        switch -glob [string tolower [HTTP::uri]] {
            "/api/channel*" {
                pool "umps-gus1-chan-pool"
                persist uie [HTTP::header "orgId"] 300
                log local0. "[IP::client_addr]:[TCP::client_port]: Using umps-gus1-chan-pool"
            }
            "/api/presence*" {
                pool "umps-gus1-prsn-pool"
                persist uie [HTTP::header "orgId"] 300
                log local0. "[IP::client_addr]:[TCP::client_port]: Using umps-gus1-prsn-pool"
            }
            default {
                pool "umps-gus1-nginx-pool"
                log local0. "[IP::client_addr]:[TCP::client_port]: Using umps-gus1-nginx-pool"
                persist none
            }
        }
    }
    when LB_SELECTED {
        log local0. "[IP::client_addr]:[TCP::client_port]: Selected [LB::server]"
    }
    when SERVER_CONNECTED {
        log local0. "[IP::client_addr]:[TCP::client_port]: Connected: [IP::server_addr]:[TCP::server_port], [LB::server]"
    }
    

    Aaron
  • Thanks hoolio. Will try to do that.

    Right now the request flow is configured so that the client first hits the external BIG-IP (which has source address persistence) on a VIP on port 443; that VIP's pool contains the VIPs (on port 80) of the internal BIG-IPs.

    The VIPs (on port 80) on the internal BIG-IPs (i.e. bigip1 & bigip2) have our iRules, pools, and health monitors. There the request is directed to the appropriate pool, i.e. channel, presence, or nginx.

    We suspect that requests are getting load balanced from the external BIG-IPs to the internal BIG-IPs, and because of this a separate persistence record is getting created for the same orgId on each internal BIG-IP.

    OneConnect is enabled on both layers of BIG-IPs.

    What would you suggest? Could this be an issue?

    Thanks.

    Narendra

  • "It is suspected the request is getting load balanced from the external BIG-IPs to the internal BIG-IPs, and due to this a separate persistence record is getting created for the same orgId." I do not think so. The external BIG-IP uses source address persistence, so all requests from one client should be forwarded to the same internal BIG-IP.

    By the way, have you added the logging Aaron suggested? Is there any useful information in it?
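
    If you want to double-check the external tier as well, here is a minimal sketch of an iRule (assuming one can be attached to the external virtual server) that logs which internal BIG-IP VIP each client address is sent to:

    when LB_SELECTED {
        # Log the client address and the pool member (an internal BIG-IP VIP)
        # chosen by the external virtual server, to confirm that a given
        # client is always sent to the same internal BIG-IP.
        log local0. "[IP::client_addr] -> [LB::server addr]"
    }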