Forum Discussion

ekaleido
Aug 07, 2015

iRule persistence issue (aka persistence isn't working)

I have the iRule below, which only ever seems to direct traffic to a single node in the pool, despite the $vws_id variable being unique for each user session. Anything obvious I'm missing?

 

when CLIENTSSL_DATA {
    # Pull the ticket out of the collected payload: start 8 characters past
    # the beginning of the "TICKET" match and read up to the closing "]"
    set vws_id [findstr [SSL::payload] "TICKET" 8 "\]"]
    if { $vws_id ne "" } {
        log "Logged session ID as: $vws_id"
        pool _vworkspace85.netsmartcloud.com_pool
        persist uie $vws_id 1800
        SSL::release
    }
}
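
(For context: CLIENTSSL_DATA only fires on payload gathered with SSL::collect, so a companion event along these lines is assumed to be in place. A minimal sketch:)

when CLIENTSSL_HANDSHAKE {
    # Start collecting decrypted client data once the handshake completes,
    # so that CLIENTSSL_DATA can fire on it.
    SSL::collect
}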

 

10 Replies

  • So three questions:

    1. Do you see the correct vws_id value in all of the logs?

    2. Are you actually setting the persistence table entry anywhere else with a "persist add" (syntax sketched below)?

    3. Is this not HTTP traffic?
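
    For reference, a "persist add" call would look something like this (a sketch reusing the $vws_id key from the iRule above):

    persist add uie $vws_id 1800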

  • I do see the correct vws_id. I am not doing a "persist add" anywhere, just the persist you see above, and this is not HTTP.

     

    I can see the persistence records via Module Statistics for each of the sessions as well.

     

  • I'm suspecting then that there is no persistence entry. You could confirm this by actually looking at the persistence table:

    tmsh show ltm persistence persist-records
    

    If that's the case, and with multiple clients connecting you have nothing in the persistence table, you'd need to find something in the response (preferably the first application response) to use for that persistence entry. Does the server actually send the TICKET value back to the client? A rough sketch of the response-side approach follows.
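
    Something along these lines, assuming the server echoes the TICKET back and that the server-side payload can be collected the same way. A sketch, not tested:

    when SERVERSSL_HANDSHAKE {
        # Server-side counterpart of the client-side collect; SERVERSSL_DATA
        # only fires on payload gathered with SSL::collect.
        SSL::collect
    }
    when SERVERSSL_DATA {
        # Same extraction as the client side: skip 8 chars past the start
        # of "TICKET", read up to the closing "]"
        set resp_id [findstr [SSL::payload] "TICKET" 8 "\]"]
        if { $resp_id ne "" } {
            # Bind the just-selected pool member to the ticket value.
            persist add uie $resp_id 1800
        }
        SSL::release
    }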

  • I am seeing the persistence entries via tmsh. I'd paste the output but the TICKET is in the neighborhood of 200 characters long and reads as a single line on the CLI output.

     

    This is a Citrix-like app: a user logs in over real HTTPS to a web interface, which presents a desktop. The app server that actually runs the presented apps then opens a session to the web interface as well, which is tied, by TICKET, to the user session, and RDP is essentially tunneled back over SSL. (At least, that is how it has been explained to me.) The web interfaces don't keep any state between themselves, so having the app server land on a different web interface is bad. The vendor claims the only way to do persistence is based on the TICKET id in the payload.

    Hopefully that clears things up a bit. Everything seems to be working, except for getting everyone pinned to the same web interface.

     

  • Okay then, time to shoot from the hip a little.

     

    1. Do you have OneConnect enabled on the VIP? I'm not sure how it would affect this particular protocol, though.

    2. In the TMSH persistence list, do you see each persistence record pointing to a different pool member, or all pointing to the same member?

    Again, just spitballing, but it could be that all of the clients are actually getting load balanced to a single server, or that the TICKET value isn't getting read correctly by the persist uie command. Can you add a log statement to the iRule that shows a persist lookup (along with the client source address) and see what that gives you?

     

  • Not as familiar with persist lookup as I probably should be, but something like this?

     

    if { $vws_id ne "" } {
        set pserver [persist lookup source_addr [IP::remote_addr]]
        log "$pserver"
    }

     

  • The persistence is based on the TICKET value, so:

    log local0. [persist lookup uie $vws_id]
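
    To capture the client source address alongside the lookup in one line (a sketch; IP::client_addr is the client-side equivalent of the IP::remote_addr used above):

    log local0. "client [IP::client_addr] -> [persist lookup uie $vws_id]"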
    
  • So this is what gets logged right before it logs the ticket number:

     

    Rule /Common/ssl_payload_capture :

     

    So would that indicate it isn't reading the ticket properly?

     

  • It could mean that, or it could mean that you're not actually load balancing to different pool members in the first place. When you run the TMSH persistence query, do you see every client being assigned to a different pool member?
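
    One more quick way to check the distribution is the per-member connection counts (pool name taken from the original iRule):

    tmsh show ltm pool _vworkspace85.netsmartcloud.com_pool members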