Forum Discussion

t-roy
Nov 28, 2012

uneven load with Universal persist profile

I created a universal persistence profile called X-FORWARDED-FOR and am using a very simple iRule to persist on the X-Forwarded-For header:

 

ltm persistence universal X-FORWARDED-FOR {
    defaults-from universal
    match-across-pools disabled
    match-across-services disabled
    match-across-virtuals disabled
    mirror disabled
    override-connection-limit disabled
    rule X-FORWARDED-FOR
    timeout 180
}

ltm rule X-FORWARDED-FOR {
    when HTTP_REQUEST {
        persist uie [HTTP::header "x-forwarded-for"]
    }
}

 

The problem is that I don't see persist records for each of the backend pool members. Here is the pool we are sending to:

 

ltm pool mypool-40290 {
    members {
        10.10.10.10:40290 {
            session monitor-enabled
        }
        10.10.10.10:40310 {
            session monitor-enabled
        }
        10.10.10.10:40320 {
            session monitor-enabled
        }
        10.10.10.10:40330 {
            session monitor-enabled
        }
    }
    monitor GENERIC-TCP
    partition Common
    service-down-action reset
    slow-ramp-time 30
}

When I curl from the other F5 in our HA pair to the VIP, inserting an X-FORWARDED-FOR header with various different IPs, I see persistence records created for each request, but ~80 seem to go to 10.10.10.10:40290. Is there a persist mask or something I need within my iRule? I am trying to vary the X-FORWARDED-FOR header with IPs in multiple subnets.
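
Just to clarify what I mean by a "mask": I don't see a mask option on the uie persistence profile itself, so I'm assuming the only way would be to mask the value inside the iRule before handing it to persist uie. Something like this rough sketch, which is not what I'm actually running (the getfield / IP::addr handling is just my guess at how it would look):

when HTTP_REQUEST {
    # sketch only -- not the rule in use above
    # XFF can be a comma-separated list, so take the first (client) address
    set xff [getfield [HTTP::header "x-forwarded-for"] "," 1]
    if { $xff ne "" } {
        # key on the /24 network of the XFF address instead of the full
        # address, which would approximate a "persist mask"
        persist uie [IP::addr $xff mask 255.255.255.0]
    }
}

But really I just want each distinct XFF value to stick to one member, so what I'm after is why the distribution is so skewed.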

 

3 Replies

  • Can you reply with the virtual server config as well?

    Is curl reusing a TCP connection to send multiple HTTP requests?

    Can you try adding some debug logging to the iRule, repro the issue and reply back with the logs from /var/log/ltm?

    
    when HTTP_REQUEST {
        persist uie [HTTP::header "x-forwarded-for"]
        log local0. "[IP::client_addr]:[TCP::client_port]: XFF: [HTTP::header "x-forwarded-for"]"
    }
    when SERVER_CONNECTED {
        log local0. "[IP::client_addr]:[TCP::client_port]: persist record: [persist lookup uie [HTTP::header "x-forwarded-for"]]"
        log local0. "[IP::client_addr]:[TCP::client_port]: Connected: [IP::server_addr]:[TCP::client_port]"
    }
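
    You could also dump the persistence records straight from tmsh to see how they are spread across the pool members; I believe the command (on v11, which your config syntax suggests) is:

    # list the current persistence records and the pool member each one maps to
    tmsh show ltm persistence persist-records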
    

    Aaron
  • t-roy
    Sure, I will test and post the results. Here is the VS config:

     

    ltm virtual myvirtual-40291 {
        destination 10.10.10.9:40291
        ip-protocol tcp
        mask 255.255.255.255
        partition Common
        persist {
            X-FORWARDED-FOR {
                default yes
            }
        }
        pool mypool-40290
        profiles {
            SSL {
                context clientside
            }
            ONECONNECT { }
            TCP-3600 { }
            http { }
        }
        snat automap
    }

     

  • t-roy
    Well, I think things are looking better now that I increased the number of requests I threw at it. I did have to make a change:

     

    I got this error: "HTTP::header in rule (X-FORWARDED-FOR) requires an associated FASTHTTP profile on the virtual server"

    so I changed the rule to grab the header into a variable in HTTP_REQUEST and use that in the SERVER_CONNECTED logging:

     

    when HTTP_REQUEST {
        set xff [HTTP::header "x-forwarded-for"]
        persist uie [HTTP::header "x-forwarded-for"]
        log local0. "[IP::client_addr]:[TCP::client_port]: XFF: $xff"
    }
    when SERVER_CONNECTED {
        log local0. "[IP::client_addr]:[TCP::client_port]: persist record: [persist lookup uie $xff]"
        log local0. "[IP::client_addr]:[TCP::client_port]: Connected: [IP::server_addr]:[TCP::server_port]"
    }

     

     

    so here are the results varying the IPs equally:

    40290: 18
    40310: 14
    40320: 13
    40330: 16

     

     

    here are the results if the XFFs are all from similar subnets:

    40290: 14
    40310: 15
    40320: 16
    40330: 14

     

     

    I think this was just a case of me not testing thoroughly enough... This load distribution looks great to me. Thanks for the help.