Forum Discussion

Brett_10751
Jun 07, 2011

iRule to change to downpage pool and change persistence

I have a VIP set up with POOL_A, which contains two nodes and uses the default persistence profile: source address affinity with a timeout of 600 seconds.

On the VIP I have an iRule set up so that if load balancing to POOL_A fails, traffic fails over to POOL_B without persistence. POOL_B is a downpage that we want to use if the site exceeds its connection limits or is down for maintenance.

```
when LB_FAILED {
    persist none
    LB::reselect pool POOL_B
}
```

So as a test I set each node in POOL_A to a connection limit of 1. After I initiated a second connection to either node in POOL_A, I was sent over to POOL_B's downpage. I then changed the connection limit back to unlimited on POOL_A; the pool was available, but my client stayed persisted to the downpage pool, POOL_B. Only after 10 minutes (600 seconds) of refreshing was I eventually redirected back to POOL_A.

I'm not sure why I persisted to the downpage even though POOL_A was available.

Shouldn't this work? Any ideas on how I can do this differently?

Thank you,

Brett

5 Replies

  • Hi Brett,

    Instead of refreshing, did you close the browser or application and then restart it?

    Also, you can use 'persist delete' to remove a persistence entry from within the iRule instead of setting persistence to none:

    http://devcentral.f5.com/wiki/default.aspx/iRules/persist
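    A sketch of that idea (hypothetical only; it assumes source-address persistence, and that persist delete takes the mode and key as arguments — check the wiki page above for the exact syntax):

    ```tcl
    when LB_FAILED {
        # Remove this client's source-address persistence entry so the
        # failover choice is not remembered (assumed syntax; see the
        # persist wiki page above)
        persist delete source_addr [IP::client_addr]
        LB::reselect pool POOL_B
    }
    ```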

    I hope this helps

    Bhattman
  • Arie:
    Aside from the persistence issue (and without knowing the architecture), will POOL_B always be available when POOL_A is not?

    Unless you need a lot of functionality on the "downpage", it may be preferable to embed the HTML on the LTM and send it via an HTTP::respond command when POOL_A is down. That way you can display the page even if the entire environment behind the LTMs is down.
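    A minimal sketch of that idea (hypothetical markup and pool name; assumes an HTTP profile is attached to the VIP):

    ```tcl
    when HTTP_REQUEST {
        # Serve an embedded maintenance page straight from the LTM
        # when no members of POOL_A are available
        if { [active_members POOL_A] < 1 } {
            HTTP::respond 503 content {<html><body>
                <h1>Site temporarily unavailable</h1>
                <p>Please try again shortly.</p>
                </body></html>} "Content-Type" "text/html"
        }
    }
    ```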

  • It seems to be okay to me.

    ```
    [root@orchid:Active] config b virtual bar list
    virtual bar {
       snat automap
       pool foo1
       destination 172.28.17.88:http
       ip protocol tcp
       rules myrule
       persist source_addr
    }

    [root@orchid:Active] config b rule myrule list
    rule myrule {
       when LB_FAILED {
          persist none
          LB::reselect pool foo2
       }
    }

    [root@orchid:Active] config b pool list
    pool foo1 {
       members 10.10.70.110:http {
          limit 1
       }
    }
    pool foo2 {
       members 65.55.17.26:http {}
    }
    ```

    curl -I http://172.28.17.88/  <<<<< 1st request got a response from foo1

    ```
    HTTP/1.1 200 OK
    Date: Fri, 10 Jun 2011 19:50:30 GMT
    Server: Apache/2.0.59 (rPath)
    Last-Modified: Sun, 24 Oct 2010 20:57:08 GMT
    ETag: "65c0-123-1e67b100"
    Accept-Ranges: bytes
    Content-Length: 291
    Vary: Accept-Encoding
    Content-Type: text/html; charset=UTF-8
    ```

    curl -I http://172.28.17.88/  <<<<< 2nd request got a response from foo2

    ```
    HTTP/1.1 200 OK
    Date: Sat, 11 Jun 2011 04:56:09 GMT
    Server: Microsoft-IIS/6.0
    P3P: CP="BUS CUR CONo FIN IVDo ONL OUR PHY SAMo TELo"
    S: CO1MPPRENA19
    X-Powered-By: ASP.NET
    X-AspNet-Version: 2.0.50727
    Pragma: no-cache
    Set-Cookie: MC1=V=3&GUID=743c5453cf264ce69fc7072de5853936; domain=.17.88; expires=Mon, 04-Oct-2021 19:00:00 GMT; path=/
    Cache-Control: no-cache
    Content-Type: text/html; charset=utf-8
    Content-Length: 6293
    ```

    curl -I http://172.28.17.88/  <<<<< 3rd request got a response from foo1

    ```
    HTTP/1.1 200 OK
    Date: Fri, 10 Jun 2011 19:50:39 GMT
    Server: Apache/2.0.59 (rPath)
    Last-Modified: Sun, 24 Oct 2010 20:57:08 GMT
    ETag: "65c0-123-1e67b100"
    Accept-Ranges: bytes
    Content-Length: 291
    Vary: Accept-Encoding
    Content-Type: text/html; charset=UTF-8
    ```

    There is no foo2 entry in the persistence table:

    ```
    [root@orchid:Active] config b persist show all
    PERSISTENT CONNECTIONS
    |  Mode source addr  Value 172.28.17.80
    |  virtual 172.28.17.88:http  node 10.10.70.110:http  age 53sec
    |  Mode source addr  Value 192.168.206.96
    |  virtual 172.28.17.88:http  node 10.10.70.110:http  age 1sec
    ```
  • I was able to test this further on a development VIP/pools using the examples provided by nitass, and persistence does not seem to be an issue: it is working as it is supposed to. I didn't try bhattman's suggestion, as it seemed to be working okay with persist none on the test VIP. I'll have to test this further on my actual production VIP, which happens to be HTTPS (which I failed to mention in the initial post).

    I also tried Arie's suggestion of doing an HTTP respond, and it works well as long as it is an HTTP VIP with an HTTP profile attached. I really like the idea of doing it this way, but is it possible with an HTTPS VIP passing traffic to HTTPS nodes, with the F5 NOT terminating the SSL connection?

    Thanks for all the help

    Brett
  • Hi Brett,

    If you import the SSL cert and key, you can selectively enable decryption on the request if the pool is down:

    http://devcentral.f5.com/wiki/default.aspx/iRules/HTTPS_passthrough_fallback_URL.html

    I don't think you could do this outside of CLIENT_ACCEPTED, though.
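    The general shape of that technique (a hedged sketch only; the wiki article above has the full version, and this assumes a client SSL profile with the imported cert/key plus an HTTP profile on the VIP):

    ```tcl
    when CLIENT_ACCEPTED {
        # If POOL_A is up, leave the connection encrypted end-to-end
        # (passthrough); otherwise keep SSL termination enabled so an
        # HTTP_REQUEST event can serve the fallback page
        if { [active_members POOL_A] > 0 } {
            SSL::disable
        }
    }
    ```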

    Aaron