iRule and Cisco finesse

I am working on putting a configuration together for Cisco Finesse. I see two other related posts on here without a resolution. Let me lay it out.

A request is made to a VIP that clients hit. There are 2 back-end servers. The idea is that monitors will be in place to check that these servers are in service. If both are in service, a new request to the VIP will be HTTP-redirected to either one of the back-end servers. If one of the back-end servers becomes unavailable (failed monitor), the LTM will discontinue HTTP redirects to it until it returns to service.

Is there a way to accomplish this with a single iRule? If so, can you assist with the code?


Comments on this Discussion
Comment made 08-Mar-2016 by Faruk AYDIN 907
This sounds like standard LTM functionality: one VS and one pool with two members and one monitor. No need to use an iRule.

Replies to this Discussion


Hi Chris,

a simple redirect that distributes requests across the active members of a given pool could be implemented like this:

when RULE_INIT {
    set static::finesse_pool "YOUR_FINESSE_POOL_NAME"
    unset -nocomplain static::finesse_members
    # Map each pool member (formatted "IP port" as returned by
    # [active_members -list], including any route-domain suffix such
    # as %1) to its redirect target. The addresses below are sample
    # placeholders -- replace them with your actual member IPs.
    array set static::finesse_members {
        "10.10.10.11%1 80"  "https://server1/folder"
        "10.10.10.12%1 80"  "https://server2/folder"
    }
}
when HTTP_REQUEST {
    set active_members [active_members -list $static::finesse_pool]
    # log local0.debug "Active Members = $active_members"
    set selected_member [lindex $active_members [expr {int(rand()*[llength $active_members])}]]
    # log local0.debug "Selected Member = $selected_member"
    set redirect_location $static::finesse_members($selected_member)
    # log local0.debug "Redirect = $redirect_location"
    HTTP::redirect $redirect_location
}
Note: You have to create a pool for your Finesse application servers with some health monitors attached. Then update the RULE_INIT event to reflect the pool name and adjust the IP/port-to-redirect-location mapping in the array.
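As a sketch, that pool could be created via tmsh along these lines (the pool name, member addresses, and route-domain suffix %1 are placeholders; substitute your own values):

```
# Hypothetical tmsh command -- replace pool name and member addresses
create ltm pool YOUR_FINESSE_POOL_NAME \
    members add { 10.10.10.11%1:80 10.10.10.12%1:80 } \
    monitor http
```

The member IP:port pairs here must match the array keys in the RULE_INIT event exactly, since the lookup uses the string returned by [active_members -list].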

Cheers, Kai

Comments on this Reply
Comment made 09-Mar-2016 by Chris Ortiz 4
Thanks so much. I'll move ahead with something like this to test and see what I get.

This works beautifully. I'm now in a different dilemma with the monitor assigned to the pool. There is no HTML file (according to the app owner) that I can execute a GET against. When I curl the back-end server, I get the 302 as expected. I'm having an issue coming up with a successful monitor for this. Below is the curl output; I have tried different iterations of a 302 receive string:

[admin@xxxx:Active:In Sync] ~ # curl -v http://abc.xyz.com/desktop/container/?locale=en_US
* About to connect() to abc.xyz.com port 80 (#0)
*   Trying connected
* Connected to abc.xyz.com ( port 80 (#0)
> GET /desktop/container/?locale=en_US HTTP/1.1
> User-Agent: curl/7.19.7 (x86_64-redhat-linux-gnu) libcurl/7.19.7 OpenSSL/1.0.1j zlib/1.2.3 libidn/0.6.5
> Host: abc.xyz.com
> Accept: */*
< HTTP/1.1 302 Moved Temporarily
< Pragma: No-cache
< Cache-Control: no-cache
< Expires: Wed, 31 Dec 1969 19:00:00 EST
< Location: https://abc.xyz.com:8443/desktop/container/?locale=en_US
< Content-Length: 0
< Date: Thu, 10 Mar 2016 15:02:04 GMT
< Server:
* Connection #0 to host abc.xyz.com left intact
* Closing connection #0
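Since the server answers the GET with a 302 redirect, one option is an HTTP monitor that treats the 302 status line itself as the healthy response. A sketch in tmsh (the monitor name is made up; adjust the path and Host header to your environment):

```
# Hypothetical monitor definition -- matches the 302 status line
create ltm monitor http finesse_302_monitor \
    defaults-from http \
    send "GET /desktop/container/?locale=en_US HTTP/1.1\r\nHost: abc.xyz.com\r\nConnection: Close\r\n\r\n" \
    recv "HTTP/1.1 302"
```

Matching on "HTTP/1.1 302" anchors the receive string to the status line, so header ordering after it doesn't matter.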

Thanks Kai, solved my problem as well.

I wanted to add something I found. If you're making several of these rules, customize the variable names in each one. Change "finesse_pool" and "finesse_members" to something unique per rule: in iRule1 use finesse_pool_1 and finesse_members_1, in iRule2 use finesse_pool_2 and finesse_members_2, etc.

When I used this template I made 3 copies with different pools/members. When I tested the first one, no problem. When I tested the second one, it routed me to pool members from the first one, as if it wasn't clearing the variable and was re-using it. This is because static:: variables are global across all iRules, so identically named variables in different rules overwrite each other; customizing the variable names fixed it.

Can anyone explain to me the use of %1 in the IP addresses of the pool members?