Forum Discussion

greeblesnort (Nimbostratus)
Jan 09, 2014

Is there a difference in connection tracking if you specify a node via iRule?

We've run across a situation with a partner where we want to transparently proxy certain requests to their backend for processing, and then return the responses to the customer as if we had processed them directly. The additional requirement from our SEO folks is that we cannot use a subdomain for these requests.

customer <-> us <-> "normal" requests <-> local backend servers
                <-> "special" requests <-> remote backend servers

Normally, we would simply put the IPs in a pool, have the partner give us some sort of monitor-able status for availability, and leave it at that. However, in this case, the partner is pretty insistent that we use a DNS name to forward our requests to.

Doing some digging around DevCentral, I came up with this:

when RULE_INIT {
    # Virtual server used as the local DNS resolver for RESOLV::lookup
    set static::ldns /Common/dvlp_dns.app/dvlp_dns_udp_vs
}
when HTTP_REQUEST {
    # Testing condition only: matches requests that carry a "check" query parameter
    if { [URI::query [HTTP::uri] check] ne "" } {
        # Resolve the partner hostname via the local DNS virtual server
        set ips [RESOLV::lookup @$static::ldns -a remote.site.com]
        set firstip [lindex $ips 0]
        if { $firstip ne "" } {
            # Send this connection straight to the resolved address, bypassing pool member selection
            node $firstip
        } else {
            log local0. "no destination available for GLB node command"
        }
    }
}

The "if" statement is a testing statement just to make sure my logic works and will be replaced by whatever is decided to use as the match for separating the "normal" and "special" requests. Obviously, this will introduce a certain amount of latency into these requests but, given the specific requirements, I don't see any way to avoid that.

Two questions:

  1. One of the other engineers is concerned that specifying the destination in this fashion will put additional strain on the load balancer's resources, because it may be using something outside of the normal connection tracking to keep tabs on the in/out traffic for each client request. My assumption is that connection tracking happens on the virtual server "object", so other than bypassing the normal pool member selection mechanism, this is otherwise a normal internal process for the F5. Which of us is more correct, or are we both nuts?

  2. Has anyone else run into this with similar requirements and found a better way to do it?

1 Reply

  • I think the most significant performance impact will be in 1) the DNS request, and 2) the packet path to the remote site. DNS should normally be cached for some configurable amount of time, so that's maybe not a huge concern. As for connection tracking, the in/out metrics on the server side will absolutely depend on which direction the traffic is flowing, and the client side will only be affected insofar as the remote request/response slows down the application experience as a whole. Connection tracking is relative to your point of view, client side or server side. Otherwise a node command is no more or less efficient than a pool command.

     

    To your second question, this is actually not an uncommon requirement. It's not the most efficient thing in the world, and it's highly dependent on the latency and frequency of these remote requests, but it's sometimes unavoidable. Where you might run into trouble is the availability of the remote service. One would hope that the remote site owner is running LTM with robust health monitors and GTM to serve up good addresses, but that's usually not the case, so the burden is on you to handle availability. A few options:

    - Use a pool instead, periodically updated by a monitor script that does the DNS lookups and health checks.
    - Evaluate the response and resend to a different IP if the response was bad (or didn't come back within some amount of time).
    - Consume the DNS response into a table and round robin through the IPs (marking bad responses), maintaining persistence at the application level; there's a rough sketch of that approach below.
    - Employ basic caching/compression in LTM, or the more powerful features in WA/AAM, to avoid making the remote request in the first place when the content is cacheable.
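
    A very rough sketch of that table-based approach, as it might look in an iRule. The /partner/ path match, the 30-second reuse window, and the table key names are placeholders/assumptions; the resolver virtual and remote.site.com name are carried over from the rule in your question:

    when RULE_INIT {
        # DNS virtual server used by RESOLV::lookup, as in the original rule
        set static::ldns /Common/dvlp_dns.app/dvlp_dns_udp_vs
        # Seconds to reuse a resolved answer before asking DNS again (assumed value)
        set static::dns_ttl 30
    }
    when HTTP_REQUEST {
        # Placeholder match for the "special" requests
        if { [HTTP::path] starts_with "/partner/" } {
            # Reuse a recent answer from the session table instead of resolving on every request
            set ips [table lookup partner_ips]
            if { $ips eq "" } {
                set ips [RESOLV::lookup @$static::ldns -a remote.site.com]
                if { $ips ne "" } {
                    # Cache the whole address list; the entry expires after dns_ttl seconds
                    table set partner_ips $ips $static::dns_ttl
                }
            }
            set count [llength $ips]
            if { $count > 0 } {
                # Round robin across the cached addresses using a shared counter
                set idx [expr { [table incr partner_rr] % $count }]
                node [lindex $ips $idx]
            } else {
                log local0. "no destination available for GLB node command"
            }
        }
    }

    Marking bad addresses and retrying would still have to be layered on top of this (for example by pruning the cached list when a backend misbehaves), and as noted above, persistence would have to be handled at the application level.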