fqdn nodes in non-default partitions

I think I hit a bug in BIG-IP 11.6.0 HF4.

I'm trying to define FQDN nodes inside a custom partition which has a custom routing domain.

Here you can see the tmsh commands I used to set up the partition:

cd /Prod
create net vlan VLAN_PROD interfaces add { 1.1 } tag 4081
create net route-domain production description "Routing for production properties" id 1 vlans add {VLAN_PROD}
cd /
modify auth partition Prod default-route-domain 1
cd /Prod
create net self F501_PROD address 10.203.11.10/24 vlan VLAN_PROD
create net route prod_default gw 10.203.11.1 network default 
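
(In case it helps anyone reproducing this: the objects above can be sanity-checked with the standard tmsh list commands; the object names are just the ones from my setup, and the output is omitted here.)

cd /Prod
list net route-domain production
list net self F501_PROD
list net route prod_default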

I have a server on the same subnet, with two IP addresses: 10.203.11.241 10.203.11.242

I create a test pool, and two nodes, as follows:

cd /Prod
create ltm pool pool_prod members none monitor http
create ltm node test_ip_node address 10.203.11.242 description "static IP node" monitor /Common/icmp
create ltm node test_dns_node fqdn { autopopulate enabled name ip-10-203-11-241.eu-central-1.compute.internal down-interval 2 interval 15 } monitor /Common/icmp

A third (ephemeral) node is created by the FQDN one.

Now you can see the problem!

The FQDN node's addresses have a %0 suffix (as if the node had been created in the default route domain). It apparently can be reached by the monitor, but it definitely fails when used in a pool.
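
(For reference, the wrong suffix can be seen directly by listing the nodes in the partition; the ephemeral node shows up alongside the two I created, with its address carrying %0 instead of the partition's default route domain. Output omitted.)

cd /Prod
list ltm node
show ltm node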

cd /Prod
modify ltm pool pool_prod members replace-all-with { test_ip_node:80 }

My virtual server works.

cd /Prod
modify ltm pool pool_prod members replace-all-with { test_dns_node:80 }

Now accessing the virtual server fails with 'No data received', the same as when there is no active node in the pool.
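
(A quick way to confirm it is the FQDN-based member that is failing is to check the pool member status; output omitted.)

cd /Prod
show ltm pool pool_prod members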

My /Common (predefined) partition has no VLANs, self IPs, or routes defined. I don't really think this should be a problem in a truly 'segregated' configuration.

Any hints?

Angelo.

Comments on this Discussion
Comment made 31-Jul-2015 by Vishal_Bhugra 4
Hello Angelo, did you register a bug for this? I am seeing the same issue in 11.6 HF3 and HF4.
Comment made 03-Aug-2015 by RalphB
It looks like this has already been reported as Bug ID 522465.

Replies to this Discussion


Answer from F5:

ID 522465 is marked in our system to be fixed in the next major release (after version 12), which is predicted to be out before the end of 2016.

Comments on this Reply
Comment made 20-Nov-2015 by Angelo Turetta 158
Thank you. By next major, do you mean 13.x?
Comment made 14-Dec-2015 by Dmitry A. Sysoev 185
I think so.

Yes, it is a bug/lack of support for route domains. I would suggest opening a case to track it.


The problem is in HF6 also. Frustrating. It makes the whole feature almost useless.


The only way I succeeded in getting this working with 11.6.0 HF6 was to enable a parent-child relationship between the default and non-default route domains.

It works OK, but it is not a very flexible setup, because the nodes are generated in /Common and as a result there are limitations on how you can use different resources...
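
In tmsh that looks roughly like this, assuming the route domains from the original post (RD 0 in /Common as the parent of RD 1 in /Prod); a sketch only, not something I have re-tested on every version:

cd /Prod
modify net route-domain production parent /Common/0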

Comments on this Reply
Comment made 02-Jul-2016 by Con Spathas 1
We hit the same issue. I worked up an iRule to get us by in the meantime. You can take a look at it here: https://devcentral.f5.com/questions/fqdn-node-with-route-domains
Comment made 02-Jul-2016 by hpniemi 2
I'm now using FQDN nodes auto-populated by F5 successfully with this basic setup:
- parent-child relationship enabled between RD 0 (the nodes are generated here) and RD 3 (the virtual server is here)
- an iRule on the virtual server that uses the correct RD 0 SNAT pool for outbound connections to the FQDN nodes:

when LB_SELECTED {
    # if the selected node is in RD 0
    if { [LB::server route_domain] == "" } {
        snatpool /fi-connect-lb/SNAT_xxx.xxx.xxx.xxx-xxx
    }
}

This way it has worked.
Comment made 19-Jul-2016 by Manoj Gupta 0

@hpniemi,

I am new to F5 and am running into the same issue. Can you please share the iRule that you are using?

@Con Spathas, I have looked at your iRule; it resolves a single FQDN, but what if the pool has multiple servers all using FQDNs? Pardon me if my question is off base; as I mentioned, I am new to this.


Hi, here is a simple iRule that works around this:

when LB_FAILED {
    set debug 0
    # route domain suffix to append; change %10 to match your route domain ID
    set dest_rd "%10"
    set dest_addr [LB::server addr]
    if { $debug } then { log local0. "server address: $dest_addr" }
    # re-append the route domain suffix that the ephemeral FQDN node is missing
    set dest_addr_rd "$dest_addr$dest_rd"
    if { $debug } then { log local0. "complete server address: $dest_addr_rd" }
    # retry the load-balancing decision using the corrected address
    LB::reselect node $dest_addr_rd
}

Not tested at high request rates, but it could be a good workaround while waiting for the feature to be implemented. Just modify it to use your correct %ID.
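
If it helps, attaching it would look something like this once the iRule has been created on the box (the virtual server and iRule names below are placeholder examples, not from the original post):

cd /Prod
# vs_prod and fqdn_rd_workaround are example names only
modify ltm virtual vs_prod rules { fqdn_rd_workaround }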

Cheers

Comments on this Reply
Comment made 09-Aug-2017 by Stanislas Piron 10236

Hi,

You can retrieve dest_rd from [LB::server route_domain]


This feature was mentioned during a session at Agility in Chicago. They couldn't give versions, but it didn't feel like it would be 13.1; I would sooner expect 14.0 (or later).


Hello! Any update on this topic, or has this problem perhaps been solved?

Comments on this Reply
Comment made 3 months ago by Torti 805

The problem still exists in 13.1.1.2; that's a little disappointing.

Comment made 3 months ago by Lee Sutcliffe 2650

It may be a bit overkill for most use cases, but I posted a code share that can get around this issue. It can easily be modified to look up only one FQDN per VIP:

https://devcentral.f5.com/codeshare/dynamic-ephemeral-node-fqdn-resolution-with-route-domains-with-dns-caching-irule-1148
