v.10 - A Look at Route Domains

Introduction

New to v.10 is a feature F5 calls route domains.  A route domain is an isolated routing environment in which addresses and routes are tagged internally with a domain ID, allowing IP space to be reused within the BIG-IP system.  Each route domain has its own routing table, and route domains can be nested so that a lookup that comes up empty in route domain 1 can peek into a parent route domain for an answer.  A route in one route domain can even point to a gateway in another route domain.  Note that the presence of a route still doesn't mean a flow will occur; the BIG-IP is still a default-deny device and will not pass traffic without being configured to do so in a virtual server.

The route domain ID is a two-octet field, and thus can range from 0 to 65534.  However, each route domain needs its own unique VLAN, so the number of route domains you can effectively deploy depends on the platform and the configuration objects in use per route domain.
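To make the nesting and cross-domain routing concrete: internally, addresses are qualified with a %&lt;ID&gt; suffix.  Here is a minimal tmsh sketch; the names, tag, and addresses are hypothetical, and the exact route syntax may vary by version:

create net route-domain 2 description RD2 parent 1 vlans add { vlan42_rd2 }
create net route 10.20.0.0%2/16 gw 10.10.41.1%1

The first command creates route domain 2 as a child of route domain 1, so lookups that miss in RD2 fall through to RD1.  The second adds a route in RD2 whose gateway lives in RD1.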

 

Configuration

In order to implement a route domain, you need to create the following objects in order:

  • VLAN
  • Route Domain
  • Self IP Address
  • Routes*
  • Pool w/ members*
  • Virtual Server

(* For a local implementation, no routes would be necessary.  For an advanced implementation with iRules, you may not even reference a pool; see the sketch just below.)
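For that pool-less case, a minimal iRule sketch (the address is hypothetical) could direct traffic straight to a server with the node command, using the %ID notation to land the connection in route domain 1:

when CLIENT_ACCEPTED {
    # no pool: pick the destination server (and its route domain) directly
    node 10.10.40.51%1 80
}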

Since it's new and shiny and I'm itching to play, we'll configure the route domain example in the tmsh shell.

1) Create VLANs, one for RD0 (the default route domain) and one for RD1

create net vlan vlan40 tag 40 interfaces add { 1.2 }
create net vlan vlan41_rd1 tag 41 interfaces add { 1.1 }

2) Create Route Domain 1

create net route-domain 1 description RD1 vlans add { vlan41_rd1 }

3) Create the self IP addresses, one for RD0 and one for RD1

create net self 10.10.40.5/24 vlan vlan40
create net self 10.10.40.5%1/24 vlan vlan41_rd1

4) Create the Pools*

create ltm pool pool1 members add { 10.10.40.51:80 10.10.40.52:80 10.10.40.53:80 }
create ltm pool pool1_rd1 members add { 10.10.40.51%1:80 10.10.40.52%1:80 10.10.40.53%1:80 }
modify ltm pool pool1 monitor tcp
modify ltm pool pool1_rd1 monitor tcp

(* Note that segregating pool members by route domain into separate pools is not required, just a preference for this demonstration)
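Since the split is only a preference, a single pool can mix members from different route domains.  A quick sketch with a hypothetical pool name:

create ltm pool pool_mixed members add { 10.10.40.51:80 10.10.40.52%1:80 }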

5) Create the Virtual

create ltm virtual vip1 destination 10.10.20.50:80 pool pool1_rd1 ip-protocol tcp profiles { http tcp-lan-optimized }
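You could also have the virtual itself listen inside the route domain; the destination takes the same %ID notation.  A hedged sketch with a hypothetical name:

create ltm virtual vip1_rd1 destination 10.10.20.50%1:80 pool pool1_rd1 ip-protocol tcp profiles { http tcp-lan-optimized }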

I'm creating just the one virtual for this example, but as the sketch above shows, you could easily carry the route domain concept out to the virtual as well.  OK, so how does it all look now that it's configured?

vlan vlan40 {
    interfaces {
        1.2 { }
    }
    tag 40
}
vlan vlan41_rd1 {
    interfaces {
        1.1 { }
    }
    tag 41
}
route-domain 1 {
    description RD1
    parent 0
    vlans {
        vlan41_rd1
    }
}
self 10.10.40.5%1/24 {
    vlan vlan41_rd1
}
self 10.10.40.5/24 {
    vlan vlan40
}
pool pool1 {
    members {
        10.10.40.51:http {
            state up
        }
        10.10.40.52:http {
            state up
        }
        10.10.40.53:http {
            state up
        }
    }
    monitor tcp
}
pool pool1_rd1 {
    members {
        10.10.40.51%1:http {
            state up
        }
        10.10.40.52%1:http {
            state up
        }
        10.10.40.53%1:http {
            state up
        }
    }
    monitor tcp
}
virtual vip1 {
    destination 10.10.20.50:http
    ip-protocol tcp
    mask 255.255.255.255
    pool pool1_rd1
    profiles {
        http { }
        tcp-lan-optimized { }
    }
    snat automap
}

You'll notice that, besides the route domain object itself, the only real indication is the IP addresses on the selfs and pool members.  Now that we have it configured, what can we do with it?  The obvious use case is multitenancy: a hosting organization can cookie-cutter the backend servers without ever needing to manage IP space, since each customer can be identical up through layer three.  Another use could be application versioning.  I've done this with different virtuals serving alternate versions of the application, which required unique IPs and ports on the backend and therefore additional work from the developers and network admins.  With route domains, the new application can be deployed identically to the existing version, requiring only a simple iRule to switch between them (sketched below) and no additional work from the developers.
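A minimal sketch of that version-switching iRule, reusing the pools from this example and assuming a hypothetical "beta" cookie marks users of the new version:

when HTTP_REQUEST {
    # new version lives in route domain 1, old version in the default domain
    if { [HTTP::cookie exists "beta"] } {
        pool pool1_rd1
    } else {
        pool pool1
    }
}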


iRules and Route Domains

ROUTE::domain returns the route domain of the current connection.  This is contextual, as the route domain on the client side may differ from the server side, as in our example.  This is illustrated in the log output from this iRule:

when CLIENT_ACCEPTED {
    log local0. "Route domain is [ROUTE::domain]"
}
when SERVER_CONNECTED {
    log local0. "Route domain is [ROUTE::domain]"
}

Apr 16 11:11:24 local/tmm info tmm[2498]: Rule rdtest <CLIENT_ACCEPTED>: Route domain is 0
Apr 16 11:11:24 local/tmm info tmm[2498]: Rule rdtest <SERVER_CONNECTED>: Route domain is 1

The LB::server command is not new, but in v.10 the route_domain keyword replaces vlan.  All the other commands that deal with IP addresses should work as expected, but they will return the route domain information if the address is not in route domain zero.

when LB_SELECTED {
    log local0. "Route domain is [ROUTE::domain], Pool member is [LB::server addr]"
}

Apr 16 11:20:01 local/tmm1 info tmm1[2499]: Rule rdtest <LB_SELECTED>: Route domain is 0, Pool member is 10.10.40.53%1
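Assuming the route_domain keyword behaves as described above, you could also read the selected member's route domain directly; a sketch:

when LB_SELECTED {
    log local0. "Pool member is [LB::server addr], route domain is [LB::server route_domain]"
}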


Known Issues

Route domains are cool, easy, and incredibly useful.  However, at this time the GTM and ZebOS modules only support the default route domain, so plan accordingly.  There is also a problem with NAT between route domains, documented in Solution 9933 (requires a login at https://support.f5.com).

  1. UPDATE (09/02/2009) - user earnhart has updated the wiki with an inventive workaround to the NAT/SNAT problem.
  2. UPDATE (06/29/2011) - The caveat above has been fixed in versions 10.2.1-HF2 and the 10.2.2 general release.  Please reference solution 10510 for more details on the issue.

 

Published Apr 16, 2009
Version 1.0


1 Comment

  • One significant issue I ran into while trying to troubleshoot a node issue with route domains is described in SOL10467: Userland applications on a BIG-IP system cannot connect to hosts in non-default route domains.  https://support.f5.com/kb/en-us/solutions/public/10000/400/sol10467.html