Forum Discussion

chris_15807
Nimbostratus
May 06, 2009

How to route packets from a route domain to the default route domain

My understanding of route domains is that a child route domain can be configured to look up routes in its parent, allowing packets to cross from one route domain to another.

However, I've set up my device as follows:

- VLAN 1 (external)

- VLAN 110 (internal)

- Route domain 0 (default) with VLAN external

- Route domain 110 (internal) with VLAN internal; parent is route domain 0

- Self IP 10.7.1.254 / 255.255.0.0 on VLAN external

- Self IP 192.168.0.254%110 on VLAN internal

- SNAT automap for all addresses on VLAN internal
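
For reference, a sketch of this layout in tmsh (illustrative only: the syntax below is from later TMOS releases, while this thread is from the v10/bigpipe era, and the internal netmask and the SNAT object name are my assumptions):

    # VLANs (interface assignments omitted)
    create net vlan external
    create net vlan internal
    # Child route domain 110 that falls back to route domain 0 for route lookups
    create net route-domain 110 parent 0 vlans add { internal }
    # Self IPs; the /24 on internal is assumed, the post doesn't give a mask
    create net self 10.7.1.254/16 vlan external
    create net self 192.168.0.254%110/24 vlan internal
    # SNAT automap for all origin addresses arriving on VLAN internal
    create ltm snat snat_internal automap origins add { 0.0.0.0%110/0 } vlans add { internal } vlans-enabled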

In this configuration, I can't route packets from a host on the internal VLAN out to hosts on the external VLAN.

If I remove the route domains, then it works fine. That's with the following:

- VLAN 1 (external)

- VLAN 110 (internal)

- Route domain 0 (default) with all VLANs

- Self IP 10.7.1.254 / 255.255.0.0 on VLAN external

- Self IP 192.168.0.254 on VLAN internal

- SNAT automap for all addresses on VLAN internal

Any thoughts?

6 Replies

  • Solution 9933 details a NAT issue between route domains, but I'm not sure whether that extends to SNATs. You might want to open a case with support to verify.
  • I can't imagine SNATs wouldn't work across route domains and the feature would still have passed testing. They're pretty much essential if you're going to have multiple VLANs with the same IP subnets, which is one of the advertised advantages of using route domains.

    I guess I'll raise a support request.

    cheers,
    chris
  • OK, so after some bang-head-against-wall moments this morning, I was able to get this working with some help from Mr. L4L7 himself. On my v10 HF2 system, I can get RD1 to forward traffic to RD0 with SNAT automap enabled, but not with a default SNAT. I had to create a forwarding virtual server (0.0.0.0%1) with SNAT automap enabled, then restart tmm (a tmsh sketch follows).
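
    Roughly, in later-release tmsh syntax (the original fix was done on v10, and the virtual server name here is made up, so treat this as a sketch and verify against your own version):

        # Wildcard IP-forwarding virtual in route domain 1 with SNAT automap,
        # per the workaround described above (illustrative only)
        create ltm virtual forward_rd1 destination 0.0.0.0%1:any mask any ip-forward source-address-translation { type automap }
        # Followed by a tmm restart, which is disruptive to traffic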
  • I can't reproduce this workaround.

    Do you mean a virtual server configured as:

    - destination type: host
    - destination address: "0.0.0.0%110"
    - service port: "* All Ports"
    - type: "Forwarding (Layer 2)"
    - protocol: "* All Protocols"
    - enabled on VLAN "internal"
    - SNAT pool: "Automap"

    ?

    I restarted tmm with "bigpipe restart tmm" (and also rebooted the box). No joy.
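
    One detail worth checking against the workaround above: in tmsh, the GUI's "Forwarding (Layer 2)" type is l2-forward, while "Forwarding (IP)" is ip-forward. If the earlier fix used an IP forwarder rather than a Layer 2 one (the post doesn't say), a repro of the settings listed above would look roughly like this (later-release syntax, sketch only):

        # IP-forwarding wildcard virtual in route domain 110, enabled only on
        # VLAN internal, all ports and protocols, with SNAT automap
        create ltm virtual forward_rd110 destination 0.0.0.0%110:any mask any ip-forward ip-protocol any vlans add { internal } vlans-enabled source-address-translation { type automap }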

  • Thanks! Initial observations are that this appears to work (DNS requests certainly made it through). Will test some more.