Forum Discussion

nejasmicz_37699
Nov 15, 2018
Solved

F5 LTM SNAT: only 1 outgoing connection, multiple internal clients

I have an F5 LTM SNAT configured:

ltm snat /Common/outgoing_snat_v6 {
    description "IPv6 SNAT translation"
    mirror enabled
    origins {
        ::/0 { }
    }
    snatpool /Common/outgoing_snatpool_v6
    vlans {
        /Common/internal
    }
    vlans-enabled
}

... with a translation configured as:

ltm snat-translation /Common/ext_SNAT_v6 {
    address 2607:f160:c:301d::63
    inherited-traffic-group true
    traffic-group /Common/traffic-group-1
}

... with snatpool configured as:

ltm snatpool /Common/outgoing_snatpool_v6 {
    members {
        /Common/ext_SNAT_v6
    }
}

... and finally, with the SNAT type set to automap:

    vs_pool__snat_type {
        value automap
    } 

The goal is to achieve a single Diameter connection (a single source IP and port) between the F5 and the external element, while internally multiple Diameter clients connect via the F5 to the external element.

However, what ends up happening with this SNAT configuration is that multiple outgoing Diameter connections to the external Diameter element are opened, the only difference between them being the source port (source IP, destination IP, and destination port remain the same).

The external element cannot handle multiple connections from the same origin IP for the same Diameter entity (the internal clients are all configured to use the same Origin-Host during the Capabilities Exchange phase).
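To illustrate why this fails: a Diameter peer keys its peer table on the Origin-Host identity (RFC 6733), so a second transport connection presenting the same identity is typically refused. A minimal Python sketch of that behavior (hypothetical names, not F5 or server code):

```python
# Sketch of a Diameter server's peer table keyed on Origin-Host.
# Hypothetical model; real servers follow RFC 6733 election rules.
class PeerTable:
    def __init__(self):
        self.connections = {}  # Origin-Host -> transport connection id

    def handle_cer(self, origin_host, conn_id):
        if origin_host in self.connections:
            # A connection for this identity already exists: refuse it.
            return "DIAMETER_UNABLE_TO_COMPLY"
        self.connections[origin_host] = conn_id
        return "DIAMETER_SUCCESS"

table = PeerTable()
table.handle_cer("client.internal.example", conn_id=1)  # first connection accepted
table.handle_cer("client.internal.example", conn_id=2)  # same Origin-Host: refused
```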

Is there a way to configure F5 to funnel all the internal connections into a single outgoing one?

  • After a lot of back and forth, this is the configuration we ended up implementing on F5 LTM v12.1.3.6. It allowed us to use MRF (Message Routing Framework) to combine multiple internal connections into a single outgoing connection, which exits via the SNAT IP. Hope this helps someone.

    First, we defined a Virtual Server to which the clients send the Diameter requests:

    ltm virtual /Common/virtual_Diameter_Message_Routing {
        destination /Common/HSS_v_Diameter_v6:3868
        ip-protocol tcp
        profiles {
            /Common/profile_diam_message_routing { }
            /Common/profile_diam_message_routing_router_profile { }
            /Common/tcp { }
        }
        rules {
            /Common/qux
        }
        source-address-translation {
            pool /Common/diameter_snatpool
            type snat
        }
        translate-address enabled
        translate-port enabled
    }
    

    ... while the destination is defined as:

    ltm virtual-address /Common/HSS_v_Diameter_v6 {
        address fd41:2:2:1::111
        arp enabled
        icmp-echo enabled
        traffic-group /Common/traffic-group-1
    }
    

    The profiles are defined as:

    ltm message-routing diameter profile session /Common/profile_diam_message_routing {
        acct-application-id 4294967295
        app-service none
        auth-application-id 16777217
        defaults-from /Common/diametersession
        origin-host myoriginhost.test.com
        origin-host-rewrite myoriginhost2.test.com
        origin-realm test.com
        product-name product
        vendor-id 10415
    }
    
    ltm message-routing diameter profile router /Common/profile_diam_message_routing_router_profile {
        app-service none
        defaults-from /Common/diameterrouter
        routes {
            /Common/profile_diam_message_routing_static_route_to_peer
        }
    }
    
    ltm message-routing diameter route /Common/profile_diam_message_routing_static_route_to_peer {
        peers {
            /Common/profile_diam_message_routing_peer
        }
        virtual-server /Common/virtual_Diameter_Message_Routing
    }
    
    ltm message-routing diameter peer /Common/profile_diam_message_routing_peer {
        pool /Common/pool_diameter_server
        transport-config /Common/profile_diam_message_routing_transport
    }    
    
    ltm message-routing diameter transport-config /Common/profile_diam_message_routing_transport {
        ip-protocol tcp
        profiles {
            /Common/profile_diam_message_routing { }
            /Common/tcp { }
        }
        rules {
            /Common/qux
        }
        source-address-translation {
            pool /Common/diameter_snatpool
            type snat
        }
    } 
    

    The SNAT is defined as:

    ltm snatpool /Common/diameter_snatpool {
        members {
            /Common/ext_SNAT_v6
        }
    }
    
    ltm snat-translation /Common/ext_SNAT_v6 {
        address 2607:f160:11:1101::63
        inherited-traffic-group true
        traffic-group /Common/traffic-group-1
    }
    
    ltm snat /Common/outgoing_snat_v6 {
        description "IPv6 SNAT translation"
        mirror enabled
        origins {
            ::/0 { }
        }
        snatpool /Common/outgoing_snatpool_v6
        vlans {
            /Common/internal
        }
        vlans-enabled
    }
    

    ... and finally, the iRule had to be set up to remove the Mandatory flag from some AVPs that should not have it set (a bug?) and to send additional Diameter AVPs:

    ltm rule /Common/qux {
        when DIAMETER_EGRESS {
            switch [DIAMETER::command] {
                "257" {
                    # AVP codes involved:
                    #   260 Vendor-Specific-Application-Id
                    #   258 Auth-Application-Id
                    #   266 Vendor-Id

                    set aaid_avp [DIAMETER::avp create Auth-Application-Id 0 1 0 0 16777264 unsigned32]
                    set vid_avp [DIAMETER::avp create Vendor-Id 0 1 0 0 10415 unsigned32]

                    # DIAMETER::avp append is not designed to create nested AVPs (ID371630):
                    #   set grouped_avp [DIAMETER::avp append Auth-Application-Id $aaid_avp source $vid_avp]
                    # ... so build the grouped payload by concatenating the raw AVPs instead:
                    set grouped_avp ${vid_avp}${aaid_avp}
                    set vsa_avp [DIAMETER::avp create Vendor-Specific-Application-Id 0 1 0 0 $grouped_avp grouped]
                    DIAMETER::avp delete Vendor-Specific-Application-Id
                    DIAMETER::avp insert Vendor-Specific-Application-Id $vsa_avp

                    if { [DIAMETER::is_request] } {
                        DIAMETER::avp mflag set Product-Name 0
                        DIAMETER::avp mflag set Firmware-Revision 0
                    }
                }
                default {
                    # other commands pass through unmodified
                }
            }
        }
    }
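    As a sanity check on what the iRule builds: per RFC 6733 framing, the grouped Vendor-Specific-Application-Id payload is simply the two inner AVPs concatenated back to back. A standalone Python sketch (illustrative only, not iRule code):

```python
import struct

def encode_avp(code, data, mandatory=True):
    # RFC 6733 AVP header: code (4 bytes), flags (1 byte), length (3 bytes), then data
    flags = 0x40 if mandatory else 0x00
    length = 8 + len(data)
    header = struct.pack("!IB", code, flags) + length.to_bytes(3, "big")
    pad = (-len(data)) % 4  # data is padded to a 4-byte boundary
    return header + data + b"\x00" * pad

def u32(value):
    return struct.pack("!I", value)

# Inner AVPs: Vendor-Id (266) = 10415, Auth-Application-Id (258) = 16777264
vid = encode_avp(266, u32(10415))
aaid = encode_avp(258, u32(16777264))

# Grouped Vendor-Specific-Application-Id (260): data is the concatenated inner AVPs
vsa = encode_avp(260, vid + aaid)
```

    The 24-byte payload produced here is what `set grouped_avp ${vid_avp}${aaid_avp}` concatenates before wrapping it in the grouped AVP.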
    

10 Replies

  • GRamanan_294373 (Historic F5 Account)

    Not sure I fully understand the scenario you described; however, if you use a standard virtual server, it's a full-proxy architecture, so it will maintain two separate TCP connections between the client side and the server side. In your case, if you have multiple clients, they each individually initiate a capability exchange with the BIG-IP, and the BIG-IP initiates a separate capability exchange with the server (in this process the BIG-IP sends its own Origin-Host in the CER) when the first message is received from a client.

    • nejasmicz_37699

      GRamanan,

      The idea would be to take all these internal Diameter clients making outgoing Diameter connections and funnel them through a single outgoing TCP connection.

      The internal Diameter clients currently use SNAT when establishing an outgoing connection, which results in the same originating IP but different ports, and therefore multiple connections.

      So, the solution we need would combine all these different outgoing connections into a single TCP connection, through which multiple Diameter CERs would flow.

  • GRamanan_294373 (Historic F5 Account)

    Proxying CER/CEA is against the RFC (https://tools.ietf.org/html/rfc6733#section-5.3). Let me rephrase the full-proxy architecture from my previous comment: the proxy maintains two separate connections, one for the client side and one for the server side (let's assume your internal Diameter element is the client side and the external Diameter element is the server side). So all your internal clients establish individual connections towards the BIG-IP, the BIG-IP establishes a separate connection towards the external server, and these connections remain up indefinitely (unless something else terminates them). Diameter is not like other protocols (for example HTTP: request, response, close the connection); it has a mechanism to send watchdogs when the connection is idle, and that maintains the connections between Diameter elements.
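    The watchdog behavior mentioned above can be sketched as follows (illustrative Python, assuming the RFC 3539 defaults of a 30-second base timer with +/- 2 seconds of jitter; not BIG-IP code):

```python
import random

TW_INIT = 30.0  # RFC 3539 default watchdog base interval, in seconds

def next_watchdog_interval(base=TW_INIT):
    # RFC 3539 jitters the timer by +/- 2 seconds so peers don't synchronize;
    # a Device-Watchdog-Request is sent when this interval elapses with no traffic.
    return base + random.uniform(-2.0, 2.0)

interval = next_watchdog_interval()  # somewhere between 28 and 32 seconds
```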


    Hope the above clarifies. You don't need to send all the CERs from your internal elements through the proxy to your external element, and again, that would violate the RFC: the proxy (BIG-IP) sits in between, maintains the connections separately, and routes the messages based on its routes.

    • nejasmicz_37699

      GRamanan,

      With our current SNAT configuration:

      vs_pool__snat_type {
          value automap
      } 
      

      ... we are still seeing multiple outgoing TCP connections being brought up. In essence, there is no pooling of internal connections into a single outgoing one. Each new outgoing TCP connection uses the SNAT IP but a different port.
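      This is the expected behavior for plain SNAT: only the source address is rewritten, and each internal connection still needs a unique ephemeral source port so the server-side 4-tuples stay distinct. A minimal sketch of that model (hypothetical code, not BIG-IP internals):

```python
# Sketch: plain SNAT keeps per-client connections distinct by source port.
# Hypothetical model, not BIG-IP internals.
def snat_rewrite(connections, snat_ip):
    translated = []
    next_port = 40000  # arbitrary ephemeral range start for this sketch
    for src_ip, src_port, dst_ip, dst_port in connections:
        # Source IP collapses to the SNAT IP, but each connection
        # gets its own source port to keep the 4-tuples unique.
        translated.append((snat_ip, next_port, dst_ip, dst_port))
        next_port += 1
    return translated

clients = [("10.0.0.1", 5555, "2.2.2.2", 3868),
           ("10.0.0.2", 5555, "2.2.2.2", 3868)]
out = snat_rewrite(clients, "2607:f160:c:301d::63")
# Still two distinct server-side connections; only the source ports differ.
```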

  • Hi,

    I've configured a simple setup based on Diameter message routing on a 13.1 LTM. It should be as generic as possible to achieve the setup nejasmicz requested.

    What it does:

    • CER/CEA went well between the client and the F5
    • F5 selects the server (10.10.10.1) from the correct pool
    • F5 does the correct SNAT: source is 10.10.10.10 towards the server

    What's wrong:

    • the CER from the F5 to the server seems to ignore the parameters I've defined in "profile_diam_message_routing" to be used for the CER.

    The config I used:

    ltm pool /Common/pool_diameter_server {
        members {
            /Common/10.10.10.1:3868 {
                address 10.10.10.1
            }
        }
        monitor /Common/monitor_GatewayFast
        service-down-action reselect
    }
    
    ltm snatpool /Common/diameter_snatpool {
        members {
            /Common/10.10.10.10
        }
    }
    ltm virtual /Common/virtual_Diameter_Message_Routing {
        destination /Common/1.1.1.1:3868
        ip-protocol tcp
        mask 255.255.255.255
        profiles {
            /Common/profile_diam_message_routing { }
            /Common/profile_diam_message_routing_router_profile { }
            /Common/tcp { }
        }
        source 0.0.0.0/0
        translate-address enabled
        translate-port enabled
    }
    ltm message-routing diameter peer /Common/profile_diam_message_routing_peer {
        pool /Common/pool_diameter_server
        transport-config /Common/profile_diam_message_routing_transport
    }
    ltm message-routing diameter route /Common/profile_diam_message_routing_static_route_to_peer {
        peers {
            /Common/profile_diam_message_routing_peer
        }
        virtual-server /Common/virtual_Diameter_Message_Routing
    }
    ltm message-routing diameter transport-config /Common/profile_diam_message_routing_transport {
        ip-protocol tcp
        profiles {
            /Common/Diameter_server_tcp { }
            /Common/diametersession { }
        }
        source-address-translation {
            pool /Common/diameter_snatpool
            type snat
        }
    }
    ltm message-routing diameter profile router /Common/profile_diam_message_routing_router_profile {
        app-service none
        defaults-from /Common/diameterrouter
        max-pending-bytes 0
        max-pending-messages 0
        mirror disabled
        mirrored-message-sweeper-interval 1000
        routes {
            /Common/profile_diam_message_routing_static_route_to_peer
        }
        traffic-group /Common/traffic-group-1
        transaction-timeout 10
        use-local-connection enabled
    }
    ltm message-routing diameter profile session /Common/profile_diam_message_routing {
        acct-application-id 0
        app-service none
        array-acct-application-id { 0 }
        array-auth-application-id { 0 }
        auth-application-id 0
        defaults-from /Common/diametersession
        dest-host-rewrite none
        dest-realm-rewrite none
        handshake-timeout 10
        host-ip-address 10.10.10.10
        max-message-size 0
        max-watchdog-failures 1
        origin-host siteserver.customf5.com
        origin-host-rewrite none
        origin-realm customf5.com
        origin-realm-rewrite none
        persist-avp SESSION-ID[0]
        persist-timeout 180
        persist-type none
        product-name none
        reset-on-timeout enabled
        vendor-id 10415
        vendor-specific-acct-application-id 0
        vendor-specific-auth-application-id 16777264
        vendor-specific-vendor-id 10415
        watchdog-timeout 30
    }
    

    These are the two types of CER:

    1.) CER from client to F5:

     Command Code: 257 Capabilities-Exchange
     ApplicationId: Diameter Common Messages (0)
     Hop-by-Hop Identifier: 0x80000000
     End-to-End Identifier: 0x8dc56807
     AVP: Origin-Host(264) l=20 f=-M- val=hss.loadgen.com
     AVP: Origin-Realm(296) l=16 f=-M- val=loadgen.com
     AVP: Host-IP-Address(257) l=14 f=-M- val=2.2.2.2
     AVP: Vendor-Id(266) l=12 f=-M- val=0
     AVP: Product-Name(269) l=12 f=--- val=LOADGEN
     AVP: Inband-Security-Id(299) l=12 f=-M- val=TLS (1)
     AVP: Vendor-Specific-Application-Id(260) l=32 f=-M-
          AVP Code: 260 Vendor-Specific-Application-Id
          AVP Flags: 0x40, Mandatory: Set
          AVP Length: 32
          Vendor-Specific-Application-Id: 0000010a4000000c000028af000001024000000c01000030
               AVP: Vendor-Id(266) l=12 f=-M- val=10415
                    AVP Code: 266 Vendor-Id
                    AVP Flags: 0x40, Mandatory: Set
                    AVP Length: 12
                    Vendor-Id: 10415
                    VendorId: 3GPP (10415)
               AVP: Auth-Application-Id(258) l=12 f=-M- val=3GPP SWm (16777264)
                    AVP Code: 258 Auth-Application-Id
                    AVP Flags: 0x40, Mandatory: Set
                    AVP Length: 12
                    Auth-Application-Id: 3GPP SWm (16777264)
    


    2.) CER from F5 to the server node (10.10.10.1):

     Command Code: 257 Capabilities-Exchange
     AVP: Origin-Host(264) l=19 f=-M- val=host.f5.com
     AVP: Origin-Realm(296) l=14 f=-M- val=f5.com
     AVP: Host-IP-Address(257) l=14 f=-M- val=10.10.10.10
     AVP: Vendor-Id(266) l=12 f=-M- val=3375
     AVP: Product-Name(269) l=16 f=-M- val=F5 Bigip
     AVP: Origin-State-Id(278) l=12 f=-M- val=0
     AVP: Auth-Application-Id(258) l=12 f=-M- val=Diameter Common Messages (0)
     AVP: Acct-Application-Id(259) l=12 f=-M- val=Diameter Common Messages (0)
     AVP: Firmware-Revision(267) l=12 f=-M- val=1
    


    In v13 these Vendor-Specific-Application-Ids are supported, which is mandatory for our use case.

    The static route intentionally matches all Application IDs, Origin Realms, and Destination Realms, so that all incoming Diameter traffic (to 1.1.1.1:3868) is forwarded to the peer node (10.10.10.2) regardless of its AVPs.

    Why are these parameters not evaluated in the CER towards my server node? Instead, Origin-Host is set to "host.f5.com" and Product-Name to "F5 Bigip". Is this hard-coded somewhere? Furthermore, my Vendor-Specific-Application-Ids are ignored completely.

    Has anyone experience in configuring message routing for Diameter who could point out what went wrong?

    • GRamanan_294373 (Historic F5 Account)

      I suggest you open a support case with a packet capture and a qkview so that support can assist further on this case.

    • nejasmicz_37699

      Do note that there's a bug in F5 LTM v12.1.3.6 that prevents the Virtual Server's IP address from floating properly between the active and the standby unit.

      This is the bug: https://cdn.f5.com/product/bugtracker/ID608511.html, and the solution is to explicitly define a Traffic Group in 'ltm message-routing diameter profile router':

      ltm message-routing diameter profile router profile_diam_message_routing_router_profile {
          app-service none
          defaults-from diameterrouter
          routes {
              profile_diam_message_routing_static_route_to_peer
          }
          traffic-group traffic-group-1              <--------------------- Attach traffic group here
      }