Forum Discussion

ADAJ_180030
Dec 13, 2014

Is it possible to pair TCP and UDP streams?

Hello, I have an application that utilizes a pair of TCP and UDP traffic streams per application session. Is it possible to configure BIG-IP load balancing so that the UDP traffic for a particular application session is directed to the same load-balanced server as the companion TCP traffic? Thanks.

 

8 Replies

    • ADAJ_180030
      I should have confessed my ignorance of load-balancing scenarios and techniques. Not only do I need session persistence on the TCP connection, I also need the packets of the companion UDP stream to be directed to the exact same server. What I am looking for is a way to instruct the load balancer to locate a key identifier in the UDP packet, use it to identify an existing TCP stream, and direct the UDP packet to the same server. If the "match across" option does that, then it is my answer, but I was not able to come to that conclusion from reading the description of match across.
  • What I am looking for is a way to instruct the load balancer to locate a key identifier in the UDP packet, use it to identify an existing TCP stream, and direct the UDP packet to the same server.

    Your understanding is absolutely correct. Locating the key identifier is mandatory for persistence; match across comes in after that. In the example below, an iRule extracts the key from the payload and adds a universal (UIE) persistence record for it, and because the myuie profile has match-across-virtuals enabled, the UDP virtual reuses the record created by the TCP virtual and sends the datagram to the same node.

    e.g.

     configuration
    
    root@(ve11a)(cfg-sync In Sync)(Active)(/Common)(tmos) list ltm virtual bar1
    ltm virtual bar1 {
        destination 172.28.24.10:8
        ip-protocol tcp
        mask 255.255.255.255
        persist {
            myuie {
                default yes
            }
        }
        pool foo1
        profiles {
            tcp { }
        }
        rules {
            qux1
        }
        source 0.0.0.0/0
        source-address-translation {
            type automap
        }
        vs-index 10
    }
    root@(ve11a)(cfg-sync In Sync)(Active)(/Common)(tmos) list ltm pool foo1
    ltm pool foo1 {
        members {
            200.200.200.101:7 {
                address 200.200.200.101
            }
            200.200.200.111:7 {
                address 200.200.200.111
            }
        }
    }
    root@(ve11a)(cfg-sync In Sync)(Active)(/Common)(tmos) list ltm persistence universal myuie
    ltm persistence universal myuie {
        app-service none
        defaults-from universal
        match-across-virtuals enabled
    }
    root@(ve11a)(cfg-sync In Sync)(Active)(/Common)(tmos) list ltm rule qux1
    ltm rule qux1 {
        when CLIENT_ACCEPTED {
      TCP::collect
    }
    when CLIENT_DATA {
      set key [TCP::payload]
      persist uie [TCP::payload]
      TCP::release
    }
    when SERVER_CONNECTED {
      log local0. "key=$key server=[IP::server_addr]:[TCP::server_port]"
    }
    }
    
    root@(ve11a)(cfg-sync In Sync)(Active)(/Common)(tmos) list ltm virtual bar2
    ltm virtual bar2 {
        destination 172.28.24.100:88
        ip-protocol udp
        mask 255.255.255.255
        persist {
            myuie {
                default yes
            }
        }
        pool foo2
        profiles {
            udp { }
        }
        rules {
            qux2
        }
        source 0.0.0.0/0
        source-address-translation {
            type automap
        }
        vs-index 11
    }
    root@(ve11a)(cfg-sync In Sync)(Active)(/Common)(tmos) list ltm pool foo2
    ltm pool foo2 {
        members {
            200.200.200.101:77 {
                address 200.200.200.101
            }
            200.200.200.111:77 {
                address 200.200.200.111
            }
        }
    }
    root@(ve11a)(cfg-sync In Sync)(Active)(/Common)(tmos) list ltm rule qux2
    ltm rule qux2 {
        when CLIENT_ACCEPTED {
      set key [UDP::payload]
      persist uie [UDP::payload]
    }
    when SERVER_CONNECTED {
      log local0. "key=$key server=[IP::server_addr]:[UDP::server_port]"
    }
    }
    
     test
    
    root@(ve11a)(cfg-sync In Sync)(Active)(/Common)(tmos) show ltm persistence persist-records all-properties
    Sys::Persistent Connections
    universal - 172.28.24.10:8 - 200.200.200.101:7
    ----------------------------------------------
      TMM           1
      Mode          universal
      Value         a
    
      Age (sec.)    8
      Virtual Name  /Common/bar1
      Virtual Addr  172.28.24.10:8
      Node Addr     200.200.200.101:7
      Pool Name     /Common/foo1
      Client Addr   172.28.24.1
      Owner entry
    
    universal - 172.28.24.10:8 - 200.200.200.111:7
    ----------------------------------------------
      TMM           1
      Mode          universal
      Value         c
    
      Age (sec.)    32
      Virtual Name  /Common/bar1
      Virtual Addr  172.28.24.10:8
      Node Addr     200.200.200.111:7
      Pool Name     /Common/foo1
      Client Addr   172.28.24.1
      Owner entry
    
    universal - 172.28.24.10:8 - 200.200.200.111:7
    ----------------------------------------------
      TMM           0
      Mode          universal
      Value         b
    
      Age (sec.)    35
      Virtual Name  /Common/bar1
      Virtual Addr  172.28.24.10:8
      Node Addr     200.200.200.111:7
      Pool Name     /Common/foo1
      Client Addr   172.28.24.1
      Owner entry
    
    Total records returned: 3
    
    [root@ve11a:Active:In Sync] config # cat /var/log/ltm
    Dec 14 13:05:34 ve11a info tmm[14890]: Rule /Common/qux1 : key=a  server=200.200.200.101:7
    Dec 14 13:05:37 ve11a info tmm1[14890]: Rule /Common/qux1 : key=b  server=200.200.200.111:7
    Dec 14 13:05:39 ve11a info tmm[14890]: Rule /Common/qux1 : key=c  server=200.200.200.111:7
    Dec 14 13:05:45 ve11a info tmm1[14890]: Rule /Common/qux2 : key=c  server=200.200.200.111:77
    Dec 14 13:05:50 ve11a info tmm[14890]: Rule /Common/qux2 : key=b  server=200.200.200.111:77
    Dec 14 13:05:53 ve11a info tmm1[14890]: Rule /Common/qux2 : key=c  server=200.200.200.111:77
    Dec 14 13:06:17 ve11a info tmm[14890]: Rule /Common/qux2 : key=a  server=200.200.200.101:77
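
    Note that the iRules above use the entire payload as the persistence key, which works for this quick test because the client sends nothing but the key itself. For a real protocol you would extract just the identifier field before calling persist uie. A minimal sketch, assuming (hypothetically) that the session ID sits in the first 8 bytes of the payload; adjust the offset/length, or parse with binary scan / findstr, to match your actual protocol:

    ltm rule qux1 {
        when CLIENT_ACCEPTED {
            TCP::collect
        }
        when CLIENT_DATA {
            # hypothetical: the session ID is the first 8 bytes of the TCP payload
            set key [string range [TCP::payload] 0 7]
            persist uie $key
            TCP::release
        }
    }
    ltm rule qux2 {
        when CLIENT_ACCEPTED {
            # hypothetical: the session ID is the first 8 bytes of the UDP datagram
            set key [string range [UDP::payload] 0 7]
            persist uie $key
        }
    }

    Everything else stays the same: the myuie profile with match-across-virtuals enabled is attached to both virtuals. Also keep an eye on the persistence timeout on myuie (it defaults from the universal parent profile) so the record outlives any gap between the companion TCP and UDP traffic.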
    
    • ADAJ_180030
      Thank you for the detailed example. Very much appreciated!