Forum Discussion

Ed_Summers
Jan 23, 2014

LTM SNAT TCP timeout = "indefinite"

Ran into an issue of port exhaustion for one server in a SNAT. While researching I found SOL7606 which states:

"Note: When set to Indefinite, UDP or IP SNAT translation idle time-outs are internally limited to a maximum of 300 seconds."

It does not, however, indicate whether there is a default timeout for TCP. Looking at an idle connection in tmsh, I see the TCP "Idle Timeout" listed as 4294967295. The connection was not reaped after 5 minutes (300 seconds). Can someone confirm that SNAT TCP connections, when configured with an "indefinite" timeout, will remain active in the system for this indefinite amount of time if not closed gracefully?

Just looking to confirm my understanding of the tmos operation. We're implementing an explicit timeout value for the SNAT. Version is 10.2.3.
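
What we have in mind looks roughly like this in tmsh (a sketch only; the translation address and value below are placeholders, and I'm assuming the snat-translation object exposes per-protocol idle timeouts on 10.2.x):

    # placeholder translation address and timeout (seconds)
    modify ltm snat-translation 10.10.10.10 tcp-idle-timeout 7200
    list ltm snat-translation 10.10.10.10 tcp-idle-timeout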

Thanks! -Ed

8 Replies

  • I haven't actually tested it, but that is my understanding. Of course, the idle timeouts assigned to the UDP and IP protocols are limited because otherwise they would be a security risk.

     

  • Can someone confirm that SNAT TCP connections, when configured with an "indefinite" timeout, will remain active in the system for this indefinite amount of time if not closed gracefully?

    This unit is 10.2.4.

     config
    
    root@ve10(Active)(tmos) list ltm virtual bar
    ltm virtual bar {
        destination 172.28.24.9:ssh
        ip-protocol tcp
        mask 255.255.255.255
        pool foo
        profiles {
            tcp_indef { }
        }
    }
    root@ve10(Active)(tmos) list ltm pool foo
    ltm pool foo {
        members {
            200.200.200.101:ssh { }
        }
    }
    root@ve10(Active)(tmos) list ltm profile tcp tcp_indef
    ltm profile tcp tcp_indef {
        defaults-from tcp
        idle-timeout 4294967295
    }
    root@ve10(Active)(tmos) list ltm snat
    ltm snat snatbar {
        origins {
            0.0.0.0/0
        }
        translation 200.200.200.252
    }
    
     test
    
    root@ve10(Active)(tmos) show sys connection cs-server-addr 172.28.24.9 cs-server-port 22 all-properties
    Sys::Connections
    192.168.206.178:65164 - 172.28.24.9:22 - 200.200.200.101:22
    -----------------------------------------------------------
      TMM           0
      Type          any
      Protocol      tcp
      Idle Time     465
      Idle Timeout  4294967295
      Unit ID       1
      Lasthop       external 00:01:e8:d5:d4:47
      Virtual Path  172.28.24.9:22
    
                              ClientSide             ServerSide
      Client Addr  192.168.206.178:65164  200.200.200.252:65164
      Server Addr         172.28.24.9:22     200.200.200.101:22
      Bits In                      22.5K                  27.3K
      Bits Out                     26.0K                  22.2K
      Packets In                      24                     23
      Packets Out                     19                     23
    
    Total records returned: 1
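
    (Side note, not tested on this unit: if one of these indefinite-timeout entries ever needs to be cleared by hand, I believe something like the following would remove it; the filter just reuses the addresses from the test above.)

    root@ve10(Active)(tmos) delete sys connection cs-server-addr 172.28.24.9 cs-server-port 22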
    
  • RobS

    Hopefully that behavior has been fixed. I remember years ago, running 9.4 code, we found out that indefinite did not mean indefinite; it was really 5 minutes. We had to manually set a very high value, 14,400 I think. We had to meet with our CIO every day for two weeks to discuss application disconnects until we discovered that.

     

  • aj1

    Hi,

     

    Sorry for reusing this thread. I have a SNAT pool and a virtual server with a TCP profile (7200s idle timeout) that uses an iRule to selectively SNAT internal connections. The application owners complain that TCP connections are not being closed gracefully by the LTM (that is, no FIN/ACK is sent); instead, the connections are silently dropped. Is there anything I can configure on the virtual server or the TCP profile to fix this?
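
    In case it helps, the profile in question looks roughly like the sketch below (the profile name is a placeholder, not the real config); I'm also wondering whether reset-on-timeout is the relevant knob here, so the peers at least see an RST rather than a silent drop:

    # placeholder profile name; reset-on-timeout sends an RST when the
    # idle timeout expires instead of silently removing the connection
    modify ltm profile tcp tcp_7200s reset-on-timeout enabled
    list ltm profile tcp tcp_7200s idle-timeout reset-on-timeout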

     

    TIA!