Forum Discussion

senthil147_1421
Nimbostratus
Jun 06, 2017

F5 LTM sending reset packets to Pool member

F5 LTM has a pool member listening on port 8003, and it's a mainframe server. The pool member is UP, but the mainframe console consistently shows the F5 rejecting the connection (no issues in accessing the VIP). So I checked a tcpdump, and I see the F5 resetting the connection after the 3-way handshake. Can someone check and let me know why? It's not the health check, because I configured the health check as ICMP.

 

10:01:14.987346 IP 172.31.x.x.18925 > server91.test.com.8003: S 3137956918:3137956918(0) win 5840
10:01:14.987651 IP server91.test.com.8003 > 172.31.x.x.18925: S 531369297:531369297(0) ack 3137956919 win 65535
10:01:14.987797 IP 172.31.x.x.18925 > server91.test.com.8003: . ack 1 win 46
10:01:14.995006 IP server91.test.com.8003 > 172.31.x.x.18925: P 1:28(27) ack 1 win 4096
10:01:14.995023 IP server91.test.com.8003 > 172.31.x.x.18925: P 28:42(14) ack 1 win 4096
10:01:14.995214 IP 172.31.x.x.18925 > server91.test.com.8003: . ack 28 win 46
10:01:14.995219 IP 172.31.x.x.18925 > server91.test.com.8003: . ack 42 win 46
10:01:15.004958 IP 172.31.x.x.18925 > server91.test.com.8003: R 1:1(0) ack 42 win 46

 

8 Replies

  • NAG_65570
    Historic F5 Account

    F5 BIG-IP can add the reset cause into the packet or into the LTM logs. You can use the following commands to enable the reset-cause-related db keys.

     

    Example of a reset cause:

        Reset cause: BIG-IP: [0x1abb6c9:1532] Policy action

     

    To log a message in the LTM logs:

        tmsh modify /sys db tm.rstcause.log value enable

    To include the reset cause within the packet:

        tmsh modify /sys db tm.rstcause.pkt value enable

    To disable the db keys after troubleshooting:

        tmsh modify /sys db tm.rstcause.log value disable
        tmsh modify /sys db tm.rstcause.pkt value disable

     

    Once you know the reset cause, it is relatively easy to find the root cause.
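
    Once the db key is enabled, the reset-cause messages show up in /var/log/ltm, so you can filter for them like this (the grep pattern matches the prefix of the reset log messages):

        grep 'RST sent from' /var/log/ltm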

     

  • Hi,

    If possible, run this on the BIG-IP:

        tmsh reset-stats net rst-cause

        watch tmsh show net rst-cause

    Then try to initiate a connection to the VS and observe the results of the watch command - which reason's counter increases?

    What can be seen on the client side of the VS? Can you perform a trace - maybe the client connecting to the VS is sending the RST?
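
    For example, something like this on the BIG-IP shows what arrives from the client (interface 0.0 captures on all VLANs; the IP and port are placeholders):

        tcpdump -nni 0.0 host <client-ip> and port <vs-port>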

    Piotr

    P.S. Please use Preformatted Code formatting for tcpdump output.

  • Hi,

     

    I have enabled the reset cause in the logs and I see the following error.

     

    What does this mean? Also, there are no issues reported in accessing the VIP; it's purely the communication between the LTM and the node.

     

        Jun 7 07:04:08 local/tmm2 err tmm2[6122]: 01230140:3: RST sent from server91.test.com:8003 to 172.31.x.x:53005, [0x11ca629:5165] RST from BIG-IP internal Linux host
        Jun 7 07:05:08 local/tmm2 err tmm2[6122]: 01230140:3: RST sent from server91.test.com:8003 to 172.31.x.x:53005, [0x11ca629:5165] RST from BIG-IP internal Linux host

     

    • dragonflymr
      Cirrostratus

      Hi,

       

      If I recall correctly, RST from BIG-IP internal Linux host is most often related to a monitor. Are you using a monitor for your pool or nodes? If so, is it marking members/nodes up? I guess yes, so maybe this counter is unrelated to your issue.

       

      The best option is to trace the connection on both sides of the BIG-IP (see the capture sketch after the list):

       

      client -> VIP

       

      BIG-IP -> mainframe
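
      Assuming the client-facing VLAN is named external and the mainframe-facing VLAN is named internal (the VLAN names, client IP, and VS port are placeholders; adjust to your setup), the two captures could look like:

          tcpdump -nni external host <client-ip> and port <vs-port>
          tcpdump -nni internal host server91.test.com and port 8003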

       

      The original trace looks just fine: there is a 3WHS, some packets from the mainframe to the BIG-IP, ACKs for those frames, and then the RST.

       

      To be honest it looks like a monitor connection from the BIG-IP. Is the source IP 172.31.x.x of the packets equal to the self IP on the VLAN facing the mainframe?
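
      You can compare it against the configured self IPs with:

          tmsh list net self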

       

      If so, this is most probably monitor traffic - what monitor are you using for your pool/nodes? Is the pool monitor http?

       

      The HTTP monitor issues a RST when it has received the expected Receive String and more packets are coming from the monitored host.
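
      As an illustration, the send and receive strings of a monitor can be checked with something like this (http here is the default monitor name; yours may differ):

          tmsh list ltm monitor http http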

       

      Piotr

       

  • Hi,

     

    I am using a TCP monitor; I also tried changing it to icmp_gateway, still the same issue.
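
    For reference, the monitor assignment can be confirmed on the BIG-IP with something like this (the pool name is a placeholder):

        tmsh list ltm pool <pool-name> monitor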

     

    Senthil

     

    • dragonflymr
      Cirrostratus

      Sorry, but what exactly is the issue? Are you not able to connect to the mainframe via the VIP? Is your monitor marking the pool member down?

       

      The trace you posted - if generated by your monitor - seems OK to me.

       

      Piotr

       

    • dragonflymr
      Cirrostratus

      But this trace does not look like either a tcp or an icmp monitor, so it was probably http before?

       

      Piotr

       

  • I believe you will see this behavior when you have an HTTP profile on the virtual and a problem occurs on the serverside or with the internals (modules/plugins) of the BIG-IP system itself. I could see this happening if no SNAT exists, there is a conflict with a module/plugin or with the address used for SNAT, or the server's SYN/ACK is never returned (whatever the reason).

     

    A standard type virtual server maintains connection state independently. If you have an HTTP profile, then BIG-IP will not attempt a serverside LB pick until the first HTTP command is seen (typically a GET). This "delayed binding" behavior can be confusing, since it looks like the VS is resetting the client, but really it MUST do so, since it is unable to communicate with pool resources (despite forcing them to be marked up via ICMP).
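
    If you want to confirm whether the virtual has an HTTP profile and what its SNAT setting is, something like this should show both (the virtual server name is a placeholder):

        tmsh list ltm virtual <vs-name> profiles source-address-translation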