Forum Discussion

Kevin_M_182964
Nimbostratus
Feb 10, 2015

Can health monitor maintain TCP connection?

Can a TCP health monitor check be configured such that the load balancer maintains a persistent TCP connection with the pool members? I have a send string for a TCP health check, but our development team says that it would be better to keep the same source port (from the load balancer), rather than opening and closing a new connection each time. Is there a way to do this?
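For reference, the monitor is the usual TCP monitor with send/receive strings, roughly like this in tmsh (paraphrased; the monitor name, strings, and timings here are placeholders, not our actual values):

    # each probe opens a brand-new TCP connection from an ephemeral source port,
    # sends the string, checks the response, and then closes the connection
    create ltm monitor tcp tcp_sendstring_example send "STATUS\r\n" recv "OK" interval 5 timeout 16

It's that open/close cycle on every interval that the developers would like to avoid.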

 

Thanks.

 

6 Replies

    • Kevin_M_182964
      Nimbostratus
      Hmm. That doesn't seem to be what I'm looking for. Based on the link, a 'passive monitor' is an iRule that is parsing the responses to an active monitor's requests, looking for specific health indications that a regular health monitor might not be able to catch. I'm looking for something that could nail up a connection and send periodic keep-alive messages without having to redo the handshake each time. As you say, it sounds like that's not how the standard health monitor interval/timeout is designed to work, so I may just be out of luck. Thanks, Kevin M.
  • Kevin_K_51432
    Historic F5 Account

    Oops, I didn't see the "in conjunction" part. It may be worth considering whether this can be used in conjunction with an ICMP (or ping) monitor.

     

    I'll be sure to update this if something more fitting shows up.

     

    Thanks, Kevin

     

    • Kevin_M_182964
      Nimbostratus
      I haven't been in the load balancing business very long, so I'm just guessing here, but it sounds like a passive monitor could be used to watch the regular traffic going back and forth between clients and servers and, based on some indication from the server, decide when to mark it as overloaded or down. I think the 'in conjunction' part comes into play because once you mark a pool member down, clients can't reach it, so there are no responses left to check to mark it up again, and it would stay down forever. If you run the passive monitor against traffic generated as part of one of the regular health checks, there is still something to inspect, so the member could be detected responding correctly again and re-enabled for regular traffic.
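      To make the "in conjunction" setup concrete: the test described in the next reply pairs an active gateway_icmp monitor on the pool with an iRule acting as the passive check on the virtual server. A rough tmsh sketch, using the pool, member, virtual server, and monitor names that appear in the log output below (the iRule name is a placeholder):

          # active monitor on the pool; names taken from the log output in the next reply
          create ltm pool test1 monitor gateway_icmp members add { 10.12.23.27:80 }

          # passive side: an iRule on the virtual server inspects real client responses
          # (rule name is a placeholder; a sketch of the rule follows the next reply)
          modify ltm virtual me2 rules { passive_check_rule }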
    • Kevin_K_51432
      Historic F5 Account
      I did set this up to answer the question "so there are no responses to check to mark it as up again" and have found that the response to the client is the object parsed by the iRule (not the monitor traffic) and the pool member becomes down. Then the monitor comes around and marks it back up until another client request is parsed and marks the member down again. Note: I set the iRule to trigger on 200 OK to make this easier.

      Client connection results:

          HTTP/1.1 200 OK
          Date: Tue, 10 Feb 2015 22:00:48 GMT
          Server: Apache/2.2.22 (Ubuntu)
          MISS
          MISS
          HTTP/1.1 200 OK
          Date: Tue, 10 Feb 2015 22:00:48 GMT
          Server: Apache/2.2.22 (Ubuntu)
          MISS
          MISS

      What is logged to /var/log/ltm:

          notice mcpd[6987]: 01071681:5: SNMP_TRAP: Virtual /Common/me2 has become available
          err tmm[10521]: 01010221:3: Pool /Common/test1 now has available members
          err tmm[10521]: 01010028:3: No members available for pool /Common/test1
          notice mcpd[6987]: 01070638:5: Pool /Common/test1 member /Common/10.12.23.27:80 monitor status iRule down. [ /Common/gateway_icmp: up ] [ was up for 0hr:0min:0sec ]
          notice mcpd[6987]: 01071682:5: SNMP_TRAP: Virtual /Common/me2 has become unavailable
          notice mcpd[6987]: 01070727:5: Pool /Common/test1 member /Common/10.12.23.27:80 monitor status up. [ /Common/gateway_icmp: up ] [ was iRule down for 0hr:0min:1sec ]
          notice mcpd[6987]: 01071681:5: SNMP_TRAP: Virtual /Common/me2 has become available
          err tmm[10521]: 01010221:3: Pool /Common/test1 now has available members
          err tmm[10521]: 01010028:3: No members available for pool /Common/test1
          notice mcpd[6987]: 01070638:5: Pool /Common/test1 member /Common/10.12.23.27:80 monitor status iRule down. [ /Common/gateway_icmp: up ] [ was up for 0hr:0min:0sec ]
          notice mcpd[6987]: 01071682:5: SNMP_TRAP: Virtual /Common/me2 has become unavailable
          notice mcpd[6987]: 01070727:5: Pool /Common/test1 member /Common/10.12.23.27:80 monitor status up. [ /Common/gateway_icmp: up ] [ was iRule down for 0hr:0min:1sec ]
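      The iRule itself isn't posted above, but given the note about triggering on 200 OK and the "monitor status iRule down" entries in the log, it was presumably something along these lines (a sketch, not the exact rule used in the test):

          when HTTP_RESPONSE {
              # Passive check: inspect real server responses on client traffic.
              # For this test the member is marked down whenever a 200 OK is seen,
              # just to force the behavior; a real rule would key on an error
              # condition (5xx status, bad content, etc.) instead.
              if { [HTTP::status] == 200 } {
                  LB::down
              }
          }

      With the gateway_icmp monitor still attached to the pool, the active monitor keeps marking the member back up, which is what produces the up/down flapping in the log above.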
  • shaggy
    Nimbostratus

    Is there a reason the development team is asking for this? Generally you would want a new connection for each monitoring request, since that lets the F5 see what a new client connection will actually experience. For example, say a service on the server is put into a state where it refuses new TCP connections but allows current connections to complete naturally. The F5 would keep using its already-open connection for the monitor and keep marking the node up/available, but actual forwarded client requests would fail.

     

    As @Kevin.K pointed out, there are alternatives to request/response-based monitors, but any solution depends on why the development team wants to see fewer connections. From an application-traffic perspective, you can get this behavior by assigning a OneConnect profile to HTTP/HTTPS virtual servers, so that server-side (non-monitor) connections are reused for subsequent client requests.
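    A OneConnect profile is created and attached roughly like this in tmsh (a sketch; the profile and virtual server names are placeholders):

        # reuse idle server-side connections across clients (0.0.0.0 source mask);
        # use 255.255.255.255 to only reuse connections from the same client IP
        create ltm profile one-connect oneconnect_example source-mask 0.0.0.0
        modify ltm virtual my_http_vs profiles add { oneconnect_example }

    Note that this only affects regular client traffic through the virtual server; monitor probes still open their own connections.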