Forum Discussion

anto1_213321
Nimbostratus
Jul 29, 2015

health monitor issue

Hi all

 

We modified the health monitor option on a per-node basis, from default to None, for 10.10.7.100. A few minutes later, the active unit reported that a completely different node (different VLAN, IP, and trunk) had a monitor status of down:

 

Wed Jul 25 07:29:27 CEST 2015 NODE_ADDRESS modified: name="/Common/10.10.100.7" new_session_enable=2 monitor_rule="/Common/none" update_status=1

 

Jul 25 07:33:30 slot1/xxx notice mcpd[9264]: 01070640:5: Node /Common/10.10.7.8 address 10.10.7.8 monitor status down. [ /Common/icmp: down ] [ was up for 1250hrs:40mins:20sec ]

Jul 25 07:33:30 slot1/xxx notice mcpd[9264]: 01070638:5: Pool /Common/ra_pool member /Common/10.10.7.8:1813 monitor status node down. [ /Common/udp: up ] [ was up for 1250hrs:40mins:20sec ]

 

Do you have any idea what may be causing this?

 

Thanks !

 

2 Replies

  • I'm not really offering more than an echo of Patrik's thoughts. The log you provided indicates the node was marked down by a failed ICMP monitor. Your implementation may differ, but the default timeout for the ICMP monitor is 16 seconds. Four minutes would be unusually long for an ICMP monitor timeout, though checking the monitor settings would confirm the actual value. Unless you have evidence or reason to suspect the two events are linked, I'd agree the down status is unrelated to the monitor change you made.
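    To verify the monitor settings as suggested above, something like the following tmsh commands should work on the BIG-IP (this is a sketch; the node name is taken from the log in the question, and your monitor may be a custom one rather than the built-in icmp):

    ```shell
    # Show the built-in icmp monitor's configured interval/timeout
    # (defaults are interval 5s, timeout 16s)
    tmsh list ltm monitor icmp icmp

    # Check which monitor is actually assigned to the node that went down
    tmsh list ltm node /Common/10.10.7.8 monitor

    # Review the node's current availability status and last state change
    tmsh show ltm node /Common/10.10.7.8
    ```

    If the node uses a custom monitor, the first command's monitor type and name would need to match that monitor instead.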