health monitor issue
Hi all
We modified the health monitor option on a per-node basis, from default to None, for 10.10.7.100. A few minutes later, the active unit reported that a completely different node (different VLAN, IP, and trunk) had gone monitor status down:
Wed Jul 25 07:29:27 CEST 2015 NODE_ADDRESS modified: name="/Common/10.10.100.7" new_session_enable=2 monitor_rule="/Common/none" update_status=1
Jul 25 07:33:30 slot1/xxx notice mcpd[9264]: 01070640:5: Node /Common/10.10.7.8 address 10.10.7.8 monitor status down. [ /Common/icmp: down ] [ was up for 1250hrs:40mins:20sec ]
Jul 25 07:33:30 slot1/xxx notice mcpd[9264]: 01070638:5: Pool /Common/ra_pool member /Common/10.10.7.8:1813 monitor status node down. [ /Common/udp: up ] [ was up for 1250hrs:40mins:20sec ]
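For context, the node-level change was the equivalent of the following tmsh command (node object name assumed from the IP above; the change itself may also have been done in the GUI, so treat this as illustrative only):

    tmsh modify ltm node /Common/10.10.7.100 monitor none

The node that subsequently went down, 10.10.7.8, was not touched; its current monitor assignment can be checked with something like:

    tmsh list ltm node /Common/10.10.7.8 monitor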
Do you have any idea what might be causing this?
Thanks!