Forum Discussion

kbose_49650
Nimbostratus
May 26, 2011

Using LTM as a change element

I am working on a design where I have to use an F5 LTM 1500 (ver 9.4.8) as a failover device and VIP provider for a pool of 3 MySQL database nodes. I have written an external monitor which connects to each database node over SSH and executes a SQL SELECT statement to determine the node's status.
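
For reference, a stripped-down sketch of that kind of monitor (the user, key path, and query are placeholders, and this is simplified):

    #!/bin/sh
    # External monitor (EAV): the LTM passes the node address as $1 and the port as $2.
    NODE=`echo $1 | sed 's/::ffff://'`    # strip the IPv6-mapped prefix if present
    PIDFILE="/var/run/`basename $0`.$NODE.$2.pid"

    # kill off a previous instance that may still be hanging around
    [ -f $PIDFILE ] && kill -9 `cat $PIDFILE` > /dev/null 2>&1
    echo $$ > $PIDFILE

    # run a trivial SELECT on the node over ssh
    RESULT=`ssh -i /config/monitors/id_dbmon dbmon@$NODE "mysql -N -e 'SELECT 1'" 2>/dev/null`

    rm -f $PIDFILE

    # anything echoed to stdout marks the member UP; silence marks it DOWN
    [ "$RESULT" = "1" ] && echo "up"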

I am using priority group activation with round robin load balancing, although only one node will be active at a time. The end result is that one database node is the active one, and applications talk to it via the VIP provided by the LTM.

If the primary database node fails, the LTM will point the VIP at the next database node according to the priority scheme.

I need to be able to:

1) Detect when the VIP change happens, and

2) Make changes on the surviving database nodes to inform them of the new primary database node (rough sketch below).
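
For illustration, the node-side step I have in mind is roughly this (the host list, user, and the helper script are placeholders):

    #!/bin/sh
    # Tell the surviving nodes which member is now the primary.
    NEW_PRIMARY=$1
    for NODE in db1.example.com db2.example.com db3.example.com; do
        [ "$NODE" = "$NEW_PRIMARY" ] && continue
        ssh dbadmin@$NODE "/usr/local/bin/point_at_primary.sh $NEW_PRIMARY" \
            || logger -p local0.err "failed to repoint $NODE at $NEW_PRIMARY"
    done

What I am missing is the trigger: something on the LTM that runs this (or an equivalent) when the VIP moves.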

Is this possible with the F5 LTM 1500 and this version of TMOS?

The alternative would be to involve an external element such as HP OpenView, which can receive a trap from the LTM and execute the changes on the database nodes.

3 Replies

  • If you're already using an external monitor to check the database nodes, you could potentially trigger a message (via syslog, ssh, etc.) to the host(s) you need to change the config on. If you need to get a current list of the servers that are marked up, you could use tmsh or bigpipe from the external monitor script.
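
    For example, something along these lines from the monitor script (the pool name, remote host, and helper script are placeholders):

        # list the pool members and their current state (v9 bigpipe)
        bigpipe pool db_pool show

        # from the monitor (or a helper it calls), push a note wherever you need it
        logger -p local0.notice "db_pool: member $1 changed state"
        ssh dbadmin@mgmt-host.example.com "/usr/local/bin/db_pool_changed.sh $1"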

    Aaron
  • Thanks Aaron. I was poking around with the bigpipe command options.

    bigpipe pool would give the status of the members, but how do you know which member the VIP is pointing to? Is there another specific command to determine this?
  • bigpipe persist virtual [virtual-server] show

    shows where a VIP is pointing to, although this may not be accurate depending on the persistence rules.

    In my case, where I was using "dest_addr" as the persistence profile, I had to persist the connection indefinitely until the active node went down, at which point the VIP would point to another node based on the load balancing rules.
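
    If you want to act on the changeover rather than just look at it, one crude option is to poll that command (from cron on the LTM, for example) and diff its output between runs, along these lines (the virtual server name and the follow-up action are placeholders):

        #!/bin/sh
        # Poll the persistence record for the virtual server and react when it changes.
        VS="db_vs"
        STATE="/var/tmp/${VS}.last"

        CURRENT=`bigpipe persist virtual $VS show 2>/dev/null`
        LAST=`cat $STATE 2>/dev/null`

        if [ -n "$CURRENT" ] && [ "$CURRENT" != "$LAST" ]; then
            logger -p local0.notice "$VS: persistence record changed"
            # e.g. kick off the reconfiguration of the surviving nodes here
            echo "$CURRENT" > $STATE
        fi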