Hi Matt -- I'll just throw in my 2c as well.
While you can use iRules to affect node status, I don't think there's any good way to also pull in the stats you'd need to make the decision to change the status.
You could use either a custom scripted external monitor or iControl to both retrieve/evaluate server stats AND set the pool member status to DISABLED (allow new connections for persistent sessions only) rather than DOWN (reject all new connections).
The iControl solution is preferred. The best way to implement that would be to write an iControl script that runs on each DB server, monitoring CPU load and toggling the pool member status between UP and DISABLED based on that info. To ensure service availability, I'd recommend implementing an LTM-based service monitor against the pool members as well, and have the iControl script simply exit if the reported pool member status is already showing DOWN.
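To make that more concrete, here's a rough sketch of what the per-server iControl script could look like, using the bigsuds Python wrapper for illustration. The pool name, member address/port, load threshold, and credentials are all placeholders, and the exact LocalLB.PoolMember call signatures should be checked against your iControl SDK version:

```python
# Sketch: runs on each DB server, toggles its own pool member between
# ENABLED and DISABLED based on local CPU load, and backs off entirely
# if the LTM service monitor has already marked the member DOWN.
import os

POOL = 'db_pool'                                   # hypothetical pool name
MEMBER = {'address': '10.0.0.10', 'port': 3306}    # this server's member entry
LOAD_THRESHOLD = 4.0                               # 1-min load average cutoff (assumption)

def desired_state(load_avg, threshold=LOAD_THRESHOLD):
    """Pure decision logic: disable new sessions when overloaded."""
    return 'STATE_DISABLED' if load_avg >= threshold else 'STATE_ENABLED'

def sync_member_state(bigip):
    """Toggle this member's session state unless the LTM monitor says DOWN."""
    for st in bigip.LocalLB.PoolMember.get_object_status([POOL])[0]:
        if (st['member'] == MEMBER and
                st['object_status']['availability_status'] == 'AVAILABILITY_STATUS_RED'):
            return  # monitor already marked us DOWN -- let it own the status
    state = desired_state(os.getloadavg()[0])
    bigip.LocalLB.PoolMember.set_session_enabled_state(
        [POOL], [[{'member': MEMBER, 'session_state': state}]])

def main():
    import bigsuds  # third-party iControl SOAP wrapper (assumption)
    sync_member_state(bigsuds.BIGIP('ltm.example.com', 'admin', 'secret'))
```

You'd run something like this from cron on each DB server. Keeping the threshold policy in a separate function from the iControl calls makes it easy to test and tune without touching the BIG-IP.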
If you decide to go with the monitor-based solution, you'd need to use a custom external monitor for a couple of reasons:
1) The standard built-in health monitors are binary by design -- the pool member is either UP or DOWN based on whether the expected response was received. To set pool members to DISABLED rather than having them marked UP or DOWN by the monitor daemon, you'd need an external monitor that runs snmpget against the server, evaluates the result & issues the appropriate bigpipe command to change the pool member status.
2) The WMI and SNMP monitors are a special case, and can't be used to give the result you're looking for. These monitors don't check service availability & mark pool members UP or DOWN like a normal monitor does. They are what we call "performance monitors", and are meant to be used with the Dynamic Ratio LB mode to determine the best candidate server. The monitor derives a dynamic ratio value for each pool member from the configured statistics, and that value is then compared with the other pool members' values to choose the load balancing target.
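For reason 1), a custom external monitor along these lines could do the snmpget/bigpipe dance. LTM passes the member's IP and port to the script as arguments; the OID, community string, pool name, threshold, and the exact "session enable/disable" bigpipe syntax are assumptions to verify against your version:

```python
# Sketch of a custom external monitor. LTM invokes the script with the
# pool member's IP as argv[1] and port as argv[2]; printing anything to
# stdout marks the member UP, silence marks it DOWN.
import re
import subprocess
import sys

POOL = 'db_pool'                           # hypothetical pool name
CPU_OID = '.1.3.6.1.4.1.2021.11.9.0'       # UCD-SNMP CPU OID -- adjust per agent
CPU_THRESHOLD = 80                         # percent busy (assumption)

def action_for(cpu_pct, threshold=CPU_THRESHOLD):
    """Pure decision logic: disable new sessions when CPU is high."""
    return 'disable' if cpu_pct >= threshold else 'enable'

def main():
    ip, port = sys.argv[1], sys.argv[2]
    ip = ip.replace('::ffff:', '')  # strip IPv6-mapped prefix LTM may pass
    out = subprocess.check_output(
        ['snmpget', '-v2c', '-c', 'public', '-Ovq', ip, CPU_OID])
    cpu = int(re.search(r'\d+', out.decode()).group())
    # Toggle session state rather than marking the member DOWN.
    subprocess.call(['bigpipe', 'pool', POOL, 'member',
                     '%s:%s' % (ip, port), 'session', action_for(cpu)])
    print('UP')  # always report alive; the session state does the real work

if __name__ == '__main__' and len(sys.argv) > 2:  # only run when LTM passes args
    main()
```

Note the final print: since this monitor manages the DISABLED state itself, it always reports the member UP, leaving hard DOWN decisions to a separate plain service monitor as suggested above.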
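To illustrate reason 2), here's a toy sketch (not F5's actual algorithm) of how ratio-based selection behaves: each member's ratio acts as a weight, and under Dynamic Ratio the performance monitor recomputes those weights from server stats instead of using static configured values:

```python
# Toy illustration of ratio-weighted member selection.
import random

def pick_member(ratios, rand=random.random):
    """Choose a member key with probability proportional to its ratio."""
    point = rand() * sum(ratios.values())
    for member, ratio in ratios.items():
        point -= ratio
        if point < 0:
            return member
    return member  # fallback for floating-point edge cases

# A lightly loaded server (higher derived ratio) receives more of the traffic:
counts = {'db1': 0, 'db2': 0}
for _ in range(10000):
    counts[pick_member({'db1': 3, 'db2': 1})] += 1
```

With a 3:1 ratio, db1 ends up with roughly three times as many picks as db2 -- which is why these monitors influence target selection rather than availability.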
HTH
/deb