Forum Discussion
6 Replies
- Hamish (Cirrocumulus)
Nope... But what you could do is put a monitor on the default pool that checks the pool the iRule references (e.g. an external monitor or a built-in transparent monitor, though you'd need one of those per member of the iRule-referenced pool).
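As a rough tmsh sketch of that idea (not tested config; the monitor name, destination address, and pool names are illustrative, borrowed from the lab output later in this thread): a transparent TCP monitor probes one member of the referenced pool through the default pool, and you would repeat this per referenced-pool member.

```
# Hypothetical sketch: transparent monitor aimed at one member of the
# iRule-referenced pool, attached to the default pool so the default
# pool's health reflects that member's reachability.
create ltm monitor tcp check_referenced_member_1 {
    defaults-from tcp
    destination 10.15.60.43:443
    transparent enabled
}
modify ltm pool http_pool_default monitor check_referenced_member_1
```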
H
- Kevin_Stewart (Employee)
I think it really depends on what you mean by "down". If you mean that you don't want to send any traffic to the iRule-referenced pool based on availability of the attached pool, then you could use the active_members command:
https://devcentral.f5.com/wiki/iRules.active_members.ashx
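For example, a minimal iRule sketch along those lines (the pool name here is borrowed from the lab config later in this thread and the URI match is an assumption; adjust both to your environment):

```tcl
# Hedged sketch: only select the referenced pool while it still has
# available members; otherwise fall through to the default pool.
when HTTP_REQUEST {
    if { [HTTP::uri] starts_with "/admin" } {
        if { [active_members https_pool_anoel1] > 0 } {
            pool https_pool_anoel1
        }
        # else: no explicit pool selection, so the virtual server's
        # default pool handles the request
    }
}
```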
If, however, you need the VIP to be disabled completely, perhaps so that GTM will see it as down, then you could use an iCall (or raw user_alert.conf script) to effectively disable the VIP by triggering TMSH commands on Syslog monitor events.
https://devcentral.f5.com/questions/clearing-machine-cache
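To illustrate the iCall approach, here is a rough sketch using a periodic handler for simplicity (the syslog-triggered variant additionally needs a log publisher). All object names, the interval, and the status field path are assumptions, not verified config:

```
# Hypothetical sketch: periodically check the attached pool's status
# and disable the VIP when that pool is unavailable.
sys icall script disable_vip_when_pool_down {
    definition {
        set pool_status [tmsh::get_field_value \
            [lindex [tmsh::get_status ltm pool https_pool_anoel1] 0] \
            status.availability-state]
        if { $pool_status ne "available" } {
            tmsh::modify ltm virtual test disabled
        }
    }
}
sys icall handler periodic disable_vip_handler {
    interval 30
    script disable_vip_when_pool_down
}
```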
- Dan_Pacheco (Cirrus)
If you use a Traffic Policy instead of an iRule on v12.1.2, you don't get the same behavior as the iRule: the VS goes down even if the path-match pool is up. I have tested this successfully. Hopefully F5 doesn't break it in a new OS version by making Traffic Policies behave more like iRules.
- Alexander_Poly1 (Altocumulus)
In version 13, when you drop all pools in the traffic policy, the VS does not go down :( Maybe I'm missing something. Dan, please show your settings.
- Dan_Pacheco (Cirrus)
Hi Alexander, what I am trying to achieve is the opposite of that: we drop the default pool, and the VS goes down, even though the policy has a pool that is passing its health check. I just tested this in the lab on a v13.0.0 HF3 VE and the behavior is consistent with v12.1.2. Hope this helps.
```
root@(DCO-F5VE-LAB-3)(cfg-sync Standalone)(Active)(/Common)(tmos) list ltm policy
ltm policy test_policy {
    controls { forwarding }
    last-modified 2017-12-05:08:25:42
    requires { http }
    rules {
        test {
            actions {
                0 {
                    forward
                    select
                    pool https_pool_anoel1
                }
            }
            conditions {
                0 {
                    http-uri
                    path
                    contains
                    values { /admin }
                }
            }
        }
    }
    status published
    strategy first-match
}
root@(DCO-F5VE-LAB-3)(cfg-sync Standalone)(Active)(/Common)(tmos) list ltm poo
ltm pool http_pool_default {
    members {
        dcosvrlab-1:http {
            address 10.15.60.43
            session monitor-enabled
            state down
        }
        dcosvrlab-2:http {
            address 10.15.60.44
            session monitor-enabled
            state down
        }
        dcosvrlab-3:http {
            address 10.15.60.45
            session monitor-enabled
            state down
        }
    }
    monitor failing-monitor
}
ltm pool https_pool_anoel1 {
    members {
        dcosvrlab-1:https {
            address 10.15.60.43
            session monitor-enabled
            state up
        }
        dcosvrlab-2:https {
            address 10.15.60.44
            session monitor-enabled
            state up
        }
        dcosvrlab-3:https {
            address 10.15.60.45
            session monitor-enabled
            state up
        }
    }
    monitor tcp
}
root@(DCO-F5VE-LAB-3)(cfg-sync Standalone)(Active)(/Common)(tmos) list ltm virtu
Components:
  virtual
  virtual-address
root@(DCO-F5VE-LAB-3)(cfg-sync Standalone)(Active)(/Common)(tmos) list ltm virtual
ltm virtual test {
    destination 1.1.1.1:http
    ip-protocol tcp
    mask 255.255.255.255
    policies {
        test_policy { }
    }
    pool http_pool_default
    profiles {
        http { }
        tcp { }
    }
    source 0.0.0.0/0
    translate-address enabled
    translate-port enabled
    vs-index 2
}
```
- Alexander_Poly1 (Altocumulus)
Thank you Dan. Now I understand!