
vCMP Guest Failover after removing Blade

Hi! We have two VIPRIONs with two blades each, and a vCMP guest running on both VIPRIONs that spans both blades. If we remove a blade on the VIPRION where the vCMP guest is active, will that trigger a failover?

Since the standby vCMP guest would then have more resources than the active one (the first would have two blades, while the second would have only one), we want the removal of a blade to trigger a vCMP guest failover...

Thanks!


Answers to this Question

USER ACCEPTED ANSWER & F5 ACCEPTED ANSWER

You might try to repurpose the "minimum number of slots" setting of the vCMP guest for this, though it is probably not exactly what you want to achieve. Setting "minimum number of slots" to 2 for a vCMP guest that spans 2 blades should revert the guest's state from deployed to configured after the removal of one blade, which will trigger a failover to the other vCMP guest (unfortunately, I cannot test this due to the lack of multi-blade test VIPRIONs). The drawback is that you will end up with a cluster without failover capability, because the first guest is no longer running...
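For reference, the setting can also be changed from tmsh. A sketch, assuming the guest is named `guest1` (the property name `min-slots` is from memory; verify it against your TMOS version before relying on it):

```sh
# Require at least 2 slots for the guest; if fewer remain available,
# the guest should drop out of the deployed state, forcing a failover.
tmsh modify vcmp guest guest1 min-slots 2

# Inspect the guest's slot configuration afterwards:
tmsh list vcmp guest guest1 slots min-slots
```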

One note on your consideration "As the standby vCMP guest will have more resources than the active vCMP guest (the first one will have two blades, while the second one will have one blade)":

This article might clarify the effect a bit:

Citation:

Effect of blade removal on a guest

If a blade suddenly becomes unavailable, the total traffic processing resource for guests on that blade is reduced and the host must redistribute the load on that slot to the remaining assigned slots. This increases the number of connections that each remaining blade must process.

Fortunately, there is no reduction in memory allocation, given that when you create a guest, you instruct the host to allocate the full amount of required memory for that guest to every slot in the guest's cluster (through the guest's Cores per Slot property). However, each connection causes some amount of memory use, which means that when a blade becomes unavailable and the host redistributes its connections to other blades, the percentage of memory use on these remaining blades increases. In some cases, the increased memory use could exceed the amount of memory allocated to each of those slots.

For example, if a guest spans three slots which process 1,000,000 connections combined, each slot is processing a third of the connections to the guest. If one of the blades becomes unavailable, reducing the guest's cluster to two slots, then the two remaining blades must each process half of the guest's connections (500,000), resulting in a memory use per slot that could be higher than what is allocated for that slot. Assigning as many slots as possible to each guest reduces this risk.
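The arithmetic in that example can be sketched quickly (an illustrative snippet, not an F5 API; even redistribution of connections is assumed):

```python
# How per-slot connection load grows when a guest's cluster shrinks.

def per_slot_connections(total_connections: int, slots: int) -> int:
    """Connections each slot must handle, assuming even redistribution."""
    return total_connections // slots

# A guest spanning 3 slots with 1,000,000 connections combined:
before = per_slot_connections(1_000_000, 3)  # roughly 333,333 per slot

# One blade becomes unavailable, leaving a 2-slot cluster:
after = per_slot_connections(1_000_000, 2)   # 500,000 per slot

print(before, after)
```

Since each connection consumes some memory, the jump from ~333k to 500k connections per slot is what can push memory use past what was allocated to that slot.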

Comments on this Answer
Comment made 18-Jun-2018 by Franco 175

Hi! Great, my interpretation of the "minimum number of slots" parameter was that it is only evaluated during the transition to the deployed status, but from your post it is now clear that it is also evaluated while the guest remains in the deployed status.

Thanks!!

Comment made 18-Jun-2018 by tatmotiv 1021

Hi, this was an assumption that still needs to be verified... As I said: unfortunately, I cannot test it due to the lack of multi-blade test VIPRIONs. Can you test it?
