vCMP Design and Architecture

With vCMP out I haven't seen any white papers or design docs on actual deployments. Does anyone have examples of how they have vCMP set up in their environment? Is it worth the jump on the current iSeries models to go with vCMP? Does anyone have a current, beneficial architecture where vCMP is helping them?


Answers to this Question

USER ACCEPTED ANSWER & F5 ACCEPTED ANSWER

Hey Denver

I have not seen any whitepapers regarding vCMP either, but there are some nice manual chapters. For instance:

Manual: vCMP for VIPRION Systems: Administration

It's a bit old and it covers VIPRION rather than the iSeries chassis, and it does not give you any deployment scenarios.

vCMP is a great way to segment a BIG-IP device into smaller but independent BIG-IP instances that each act exactly as an ordinary BIG-IP would. That means you can have a scenario where the first line of defence consists of a guest provisioned with AFM and DNS, the second tier runs APM and ASM, and the third tier runs LTM. That is just playing around with the design; the same idea should be applicable to whatever you need.
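
Just to make that more concrete, here is a rough sketch (my own example, not an official F5 one) of how such a first-tier guest could be created from the vCMP host over iControl REST, using Python and requests. The host address, credentials, VLANs, image name and guest values are all placeholders, and the /mgmt/tm/vcmp/guest and /mgmt/tm/sys/provision property names are the ones I believe map to the tmsh attributes, so verify them against your TMOS version before using anything like this:

    # Sketch: create a "first tier" vCMP guest (AFM + DNS) from the vCMP host.
    # All names, addresses and credentials below are placeholders.
    import requests

    HOST = "https://vcmp-host-1.example.com"        # vCMP host management address (placeholder)
    AUTH = ("admin", "admin-password")              # use token auth in a real deployment

    guest = {
        "name": "tier1_afm_dns",
        "initialImage": "BIGIP-15.1.10-0.0.16.iso", # image already copied to the host (placeholder)
        "managementIp": "10.0.10.21/24",
        "managementGw": "10.0.10.1",
        "vlans": ["/Common/external", "/Common/internal"],
        "coresPerSlot": 2,
        "state": "deployed",                        # configured -> provisioned -> deployed
    }

    r = requests.post(f"{HOST}/mgmt/tm/vcmp/guest", json=guest, auth=AUTH, verify=False)
    r.raise_for_status()
    print("guest created:", r.json().get("name"))

    # AFM and DNS are then provisioned *inside* the guest, not on the host.
    # (In practice, wait for the guest to finish reprovisioning between changes.)
    GUEST = "https://10.0.10.21"
    for module in ("afm", "gtm"):                   # "gtm" is the provisioning name for DNS
        r = requests.patch(f"{GUEST}/mgmt/tm/sys/provision/{module}",
                           json={"level": "nominal"}, auth=AUTH, verify=False)
        r.raise_for_status()
        print(f"provisioned {module}")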

What I have seen at many customers is that they use vCMP to create separate environments, looking like this:

  • First Guest - PROD
  • Second Guest - STAGE
  • Third Guest - TEST

You run the exact same version on all guests, with different virtual servers and pool members behind each guest. With this, the app team can add features to the application in the test environment without impacting production traffic.

The same goes for upgrades. You can test an upgrade before it actually affects production by first upgrading TEST, then STAGE and lastly PROD. If TEST and STAGE work, it will most likely work for PROD as well. That is really nice when PROD is responsible for 300k active sessions, and you are much less likely to run into unforeseen problems.
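
To illustrate that sequencing, here is a small sketch (again just my own example, with placeholder addresses, credentials and target version) that asks each guest for its active software volume so you can confirm TEST and STAGE are already on the target version before touching PROD. The /mgmt/tm/sys/software/volume endpoint is the REST counterpart of "show sys software" as far as I know; check it on your version:

    # Sketch: confirm TEST and STAGE guests run the target TMOS version before upgrading PROD.
    import requests

    AUTH = ("admin", "admin-password")              # placeholder credentials
    TARGET = "15.1.10"                              # version being rolled out (placeholder)
    GUESTS = {                                      # guest management addresses (placeholders)
        "TEST":  "https://10.0.10.23",
        "STAGE": "https://10.0.10.22",
        "PROD":  "https://10.0.10.21",
    }

    def active_version(base_url: str) -> str:
        """Return the TMOS version of the active boot volume on one guest."""
        r = requests.get(f"{base_url}/mgmt/tm/sys/software/volume", auth=AUTH, verify=False)
        r.raise_for_status()
        for vol in r.json().get("items", []):
            if vol.get("active"):
                return vol.get("version", "unknown")
        return "unknown"

    for name in ("TEST", "STAGE"):
        version = active_version(GUESTS[name])
        print(f"{name}: {version}")
        assert version.startswith(TARGET), f"{name} is not on {TARGET} yet - hold the PROD upgrade"

    print("TEST and STAGE look good - safe to schedule the PROD upgrade window.")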

I have also seen scenarios where a vCMP guest is created only for REST API calls, separated from the other guests for the sake of security and fault tolerance. That way the API calls cannot mistakenly break the production guest.

Again, whether vCMP is worthwhile comes down to the initial design and the needs you have within the organization.

Comments on this Answer
Comment made 5 days ago by DenverRB 65

Thank you for your input. I would like to dive into some additional questions on the guest vs. partition aspect.

From a functionality perspective, have you ever performed a deployment like your example, with a First Guest - PROD and a Second Guest - TEST?

I have an example where maybe you can provide some input, or tell me whether you have experienced this same problem.

Compare it with a non-vCMP environment with two partitions built out on the same device, one production and one test; I have seen environments separated with partitions this way. The drawback I have seen with this deployment is that both the production and test partitions share the device. I have seen a test server (say, Linux and Apache configured incorrectly) cause connections to hang through the F5, which caused the entire F5 LTM to max out on connections, hurt utilization, and impacted the entire environment. Because of that large impact, the result was to move to two physical pairs, offloading test to a separate device.

In a vCMP environment, if a problem existed with the First or Third Guest where connections maxed out, or a problem occurred within a partition, would it ever impact the entire chassis? Would I ever need to fail over the HA pair cluster with vCMP and reboot the physical device to resolve an issue with one guest? Should I be able to reboot the individual vCMP guest and not impact the other guests on the device?

The latter questions all depend on experience with vCMP in production; I would be interested to know if anyone has had issues with vCMP.

Thanks,

Comment made 4 days ago by Philip Jonsson 891

Hey Denver!

As you mention, sharing the same box can be a problem even if you split it into two separate partitions and route domains. If you want to upgrade the test environment to a more recent BIG-IP TMOS version, you have to upgrade the production environment as well. And if you run a performance test on the test environment, that traffic passes through the same BIG-IP and the same TMM instances, mixed in with production traffic. If the BIG-IP takes a performance hit, it affects the whole box and interrupts production traffic, just as you experienced.

This will not happen with vCMP, as every BIG-IP guest (instance) has its own dedicated CPU and memory and is not connected to the other guests, whether it runs on a VIPRION or an iSeries appliance.

The only time all guests are impacted is when the vCMP host (the hypervisor) is affected, for example by a bug on the vCMP host that hits every vCMP guest. Compare it with VMware ESXi: a bug in ESXi will affect your Windows servers, but a Windows bug will not affect your Linux servers. They are all separate from each other.

If your vCMP guest is experiencing performance problems, you can simply add more CPU cores/memory to it and you immediately have more resources. On a VIPRION, you can add a completely new blade and get both more resources and additional redundancy (if you provision the guest across slots).
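
If it helps, resizing a guest can be scripted the same way. This is only an outline with placeholder names and values, and as far as I remember the guest has to be taken out of the Deployed state before the core count can be changed, which is disruptive for that guest, so treat it like a maintenance window:

    # Sketch: grow a guest from 2 to 4 cores per slot, driven from the vCMP host.
    import requests

    HOST = "https://vcmp-host-1.example.com"        # vCMP host (placeholder)
    AUTH = ("admin", "admin-password")              # placeholder credentials
    GUEST_URL = f"{HOST}/mgmt/tm/vcmp/guest/production_guest1"

    def patch_guest(payload: dict) -> None:
        r = requests.patch(GUEST_URL, json=payload, auth=AUTH, verify=False)
        r.raise_for_status()

    patch_guest({"state": "configured"})            # step the guest out of "deployed" first
    patch_guest({"coresPerSlot": 4})                # allocate more CPU (memory follows the cores)
    patch_guest({"state": "deployed"})              # redeploy the guest with the new resources
    print("production_guest1 redeployed with 4 cores per slot")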

Regarding failing over "the entire chassis", I'm not sure I follow exactly. When building a vCMP-based solution, you have two vCMP hosts, and each vCMP host has at least one vCMP guest. Then you configure HA between these guests, as follows:

vCMP Host 1 | vCMP Host 2

production_guest1 <-HA-> production_guest2

test_guest1 <-HA-> test_guest2
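
A quick way to sanity-check that layout is to ask each guest for its own failover status. The sketch below uses placeholder addresses and credentials, and /mgmt/tm/cm/failover-status is what I believe "show cm failover-status" maps to in iControl REST; verify the exact response structure on your version:

    # Sketch: print the failover role of each guest in the two HA pairs.
    import requests

    AUTH = ("admin", "admin-password")              # placeholder credentials
    PAIRS = {                                       # guest management addresses (placeholders)
        "production": ("https://10.0.10.21", "https://10.0.20.21"),
        "test":       ("https://10.0.10.23", "https://10.0.20.23"),
    }

    def failover_status(base_url: str) -> str:
        """Return ACTIVE/STANDBY as reported by the guest itself."""
        r = requests.get(f"{base_url}/mgmt/tm/cm/failover-status", auth=AUTH, verify=False)
        r.raise_for_status()
        entry = next(iter(r.json()["entries"].values()))
        return entry["nestedStats"]["entries"]["status"]["description"]

    for pair, (guest_a, guest_b) in PAIRS.items():
        print(f"{pair}: {guest_a} = {failover_status(guest_a)}, {guest_b} = {failover_status(guest_b)}")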

There could be scenarios where there is an impact on the vCMP host, but if it's a bug, then most likely both vCMP hosts are running the same version and both need to be upgraded to fix it. In those cases, where you upgrade the vCMP host itself, you would have to fail over both production_guest1 and test_guest1.

But if you have a problem on production_guest1 and you want to reboot it, the reboot simply causes a failover to production_guest2, just like on any non-vCMP BIG-IP device.
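
And if you prefer to make that failover explicit instead of letting the reboot trigger it, something like this (placeholder address and credentials; the standby option on /mgmt/tm/sys/failover should mirror "tmsh run sys failover standby", but confirm that on your version) pushes the guest to standby first, so the reboot is a non-event for traffic:

    # Sketch: force production_guest1 to standby before rebooting it, so traffic
    # is already on production_guest2 when the guest goes down.
    import requests

    GUEST1 = "https://10.0.10.21"                   # production_guest1 management address (placeholder)
    AUTH = ("admin", "admin-password")              # placeholder credentials

    r = requests.post(f"{GUEST1}/mgmt/tm/sys/failover",
                      json={"command": "run", "standby": True},
                      auth=AUTH, verify=False)
    r.raise_for_status()
    print("production_guest1 is now standby - safe to reboot it from its own console or tmsh")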

All in all, there are a lot of benefits to using vCMP, and in some cases it is the cheapest option, since the license is added to the vCMP host and lets you create as many BIG-IP instances as the hardware allows, with any modules you have licensed on the vCMP host.

So let's say that you have 6 BIG-IP devices running the following modules:

  • BIG-IP i2600 1-2: AFM, LTM, DNS
  • BIG-IP i2600 3-4: LTM, APM
  • BIG-IP i2600 5-6: ASM, LTM.

You will most likely have a cheaper option buying the bigger boxes and using vCMP instead, licensed with BEST. I'm no sales guy, but I have seen cases where vCMP is cheaper than non-vCMP. :)

This white paper is quite nice, covering vCMP in a high-level overview. https://www.f5.com/services/resources/white-papers/virtual-clustered-multiprocessing-vcmp

I hope this answers your questions :).
