Forum Discussion

daveram_265365
Nimbostratus
Sep 10, 2016

Viprion (Running vCMP) vs BIG-IP 5000 Series (Running vCMP)

Just wondering, is there much difference in configuration between these two hardware platforms? I understand they both run BIG-IP. My main question is what the differences are and whether there are any gotchas, e.g. virtual guests being limited to 1 vCPU per blade.

 

Thanks for any help in advance.

 

10 Replies

  • Also to note, I am currently running vCMP (BIG-IP v12.1) on two 5250v appliances.

     

  • So I believe I understand a little more now (please correct me if any of this is incorrect). In our example of a single F5 chassis with 4 blades populated and 2 vCMP guests, we would have:

     

    * (1) Cluster IP address (for management of the chassis) = 1

     

    * (1 x number of blades) Cluster Member IP address (for management of each blade) = 4

     

    * (1 x number of vCMP guests) Guest Management IP address = 2

     

    So in this single-chassis configuration, we would have a total of:

     

    * (7) IP Addresses for management of the chassis, blades and vCMP guests

     

    * (4) Ethernet Connections

     

    If anyone can correct me where I am wrong, I would appreciate the help (I have added a quick sketch of the arithmetic at the end of this post). The way the documentation reads, it mentions a cluster IP address for each vCMP guest; in my example, I am taking that to be the guest's management IP address (not an additional address on top of it).

     

    Also, I am not 100% sure about the connection for the chassis management network. Does the chassis just use the first blade's management port wiring for the chassis mgmt IP connectivity?

     

    Thanks in advance for any help.
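
    To make the arithmetic above concrete, here is a quick Python sketch of how I am counting the management addresses; the blade and guest counts are just the example values from this post:

        # Sketch of the management-IP count described above.
        # Blade and guest counts are the example values from this post.

        def chassis_mgmt_ip_count(blades: int, guests: int) -> int:
            # 1 cluster IP for the chassis, 1 cluster member IP per blade,
            # and 1 management IP per vCMP guest.
            cluster_ip = 1
            cluster_member_ips = blades
            guest_mgmt_ips = guests
            return cluster_ip + cluster_member_ips + guest_mgmt_ips

        print(chassis_mgmt_ip_count(blades=4, guests=2))  # -> 7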

     

  • VIPRION is a chassis-based system, so when you need extra power you can add blades and, with little or no extra configuration, add resources.

     

    Configuration-wise, VIPRION is a bit different at the chassis and blade level, but the differences are minor.

     

  • Looks like you are on target. The management port is bridged across all blades, which means you can reach the management IP through any blade's management port.

     

  • Do keep in mind you're just talking about management here; you will also need some Ethernet connections for traffic.

     

  • The cluster IP is owned by the primary blade; the chassis management IP is the cluster IP. You can in fact log in to a secondary slot, but you will be greeted with a warning message that the user is logged in to a secondary.

     

  • vCMP on a VIPRION vs. an appliance is the same from a configuration perspective. What is different is the clustering aspect that the VIPRION allows you to implement, giving you more fault tolerance and scalability.

     

    For instance, if you had two physical appliances, you could deploy a two-node LTM cluster using two guest instances (one on each appliance). The HA capability of this device group is bounded by the physical capabilities of the appliances: if an appliance suffers a fault, all guest instances on that appliance suffer the same fault. In addition, you're constrained by the physical resources the appliance ships with.

     

    A VIPRION is a chassis/blade-based solution, which means scalability and fault tolerance of components are excellent. Want to double the compute power of your LTM guest? Just add CPUs from other blades in the chassis. Did one of your DIMMs go bad? No problem; just quiesce that blade in the cluster and do a hot swap.

     

    So you definitely pay more for the VIPRION, but you get a lot more in return. F5 was smart to offer vCMP on non-VIPRION devices because many people can benefit from the virtualization that vCMP affords. Need both a production environment and a test environment? No problem; just spin up more guest instances.
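
    If you script your builds, here is a rough sketch of spinning up a guest from the vCMP host over iControl REST. The hostname, credentials, and guest property names below are assumptions from my own environment (they mirror the tmsh "create vcmp guest ..." options), so verify them against your BIG-IP version before relying on this:

        # Rough sketch only: create a vCMP guest on the host via iControl REST.
        # The host URL, credentials, and property names are assumptions that
        # mirror the tmsh "create vcmp guest ..." options; verify against your
        # BIG-IP version before use.
        import requests

        HOST = "https://vcmp-host.example.com"   # hypothetical vCMP host
        AUTH = ("admin", "admin-password")       # placeholder credentials

        guest = {
            "name": "test-guest-01",
            "coresPerSlot": 2,                   # vCPUs per blade the guest spans
            "slots": 1,                          # number of blades to span
            "managementIp": "192.0.2.50/24",     # example management addressing
            "managementGw": "192.0.2.1",
            "state": "deployed",                 # configured / provisioned / deployed
        }

        resp = requests.post(f"{HOST}/mgmt/tm/vcmp/guest",
                             json=guest, auth=AUTH, verify=False)  # lab: self-signed cert
        resp.raise_for_status()
        print("created guest:", resp.json().get("name"))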

     

    Hope that helps!

     

  • Just a quick gotcha on the VIPRION chassis: you cannot mix and match different types of blades. Blades in a single chassis must all be of the same type.

     

  • All good information. The documentation states that you should have 1 mgmt IP per blade that a guest covers. So if you have, say, a guest that is 4 vCPU across 4 blades (1 vCPU on each), then you need 4 IP addresses. I am not understanding why 4 IPs; is only one of the blades active at a time?

     

  • Actually, you would need 5 IP addresses. You would also need a cluster mgmt IP for the guest, the equivalent of the one you would normally use to connect to the vCMP host. If you are stretching one vCMP guest across 4 blades, the way it works is that each blade runs a virtual BIG-IP instance, and all 4 "virtual machines" work as a team processing the traffic. The instance running on the primary blade also acts as the orchestrator...
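
    To put numbers on it, a quick sketch of that count (one IP per slot the guest spans, plus the guest's own cluster mgmt IP):

        # Sketch of the per-guest management-IP count described above:
        # one IP per blade/slot the guest spans, plus the guest's cluster mgmt IP.

        def guest_mgmt_ip_count(slots_spanned: int) -> int:
            per_slot_ips = slots_spanned   # one per virtual BIG-IP instance
            cluster_mgmt_ip = 1            # the address you normally connect to
            return per_slot_ips + cluster_mgmt_ip

        print(guest_mgmt_ip_count(slots_spanned=4))  # -> 5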