Forum Discussion

Josh_41258
Jun 17, 2014

vCMP Failover Scenarios

Let's say I have (2) C2400 chassis with (2) B2100's installed in each. Network connections are as follows:

C2400-01/Slot1 - Connected to Switch_1
C2400-01/Slot2 - Connected to Switch_2
C2400-02/Slot1 - Connected to Switch_1
C2400-02/Slot2 - Connected to Switch_2

If Switch_1 dies, killing all network connectivity to Blade 1 in each C2400, what happens to a guest running on that blade? Will the guest keep running on Blade 1 and reach the network over the chassis's internal backplane via the interfaces connected to Blade 2 (hopefully)? Or will the guest try to migrate to another blade? And what if no other blade has sufficient resources?

I realize that ideally each blade would be multihomed to both switches, but I don't currently have the port density to support that configuration.

Thanks

5 Replies

  • I think we need a bit more information. Is each switchport configured the same? Are your LTM instances allowed to use both blades in each chassis? Some items to note: best practice (and maybe a requirement?) is to have identical network configurations for each blade in a 2400 chassis. It doesn't necessarily matter which switch you connect to, as long as the number of connections and the port configuration are the same for each blade. The backplane is for supplying power; it is not designed to pass traffic. The design is to use the network for everything, even failover. If your LTM instances are not configured to use both blades, there is no automatic transfer in the event of a failure. Your HA should be between the chassis, possibly in addition to having your LTM instances use multiple blades within a chassis.
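
    For illustration only, a minimal tmsh sketch of that chassis-to-chassis HA (one guest per chassis; device names are hypothetical, and device trust is assumed to already be established between the guests):

        # create a sync-failover device group spanning the guest on each chassis
        create cm device-group chassis_ha_pair devices add { guest-c2400-01.example.com guest-c2400-02.example.com } type sync-failover
        # push the configuration to the peer
        run cm config-sync to-group chassis_ha_pair
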
  • Hi Steve,

     

    The switches in question are Nexus 7Ks. C2400-01/Slot1 is connected directly to Switch_1, and C2400-02/Slot1 is connected to Switch_2, but there is a vPC between the two Nexus switches.

     

    I fear that if Switch_1 fails, I will lose the single-slot guest on Slot1 in both chassis. Ideally, each blade needs to be multihomed to both switches to account for a switch failure like this. Currently, we have separate physical "internal" and "external" interfaces on each blade. I'm considering combining internal and external onto a single physical interface, which would free up an additional 10Gb switchport on the Nexus switches and allow me to multihome each blade to both switches (something like the sketch below).
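
    For illustration, a rough tmsh sketch of what that consolidation could look like (trunk/VLAN names, tags, and interface numbers such as 1/1.1 and 2/1.1 are placeholders, not our actual configuration):

        # one LACP trunk spanning a 10Gb port on each blade, facing the Nexus vPC
        create net trunk uplink_trunk interfaces add { 1/1.1 2/1.1 } lacp enabled
        # carry both internal and external as tagged VLANs on that single trunk
        create net vlan external tag 101 interfaces add { uplink_trunk { tagged } }
        create net vlan internal tag 102 interfaces add { uplink_trunk { tagged } }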

     

    I believe there is a best practices document somewhere for Nexus + F5; I just need to dig it up.

     

    Thanks

     

  • Thanks for the diagram. I believe I am going to end up with one vPC per chassis and multihoming each blade to both N7Ks. Prior to this design, I had separate physical interfaces (and vPCs) for internal and external traffic. I am now contemplating consolidating these onto the same physical interfaces and vPCs, which will free up the additional 10Gb interfaces on the N7Ks that I need for multihoming each blade.

     

    Each connected interface on the chassis will be a member of the same trunk. Aggregate bandwidth available to the trunk will, of course, be determined by the number of physical interfaces across all blades in the chassis (2 blades with two 10Gb ports each = 40Gbps, etc.), roughly as sketched below.
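
    For illustration, assuming two 10Gb ports per blade, that trunk could be sketched in tmsh like this (names and interface numbers are placeholders), with the second command confirming member state and aggregate bandwidth once the vPC is up:

        # four members across two blades, roughly 40Gbps aggregate
        create net trunk uplink_trunk interfaces add { 1/1.1 1/1.2 2/1.1 2/1.2 } lacp enabled
        # verify member links and the trunk's operational bandwidth
        show net trunk uplink_trunk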

     

    This will provide switch redundancy even for LTM instances that span only one blade, since each blade will be connected to both N7Ks.

     

    • Steve_M__153836
      That sounds like a good plan. A nice part of the vCMP functionality is that you can trunk all the VLANs you want over one vPC and then decide which LTM instance gets which VLAN. This DevCentral thread (https://devcentral.f5.com/s/feed/0D51T00006i7MMdSAM) has a very useful piece of info at the end regarding what you name your vPC and how it is configured on the F5 (the names must match).
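
      For illustration, a minimal tmsh sketch (run on the vCMP host; guest and VLAN names are placeholders) of handing specific VLANs from that shared trunk to a particular guest:

          # give this LTM instance only the VLANs it should see
          modify vcmp guest guest1 vlans add { external internal }
          # confirm which VLANs the guest was assigned
          list vcmp guest guest1 vlans
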
    • Josh_41258
      I have never had any issues with my names and/or IDs. I simply configure a trunk on the BIG-IP and add interfaces to it. I'll have to give that thread a read.