Forum Discussion

Josh_41258
Nimbostratus
Sep 18, 2013

Installation of new blade in VIPRION/vCMP System

Let's say I have a VIPRION chassis running vCMP with 2 blades installed. The vCMP guests on this host are set to "Use all slots."

 

As I understand it, if I install a new blade into the chassis, then after a number of steps the blade will be joined to the cluster and will start processing traffic immediately. What occurs if the newly installed blade doesn't have any network interfaces cabled? Will traffic be blackholed? What is the best way to handle installing a new blade into a host whose vCMP guests are set to "Use all slots"? Can the new blade be isolated until it is ready?

 

Thanks

 

9 Replies

  • I would provision the network ports on the switches before attaching the blade to the chassis. After that, connect the SFPs and cables right after plugging the new blade into the chassis.

     

    Don't forget that it must boot, and you should bring its TMOS version level with that of the other two blades.

     

    Management IPs should also be in the same network space.

     

    After all this, vCMP will start syncing the guest virtual disks and then power up its share of the all-slots guests.
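
    A quick way to sanity-check that the new blade has leveled and joined is from tmsh on the primary blade (a rough sketch; clsh is the VIPRION cluster shell, if it's available on your version):

        # cluster membership and availability per slot
        tmsh show sys cluster
        # TMOS version on every blade at once
        clsh tmsh show sys version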

     

  • I went through this for the first time recently. The clustering software will take care of synchronizing the software revision of the new blade to match the master, first by rsyncing the contents of /shared/images to the new blade, and then issuing the install. The new blade will likely arrive with version 10.x software, so the software install will take a few reboots. You can track the progress via the AOM or the logs.
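
    If it helps, these are the usual places to watch the automated install from the primary blade (a sketch; adjust for your TMOS version):

        # per-slot / per-volume install status
        tmsh show sys software status
        # live-install log while the new blade is being imaged
        tail -f /var/log/liveinstall.log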

     

    You can successfully install the new blade without its own network interfaces; it should send traffic over the backplane. Sync traffic will also use the backplane. In my opinion, it is best to cable all your blades the same. I typically run 2 bonded ports on each device and then bond across the blades (i.e., 2x 10Gb + 2x 10Gb). You obviously need to verify your switch setup can handle this; we use Nexus 5Ks, which are a perfect fit for VIPRION.
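
    For reference, "bonding across the blades" is just a trunk whose member interfaces come from different slots, roughly like this (interface numbers are placeholders; double-check the trunk/LACP syntax on your version):

        tmsh create net trunk INTERNAL interfaces add { 1/1.1 2/1.1 } lacp enabled
        tmsh create net trunk EXTERNAL interfaces add { 1/1.2 2/1.2 } lacp enabled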

     

    You should configure cluster IPs on the master in addition to the management IPs, and the new blade will assume its designated IP. I specify a block of 5 IPs for each chassis for this purpose, regardless of how many blades are installed at the time. Configuring the cluster addresses is covered in the F5 documentation, and you should have them whether you're running vCMP or not.
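
    The cluster address and the per-slot member addresses can be set from tmsh, roughly as below (addresses are placeholders and the member syntax may differ slightly between versions):

        # floating cluster management address for the chassis
        tmsh modify sys cluster default address 192.0.2.10/24
        # per-slot cluster member addresses
        tmsh modify sys cluster default members { 1 { address 192.0.2.11 } 2 { address 192.0.2.12 } }
        tmsh list sys cluster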

     

    vCMP (to my knowledge) will only automatically sync an all-slots guest to the other blade; it will not sync single-slot guests unless you provision the guest on that blade.
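
    If you do want a single-slot guest to take advantage of the new blade, you have to widen its slot assignment yourself, something like this (guest name and slot numbers are placeholders; property names vary a bit between TMOS versions):

        # let the guest run on (or fail over to) the new slot as well
        tmsh modify vcmp guest guest1 allowed-slots add { 3 }
        # and/or give it an extra slot to run on
        tmsh modify vcmp guest guest1 slots 2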

     

  • Hope you don't mind, Josh, but it is still not clear to me whether it is useful to insert an extra blade without any interfaces connected. Will the CPUs be used, for example, to handle traffic received via the other blades and sent over the backplane?

     

  • I plan on having the switch interfaces configured and ready to be plugged in. They just won't be plugged in IMMEDIATELY as the blade is inserted, perhaps 30-60 seconds later. I just want to make sure the blade won't try to start processing traffic before I can get the interfaces cabled.

     

    @Josh - thanks for the detailed response! Ours are cabled directly to Nexus 7Ks. I typically have 4x 10Gb for each blade (2x 10Gb for the internal side, 2x 10Gb for the external). Each blade also has a switchport on the "sync" or "ha" VLAN that is used for sync and network failover traffic. I don't typically interconnect individual blades together.
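
    On the vCMP host side, that "ha" VLAN is just another host VLAN published to the guests, roughly (names, interfaces and tag are placeholders; verify the exact syntax for your version):

        tmsh create net vlan HA interfaces add { 1/1.8 { untagged } 2/1.8 { untagged } } tag 4093
        tmsh modify vcmp guest guest1 vlans add { HA }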

     

    Thanks,

     

    Josh

     

    • JoshBecigneul (MVP)
      That sounds solid. Are you splitting trunks on different line cards with VPC? Blades can connect to other blades through the chassis. To my knowledge, you do not need to cable every blade to the network in order to gain their compute power, but you end up with SPOFs in certain failure scenarios. Also, you can set up trunks across slots, so you can have trunk INTERNAL running on 1/1.1 and 2/1.1 and trunk EXTERNAL on 1/1.2 and 2/1.2, and so on. This setup works great for us, allowing us to "yield" blades for downtime. Be very careful to always yield the blade during downtime, as you can otherwise end up with a traffic interruption. I found that when removing a slot 2 blade without any guests, it was still handling traffic for guests on the other blade, likely due to how LACP had organized my links.
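
      For the "yield" step, disabling the slot in the cluster before pulling the blade is the part that matters, roughly (slot number is an example; confirm the exact command for your TMOS version):

          # drain/disable slot 2 before removing the blade
          tmsh modify sys cluster default members { 2 { disabled } }
          # re-enable it once the blade is back and healthy
          tmsh modify sys cluster default members { 2 { enabled } }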
    • JoshBecigneul (MVP)
      Also, it will likely take 15-30 minutes for the new blade to be patched, rebooted, firmware-updated, and joined to the cluster before it is ready to process traffic. Once it's fully operational, you can configure its network interfaces. Up until the network config, this happens more or less automatically.
    • Josh_41258 (Nimbostratus)
      Yes, we do use VPC. So, if we are talking about a single blade:
      1.1 -> internal-trunk -> NEXUS-7K-01 -> VPC 100
      1.2 -> internal-trunk -> NEXUS-7K-02 -> VPC 100
      1.3 -> external-trunk -> NEXUS-7K-01 -> VPC 200
      1.4 -> external-trunk -> NEXUS-7K-02 -> VPC 200
      1.8 -> ha-trunk -> private/non-routed VLAN
      I didn't know blades could connect to other blades through the chassis; I thought every blade had to be cabled individually. I will continue to do it this way anyway, as it provides higher throughput and more redundancy, I suppose. Um, yeah! I have also noticed that when I remove a blade with no active guests, I get a traffic interruption.
  • A follow-up question on the same topic: is it required to connect the management ports on all blades, even if we are running the vCMP guests in bridged mode?