I'm new to running virtual F5 instances on ESX hosts, so please bear with me. I need to get a number of virtual F5s running 11.6 to deploy their configuration onto a new VIPRION 2400. I've read about the .UCS file that you would load onto the physical device to do this, but why wouldn't it work to put the virtual instance into the same device trust domain and sync it that way? I've read that it doesn't work, but has anyone actually tried it and confirmed that? Does anyone know of a good guide for the virtual-to-physical migration besides the one I read on here? Either way, I'd like to know the approach so I can test it out before moving production systems. Thanks!
VE-to-hardware platform migrations are possible with HA sync on version 11.5 and up (we are on 11.6).
You don't want to use a UCS as that has no merge options; it completely replaces the config on the device. It's also designed to be used more as a backup for the device on which it was created (in the event of a problem or change rollback), and is tied to it in large part. A device group is not the way to go either.
You probably want to use an SCF to replicate configuration data from your BIG-IP VE systems to the VIPRION 2400 without completely replacing the existing VIPRION configuration data (think VLANs, self IPs, etc.). You can take an SCF backup of the running configuration on each VE, edit it to eliminate the configuration data you don't want on the 2400, then load/merge it onto the 2400.
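A minimal sketch of the save-and-edit step, with hypothetical file names. On the VE you would save the SCF with tmsh from the bash shell; the awk filter below is just one crude way to strip network stanzas and assumes each stanza's closing brace sits alone at column 1, so always review the edited file by hand:

```shell
# Hypothetical sketch: on each VE you would first save the running config
# to an SCF from the bash shell, e.g.:
#   tmsh save sys config file ve1-backup.scf
# (The tmsh command is commented out here; a sample SCF stands in for it.)
cat > ve1-backup.scf <<'EOF'
net vlan internal {
    tag 4094
}
net self 10.0.0.5 {
    address 10.0.0.5/24
}
ltm virtual vs_app {
    destination 192.0.2.10:443
}
EOF

# Crude filter: drop top-level "net vlan" / "net self" stanzas. Real SCFs
# can nest braces, which this one-liner does not handle -- review by hand.
awk '/^net (vlan|self) /{skip=1} skip{if(/^}/)skip=0; next} {print}' \
    ve1-backup.scf > ve1-app-only.scf
cat ve1-app-only.scf
```

The edited ve1-app-only.scf keeps only the application delivery objects (here, the virtual server) and is what you would copy to the 2400.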
Check out these articles for more info on replicating configuration data from one BIG-IP system to another:
SOL13408: Overview of single configuration files (11.x - 12.x) - https://support.f5.com/kb/en-us/solutions/public/13000/400/sol13408.html
Thank you for the answer. One question: will this work for taking 8 VE instances and putting them on the VIPRION as vCMP guests? The article didn't clarify the impact if other vCMP guests are already running on the physical F5.
You could still use an SCF from each test VE to load elements of the config onto its production vCMP guest counterpart. Obviously, you need to create the new vCMP guests before you can load configuration data into each from its test counterpart. Note that network configuration elements on a vCMP guest are quite different from those on a VE running on an ESX server, so I would whittle each SCF down to just the application delivery elements needed on the production guest before attempting to merge it in. It's certainly not foolproof, as editing errors can occur; that's why you should verify the config data before you load it, using the verify option on the tmsh load sys config file... command.
As for impact on other existing guests, I'll use one of my colleague's favorite phrases: "It depends." Certainly if you have to reallocate resources from existing guests to make room for the new guests, there would be an impact. I recommend contacting F5 Technical Support on something like this.
...and don't forget to put the magic keyword "merge" into the tmsh load sys config command, because omitting this will result in an implicit "replace-all-with".
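For example (file path hypothetical), the verify and merge options mentioned above combine like this on the target vCMP guest:

```shell
# Dry run on the vCMP guest: parse the SCF and report errors without
# touching the running configuration
tmsh load sys config merge file /var/local/scf/ve1-app-only.scf verify

# Real merge: overlays the SCF objects onto the existing configuration
tmsh load sys config merge file /var/local/scf/ve1-app-only.scf

# Omitting "merge" would instead REPLACE the entire running configuration:
# tmsh load sys config file /var/local/scf/ve1-app-only.scf
```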
So basically, what is suggested here boils down to these steps:
1. Create the new vCMP guests on the VIPRION 2400.
2. On each VE, save the running configuration to an SCF (tmsh save sys config file ...).
3. Edit each SCF down to just the application delivery elements needed on its production guest, removing network elements such as VLANs and self IPs.
4. Copy each SCF to its vCMP guest counterpart and verify it there (tmsh load sys config merge file ... verify).
5. Load it for real, remembering the merge keyword (tmsh load sys config merge file ...).
Thank you both for your help! I will follow the steps tatmotiv provided along with the advice from crodriguez regarding the SCF files. Of course this will be done on a number of test F5 builds with a backup F5 Support case opened to ensure things go as 'smoothly' as possible.