Create F5 BIG-IP Next Instance on Proxmox Virtual Environment

If you are looking to deploy an F5 BIG-IP Next instance on Proxmox Virtual Environment (henceforth referred to as Proxmox for the sake of brevity), perhaps in your home lab, here's how:

First, download the BIG-IP Next OVA file from MyF5 Downloads.

Copy the OVA file to your Proxmox host. I am using SCP in the example below:

local $ scp BIG-IP-Next-20.0.1-2.139.10+0.0.136.ova root@proxmox:~/

On the Proxmox host, extract the contents of the OVA file:

proxmox $ cd ~/
proxmox $ tar -xvf BIG-IP-Next-20.0.1-2.139.10+0.0.136.ova
BIG-IP-Next-20.0.1-2.139.10+0.0.136.ovf
BIG-IP-Next-20.0.1-2.139.10+0.0.136.mf
BIG-IP-Next-20.0.1-2.139.10+0.0.136.cert
BIG-IP-Next-20.0.1-2.139.10+0.0.136-disk1.vmdk

Then, run the command below to create a virtual machine (VM) from the extracted OVF file, where <vm_id> is an unused VM ID on Proxmox.

# qm importovf <vm_id> BIG-IP-Next-20.0.1-2.139.10+0.0.136.ovf local-lvm
proxmox $ qm importovf 112 BIG-IP-Next-20.0.1-2.139.10+0.0.136.ovf local-lvm
Logical volume "vm-112-disk-0" created.  
transferred 0.0 B of 80.0 GiB (0.00%)
transferred 819.2 MiB of 80.0 GiB (1.00%)
transferred 1.6 GiB of 80.0 GiB (2.00%)
   <output truncated>
transferred 80.0 GiB of 80.0 GiB (100.00%)
transferred 80.0 GiB of 80.0 GiB (100.00%)

You should now see a new VM created on the Proxmox GUI.

Before starting the VM, we need to attach a few hardware components to the VM:

  • a Network Device for the management interface
  • one or more additional Network Devices for the data plane interfaces (e.g. internal and external). Note that the data plane Network Devices must use the VirtIO model
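As a sketch, the Network Devices can also be attached from the Proxmox shell with qm set. The VM ID follows the earlier example, and the bridge names (vmbr0, vmbr1, vmbr2) are assumptions; adjust them for your environment:

```shell
# Management interface
qm set 112 --net0 virtio,bridge=vmbr0

# Data plane interfaces -- these must use the VirtIO model
qm set 112 --net1 virtio,bridge=vmbr1   # e.g. internal
qm set 112 --net2 virtio,bridge=vmbr2   # e.g. external
```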

Optionally, you can also configure CLI access with the following instructions.

Some of you may have noticed that the instructions for deploying the OVA image on VMware require CLI access via the admin/admin credentials in order to run the setup utility. These credentials are defined in the User Data within the OVF file, which is not read by the qm importovf command on Proxmox. To replicate this, we need to add a CloudInit Drive to the VM.
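If you prefer the CLI over the GUI for this step, the CloudInit Drive can be added with qm set. The VM ID and storage name follow the earlier example and are assumptions:

```shell
# Attach a CloudInit Drive to the VM on the local-lvm storage
qm set 112 --ide2 local-lvm:cloudinit
```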

Under the Cloud-Init section, set the user credentials (admin/<password>) and the IP configuration for net0 (management).

On first boot, Cloud-init will configure the admin user with the defined password.
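The same Cloud-Init settings can be applied from the shell; the IP address and gateway below are placeholders for illustration, not values from this deployment:

```shell
# Define the admin user and password read by cloud-init on first boot
qm set 112 --ciuser admin --cipassword '<password>'

# Static IP configuration for net0 (management); substitute your own values
qm set 112 --ipconfig0 ip=192.168.1.112/24,gw=192.168.1.1
```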

Note that the CloudInit Drive should be removed from the VM after the first boot; otherwise, I have noticed that CLI access no longer works on subsequent boots after the initial onboarding.
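A sketch of removing the drive from the shell, assuming it was attached as ide2 per the earlier step:

```shell
# Shut the VM down first if it is running, then detach the CloudInit Drive
qm set 112 --delete ide2
```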

Finally, start the VM. This will take a few minutes. If CLI access is available, open the console and run kubectl get pods until all pods are ready.
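The final steps above, sketched as commands (the VM ID again follows the earlier example):

```shell
# On the Proxmox host: start the VM
qm start 112

# Then, from the VM console once logged in as admin:
kubectl get pods
```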

The BIG-IP Next VM is now ready to be onboarded per the instructions found here.

Published Jan 09, 2024
Version 1.0
