Last week we covered the basics of network configuration for BIG-IP LTM VE on the VMware Workstation/Player hypervisors and the ESX/ESXi hypervisors. This week we’ll cover a couple more options at your disposal.
Option 1 – Add an Interface
We’ve already established that the LTM VE only has two interfaces. Technically, there are three, but one is reserved for the management interface, and as it doesn’t qualify for the wicked cool stuff, we’ll pretend it’s not there. The first step to adding another interface is to make sure your LTM VE is not powered up, nor in a suspended state. Once it’s down, you can edit the configuration settings.
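If you’re comfortable editing files by hand, the same change can be made directly in the virtual machine’s .vmx file instead of through the Workstation GUI. Treat this as a sketch from my lab rather than official syntax: the adapter index (ethernet3) and the vmnet number are assumptions that will vary with your setup.

```
# Hypothetical .vmx excerpt adding a fourth adapter (ethernet3) of the
# same E1000 type as the existing data interfaces, wired to a custom vmnet
ethernet3.present = "TRUE"
ethernet3.virtualDev = "e1000"
ethernet3.connectionType = "custom"
ethernet3.vnet = "vmnet3"
ethernet3.addressType = "generated"
```

Workstation reads the .vmx at power-on, so make the edit while the VM is fully powered off.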
Start your LTM VE image up, and your new interface will load! Super easy, right? Note that the example above will not actually work, as it shows a second additional interface being added (the process is the same, and I already had one defined). This is because there is a maximum of five PCI slots per virtual machine: the SCSI slot and the original three interfaces make four, and your one additional interface makes five. Trying to start the LTM VE with two additional interfaces configured leads to this error:
Delete the second additional interface and the virtual machine will fire up just fine. If you need more than three data interfaces (four interfaces total), proceed to option two below.
In ESX/ESXi, like in Workstation, you need to power off the LTM VE before changing settings. Here, we’ll edit the VM properties, adding a new network adapter.
The adapter type for the other data interfaces is E1000, so we’ll select that as well and assign the DMZ Network label, then click Next.
Click the Finish button on the next screen, then click OK in the virtual machine’s properties dialog. Now we’re good to power up!
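Once the VE is powered back up, it’s worth confirming the guest actually discovered the new NIC. Here’s a quick check from the BIG-IP command line; hedging a bit, as the exact commands depend on your LTM VE version (tmsh is available on v10.1 and later):

```
# list the data-plane interfaces BIG-IP discovered (1.1, 1.2, 1.3, ...)
tmsh show net interface

# from the bash prompt, the extra E1000 adapter should also appear on the PCI bus
lspci | grep -i ethernet
```

If the new interface doesn’t appear, double-check that the adapter type matches the existing data interfaces.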
Option 2 – Use VLAN Tagging
If you don’t want to add interfaces, you can consider using VLAN tagging on the LTM VE interfaces. This is documented for ESX/ESXi (more on that further down this post), but I had difficulty finding solid documentation on the possibilities for Workstation/Player. Thankfully, DevCentral MVP hwidjaja came to my rescue in the LTM VE forum earlier today with news that VMware Workstation (and I imagine Player, though I have not tested it) will pass VLAN tags as-is.
To test this, I spun up the Community Edition of Vyatta Open Networking’s virtual appliance and, after a quick perusal of the documentation, got a virtual switch up and running (see below). From the standpoint of Workstation, you just need to choose a shared vmnet. In my setup, I chose vmnet3.
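I didn’t keep the exact Vyatta configuration handy, but the working setup amounted to a handful of commands like the following. Treat this as a sketch: the interface name (eth0) and the VLAN and address assignments are from my lab, and you should verify the syntax against the Vyatta documentation for your release.

```
# Hypothetical Vyatta CLI sketch: carry two tagged VLANs on the
# interface attached to vmnet3, addressing the switch on each
set interfaces ethernet eth0 vif 51 address 10.10.51.1/24
set interfaces ethernet eth0 vif 52 address 10.10.52.1/24
commit
```

The vif subinterfaces are what generate the 802.1Q tags that Workstation passes through unmodified.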
We can now validate the connectivity between both systems. A simple ping should suffice:
LTM VE –> Vyatta Switch

[root@localhost:Active] config # ping 10.10.51.1
PING 10.10.51.1 (10.10.51.1) 56(84) bytes of data.
64 bytes from 10.10.51.1: icmp_seq=1 ttl=64 time=25.3 ms
64 bytes from 10.10.51.1: icmp_seq=2 ttl=64 time=11.3 ms
64 bytes from 10.10.51.1: icmp_seq=3 ttl=64 time=10.3 ms
--- 10.10.51.1 ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 2067ms
rtt min/avg/max/mdev = 10.339/15.679/25.371/6.865 ms
[root@localhost:Active] config # ping 10.10.52.1
PING 10.10.52.1 (10.10.52.1) 56(84) bytes of data.
64 bytes from 10.10.52.1: icmp_seq=1 ttl=64 time=22.4 ms
64 bytes from 10.10.52.1: icmp_seq=2 ttl=64 time=9.02 ms
64 bytes from 10.10.52.1: icmp_seq=3 ttl=64 time=10.2 ms
--- 10.10.52.1 ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 2080ms
rtt min/avg/max/mdev = 9.022/13.932/22.476/6.064 ms

Vyatta Switch –> LTM VE

vyatta:~# ping 10.10.51.5
PING 10.10.51.5 (10.10.51.5) 56(84) bytes of data.
64 bytes from 10.10.51.5: icmp_seq=1 ttl=255 time=2.48 ms
64 bytes from 10.10.51.5: icmp_seq=2 ttl=255 time=1.56 ms
64 bytes from 10.10.51.5: icmp_seq=3 ttl=255 time=0.991 ms
--- 10.10.51.5 ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 2007ms
rtt min/avg/max/mdev = 0.991/1.681/4.484/0.614 ms
vyatta:~# ping 10.10.52.5
PING 10.10.52.5 (10.10.52.5) 56(84) bytes of data.
64 bytes from 10.10.52.5: icmp_seq=1 ttl=255 time=0.537 ms
64 bytes from 10.10.52.5: icmp_seq=2 ttl=255 time=1.60 ms
64 bytes from 10.10.52.5: icmp_seq=3 ttl=255 time=1.72 ms
--- 10.10.52.5 ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 2009ms
rtt min/avg/max/mdev = 0.537/1.287/1.724/0.533 ms
In ESX/ESXi, virtual switches are available to handle your switching needs. The key is to create a virtual machine port group under one of your virtual switches with a VLAN tag ID of 4095, then assign that new port group to your LTM VE interface. Notice in the red shaded area that when you assign VLAN tag 4095, the tag ID is displayed as All:
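For reference, the same port group can also be created from the command line. This is an assumption on my part, based on the esxcli syntax in newer ESXi releases rather than the GUI steps shown here, and the port group and vSwitch names are made up for the example.

```
# Hypothetical esxcli equivalent (ESXi 5.x-era syntax) of the GUI steps:
# create a port group on vSwitch1, then trunk all VLANs to it (4095)
esxcli network vswitch standard portgroup add \
  --portgroup-name "LTM-Trunk" --vswitch-name vSwitch1
esxcli network vswitch standard portgroup set \
  --portgroup-name "LTM-Trunk" --vlan-id 4095
```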
In the LTM VE’s network configuration, you simply assign the new port group to the data interface of your choosing. In my case, I chose network adapter 2, or interface 1.1:
Now that the virtual infrastructure is configured, we can configure the LTM VE. My external infrastructure is already in place, but the setup looks like this:
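On the LTM VE side, the tagged VLANs and self IPs can be sketched in tmsh roughly as follows. The VLAN names, tag IDs, and self IP object names are assumptions chosen to line up with the addresses pinged below; adjust them (or use the equivalent bigpipe commands on older builds) to match your environment.

```
# Hypothetical tmsh sketch (v11-style syntax): two tagged VLANs on
# interface 1.1, each with its own self IP
create /net vlan external interfaces add { 1.1 { tagged } } tag 10
create /net vlan servers interfaces add { 1.1 { tagged } } tag 32
create /net self self_ext address 192.168.1.40/24 vlan external
create /net self self_srv address 10.10.32.40/24 vlan servers
```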
Again, we’ll use ping to validate the connectivity:
LTM VE –> External L3 Switch

[root@localhost:NO LICENSE] config # ping 192.168.1.1
PING 192.168.1.1 (192.168.1.1) 56(84) bytes of data.
64 bytes from 192.168.1.1: icmp_seq=1 ttl=64 time=27.0 ms
64 bytes from 192.168.1.1: icmp_seq=2 ttl=64 time=19.6 ms
64 bytes from 192.168.1.1: icmp_seq=3 ttl=64 time=19.8 ms
--- 192.168.1.1 ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 2022ms
rtt min/avg/max/mdev = 19.670/22.169/27.004/3.419 ms
[root@localhost:NO LICENSE] config # ping 10.10.32.1
PING 10.10.32.1 (10.10.32.1) 56(84) bytes of data.
64 bytes from 10.10.32.1: icmp_seq=1 ttl=64 time=38.9 ms
64 bytes from 10.10.32.1: icmp_seq=2 ttl=64 time=20.0 ms
64 bytes from 10.10.32.1: icmp_seq=3 ttl=64 time=10.4 ms
--- 10.10.32.1 ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 2010ms
rtt min/avg/max/mdev = 10.452/23.130/38.924/11.831 ms

External L3 Switch –> LTM VE

(FSM7328S) #ping 192.168.1.40
Send count=3, Receive count=3 from 192.168.1.40
(FSM7328S) #ping 10.10.32.40
Send count=3, Receive count=3 from 10.10.32.40
Adding an interface proves to be a little less cumbersome, but tagged VLANs may be necessary if you need more than one additional interface. Either way, now you’re armed with a few more tricks up your sleeve for getting the most out of your LTM VE trial.