



New install of LTM VE 10.2.2 - chmand error

local/localhost emerg logger: Re-starting chmand

Getting the above error message out of the gate when I first console into my VE; it repeats about every 2 seconds.  It doesn't seem to allow me to add a management IP.

I also get a few errors below when I run the "config" utility:

1. This operation is only allowed on a primary cluster member
2. Error publishing admin_ip = 1070710
3. Error publishing mgmt route = 1070712
4. Bigpipe unknown query error:
    This operation is only allowed on a primary cluster member

See attached for screenshots... 

Thanks for looking... Leonardo

7 Answer(s):

What hypervisor are you running it on? 10.2.3 and later support ESXi 5.x. (Workstation is unsupported, but Workstation 8 and later will also need 10.2.3 or later.)

Talked to my VMware guys and this one is running on a later version than our others so that may be the problem. I'll get a new version of LTM VE on there and let you know. Thanks!!

The new version of LTM VE did the trick... thanks qe!
This is helpful, learning from it. Thanks!

The new version (10.2.3) of LTM VE worked for me also.... Thx!

The main reason for failing VE implementations is a network adapter mismatch, imho. You need to stop (not suspend) the VE and modify the .vmx file. A .vmx file for TMOS v10 should have the following entries to emulate the expected hardware:

ethernet1.virtualDev = "vmxnet3"
ethernet2.virtualDev = "vmxnet3"

The ethernet0.virtualDev entry can usually be left untouched, though in some cases it has to be set to "e1000". ethernet0 is typically mapped to eth0, your VE management interface. All remaining adapters are mapped to interfaces 1.1 and up.
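Putting those entries together, a hypothetical .vmx fragment for a TMOS v10 VE might look like the sketch below (the networkName labels are placeholders for illustration, not values from any particular deployment):

```
# Management adapter - mapped to eth0; set to "e1000" only if the default fails
ethernet0.virtualDev = "e1000"
ethernet0.networkName = "MgmtNetwork"    # placeholder port group name

# TMM adapters - mapped to interfaces 1.1 and 1.2; must be vmxnet3 for TMOS v10
ethernet1.virtualDev = "vmxnet3"
ethernet1.networkName = "InternalVLAN"   # placeholder port group name
ethernet2.virtualDev = "vmxnet3"
ethernet2.networkName = "ExternalVLAN"   # placeholder port group name
```

Remember to edit the file only while the VM is powered off, since the hypervisor rewrites the .vmx on shutdown and can discard changes made to a running VM.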

I am trying to install LTM VE 10.2.1 on ESXi 6. I am getting the error "emerg logger : re-starting chmand", and mcpd will not start up, reporting that it is waiting for chmand to release the start semaphore.

Can someone help me with this please?
