Forum Discussion

Zdenda
Mar 14, 2014

Config lost when upgrading from 10.2.0 to 11.3.0

Hello, I upgraded our LB from the old version 10.2.0 to 11.3.0, but I ended up with a message something like "configuration not properly loaded..".

 

I checked the forum before the upgrade, and the only warning I found was about matchclass and global variables; I don't use either of those in my config. Anyway, I lost the config. When I run "/usr/libexec/bigpipe daol" I get "0107146e:3: Self-device unicast source address cannot reference the Self IP (/partition1/192.168.5.12%1); it is not in the /Common folder".

 

Do you know what I should do to get rid of this error and let the LB properly transform the config from v10 to v11? Thanks, Zdenek

 

12 Replies

  • Hi mate!

     

    Don't worry, it's still there, but in the /config/bigpipe folder.

     

    Usually when upgrading from v10 to v11 something breaks.

     

    Try running this command to attempt to load the config again and to see where it fails: /usr/libexec/bigpipe daol
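
    A minimal sketch of how I usually capture the loader output for review (the log path is just a habit of mine, not required):

        # run the v10-to-v11 roll-forward loader and keep a copy of the output
        /usr/libexec/bigpipe daol 2>&1 | tee /var/tmp/daol.log

        # the original v10 config files it rolls forward live here
        ls -l /config/bigpipe/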

     

    Let us know if you need further assistance. :)

     

    Good luck!

     

    /Patrik

     

  • Have you considered removing the route domain from the /config/bigpipe files, trying to reload the config again, and then re-creating it in v11?
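
    Something along these lines (a rough sketch; back up the file first, and the %1 here is the route domain ID from your error message):

        # keep a copy before editing anything
        cp /config/bigpipe/bigip_base.conf /var/tmp/bigip_base.conf.bak

        # locate the route domain references before editing them out
        grep -n '%1' /config/bigpipe/bigip_base.conf

        # then retry the roll-forward
        /usr/libexec/bigpipe daol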

     

    /Patrik

     

  • Hm, I was considering uploading the whole configuration manually with load sys config, but now I am not sure whether that would be a good idea, since the syntax is a bit different, I think. Anyway, thanks for the tip; I will try to remove the route domain and then try again.

     

  • Yeah, the v10 and v11 config syntaxes are not compatible. I have upgraded 6 pairs from v10 this month, and each one of them had problems, many of them different ones.
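
    To illustrate, roughly the same pool in the two formats (a simplified, hypothetical example; the names are made up and details vary by build):

        # v10 bigpipe syntax (/config/bigip.conf)
        pool web_pool {
           monitor all http
           members 10.0.0.1:80 {}
        }

        # v11 tmsh syntax
        ltm pool web_pool {
            members {
                10.0.0.1:80 { address 10.0.0.1 }
            }
            monitor http
        }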

     

    From this experience I learned that many issues can be solved by removing the failing config and adding it again.

     

    After the config has been reloaded you might encounter sync problems.

     

    Here are some tips to save time if you encounter the same issues:

     

    First, try resetting the device trust, re-adding the peers, and adding them to the device group. If it still does not work, reload the config from file: tmsh load sys config partitions all. On two occasions the config loaded but had mysterious errors about missing data groups in the ltm log; then I had to force a reload of mcpd.
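
    Roughly the commands involved, as a hedged sketch (resetting trust drops all peers, and the mcpd forceload is F5's documented procedure, so plan for the reboot):

        # reset device trust on the unit (removes all peers from the trust domain)
        tmsh delete cm trust-domain all

        # reload the full configuration across all partitions
        tmsh load sys config partitions all

        # force mcpd to rebuild its database from the config files on next boot
        touch /service/mcpd/forceload
        reboot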

     

    /Patrik

     

  • Thanks a lot for your tips. BTW, we also sometimes had problems with config sync; in our case it was enough to just update the device group (only clicking the Update button) on the active and then on the standby unit, and it started to work properly ;-)

     

  • IMHO: There really is no substitute for mocking up an upgrade before you do it in production. I'd rather spend hours during the day figuring out what broke than hours in the middle of the night during a change window. If your configuration is only LTM, upgrades are usually pretty easy. It's when you have other modules such as GTM, ASM, or APM that the upgrade process can break.

     

    • Enthnal_20580
      If you haven't done a 10 to 11 upgrade across several devices you might say that, but after the sixteen pairs of upgrades I've done involving LTM, GTM, ASM, and APM, I can say that each one had its own unique problem, along with a similar set of issues that did not show up in a mock lab.
  • Hi, I was able to find the root cause of the error "0107146e:3: Self-device unicast source address cannot reference the Self IP (/partition1/192.168.5.12%1); it is not in the /Common folder".

     

    The reason was in the bigip_base.conf file in /config/bigpipe. I had to remove parts of the "sys device" configuration. The LB tried to create the device object that was going to be used in the failover group, but it was using the IP address of an interface before the IP was actually assigned to that interface -> error. Fixed by removing the content of the "unicast address" part.
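
    For reference, the part I emptied looked something like this (illustrative only, based on the address from my error; exact attribute names vary by version):

        unicast-address {
            { effective-ip 192.168.5.12%1 effective-port 1026 ip 192.168.5.12%1 }
        }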

     

    Anyway, I've got a new problem after the upgrade: all my pools are "blue", i.e. availability unknown, even though I am in a lab and they should be red, since all monitors are failing. Even the nodes are down, but not the pools and VIPs; they are blue. Do any of you know what could be wrong?

     

    Thanks. Zdenek

     

  • Does any of you know why I see all pools with monitoring status unknown (blue)? They should be down, since I am using a lab device and all interfaces are down except management. Is it something related to version 11.3.0? The nodes are down, but not the pools.

     

  • Interesting, it seems it was enough to save the configuration and then restore a backup from it. Now all monitoring statuses are OK.
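
    For anyone hitting the same thing, the CLI equivalent of that save/restore cycle would be something like this (the archive name is hypothetical):

        # save the running config, archive it, then restore from the archive
        tmsh save sys config
        tmsh save sys ucs /var/tmp/after-upgrade.ucs
        tmsh load sys ucs /var/tmp/after-upgrade.ucs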