
Issues upgrading BIG-IP LTM 11.1 VE to 11.2 VE

Hi there,

I'm hoping someone can help me, and that I've posted in the correct forum topic; I couldn't see anywhere else suitable.

I'm currently trying to upgrade BIG-IP LTM VE from 11.1 to 11.2 in an active/standby HA pair (lb1 and lb2 respectively), but I'm hitting "disk limit exceeded" errors when booting into the new image.

On the standby unit I uploaded the ISO image downloaded from download.f5.com ("BIG-IP v11.x/Virtual Edition -> 11.2.0 -> big-ip11.2.0.2446.0.iso") and installed it to the blank/empty boot location HD1.3. I did the same for Hotfix 11.2 2451. I then made HD1.3 the active boot location and restarted the standby F5.

Upon reboot, when I logged into the GUI I had no config loaded, so I logged in on the console and checked the LTM log. I noticed errors relating to disk space: the unit was not able to load the config or modules due to the disk limit being exceeded. (I have included the relevant logs and output below.)

So I ran df to check disk space, but this looked fine and I could not correlate it to the figures reported by mprov getDiskMiB_* in the LTM log. I also ran df -i to check inode utilisation, which was fine. At this point I switched the boot location back to HD1.2, which is 11.1-2027, and that booted fine with no disk space issues.

I also tried disabling the AVR module (no effect), deleting the old 11.1 ISO image that was sitting there out of desperation, and re-installing 11.2 into location HD1.3 without the hotfix. Rebooting into just this image, bigip-11.2.0-2446, seemed to work: it booted fine and loaded the config.

I then tried to repeat the process on lb1, the active unit, i.e. installing only 11.2.0-2446 without the hotfix. Reboot: same error about disk space. I disabled AVR, rebooted: same error. I removed the 11.1 ISO image, rebooted: same error. lb1 should now be in the same state as lb2 (only base 11.2 installed, no AVR, and the 11.1 ISO image deleted, not that I think having that image sitting there matters), yet it still has disk-space issues loading modules/config.

Failover worked fine when rebooting the active lb1 from 11.1 to 11.2; the standby lb2 on 11.2 VE took over traffic redirection. I made sure the config was synced before attempting any upgrades, and made sure I did not perform any config sync during the upgrade process. (I noticed 11.2 becomes disconnected for config-sync purposes anyway.) There is plenty of free space on the local disk: approx 31 GB remaining out of 100 on both lb1 and lb2.
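In case it helps anyone reproduce this, the install steps above can also be driven from the CLI instead of the GUI. A rough sketch, using the tmsh syntax as I understand it for 11.x (the ISO must already sit in /shared/images; double-check against your build):

```shell
# Install the uploaded image into the empty boot location HD1.3
tmsh install sys software image big-ip11.2.0.2446.0.iso volume HD1.3

# Poll until the HD1.3 install reports complete
tmsh show sys software status

# Mark HD1.3 active for next boot and reboot into it
tmsh reboot volume HD1.3
```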

Has anyone else seen this when upgrading from 11.1 to 11.2?
Where do the figures in getDiskMiB_*, i.e. 11460 and 15360, come from?
When booting 11.1 I noticed both getDiskMiB_Available and getDiskMiB_Potential reported 27648.
Is someone able to explain how F5 does its LVM? (I know how LVM works and how to create PVs, VGs and LVs, but I'd like to understand F5's logic for allocating extents etc.)
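For anyone who wants to poke at the layout themselves, the standard LVM tools work from the BIG-IP bash prompt as root (nothing F5-specific here; the vg-db-sda group name is inferred from the device-mapper names in my df output, so adjust for your system):

```shell
# List volume groups with total and free space
vgs -o vg_name,vg_size,vg_free

# List logical volumes and their sizes in MB
lvs -o lv_name,vg_name,lv_size --units m

# Extent-level detail for the main volume group
vgdisplay -v vg-db-sda
```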

I wouldn't have thought I'd need to delete ISO images, since they should live in /shared?
I assume that getDiskMiB_* is reporting in MB?
I did notice that the difference between getDiskMiB_Available and getDiskMiB_Potential on 11.2 happens to be the same size as the avrdata application volume, i.e. 15360 - 11460 = 3900. Is this just a coincidence?

11.2 apparently requires 16188 MB to provision but only has a maximum of 15360 MB to work with, leaving a shortfall of 828 MB.
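Spelling out the arithmetic in plain shell, just to show where my numbers come from:

```shell
# Gap between "potential" and "available" disk in the 11.2 boot log,
# which happens to match the avrdata application volume size
echo $((15360 - 11460))   # prints 3900

# Shortfall mprov reports even after freeing the "potential" space
echo $((16188 - 15360))   # prints 828
```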

When the lb2 standby unit was able to boot 11.2 successfully, I checked the LTM log for the getDiskMiB_Available and getDiskMiB_Potential values: these showed even less than before! But the difference still equals 3900. It also didn't mention anything about "changing disk space":
Jul 5 15:19:58 localhost info mprov:6959:: getDiskMiB_Available: 7560
Jul 5 15:19:58 localhost info mprov:6959:: getDiskMiB_Potential: 11460


11.2 not booting with disk limit errors
--------------------------------------------------------------------------------------
Jul 4 17:39:23 localhost info mprov:6997:: getDiskMiB_Available: 11460
Jul 4 17:39:23 localhost info mprov:6997:: getDiskMiB_Potential: 15360
Jul 4 17:39:23 localhost info mprov:6997:: There are NO extra disks.
Jul 4 17:39:23 localhost warning mprov:6997:: Disk limit exceeded. 16188 MB are required to provision these modules, but only 11460 MB are available.- will mark potential space as free and retry..
Jul 4 17:39:23 localhost info mprov:6997:: Changing available disk space from 11460 to 15360
Jul 4 17:39:23 localhost err mprov:6997:: Disk limit exceeded. 16188 MB are required to provision these modules, but only 15360 MB are available.
Jul 4 17:39:23 localhost info mprov:6997:: Provisioning (validation) failed.
Jul 4 17:39:23 localhost err mcpd[6417]: 01071008:3: Provisioning failed with error 1 - 'Disk limit exceeded. 16188 MB are required to provision these modules, but only 15360 MB are available.' .
Jul 4 17:39:23 localhost err tmsh[6982]: 01420006:3: Loading configuration process failed.
Jul 4 17:39:23 localhost err load_config_files: "/usr/bin/tmsh -n -g load sys config partitions all base" - failed. -- Loading system configuration... /defaults/app_template_base.conf /defaults/config_base.conf /config/low_profile_base.conf /defaults/wam_base.conf Loading configuration... /config/bigip_base.conf /config/bigip_user.conf /config/partitions/Medway_TestDev/bigip_user.conf 01071008:3: Provisioning failed with error 1 - 'Disk limit exceeded. 16188 MB are required to provision these modules, but only 15360 MB are available.' . Unexpected Error: Loading configuration process failed.
Jul 4 17:39:24 localhost err mcpd[6417]: 01070422:3: Base configuration load failed.

--------------------------------------------------------------------
[@localhost:Offline:Standalone] log # df -m
Filesystem 1M-blocks Used Available Use% Mounted on
/dev/mapper/vg--db--sda-set.3.root
248 181 55 77% /
/dev/mapper/vg--db--sda-set.3._config
3024 72 2799 3% /config
/dev/mapper/vg--db--sda-set.3._usr
1686 1248 353 78% /usr
/dev/mapper/vg--db--sda-set.3._var
3024 260 2611 10% /var
/dev/mapper/vg--db--sda-dat.share.1
20159 2386 16750 13% /shared
/dev/mapper/vg--db--sda-dat.log.1
7056 195 6503 3% /var/log
none 1985 1 1984 1% /dev/shm
none 1985 6 1979 1% /var/tmstat
none 1985 2 1984 1% /var/run
prompt 4 1 4 1% /var/prompt

--------------------------------------------------------------------
@localhost:Offline:Standalone] log # df -i
Filesystem Inodes IUsed IFree IUse% Mounted on
/dev/mapper/vg--db--sda-set.3.root
65536 2958 62578 5% /
/dev/mapper/vg--db--sda-set.3._config
393216 276 392940 1% /config
/dev/mapper/vg--db--sda-set.3._usr
219520 30660 188860 14% /usr
/dev/mapper/vg--db--sda-set.3._var
393216 5880 387336 2% /var
/dev/mapper/vg--db--sda-dat.share.1
2621440 216 2621224 1% /shared
/dev/mapper/vg--db--sda-dat.log.1
917504 128 917376 1% /var/log
none 508018 52 507966 1% /dev/shm
none 508018 17 508001 1% /var/tmstat
none 508018 155 507863 1% /var/run
prompt 508018 8 508010 1% /var/prompt

Answers to this Question

USER ACCEPTED ANSWER & F5 ACCEPTED ANSWER

Just thought I would post the solution in case anyone else has this issue as well.

It seems you can't have three (or more) installed boot locations with the 100 GB disk on the VE (the standard size for the OVA deployment). Even though I had 30 GB of free space, it seems the F5 reserves a percentage of space for internal use.

To resolve this I had to delete the HD1.3 partition, or "Contained Software Volume", from System ›› Disk Management ›› HD1.

USER ACCEPTED ANSWER & F5 ACCEPTED ANSWER
Your findings are correct. Sorry, I must have missed your message in the 4th of July week. Regards, Simon
USER ACCEPTED ANSWER & F5 ACCEPTED ANSWER
FWIW, when the system won't load and all you have is the CLI, the command to remove an unwanted volume is:



[root@localhost:/S1-red-P:Offline:Standalone] config # tmsh load sys config
Loading system configuration...
  /defaults/app_template_base.conf
  /defaults/config_base.conf
  /config/low_profile_base.conf
  /defaults/wam_base.conf
  /usr/share/monitors/base_monitors.conf
  /config/daemon.conf
  /config/profile_base.conf
  /defaults/fullarmor_gpo_base.conf
  /defaults/classification_base.conf
Loading configuration...
  /config/bigip_base.conf
  /config/bigip_user.conf
  /config/bigip.conf
01071008:3: Provisioning failed with error 1 - 'Disk limit exceeded. 16188 MB are required to provision these modules, but only 15372 MB are available.'
.
Unexpected Error: Loading configuration process failed.
[root@localhost:/S1-red-P:Offline:Standalone] config # tmsh
root@(localhost)(cfg-sync Standalone)(/S1-red-P:Offline)(/Common)(tmos)# delete sys software volume HD1.1


Then a quick reload...


[root@localhost:/S1-red-P:Offline:Standalone] config # tmsh load sys config
Loading system configuration...
  /defaults/app_template_base.conf
  /defaults/config_base.conf
  /config/low_profile_base.conf
  /defaults/wam_base.conf
  /usr/share/monitors/base_monitors.conf
  /config/daemon.conf
  /config/profile_base.conf
  /defaults/fullarmor_gpo_base.conf
  /defaults/classification_base.conf
Loading configuration...
  /config/bigip_base.conf
  /config/bigip_user.conf
  /config/bigip.conf
[root@ddc-7-vpr1-nonprod:/S1-red-P:Offline:Disconnected] config #

USER ACCEPTED ANSWER & F5 ACCEPTED ANSWER

Had the same issue with vCMP Guests, deleted the extra partition and all worked. Thanx

USER ACCEPTED ANSWER & F5 ACCEPTED ANSWER

Steven and Hamish - You guys are awesome. I really needed this information today. Thank you both so much for posting the solution. It worked for me in my upgrade from 11.6.0 HF6 to 12.1.1 HF1.
