Because of some changes in clustering, the introduction of LVM, and some improvements in statistics, the System iControl module has seen a great number of new APIs implemented. While some of these will only matter to a few of you, some are more globally applicable, and if you need one of them, you’ll be glad they’re here.

Since Joe has developed the (excellent, as usual) Wiki Pages for these APIs, I won’t recreate that document, merely offer you some tips on usage.

System::ConfigSync interfaces

ConfigSync gets even more configuration and file interfaces to help you manage your system. These are pretty straightforward, so we’ll dive right in.



This API installs the entire contents of an archived configuration file – all of the configurations that were archived off when the file was created are installed on the BIG-IP you are connected to. The only parameter is a filename with path to the file.
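As a quick sketch of how this might look from code (the `client` object here is a stand-in for a connected iControl client, such as a `bigsuds.BIGIP` instance in Python; the wrapper function is our own, not part of the API):

```python
# Hypothetical helper around System::ConfigSync::install_all_configuration.
# `client` is assumed to expose the iControl interface tree as attributes,
# the way the bigsuds Python library does; nothing here is verified
# against a live BIG-IP.

def restore_archive(client, archive_path):
    """Install every configuration object archived in archive_path.

    archive_path is the filename (with path) of a previously saved
    configuration archive on the BIG-IP.
    """
    client.System.ConfigSync.install_all_configuration(filename=archive_path)
    return archive_path
```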



This routine is the same as install_all_configuration, but it takes both a filename and the passphrase you used when the encrypted configuration was initially saved.



This is a file move routine that moves things around on the BIG-IP file system. Note that this is a rename, so you can’t move things to a different file system with it. The API takes the current filename and path and the new filename and path.


System::Failover interfaces


This routine takes no parameters and sets the device to the “forced offline” state.



Reverses the effects of set_offline. Takes no parameters and clears the “forced offline” state.
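Since these two calls naturally bracket maintenance work, here is a minimal sketch of pairing them up. The context-manager wrapper is our own convenience, not part of iControl, and `client` again stands in for a connected iControl client:

```python
from contextlib import contextmanager

# Hypothetical wrapper around System::Failover. set_offline forces the
# unit offline; release_offline clears that state again, even when the
# work inside the with-block raises.

@contextmanager
def forced_offline(client):
    """Force the unit offline for maintenance, then clear the state."""
    client.System.Failover.set_offline()
    try:
        yield client
    finally:
        client.System.Failover.release_offline()
```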


System::Inet interfaces

System::Inet only got one new routine, and while it’s not much to talk about, I’m thrilled that it’s available to us via iControl now.



Does just that. Same as the *NIX routine, it takes a valid host name and applies it to the BIG-IP system. So now for fun you can go change the host name of your BIG-IP from code, then flip it back before the network guys can figure it out. Okay, maybe that’s a bad idea, but there are valid uses for this one.
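A small sketch of calling it, with a local sanity check first. The label check is our own rough RFC 952/1123-style validation, not something the API performs for you, and `client` stands in for a connected iControl client:

```python
import re

# One label: 1-63 chars, letters/digits/hyphens, no leading or
# trailing hyphen. This validation is our own precaution; the real
# System::Inet::set_hostname call just takes the string.
_LABEL = re.compile(r"^(?!-)[A-Za-z0-9-]{1,63}(?<!-)$")

def set_bigip_hostname(client, hostname):
    """Apply a new host name after a basic local sanity check."""
    labels = hostname.split(".")
    if not all(_LABEL.match(label) for label in labels):
        raise ValueError("not a plausible host name: %r" % hostname)
    client.System.Inet.set_hostname(hostname=hostname)
    return hostname
```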

System::SoftwareManagement interfaces

System::SoftwareManagement has a bunch of new items, but most of them require either VIPRION or LVM, so I’ve marked them here – be aware. Basically, anything with “cluster” in the name is aimed at VIPRION, and anything with “hotfix” in the name is aimed at LVM-enabled systems.



This routine takes an array of strings that are the names of the image(s) you wish to delete from the system. Warning: this is a drastic measure that you can’t undo unless you have the images backed up elsewhere, so make certain you really want to delete them before calling this routine.


get_cluster_boot_location (VIPRION)

This routine tells you which image (volume) will boot when the cluster is next restarted. It takes no parameters and returns the string representing the short name of the volume in question.


get_software_hotfix (VIPRION)

This routine takes an array of System::SoftwareManagement::RepositoryImageIDs (which are slot number and file location pairs), and returns an array of System::SoftwareManagement::softwareRepositoryHotfix objects. Each element of the returned array indicates which element of the input array it is referencing by fields that match up with the RepositoryImageIDs object’s fields (slot number and file location), and then define a hotfix within that file for that slot.


get_software_hotfix_list (VIPRION)

Taking no parameters, this routine returns a list of System::SoftwareManagement::RepositoryImageIDs available on the system. Useful as setup for the above get_software_hotfix call.
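A sketch of chaining the two calls and grouping the results by slot. The field name `chassis_slot_id` and the keyword argument `imageIDs` are assumptions about the WSDL (check the iControl Wiki for the exact names), and `client` stands in for a connected iControl client:

```python
# Hypothetical VIPRION-only sketch: feed get_software_hotfix_list into
# get_software_hotfix, then index the hotfix records by slot number.
# The dict-style field access mirrors what the bigsuds Python library
# returns; treat both the access style and the field names as
# assumptions to verify against your own client library.

def hotfixes_by_slot(client):
    mgmt = client.System.SoftwareManagement
    image_ids = mgmt.get_software_hotfix_list()
    hotfixes = mgmt.get_software_hotfix(imageIDs=image_ids)
    by_slot = {}
    for hotfix in hotfixes:
        by_slot.setdefault(hotfix["chassis_slot_id"], []).append(hotfix)
    return by_slot
```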


get_software_image (VIPRION)

This routine is exactly the same as the get_software_hotfix routine except it returns information about the base image. Note that checksum is one of the fields; this is useful for making certain your image is valid.


get_software_image_list (VIPRION)

This routine is the same as get_software_hotfix_list except it returns information about images available to the system. It takes no parameters.



This routine takes no parameters and returns true if the disks are formatted with LVM, or false if they are managed with partitions. Prior to 10.x this will always be false; on 10.x it depends upon how you upgraded.


set_cluster_boot_location (VIPRION)

This routine is the same as the previously available set_boot_location, but sets the location for the VIPRION cluster to boot from. It takes the short name of a volume (HD1.1 for example) and sets that to be the boot location from this point forward. This is a useful tool for switching images during upgrades and such.
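For the upgrade-switching use case, a sketch like the following keeps the old location around for rollback. The keyword argument `location` is an assumption about the WSDL parameter name, and `client` stands in for a connected iControl client:

```python
# Hypothetical VIPRION sketch pairing get_cluster_boot_location with
# set_cluster_boot_location. Returns the previous volume so a caller
# can flip back if the new image misbehaves.

def switch_cluster_boot(client, volume):
    """Point the cluster at `volume` (e.g. "HD1.2"); return the volume
    it was previously set to boot from."""
    mgmt = client.System.SoftwareManagement
    previous = mgmt.get_cluster_boot_location()
    if previous != volume:
        mgmt.set_cluster_boot_location(location=volume)
    return previous
```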


System::Statistics interfaces

System::Statistics received some additional functions to make your life easier. These are both things that were available in the past but you had to add them up yourself. Now you can get summary data direct from the BIG-IP.



Returns summary data for all hosts on the system (as opposed to get_all_host_statistics, which returns records for each host). It takes no parameters and returns a System::GlobalHostStatistics structure, which in turn contains a timestamp and an array of statistics entries. This method does not return CPU statistics; see System::SystemInfo::get_global_cpu_usage_extended_information below and System::SystemInfo::get_all_cpu_usage_extended_information on the iControl Wiki for retrieving that information.
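One wrinkle worth a sketch: statistic values come back as Common::ULong64 structures whose high and low halves travel over SOAP as signed 32-bit longs, so you mask before recombining. The dict-style field access mirrors what the bigsuds Python library returns; treat the shape as an assumption and check your own client library:

```python
# ulong64_to_int recombines a Common::ULong64 {high, low} pair into a
# single Python int; global_host_summary maps each statistic type to
# its integer value. `client` stands in for a connected iControl client.

def ulong64_to_int(value):
    """Mask each signed 32-bit half, then shift high into place."""
    return ((value["high"] & 0xFFFFFFFF) << 32) | (value["low"] & 0xFFFFFFFF)

def global_host_summary(client):
    """Return {statistic type: integer value} from the global host data."""
    reply = client.System.Statistics.get_global_host_statistics()
    return {entry["type"]: ulong64_to_int(entry["value"])
            for entry in reply["statistics"]}
```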



This routine takes no parameters and returns a System::Statistics::GlobalTMMStatistics element that in turn contains a timestamp and an array of statistics elements. Like get_global_host_statistics, this routine returns summary data for all TMOS processes.

System::SystemInfo interfaces


This routine takes no parameters and returns summary information about CPU usage on the BIG-IP in the form of a System::GlobalCPUUsageExtendedInformation object, which in turn contains a timestamp and an array of Common::Statistics entries with the data in them.

System::Cluster interfaces

The System::Cluster interfaces are all new, and are currently only supported on the VIPRION platform. If you don’t own a VIPRION, these routines will not work as described here; instead, each will return the Common::NotImplemented exception.



This routine takes a cluster name and returns the HA state of the cluster (the cluster is, in essence, the VIPRION frame). The cluster name is defined at the time you first configure your VIPRION, or through the management screens. (Okay, it should take a cluster name, but it really takes a list of cluster names to maintain our “we take lists as parameters” standard.) It returns an array, the first element of which is one of the following:







Since these are pretty self-explanatory, I’ll skip giving you a blow-by-blow.

The best use of this routine is going to be polling to make certain you have uptime. There are other ways of achieving this goal – like SNMP – so make sure you’ve looked at all the options and chosen the best one for your management tools before writing an iControl monitoring application (though there are cases where an iControl app is definitely an option).
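If you do go the iControl route, the polling loop might look something like this. The `HA_STATE_ACTIVE` literal is assumed from the Common::HAState naming convention (verify the exact enum names on the iControl Wiki), and the clock/sleep hooks just make the loop easy to exercise offline:

```python
import time

# Hypothetical polling sketch built on System::Cluster's
# get_cluster_ha_state. `client` stands in for a connected iControl
# client; only element zero of the returned array is populated today.

def wait_for_cluster_state(client, cluster_name, wanted="HA_STATE_ACTIVE",
                           timeout=300.0, interval=10.0,
                           clock=time.monotonic, sleep=time.sleep):
    """Poll the cluster HA state until it matches `wanted` or time out."""
    deadline = clock() + timeout
    while True:
        state = client.System.Cluster.get_cluster_ha_state(
            cluster_names=[cluster_name])[0]
        if state == wanted:
            return state
        if clock() >= deadline:
            raise TimeoutError("cluster %r still %r after %ss"
                               % (cluster_name, state, timeout))
        sleep(interval)
```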



This routine does for members (slots) what the above routine does for a cluster. It takes the cluster name (in an array, as always) and returns the states of the members in a multi-dimensional array, one row per entry in the cluster name array (at this time, that means only row[0] will be populated on return). The possible return values are exactly the same as for get_cluster_ha_state.




Like the other min_up_members routines in the iControl API, this routine tells you how many members must be up before the action defined in min_up_members_action is fired. The routine takes the name of the cluster (again in an array) and returns the minimum number of members that should be up (in an array). In both cases, only the first element will ever be populated at the time of this writing.



This routine sets the value used to determine when too few members are up. It takes an array of cluster names and an array of longs, where cluster_names[0] is set to have a minimum up member value of longs[0].
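A round-trip sketch for the pair. The keyword argument names are assumptions about the WSDL, `client` stands in for a connected iControl client, and, per the note above, only element zero of each array is used today:

```python
# Hypothetical helper wrapping System::Cluster's set_min_up_members /
# get_min_up_members pair for a single cluster, reading the value back
# as a sanity check.

def set_minimum_up(client, cluster_name, minimum):
    """Set the minimum-up-members value for one cluster; return the
    value the BIG-IP reports back afterwards."""
    cluster = client.System.Cluster
    cluster.set_min_up_members(cluster_names=[cluster_name],
                               min_up_members=[minimum])
    return cluster.get_min_up_members(cluster_names=[cluster_name])[0]
```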



This routine tells you what action is defined for the cluster if the number of slots in the cluster that are “up” drops below the number defined by set_min_up_members. It takes an array of cluster names (only the first of which is currently used) and returns an array of Common::HAAction elements. The possible return values at the release of 10.0 are:










Some of these are not as intuitive as those defined for HA State, so we’ll look at a couple to make certain you have the idea. RESTART only restarts the high availability daemon on the cluster, not the cluster itself. FAILOVER and FAILOVER_RESTART fail over to a peered cluster – which obviously won’t work when the time comes if you don’t have a peered cluster. GO_ACTIVE makes this the active cluster in a redundant pair (highly unlikely from min_up_members, but remember that these values are Common, so we use them for other actions too). RESTART_ALL restarts all daemons, and FAILOVER_ABORT_TRAFFIC_MGT stops trying to manage traffic, aborting TMM completely.


This routine takes an array of cluster names and an array of HAActions. HAAction[0] is set as the action to perform when the number of up members in cluster[0] falls below the minimum.



This routine takes an array of cluster names (currently only the first is used) and returns a corresponding array of Common::EnabledState values, either STATE_ENABLED or STATE_DISABLED, indicating whether the minimum up members value will trigger the minimum up members action. If set to STATE_DISABLED, the defined action will not be performed; if STATE_ENABLED, the action returned by get_min_up_members_action will be performed when the number of slots up drops below the value returned by get_min_up_members.


This routine sets the state of the minimum up members functionality to enabled or disabled. It takes an array of cluster names and an array of Common::EnabledState values, setting the state of minimum up members functionality for clusterName[0] to enabled or disabled based on EnabledState[0]. As with all of the clustering routines, at this time only element zero of these arrays is used.