
Get Virtual Server from Pool (using iControlRest)

I have an automated process that cleans up unused pools. But it fails when the pool is attached to an existing Virtual Server.

I see that I can get the pool from a Virtual Server (via the "pool" property), but I don't see a way to get the Virtual Server name from the Pool.

If there is no way to do this, I can just try to delete it and then parse the name of the Virtual Server out of the error message. But it would be much nicer not to rely on failures as part of normal operations.

Is there a way to get the Virtual Server that is using a pool with only the pool's information? (Using iControlRest)


Answers to this Question

USER ACCEPTED ANSWER & F5 ACCEPTED ANSWER

I don't know of any way to get that info from the pool itself.

You would have to get the info for all the VSs and check each one to see if it has the pool you want to delete as the default pool... or as a clone pool... or whether it was mentioned in an iRule or local traffic policy.

Maybe parsing that response will be easier :)

USER ACCEPTED ANSWER & F5 ACCEPTED ANSWER

If you navigate to the iControl REST URL for LTM pools and add "/example":

https://localhost/mgmt/tm/ltm/pool/example

you will see all the possible properties for a pool, each with a description:

items: [
{
propertyDescriptions: {
    allowNat: "Specifies whether the pool can load balance NAT connections. The default value is yes.",
    allowSnat: "Specifies whether the pool can load balance SNAT connections. The default value is yes.",
    appService: "The application service to which the object belongs.",
    autoscaleGroupId: "autoscale-group id to which pool members belong to.",
    description: "User defined description.",
    gatewayFailsafeDevice: "Specifies that the pool is a gateway failsafe pool in a redundant configuration. This string identifies the device that will failover when the monitor reports the pool member down. By default the device string is empty.",
    ignorePersistedWeight: "Do not count the weight of persisted connections on pool members when making load balancing decisions.",
    ipTosToClient: "Specifies the Type of Service (ToS) level to use when sending packets to a client. 65534 (mimic) specifies that the system sets the ToS level of outgoing packets to the same ToS level of the most-recently received incoming packet. The default value is 65535 (pass-through).",
    ipTosToServer: "Specifies the Type of Service (ToS) level to use when sending packets to a server. 65534 (mimic) specifies that the system sets the ToS level of outgoing packets to the same ToS level of the most-recently received incoming packet. The default value is 65535 (pass-through).",
    linkQosToClient: "Specifies the Quality of Service (QoS) level to use when sending packets to a client. The default value is 65535 (pass-through).",
    linkQosToServer: "Specifies the Quality of Service (QoS) level to use when sending packets to a server. The default value is 65535 (pass-through).",
    loadBalancingMode: "Specifies the modes that the system uses to load balance name resolution requests among the members of this pool. See "help pool" for a description of each loading balancing mode.",
    members: "Manage the set of pool members that are associated with a load balancing pool",
    metadata: {
        appService: "",
        persist: "Specifies whether the command "tmsh save sys config" will save the metadata entry to the configuration files.",
        value: "Value of the pool metadata"
    },
    minActiveMembers: "Specifies the minimum number of members that must be up for traffic to be confined to a priority group when using priority-based activation. The default value is 0 (zero). An active member is a member that is up (not marked down) and is handling fewer connections than its connection limit.",
    minUpMembers: "Specifies the minimum number of pool members that must be up; otherwise, the system takes the action specified in the min-up-members-action option. Use this option for gateway pools in a redundant system where a unit number is applied to a pool. This indicates that the pool is only configured on the specified unit.",
    minUpMembersAction: "Specifies the action to take if the min-up-members-checking is enabled and the number of active pool members falls below the number specified in min-up-members. The default value is failover.",
    minUpMembersChecking: "Enables or disables the min-up-members feature. If you enable this feature, you must also specify a value for both the min-up-members and min-up-members-action options.",
    monitor: "Specifies the health monitors that the system uses to determine whether it can use this pool for load balancing. The monitor marks the pool up or down based on whether the monitor(s) are successful. You can specify a single monitor, multiple monitors "http and https", or a "min" rule, "min 1 of { http https }". You may remove the monitor by specifying "none".",
    profiles: "Specifies the profile to use for encapsulation. The default value is none, which indicates no encapsulation.",
    queueDepthLimit: "Specifies the maximum number of connections that may simultaneously be queued to go to any member of this pool. The default is zero which indicates there is no limit.",
    queueOnConnectionLimit: "Enable or disable queuing connections when pool member or node connection limits are reached. When queuing is not enabled, new connections are reset when connection limits are met.",
    queueTimeLimit: "Specifies the maximum time, in milliseconds, a connection will remain enqueued. The default is zero which indicates there is no limit.",
    reselectTries: "Specifies the number of times the system tries to contact a pool member after a passive failure. A passive failure consists of a server-connect failure or a failure to receive a data response within a user-specified interval. The default is 0 (zero), which indicates no reselect attempts.",
    serviceDownAction: "Specifies the action to take if the service specified in the pool is marked down. The default value is none.",
    slowRampTime: "Specifies, in seconds, the ramp time for the pool. This provides the ability to cause a pool member that has just been enabled, or marked up, to receive proportionally less traffic than other members in the pool. The proportion of traffic the member accepts is determined by how long the member has been up in comparison to the slow-ramp-time setting for the pool.For example, if the load-balancing-mode of a pool is round-robin, and it has a slow-ramp-time of 60 seconds, when a pool member has been up for only 30 seconds, the pool member receives approximately half the amount of new traffic as other pool members that have been up for more than 60 seconds. After the pool member has been up for 45 seconds, it receives approximately three quarters of the new traffic.The slow ramp time is particularly useful when used with the least-connections-member load balancing mode. The default value is 10."
    },
    allowNat: "yes",
    allowSnat: "yes",
    appService: "",
    autoscaleGroupId: "",
    description: "",
    gatewayFailsafeDevice: "",
    ignorePersistedWeight: "disabled",
    ipTosToClient: "pass-through",
    ipTosToServer: "pass-through",
    linkQosToClient: "pass-through",
    linkQosToServer: "pass-through",
    loadBalancingMode: "round-robin",
    members: {
        isSubcollection: true,
        propertyDescriptions: {
        address: "IP address of a pool member if a node by the given name does not already exist.",
        appService: "",
        connectionLimit: "Specifies the maximum number of concurrent connections allowed for a pool member. The default value is 0 (zero).",
        description: "User defined description.",
        dynamicRatio: "Specifies a range of numbers that you want the system to use in conjunction with the ratio load balancing method. The default value is 1.",
        fqdn: {
        autopopulate: "",
        tmName: ""
        },
        inheritProfile: "Specifies whether the pool member inherits the encapsulation profile from the parent pool. The default value is enabled. If you disable inheritance, no encapsulation takes place, unless you specify another encapsulation profile for the pool member using the profiles attribute.",
        metadata: {
        appService: "",
        persist: "Specifies whether the command "tmsh save sys config" will save the metadata entry to the configuration files.",
        value: "Value of the pool member metadata"
        },
        monitor: "Displays the health monitors that are configured to monitor the pool member, and the status of each monitor. The default value is default.",
        priorityGroup: "Specifies the priority group within the pool for this pool member. The priority group number specifies that traffic is directed to that member before being directed to a member of a lower priority. The default value is 0.",
        profiles: "",
        rateLimit: "Specifies the maximum number of connections per second allowed for a pool member. The default value is 'disabled'.",
        ratio: "Specifies the ratio weight that you want to assign to the pool member. The default value is 1.",
        session: "Enables or disables the pool member for new sessions. The default value is user-enabled.",
        state: "user-down forces the pool member offline, overriding monitors. user-up reverts the user-down. When user-up, this displays the monitor state."
        },
        address: "",
        appService: "",
        connectionLimit: 0,
        description: "",
        dynamicRatio: 1,
        fqdn: {
            autopopulate: "",
            tmName: ""
        },
        inheritProfile: "enabled",
        metadata: [ ],
        monitor: "default",
        priorityGroup: 0,
        profiles: [ ],
        rateLimit: "disabled",
        ratio: 1,
        session: "",
        state: ""
    },
    metadata: [ ],
    minActiveMembers: 0,
    minUpMembers: 0,
    minUpMembersAction: "failover",
    minUpMembersChecking: "disabled",
    monitor: "",
    profiles: [ ],
    queueDepthLimit: 0,
    queueOnConnectionLimit: "disabled",
    queueTimeLimit: 0,
    reselectTries: 0,
    serviceDownAction: "none",
    slowRampTime: 10,
    naturalKeyPropertyNames: [
        "name",
        "partition",
        "subPath"
    ]
}
]

From the output provided we can see that virtual servers are not a subcollection of the pool collection. This means you can't retrieve the virtual server from the pool collection.

I would add logic (in Python, for example) around my iControl REST calls to take the pool name you want to verify and then loop through all the virtual servers to see whether at least one is using this pool, as sketched below.
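
Something along those lines, as a minimal Python sketch using the requests module (the host, credentials, and pool path below are placeholders, not values from this thread); note that it only checks each virtual server's default pool property, not iRules or policies:

import requests

BIGIP_HOST = "bigip.example.com"   # hypothetical management address
AUTH = ("admin", "admin")          # hypothetical credentials
POOL = "/Common/unused_pool"       # full path of the pool you want to clean up

def virtuals_using_pool(pool_full_path):
    """Return the full paths of virtual servers whose default pool is the given pool."""
    resp = requests.get(
        f"https://{BIGIP_HOST}/mgmt/tm/ltm/virtual",
        auth=AUTH,
        verify=False,  # lab only; validate certificates in production
    )
    resp.raise_for_status()
    return [
        vs["fullPath"]
        for vs in resp.json().get("items", [])
        if vs.get("pool") == pool_full_path
    ]

if __name__ == "__main__":
    users = virtuals_using_pool(POOL)
    if users:
        print(f"{POOL} is still referenced by: {', '.join(users)}")
    else:
        print(f"No virtual server uses {POOL} as its default pool.")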

Hope this helps, Joel

Comments on this Answer
Comment made 30-Aug-2017 by THi 1154

Note that looping through virtual servers is not enough to reliably reveal orphaned pools. Pools can be selected by iRules and LTM policies without any reference to them in the virtual server definition itself.

You might write a shell script or similar to browse through all the bigip.conf files and check whether there are any references to each pool name other than the one in the pool definition itself.

So if there is only a single instance of the pool name, and that one is "ltm pool <pool_path/pool-name> ...", then that pool may be an orphan.

Even a simple grep <pool_name> bigip.conf may give you a list of lines.

The bigip.conf files are in /config/, and additional ones are in each partition under /config/partitions/<partition_name>/ (at least in software v11 and v12).
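
As a rough illustration of that idea, here is a small Python sketch that counts references to a pool name across those files (the pool name is a placeholder, and it assumes you can read the config files, e.g. from a backup or over SSH):

import glob

POOL_NAME = "unused_pool"  # hypothetical pool name

# /config/bigip.conf plus the per-partition config files mentioned above
config_files = ["/config/bigip.conf"] + glob.glob("/config/partitions/*/bigip.conf")

hits = []
for path in config_files:
    try:
        with open(path) as f:
            hits += [(path, line.rstrip()) for line in f if POOL_NAME in line]
    except FileNotFoundError:
        continue

for path, line in hits:
    print(f"{path}: {line}")

# If the only hit is the "ltm pool ..." definition itself, the pool is probably an orphan.
if len(hits) <= 1:
    print(f"Only the pool definition (at most) references {POOL_NAME}.")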

Comment made 30-Aug-2017 by THi 1154

An alternative may be to try a "brute force" delete, as I assume you are doing. If you try to delete a pool for which there are dependencies in the configuration, the system will give an error when trying to load the new config.

You might run a delete pool command and check with verify (e.g. tmsh load sys config file /config/bigip.conf verify). If you get an error, there are dependencies on the pool. I think you should add error handling to your automation scripts; a REST version of this idea is sketched below.
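
If you do the delete over iControl REST instead of tmsh, a minimal sketch of that error-handling approach could look like this (host, credentials, and pool path are placeholders); the error body normally names the object that still references the pool:

import requests

BIGIP_HOST = "bigip.example.com"   # hypothetical management address
AUTH = ("admin", "admin")          # hypothetical credentials
POOL = "~Common~unused_pool"       # pool path in URL form ("/" replaced by "~")

resp = requests.delete(
    f"https://{BIGIP_HOST}/mgmt/tm/ltm/pool/{POOL}",
    auth=AUTH,
    verify=False,  # lab only; validate certificates in production
)

if resp.ok:
    print("Pool deleted.")
else:
    # A 400 response usually means the pool is still in use; the message names the dependency.
    print(f"Delete failed ({resp.status_code}): {resp.text}")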
