
Error Running Ansible tasks on the active BIG-IP K10531487

I'm getting an error running code from K10531487: Running Ansible tasks on the active BIG-IP in a device group.

This appears to be an auth error on the active device; however, the play runs cleanly when gathering facts on the same device, and this environment runs other Ansible scripts against the same F5s as well. Any ideas for next steps? I appreciate your help.

Error:

TASK [Display bigip facts f5bm.express-scripts.com] ******************************************
ok: [f5bm.express-scripts.com] => {}

MSG:

[u'Hostname: f5bm.express-scripts.com', u'Status: HA_STATE_ACTIVE']

TASK [Create pool] ************************************************************************************
fatal: [f5bm.express-scripts.com -> localhost]: FAILED! => {
    "changed": false
}

MSG:

Unable to connect to f5bm.express-scripts.com on port 443. The reported error was "Unexpected **kwargs: {'verify': False}".

        to retry, use: --limit @/home/eh7305/scripts/ansible/f5tst.retry

PLAY RECAP ********************************************************************************************
f5am.express-scripts.com : ok=2    changed=0    unreachable=0    failed=0
f5bm.express-scripts.com : ok=2    changed=0    unreachable=0    failed=1

Playbook:

---
- name: "Syncing F5 Active config to group"
  hosts: "drhaf5"
  serial: 1
  vars_files:
    - "vars/main.yml"
    - "vars/vault.yml"
  gather_facts: "no"
#  roles:
#    - "f5syncactive"

  tasks:
    - name: "Get bigip facts"
      bigip_facts:
        server: "{{inventory_hostname}}"
        user: "admin"
        password: "{{adminpass}}"
        include:
          - "device"
          - "system_info"
        validate_certs: False
      check_mode: no
      delegate_to: "localhost"

    - name: "Display bigip facts {{inventory_hostname}}"
      debug:
        msg:
          - "Hostname: {{ system_info.system_information.host_name }}"
          - "Status: {{ device['/Common/' + system_info.system_information.host_name].failover_state }}"
    - name: "Create pool"
      bigip_pool:
        server: "{{inventory_hostname}}"
        user: "admin"
        password: "{{adminpass}}"
        lb_method: "round-robin"
        monitors: http
        name: "pool1"
        validate_certs: False
      notify:
        - "Save the running configuration to disk"
        - "Sync configuration from device to group"
      delegate_to: "localhost"
      when: device['/Common/' + system_info.system_information.host_name].failover_state == "HA_STATE_ACTIVE"

  handlers:
    - name: "Save the running {{inventory_hostname}} configuration to disk"
      bigip_config:
        save: "yes"
        server: "{{inventory_hostname}}"
        user: "admin"
        password: "{{adminpass}}"
        validate_certs: False
      delegate_to: localhost

    - name: "Handler Sync configuration from {{inventory_hostname}} to group"
      bigip_configsync_action:
        device_group: "sync-failover-group"
        sync_device_to_group: "yes"
        server: "{{inventory_hostname}}"
        user: "admin"
        password: "{{adminpass}}"
        validate_certs: False
      delegate_to: localhost
Comments on this Question
Comment made 2 months ago by DennisJann 213

You didn't mention the versions of Ansible and BIG-IP OS used in your environment. That information would be helpful for someone to reproduce and diagnose the issue you reported.

For example, using Ansible 2.4 and BIG-IP version 12.1.3, I get the following error about a missing monitor_type parameter during the pool creation task:

fatal: [bigip.localdomain -> localhost]: FAILED! => {"changed": false, "msg": "The 'monitor_type' parameter cannot be empty when 'monitors' parameter is specified."}

Once I added the monitor_type parameter, the pool creation task ran successfully.

Try checking the bigip_pool.py module in your Ansible distribution for parameter requirements.
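
For example, the pool task with monitor_type added might look roughly like the sketch below (the "single" value is only an illustration; use whatever fits your monitor setup):

    - name: "Create pool"
      bigip_pool:
        server: "{{ inventory_hostname }}"
        user: "admin"
        password: "{{ adminpass }}"
        lb_method: "round-robin"
        monitor_type: "single"   # cannot be empty when 'monitors' is specified
        monitors:
          - "http"
        name: "pool1"
        validate_certs: False
      delegate_to: "localhost"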

Comment made 2 months ago by KernelPanic 169

Excellent point, sir! TMOS 12.1.3.6, Ansible 2.7.5, Python 2.7.5 (GCC 4.8.5 20150623, Red Hat 4.8.5-36).

I added a monitor type and still get the same error. I think it has something to do with the handler, but I could be wrong.

Using module_utils file /usr/lib/python2.7/site-packages/ansible/module_utils/network/common/__init__.py

Using module_utils file /usr/lib/python2.7/site-packages/ansible/module_utils/network/common/utils.py
Using module file /usr/lib/python2.7/site-packages/ansible/modules/network/f5/bigip_pool.py
<localhost> PUT /home/eh7305/.ansible/tmp/ansible-local-6428wf53d0/tmpYBaaD6 TO /home/eh7305/.ansible/tmp/ansible-tmp-1548950000.6-199856288208530/AnsiballZ_bigip_pool.py
<localhost> EXEC /bin/sh -c 'chmod u+x /home/eh7305/.ansible/tmp/ansible-tmp-1548950000.6-199856288208530/ /home/eh7305/.ansible/tmp/ansible-tmp-1548950000.6-199856288208530/AnsiballZ_bigip_pool.py && sleep 0'
<localhost> EXEC /bin/sh -c '/usr/bin/python2 /home/eh7305/.ansible/tmp/ansible-tmp-1548950000.6-199856288208530/AnsiballZ_bigip_pool.py && sleep 0'
<localhost> EXEC /bin/sh -c 'rm -f -r /home/eh7305/.ansible/tmp/ansible-tmp-1548950000.6-199856288208530/ > /dev/null 2>&1 && sleep 0'
The full traceback is:
WARNING: The below traceback may *not* be related to the actual failure.
  File "/tmp/ansible_bigip_pool_payload_etue6Y/__main__.py", line 947, in main
    results = mm.exec_module()
  File "/tmp/ansible_bigip_pool_payload_etue6Y/__main__.py", line 709, in exec_module
    changed = self.present()
  File "/tmp/ansible_bigip_pool_payload_etue6Y/__main__.py", line 757, in present
    if self.exists():
  File "/tmp/ansible_bigip_pool_payload_etue6Y/__main__.py", line 836, in exists
    return self.client.api.tm.ltm.pools.pool.exists(
  File "/tmp/ansible_bigip_pool_payload_etue6Y/ansible_bigip_pool_payload.zip/ansible/module_utils/network/f5/bigip.py", line 61, in api
    raise F5ModuleError(error)

fatal: [haf5b.express-scripts.com -> localhost]: FAILED! => {
    "changed": false,
    "invocation": {
        "module_args": {
            "auth_provider": null,
            "description": null,
            "lb_method": "round-robin",
            "metadata": null,
            "monitor_type": "single",
            "monitors": [
                "http"
            ],
            "name": "pool1",
            "partition": "Common",
            "password": "VALUE_SPECIFIED_IN_NO_LOG_PARAMETER",
            "priority_group_activation": null,
            "provider": {
                "auth_provider": null,
                "password": "VALUE_SPECIFIED_IN_NO_LOG_PARAMETER",
                "server": "f5bm.express-scripts.com",
                "server_port": null,
                "ssh_keyfile": null,
                "timeout": null,
                "transport": "rest",
                "user": "admin",
                "validate_certs": false
            },
            "quorum": null,
            "reselect_tries": null,
            "server": "f5bm.express-scripts.com",
            "server_port": null,
            "service_down_action": null,
            "slow_ramp_time": null,
            "state": "present",
            "transport": null,
            "user": "admin",
            "validate_certs": false
        }
    }
}

MSG:

Unable to connect to f5bm.express-scripts.com on port 443. The reported error was "Unexpected **kwargs: {'verify': False}".

        to retry, use: --limit @/home/eh7305/scripts/ansible/f5tst.retry

PLAY RECAP ****************************************************************************************************
f5am.express-scripts.com : ok=2    changed=0    unreachable=0    failed=0
f5bm.express-scripts.com : ok=2    changed=0    unreachable=0    failed=1


Answers to this Question

USER ACCEPTED ANSWER & F5 ACCEPTED ANSWER

From the error and from looking at the documentation, my guess is that the validate_certs value is not valid. Although it is a boolean parameter, the documentation does state the value should be yes (the default if omitted) or no.

I'm guessing the bigip_pool module is stricter about this value than other modules, which may be happy to accept True or False.


On a side note, I would look at using a provider, which can be set as a variable and then referenced within each F5 BIG-IP module in a single line. As an example, see the AnsibleF5Archiver playbook 'f5Archiver.yml'.
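
As a rough sketch of that provider idea (the f5Provider variable name here is just an example), the connection details could be defined once and then referenced with a single line in each module; a fuller playbook built this way is shown in another answer on this thread:

  vars:
    f5Provider:
      server: "{{ inventory_hostname }}"
      user: "admin"
      password: "{{ adminpass }}"
      validate_certs: "no"   # quoted yes/no form rather than a bare True/False
      transport: "rest"

  tasks:
    - name: "Create pool"
      bigip_pool:
        provider: "{{ f5Provider }}"
        lb_method: "round-robin"
        monitor_type: "single"
        monitors:
          - "http"
        name: "pool1"
      delegate_to: "localhost"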

Comments on this Answer
Comment made 2 months ago by KernelPanic 169

Thank you for the provider suggestion. I changed validate_certs to "no" and it still prints out as false in the output, and it still fails with the kwargs error, which I read somewhere is an auth error. But I see no failed auth in the rest, secure, or audit log files.

Comment made 2 months ago by Andy McGrath 2563

Is your user assigned the Administrator role on the F5 you are connecting to?

Comment made 2 months ago by KernelPanic 169

Yes, the user gets facts in the earlier task. I took out the pool addition and created a profile instead, and it works without error. So there is something wrong with the pool task.

USER ACCEPTED ANSWER & F5 ACCEPTED ANSWER

Give the following, or something similar, a try. It uses a provider variable for the connection details.

I remember having an issue with one BIG-IP module that didn't work correctly with the provider, so if you get an error with one you might need to change it back, but overall I found fewer issues with the BIG-IP modules when using the provider.

Another question: what version of the F5 Python SDK are you running?

- name: "Syncing F5 Active config to group"
  hosts: "drhaf5"
  serial: 1
  vars_files:
    - "vars/main.yml"
    - "vars/vault.yml"
  vars:
    f5Provider:
      server: "{{ inventory_hostname }}"
      server_port: 443
      user: admin
      password: "{{adminpass}}"
      validate_certs: no
      transport: rest
  gather_facts: "no"
#  roles:
#    - "f5syncactive"

  tasks:
    - name: "Get bigip facts"
      bigip_facts:
        provider: "{{f5Provider}}"
        include:
          - "device"
          - "system_info"
      check_mode: no
      delegate_to: "localhost"

    - name: "Display bigip facts {{inventory_hostname}}"
      debug:
        msg:
          - "Hostname: {{ system_info.system_information.host_name }}"
          - "Status: {{ device['/Common/' + system_info.system_information.host_name].failover_state }}"
    - name: "Create pool"
      bigip_pool:
        provider: "{{f5Provider}}"
        lb_method: "round-robin"
        monitors: http
        name: "pool1"
      notify:
        - "Save the running configuration to disk"
        - "Sync configuration from device to group"
      delegate_to: "localhost"
      when: device['/Common/' + system_info.system_information.host_name].failover_state == "HA_STATE_ACTIVE"

  handlers:
    - name: "Save the running {{inventory_hostname}} configuration to disk"
      bigip_config:
        save: "yes"
        provider: "{{f5Provider}}"
      delegate_to: localhost

    - name: "Handler Sync configuration from {{inventory_hostname}} to group"
      bigip_configsync_action:
        device_group: "sync-failover-group"
        sync_device_to_group: "yes"
        provider: "{{f5Provider}}"
      delegate_to: localhost
USER ACCEPTED ANSWER & F5 ACCEPTED ANSWER

I found through testing that this was a software issue on the Ansible host, causing instability in the various modules used by the script. I moved to another server and virtual environment and the script worked flawlessly. Lesson learned: always build Ansible for F5 in a virtualenv!
