Forum Discussion

Michael_61029
Oct 01, 2014

F5 BIG-IP LTM VE and OpenStack LBaaS integration

I have followed various bits of documentation from F5 on how to get the LBaaS agent and driver installed and configured in OpenStack. However, when I try to add a pool, I get the error message:

 

"No eligible backend for pool pool_id"

 

and in the logs:

 

"No active lbaas agents for pool pool_id"

 

The neutron lbaas agent doesn't show up in the output of 'neutron agent-list', even though it is installed.

 

I am running this on a multi-host DevStack build running Icehouse, and the F5 LTM VE pair is running BIG-IP 11.5.1 Build 4.0.128 Hotfix HF4. They are correctly configured as an HA pair.

 

8 Replies

  • Additional log file info, from the F5 agent log:

        2014-10-02 23:30:38.197 INFO neutron.services.loadbalancer.drivers.f5.bigip.agent_manager [-] Initializing LbaasAgentManager with conf
        2014-10-02 23:30:38.197 DEBUG neutron.services.loadbalancer.drivers.f5.bigip.agent_manager [-] Initializing LogicalServiceCache version 0.1.1 from (pid=19679) __init__ /opt/stack/neutron/neutron/services/loadbalancer/drivers/f5/bigip/agent_manager.py:93
        2014-10-02 23:30:38.232 DEBUG neutron.services.loadbalancer.drivers.f5.bigip.icontrol_driver [-] physical_network default = interface 1.1, tagged False from (pid=19679) __init__ /opt/stack/neutron/neutron/services/loadbalancer/drivers/f5/bigip/icontrol_driver.py:278
        2014-10-02 23:30:38.232 INFO neutron.services.loadbalancer.drivers.f5.bigip.icontrol_driver [-] Opening iControl connections to lbaas-adm @ bigipeagcloudqa2.int.thomsonreuters.com
        2014-10-02 23:30:38.939 DEBUG neutron.services.loadbalancer.drivers.f5.bigip.icontrol_driver [-] DEBUGG1 None from (pid=19679) _init_connection /opt/stack/neutron/neutron/services/loadbalancer/drivers/f5/bigip/icontrol_driver.py:3012
        2014-10-02 23:30:38.939 ERROR neutron.services.loadbalancer.drivers.f5.bigip.icontrol_driver [-] Could not communicate with all iControl devices: device bigipeagcloudqa2.int.thomsonreuters.com BIG-IP not provisioned for management LARGE. extramb=0
        2014-10-02 23:30:38.940 INFO neutron.services.loadbalancer.drivers.f5.bigip.icontrol_driver [-] iControlDriver initialized to 0 hosts with username:lbaas-adm
        2014-10-02 23:30:38.940 INFO neutron.services.loadbalancer.drivers.f5.bigip.icontrol_driver [-] iControlDriver dynamic agent configurations:{'tunnel_types': ['vxlan', 'gre'], 'bridge_mappings': {'default': '1.1'}}
        2014-10-02 23:30:38.940 DEBUG neutron.services.loadbalancer.drivers.f5.bigip.agent_manager [-] DEBUGG None from (pid=19679) __init__ /opt/stack/neutron/neutron/services/loadbalancer/drivers/f5/bigip/agent_manager.py:168
        2014-10-02 23:30:38.940 ERROR neutron.services.loadbalancer.drivers.f5.bigip.agent_manager [-] Agent host attribute is not configured by the driver. Fix the driver config and restart the agent.
        2014-10-02 23:30:38.947 DEBUG neutron.openstack.common.service [-] f5_bigip_lbaas_device_driver = neutron.services.loadbalancer.drivers.f5.bigip.icontrol_driver.iControlDriver from (pid=19679) log_opt_values /usr/lib/python2.7/dist-packages/oslo/config/cfg.py:1941
        2014-10-02 23:30:38.948 DEBUG neutron.openstack.common.service [-] f5_loadbalancer_pool_scheduler_driver = neutron.services.loadbalancer.drivers.f5.agent_scheduler.TenantScheduler from (pid=19679) log_opt_values /usr/lib/python2.7/dist-packages/oslo/config/cfg.py:1941
        2014-10-02 23:30:38.963 DEBUG neutron.openstack.common.service [-] service_providers.service_provider = ['LOADBALANCER:F5:neutron.services.loadbalancer.drivers.f5.plugin_driver.F5PluginDriver:default', 'VPN:openswan:neutron.services.vpn.service_drivers.ipsec.IPsecVPNDriver:default'] from (pid=19679) log_opt_values /usr/lib/python2.7/dist-packages/oslo/config/cfg.py:1949
  • Hi Michael,

     

    Can you open a case with F5 Support on this?

     

    Thanks, Aaron

     

  • Hi Aaron,

     

    Thanks for the response, but I have already raised a support case (reference: 1-671237432) and was told that the plugin wasn't supported and I'd have to post to DevCentral to get a response...

     

    Any help would be very appreciated.

     

    Regards, Michael

     

  • It turns out that the key error message here was:

     

    2014-10-02 23:30:38.939 ERROR neutron.services.loadbalancer.drivers.f5.bigip.icontrol_driver [-] Could not communicate with all iControl devices: device bigipeagcloudqa2.int.thomsonreuters.com BIG-IP not provisioned for management LARGE. extramb=0

     

    To rectify this, perform either of the following:

     

    1) In the UI, under Resource Provisioning, change Management (MGMT) to 'Large'
    2) Run 'tmsh modify sys db provision.extramb value 500' (from the advanced shell, not tmos)

     

    It seems that performing both steps also allows the plugin to work. Hope this helps someone else.
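    For anyone applying step 2 over SSH, the tmsh side can be sketched as follows. This is a sketch against the BIG-IP advanced (bash) shell, run on each unit of the HA pair; the 500 MB value is simply the one used in this thread.

    ```shell
    # Run from the BIG-IP advanced (bash) shell, not the tmos prompt.

    # Check the current extra management-memory allocation;
    # the agent log above shows extramb=0 on the failing device.
    tmsh list sys db provision.extramb

    # Allocate 500 MB of extra management memory, as in step 2.
    tmsh modify sys db provision.extramb value 500

    # Confirm the new value took effect before restarting the agent.
    tmsh list sys db provision.extramb
    ```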

     

  • linjing_54779 (Historic F5 Account)

    Hi All, I get this error when I try to start the f5-agent:

    2015-01-07 12:10:59.712 42967 INFO neutron.services.loadbalancer.drivers.f5.bigip.agent_manager [-] Initializing LbaasAgentManager with conf 
    2015-01-07 12:10:59.714 42967 DEBUG neutron.services.loadbalancer.drivers.f5.bigip.agent_manager [-] Initializing LogicalServiceCache version 0.1.1 __init__ /usr/lib/python2.7/dist-packages/neutron/services/loadbalancer/drivers/f5/bigip/agent_manager.py:93
    Error importing loadbalancer device driver: neutron.services.loadbalancer.drivers.f5.bigip.icontrol_driver.iControlDriver
    

    I run 3 nodes in my VMware Workstation: one controller, one compute, and one network node. I installed the driver on the controller node, and both the agent and the driver on the network node, then followed the README included in the download.
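    The "Error importing loadbalancer device driver" line swallows the underlying exception, so it doesn't say whether the F5 module itself is missing or one of its dependencies fails to import. A quick way to surface the real cause is to try the dotted path by hand in Python on the network node. This is a hypothetical helper for illustration (`check_driver_import` is not part of the F5 agent code), assuming neutron is on the Python path:

    ```python
    import importlib

    def check_driver_import(dotted_path):
        """Try to load a dotted 'module.ClassName' path the way a driver
        loader would. Returns the class on success, or the exception
        object explaining why the import failed."""
        module_path, _, class_name = dotted_path.rpartition(".")
        try:
            module = importlib.import_module(module_path)
            return getattr(module, class_name)
        except (ImportError, AttributeError) as exc:
            return exc
    ```

    Calling it with the value of `f5_bigip_lbaas_device_driver` (e.g. `check_driver_import("neutron.services.loadbalancer.drivers.f5.bigip.icontrol_driver.iControlDriver")`) returns either the driver class or an exception whose message names the module that actually failed to import.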

    I can see "f5" in Horizon when I add a pool in the Load Balancer panel. Of course, it fails because the f5-agent is not running. My f5-agent ini file is below. Can anybody help me?

    here is my f5-agent conf:
    
    root@network:/etc/neutron# egrep -v "^#|^$" f5-bigip-lbaas-agent.ini
    [DEFAULT]
    debug = True
    periodic_interval = 10
    f5_static_agent_configuration_data = name1:value1, name1:value2, name3:value3
    f5_device_type = external
    f5_ha_type = standalone 
    sync_mode = replication
    f5_external_physical_mappings = default:1.3:True
    f5_vtep_folder = 'Common'
    f5_vtep_selfip_name = 'vtep'
    advertised_tunnel_types = gre
    l2_population = True
    f5_global_routed_mode = False 
    use_namespaces = True
    f5_route_domain_strictness = False
    f5_snat_mode = True
    f5_snat_addresses_per_subnet = 1
    f5_common_external_networks = True
    f5_bigip_lbaas_device_driver = neutron.services.loadbalancer.drivers.f5.bigip.icontrol_driver.iControlDriver
    icontrol_hostname = 192.168.232.245
    icontrol_username = admin
    icontrol_password = admin
    icontrol_connection_retry_interval = 10
    
    Here is my neutron.conf; it also contains the haproxy settings, which work in my OpenStack.
    
    
    root@controller:/home/mycisco# egrep -v "^#|^$" /etc/neutron/neutron.conf
    [DEFAULT]
    state_path = /var/lib/neutron
    lock_path = $state_path/lock
    core_plugin = ml2
    service_plugins = router,lbaas
    f5_loadbalancer_pool_scheduler_driver = neutron.services.loadbalancer.drivers.f5.agent_scheduler.TenantScheduler
    auth_strategy = keystone
    allow_overlapping_ips = True
    rabbit_host = 192.168.232.138
    rpc_backend = neutron.openstack.common.rpc.impl_kombu
    notification_driver = neutron.openstack.common.notifier.rpc_notifier
    notify_nova_on_port_status_changes = True
    notify_nova_on_port_data_changes = True
    nova_url = http://192.168.232.138:8774/v2
    nova_admin_username = nova
    nova_admin_tenant_id = f0ef0312929d433b9b1dcc3d030d0634
    nova_admin_password = service_pass
    nova_admin_auth_url = http://192.168.232.138:35357/v2.0
    [quotas]
    [agent]
    root_helper = sudo /usr/bin/neutron-rootwrap /etc/neutron/rootwrap.conf
    [keystone_authtoken]
    auth_host = 192.168.232.138
    auth_port = 35357
    auth_protocol = http
    admin_tenant_name = service
    admin_user = neutron
    admin_password = service_pass
    signing_dir = $state_path/keystone-signing
    [database]
    connection = mysql://neutron:NEUTRON_DBPASS@192.168.232.138/neutron
    [service_providers]
    service_provider=LOADBALANCER:Haproxy:neutron.services.loadbalancer.drivers.haproxy.plugin_driver.HaproxyOnHostPluginDriver:default
    service_provider=LOADBALANCER:F5:neutron.services.loadbalancer.drivers.f5.plugin_driver.F5PluginDriver
    service_provider=VPN:openswan:neutron.services.vpn.service_drivers.ipsec.IPsecVPNDriver:default
    
    • linjing_54779 (Historic F5 Account)

      Ubuntu 14.04, Icehouse, installed manually via apt. Does the agent support this?