Note: This is not a primer on Tempest testing. For more information on Tempest, see the official Tempest documentation.

Much of the information that follows was taken from the Tempest Test Plugin Interface guide. The content here focuses on how to extend Tempest tests for testing F5 projects such as the F5 LBaaSv2 agent and the F5 LBaaSv2 driver. The prototype for this code is currently hosted in the F5 LBaaSv2 Agent code on GitHub. It will be fleshed out over time, and lessons learned from iterations of that code base will be reflected there.

Tempest Plugin Structure:

A plugin consists of an installable setuptools entry point (declared in setup.py) and a source tree that houses the plugin code itself, the means of test discovery, and the tests themselves. We have chosen, initially, to house the plugin in the F5 LBaaSv2 Agent repo, but that is not necessary. One could house all plugin code completely outside of the code under test, maintaining a separate repo for only the Tempest plugin and test code. We may migrate to an external repo in time, but for now the Tempest tests will live alongside all of the pre-existing tests.
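Tempest discovers plugins through the 'tempest.test_plugins' setuptools entry-point namespace. A hedged sketch of what the setup.py entry might look like (the package name and module path below are illustrative, not the exact values from the agent repo):

```python
# Illustrative sketch only; the real project defines its own
# name, version, and packaging metadata in its setup.py.
from setuptools import setup

setup(
    name='f5-openstack-agent',
    entry_points={
        # Tempest scans this namespace to find installed plugins.
        'tempest.test_plugins': [
            'f5-lbaasv2-agent-tempest-plugin = '
            'f5_openstack_agent.tests.tempest.plugin:'
            'F5LBaaSv2AgentTempestPlugin',
        ],
    },
)
```

Once installed, the plugin is picked up automatically; no extra configuration is needed to tell Tempest it exists.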

To create our plugin, we made the following changes. First, we created a tests directory directly within the distribution of the agent code, so the test code lives as close as possible to the code under test. We then migrated the pre-existing functional tests into this directory as well, although these may be phased out over time and converted into Tempest tests. The resulting directory structure looks like this:

f5_openstack_agent/
   tests/
      functional/
      tempest/
         config.py
         plugin.py
         services/
         tests/
            api/
            scenario/


The config.py module contains configuration options specific to the F5 LBaaSv2 Agent plugin, such as icontrol_hostname, icontrol_username, and icontrol_password. This allows us to drop those parameters into the tempest.conf file under a new section, icontrol, which we've also defined in the config.py module.

tempest.conf

[icontrol]
icontrol_hostname = 10.190.3.40
icontrol_username = admin
icontrol_password = admin
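Internally, config.py would register these options with oslo.config, roughly like the sketch below (the option names match the section above, but the defaults and help strings are our own illustrative assumptions):

```python
# Sketch of config.py option registration, assuming oslo.config;
# defaults and help text here are illustrative, not the repo's values.
from oslo_config import cfg

icontrol_group = cfg.OptGroup(name='icontrol',
                              title='F5 iControl connection options')

IcontrolGroup = [
    cfg.StrOpt('icontrol_hostname',
               default='localhost',
               help='Hostname or IP address of the BIG-IP device.'),
    cfg.StrOpt('icontrol_username',
               default='admin',
               help='Username for the iControl REST interface.'),
    cfg.StrOpt('icontrol_password',
               default='admin',
               secret=True,
               help='Password for the iControl REST interface.'),
]
```

Marking the password option secret=True keeps it from being echoed into logs when the configuration is dumped.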


The plugin.py module defines the actual plugin and implements the basic methods declared in the abstract base class it inherits from. These methods tell Tempest where to load the tests from and which options to register from the configuration file.

plugin.py

import os

from tempest.test_discover import plugins


class F5LBaaSv2AgentTempestPlugin(plugins.TempestPlugin):
    def load_tests(self):
        # Return (full_test_dir, top_level_dir) so Tempest's test
        # discovery knows where our tests live.
        base_path = os.path.split(os.path.dirname(os.path.abspath(__file__)))[0]
        test_dir = "f5_lbaasv2_agent_tempest_plugin/tests"
        full_test_dir = os.path.join(base_path, test_dir)
        return full_test_dir, base_path
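The TempestPlugin base class also declares register_opts and get_opt_lists, which hook our options into Tempest's configuration. A hedged sketch of how they might look, assuming a config module that defines icontrol_group and IcontrolGroup as described earlier (names here are assumptions, not verbatim from the repo):

```python
# Continuation sketch of the plugin class (not verbatim from the repo),
# assuming config.py exposes icontrol_group and IcontrolGroup.
from f5_openstack_agent.tests.tempest import config as project_config


class F5LBaaSv2AgentTempestPlugin(plugins.TempestPlugin):
    # ... load_tests as shown above ...

    def register_opts(self, conf):
        # Tempest calls this at startup with its global CONF object.
        conf.register_group(project_config.icontrol_group)
        conf.register_opts(project_config.IcontrolGroup,
                           group=project_config.icontrol_group.name)

    def get_opt_lists(self):
        # Used by Tempest's config generator to enumerate our options.
        return [(project_config.icontrol_group.name,
                 project_config.IcontrolGroup)]
```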


The services directory holds all of the clients needed to run our particular brand of Tempest tests. One example is listener_client.py, which helps manage setup, teardown, and updates of listeners in tests. In addition to the OpenStack controlling clients (listeners, load balancers, pools, etc.), we also manage the BIG-IP client used for back-end validation. It is important to note that a significant amount of effort in our future Tempest tests will go into validating that the BIG-IP device or devices have the appropriate configuration after some inciting event occurs in Neutron and the plugin driver.

Another important distinction about the clients in the services directory is that they are not the standard OpenStack project clients, such as python-glanceclient. They are thin wrappers of logic around simple REST queries to the OpenStack projects' API endpoints. In addition to wrapping requests, some of these clients provide methods that make your test flow more event-driven, such as:

Tempest Client Wrapper

def is_resource_deleted(self, resource_type, id):
    method = 'show_' + resource_type
    try:
        getattr(self, method)(id)
    except AttributeError:
        raise Exception("Unknown resource type %s " % resource_type)
    except lib_exc.NotFound:
        return True
    return False
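Built on a check like the one above, a test can poll for eventual deletion instead of asserting immediately. Here is a minimal sketch of such a polling helper; the helper's name and timeout defaults are our own, not part of the client:

```python
import time


def wait_for_deletion(client, resource_type, resource_id,
                      timeout=60, interval=1):
    """Poll is_resource_deleted until it returns True or time runs out."""
    deadline = time.time() + timeout
    while time.time() < deadline:
        if client.is_resource_deleted(resource_type, resource_id):
            return True
        time.sleep(interval)
    # The resource still exists after the timeout expired.
    return False
```

A test teardown could call this after issuing a delete, failing the test if False comes back.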


Tempest Test Structure:


Now let's look at a Tempest test to understand the common flow. Like most testing frameworks, ours follows the familiar pattern of setup, test, and teardown. There are a few types of tests in Neutron and Neutron LBaaS: API, scenario, stress, and unit. Here's the snippet about API tests, taken from the Tempest documentation; it also highlights why they chose not to use the OpenStack Python clients:

API tests are validation tests for the OpenStack API. They should not use the existing python clients for OpenStack, but should instead use the tempest implementations of clients. Having raw clients let us pass invalid JSON to the APIs and see the results, something we could not get with the native clients.

When it makes sense, API testing should be moved closer to the projects themselves, possibly as functional tests in their unit test frameworks.

As for scenario tests, these are end-to-end tests that validate the inputs, the outputs, and as many points in between as possible. For the LBaaSv2 agent, API tests will be critical for exercising proper handling of requests and responses at the API endpoint. To validate end-to-end functionality, we will implement a small scenario test here, using the new plugin structure.

Load Balancer Test

class LoadBalancersTestJSON(base.BaseAdminTestCase):

    @classmethod
    def resource_setup(cls):
        super(LoadBalancersTestJSON, cls).resource_setup()
        if not test.is_extension_enabled('lbaas', 'network'):
            msg = "lbaas extension not enabled."
            raise cls.skipException(msg)
        network_name = data_utils.rand_name('network')
        cls.network = cls.create_network(network_name)
        cls.subnet = cls.create_subnet(cls.network)
        cls.create_lb_kwargs = {'tenant_id': cls.subnet['tenant_id'],
                                'vip_subnet_id': cls.subnet['id']}
        cls.load_balancer = \
            cls._create_active_load_balancer(**cls.create_lb_kwargs)
        cls.load_balancer_id = cls.load_balancer['id']

    @test.attr(type='smoke')
    def test_create_load_balancer_with_tenant_id_field_for_admin(self):
        """Test create load balancer with tenant id field from subnet.

        Verify tenant_id matches when creating loadbalancer vs.
        load balancer(admin tenant)
        """

        load_balancer = self.load_balancers_client.create_load_balancer(
            tenant_id=self.subnet['tenant_id'],
            vip_subnet_id=self.subnet['id'])
        self.addCleanup(self._delete_load_balancer, load_balancer['id'])
        admin_lb = self.load_balancers_client.get_load_balancer(
            load_balancer.get('id'))

        assert load_balancer.get('tenant_id') == admin_lb.get('tenant_id')
        folder_name = "Project_%s" % admin_lb.get('tenant_id')
        self._wait_for_load_balancer_status(load_balancer['id'])
        assert self.bigip_client.bigip.tm.sys.folders.folder.exists(
            name=folder_name)


We have overridden base.BaseAdminTestCase's version of resource_setup to implement our own setup logic. We then created a test, tagged with the 'smoke' test attribute. The test creates a load balancer, asserts that it was created successfully, then validates through the bigip_client that the expected tenant folder was created on the BIG-IP device. The bigip_client does not, and probably should not, live in the agent's repo, but it is here for now for simplicity. It will likely move into the f5-openstack-test project so that any Tempest test writer can benefit from the client, and test writers can subclass it for more specific purposes in their own tests.
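The _wait_for_load_balancer_status call in the test above is worth a note: LBaaSv2 load balancer creation is asynchronous, so a test must poll the load balancer's provisioning_status before asserting on back-end state. A minimal sketch of such a helper follows; the function name, field name handling, and defaults are our own assumptions, not the actual base-class implementation:

```python
import time


def wait_for_load_balancer_status(client, lb_id, status='ACTIVE',
                                  timeout=300, interval=5):
    """Poll get_load_balancer until provisioning_status reaches `status`."""
    deadline = time.time() + timeout
    while time.time() < deadline:
        lb = client.get_load_balancer(lb_id)
        if lb.get('provisioning_status') == status:
            return lb
        time.sleep(interval)
    raise RuntimeError(
        "Load balancer %s did not reach status %s" % (lb_id, status))
```

Only after this returns is it safe to query the BIG-IP device for the tenant folder, since the agent provisions the device as part of reaching ACTIVE.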

Here we've shown how to write a new test, and similar logic could be applied to amend existing Tempest tests. This way, we could leverage what already exists in the Tempest test library for a project like Neutron LBaaS and have those tests call out to the bigip_client, when appropriate, to validate that some event did or did not occur.

Further Reading:

Tempest Test Field Guide: http://docs.openstack.org/developer/tempest/field_guide/index.html