BIG-IP deployment options with Openshift

NOTE: this article has been superseded by these updated articles:

NOTE: the content below is outdated.

This article is meant to be an agnostic overview of the possible ways to use BIG-IP with RedHat Openshift:

 

  • either on-premises or in the cloud,
  • either in a 1-tier or a 2-tier arrangement, possibly alongside NGINX+.

 

This blog is structured as follows:

 

  • Introduction
  • BIG-IP platform flexibility: deployment, scalability and multi-tenancy options
  • Openshift networking options
  • BIG-IP networking options
  • 1-tier arrangement
  • 2-tier arrangement
  • Publishing the applications: BIG-IP CIS Kubernetes resource types
  • Service type Load Balancer
  • Ingress and Route resources, the extensibility problem
  • Full flexibility & advanced services with AS3 ConfigMaps
  • F5 Custom Resource Definitions (CRDs)
  • Summary of BIG-IP CIS Kubernetes resource types
  • Installing Container Ingress Services (CIS) for Openshift & BIG-IP integration
  • Conclusion

 

Introduction

 

When using BIG-IP with RedHat Openshift Kubernetes, a container component named Container Ingress Services (CIS from now on) is used to connect the BIG-IP APIs with the Kubernetes APIs. When a user configuration is applied or a status change occurs in the cluster, CIS automatically updates the configuration in the BIG-IP using the AS3 declarative API.

 

CIS supports IP Address Management (IPAM from now on) by making use of the F5 IPAM Controller (FIC from now on), which is deployed as a container as well. The FIC IPAM controller can have its own address database or be connected to an external provider such as Infoblox.

 

The next picture shows how these components fit together.

 

A single BIG-IP cluster can manage both VM and container workloads in the same cluster; separation between these can be set at the administrative level with partitions and at the network level with routing domains, if required.

 

BIG-IP offers a wide range of options to be used with RedHat Openshift. Often these have been driven by customer requests. The next sections cover these options and the considerations to take into account when choosing between them. The full documentation can be found in F5 clouddocs.

 

F5 BIG-IP container integrations are Open Source Software (OSS) and can be found in this github repository, where you will find additional technical details.

 

Please comment below if you have any questions about this article.

 

BIG-IP platform flexibility: deployment, scalability and multi-tenancy options

 

First of all, it should be clarified that the deployment option chosen is independent of whether the BIG-IP is an appliance, a scale-out chassis or a Virtual Edition. The configuration is always the same.

 

This platform flexibility also opens up different options for scalability, multi-tenancy, hardware acceleration and the use of HSMs/NetHSMs/SaaS-HSMs to keep the SSL/TLS private keys secure in a FIPS-compliant manner.

 

The following options apply to a single BIG-IP cluster:

 

  • A single BIG-IP cluster can handle several Openshift clusters. This requires at least one CIS instance per Openshift cluster.
  • It is also possible for a given CIS instance to manage only a selected set of namespaces. These namespaces can be specified with a list or a label selector.

 

In the BIG-IP, each CIS instance will typically write to a dedicated partition, isolated from other CIS instances. When using AS3 ConfigMaps, a single CIS instance can manage several BIG-IP partitions.
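
As a minimal sketch of how this mapping is expressed (the partition name, namespace label, addresses and the bigip-login Secret/service account are placeholders assumed to exist), the CIS deployment arguments tie one CIS instance to one set of namespaces and one BIG-IP partition:

    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: k8s-bigip-ctlr-payments
      namespace: kube-system
    spec:
      replicas: 1
      selector:
        matchLabels:
          app: k8s-bigip-ctlr-payments
      template:
        metadata:
          labels:
            app: k8s-bigip-ctlr-payments
        spec:
          serviceAccountName: bigip-ctlr          # assumed to exist with the required RBAC
          containers:
          - name: k8s-bigip-ctlr
            image: f5networks/k8s-bigip-ctlr
            args:
            - --bigip-url=10.192.75.60             # BIG-IP management address (placeholder)
            - --bigip-partition=payments           # dedicated partition for this CIS instance
            - --namespace-label=team=payments      # watch only this team's namespaces
            - --credentials-directory=/tmp/creds   # BIG-IP credentials mounted from a Secret
            - --pool-member-type=cluster
            - --insecure=true
            volumeMounts:
            - name: bigip-creds
              mountPath: /tmp/creds
              readOnly: true
          volumes:
          - name: bigip-creds
            secret:
              secretName: bigip-login              # assumed Secret with the BIG-IP credentials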

 

 

As indicated in the picture above, a single BIG-IP cluster can scale horizontally with up to 8 BIG-IP instances; this is referred to as Scale-N in the BIG-IP documentation.

 

When hard tenant isolation is required, a dedicated BIG-IP cluster or a vCMP guest instance should be used. vCMP technology can be found in the larger appliances and scale-out chassis. vCMP allows several independent BIG-IP instances to run as guests, even running different versions of BIG-IP. Each guest can be allocated a different amount of hardware resources. In the next picture, guests are shown as different colored bars spanning several blades (grey bars).

 

Openshift networking options

 

Kubernetes networking is provided by Container Network Interface plugins (CNI from now on), and Openshift supports the following:

 

  • OpenshiftSDN - supported since Openshift 3.x and still the default CNI. It makes use of VXLAN encapsulation.
  • OVNKubernetes - supported since Openshift 4.4. It makes use of Geneve encapsulation.

 

Feature-wise, these CNIs can be compared using the next table from the Openshift documentation.

 

 

Besides the above features, performance should also be taken into consideration. The NICs used in the Openshift cluster should support encapsulation off-loading, reducing the CPU load on the nodes. Increasing the MTU is also recommended, especially for encapsulating CNIs; this is suggested in Openshift's documentation as well and needs to be set at installation time in the install-config.yaml file, see this link for details.
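
As a minimal sketch of such an MTU customization for the OpenshiftSDN case, the cluster network MTU can be expressed as a Cluster Network Operator manifest added alongside install-config.yaml at installation time (the file is commonly saved as manifests/cluster-network-03-config.yml; the value below assumes 9000-byte jumbo frames on the node NICs minus the 50-byte VXLAN overhead, check the linked documentation for the authoritative procedure):

    apiVersion: operator.openshift.io/v1
    kind: Network
    metadata:
      name: cluster
    spec:
      defaultNetwork:
        type: OpenShiftSDN
        openshiftSDNConfig:
          mtu: 8950        # hardware MTU (9000) minus 50 bytes of VXLAN overhead
          vxlanPort: 4789  # default VXLAN port, shown here for completeness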

 

BIG-IP networking options

 

The first thing that needs to be decided is how we want the BIG-IP to access the PODs: do we want the BIG-IP to access the PODs directly, or do we want to use the typical arrangement of 2-tier load balancing with an in-cluster Ingress Controller?

 

Equally important is to decide how we want to do the NetOps/DevOps separation. CI/CD pipelines provide a management layer which allows several teams to approve or block changes before committing them. We are going to tackle how to achieve this separation without such an additional management layer.

 

BIG-IP networking option - 1-tier arrangement

 

In this arrangement, the BIG-IP is able to reach the PODs without any address translation. By using only one tier of load balancing (see the next picture), latency is reduced (potentially also increasing client session performance). Persistence is handled easily and the PODs can be directly monitored, providing an accurate view of the application's health.

 

 

As can be seen in the picture above, in a 1-tier arrangement the BIG-IP is part of the CNI network. This is supported for both the OpenshiftSDN and OVNKubernetes CNIs.

 

Configuration for BIG-IP with the OpenshiftSDN CNI can be found in clouddocs.f5.com. Currently, when using the OVNKubernetes CNI, the hybrid-networking option has to be used. In this latter case the Openshift cluster extends its CNI network towards the BIG-IPs using VXLAN encapsulation instead of the Geneve encapsulation used internally between the Openshift nodes. BIG-IP configuration steps for OVNKubernetes in hybrid mode can be followed in this repository created by F5 PM Engineer Mark Dittmer until this is published in clouddocs.f5.com.
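
For the OpenshiftSDN case, an illustrative part of this setup is adding the BIG-IP to the CNI network by declaring it as a pseudo-node with a HostSubnet resource (the name and address below are placeholders, and the exact procedure should be taken from the clouddocs guide). A rough sketch:

    apiVersion: network.openshift.io/v1
    kind: HostSubnet
    metadata:
      name: f5-bigip-node01
      annotations:
        # lets the BIG-IP reach PODs in all projects regardless of their VNID
        pod.network.openshift.io/fixed-vnid-host: "0"
        # asks Openshift to allocate a POD subnet for this pseudo-node
        pod.network.openshift.io/assign-subnet: "true"
    host: f5-bigip-node01
    hostIP: 10.192.125.60   # BIG-IP VTEP address (the self-IP used for the VXLAN tunnel)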

 

With a 1-tier configuration there is a finer demarcation line between NetOps (who traditionally manage the BIG-IPs) and DevOps teams that want to expose their services in the BIG-IPs. The next diagram proposes a solution for this using the IPAM controller.

 

 

The roles and responsibilities would be as follows:

 

  • The NetOps team would be responsible for setting up the BIG-IP along with its basic configuration, up to the network connectivity towards the cluster, including the CNI overlay.
  • The NetOps team would also be responsible for setting up the IPAM Controller and, with it, the assignment of IP addresses for each DevOps team or project (see the sketch after this list).
  • The NetOps team would also set up the CIS instances. Each DevOps team or set of projects would have its own CIS instance, which would be fed with IP addresses from the IPAM controller.
  • Each CIS instance would watch its DevOps team's or project's namespaces. These namespaces are owned by the different DevOps teams. The CIS configuration specifies the BIG-IP partition for the DevOps team or project.
  • The DevOps teams, as expected, deploy their own applications and create Kubernetes Service definitions for CIS consumption.
  • Moreover, the DevOps teams also define how the Services are published. This means creating Ingress, Route or any other CRD definitions for publishing the services, constrained by the NetOps-owned IPAM controller and CIS instances.
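
A minimal sketch of the NetOps-owned IPAM piece (team names, address ranges and the ipam-ctlr service account are illustrative): the F5 IPAM Controller is given one address range per ipamLabel, and each DevOps team then only references its label from its own resources.

    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: f5-ipam-controller
      namespace: kube-system
    spec:
      replicas: 1
      selector:
        matchLabels:
          app: f5-ipam-controller
      template:
        metadata:
          labels:
            app: f5-ipam-controller
        spec:
          serviceAccountName: ipam-ctlr           # assumed to exist with the required RBAC
          containers:
          - name: f5-ipam-controller
            image: f5networks/f5-ipam-controller
            args:
            - --orchestration=kubernetes
            - --ipam-provider=f5-ip-provider      # built-in provider; Infoblox is also possible
            # one range per ipamLabel; DevOps teams only reference the label name
            - --ip-range='{"team-a":"10.192.75.113-10.192.75.116","team-b":"10.192.75.117-10.192.75.120"}'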

 

BIG-IP networking option - 2-tier arrangement

 

This is the typical way in which Kubernetes clusters are deployed. When using a 2-tier arrangement, the External Load Balancer doesn't need to have awareness of the CNI and points to the NodePort addresses of the Ingress Controller inside the Kubernetes cluster. It is up to the infrastructure how to send the traffic to the Ingress Controllers. A 2-tier arrangement sets a harder demarcation line between the NetOps and DevOps teams. This type of arrangement using BIG-IP can be seen next.

 

 

Most External Load Balancers can only perform L4 functions, but BIG-IP can perform both L4 and L7 functions, as we will see in the next sections.

 

Note: the proxy protocol mentioned in the diagram is used to allow persistence based on the client's IP in the Ingress Controller, regardless of whether the traffic is sent encrypted or not.

 

Publishing the applications: BIG-IP CIS Kubernetes resource types

 

Service type Load Balancer

 

This is a Kubernetes built-in mechanism to expose Ingress Controllers in any External Load Balancer. In other words, this method is meant for 2-tier topologies. This mechanism is very limited in features, and extensibility is done by means of annotations. F5 CIS supports IPAM integration for this resource type. Check this link for all the possible options.
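
As a hedged example of how this looks with CIS and FIC (the names and the ipamLabel value below are placeholders and the label must match a range defined in the IPAM controller), a Service of type LoadBalancer only needs a CIS annotation to request an address:

    apiVersion: v1
    kind: Service
    metadata:
      name: myapp
      namespace: team-a
      annotations:
        cis.f5.com/ipamLabel: team-a   # CIS asks the F5 IPAM Controller for an address from this range
    spec:
      type: LoadBalancer
      selector:
        app: myapp
      ports:
      - name: http
        port: 80
        targetPort: 8080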

 

In general, a problem or limitation with Kubernetes annotations (regardless of the resource type) is that annotations are not validated by the Kubernetes API using a schema, therefore allowing the customer to push bad configurations into Kubernetes. The recommended practice is to limit annotations to simple configurations. Declarations with complex annotations will tend to silently fail or not behave as expected. Especially in these cases, CRDs are recommended. These will be described further down.

 

Ingress and Route resources, the extensibility problem.

 

Kubernetes and Openshift provide the following resource types for publishing L7 routes for HTTP/HTTPS services:

 

  • Routes: Openshift exclusive, eventually going to be deprecated.
  • Ingress: Kubernetes standard.

 

Although these are simple to use, they are very limited in functionality, and more often than not the Ingress Controllers require the use of annotations to augment the functionality. The F5 annotations available for Routes can be checked in this link and for Ingress resources in this link.
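
To illustrate the point (the annotation names below come from the CIS documentation linked above; the address, hostname and monitor values are placeholders), even a basic Ingress handled by CIS typically relies on annotations to select the VIP address and attach a health monitor:

    apiVersion: networking.k8s.io/v1
    kind: Ingress
    metadata:
      name: myapp-ingress
      namespace: team-a
      annotations:
        virtual-server.f5.com/ip: "10.192.75.101"   # VIP to be created on the BIG-IP
        virtual-server.f5.com/health: |
          [{"path": "myapp.example.com/", "send": "HTTP GET /health", "interval": 5, "timeout": 10}]
    spec:
      rules:
      - host: myapp.example.com
        http:
          paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: myapp
                port:
                  number: 80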

 

As mentioned previously, complex annotations should be avoided. When publishing L7 routes, the limitations of annotations are more evident and CRDs are even more recommended.

 

Route and Ingress resources can be further augmented by means of the CIS feature named Override AS3 ConfigMap, which allows specifying an AS3 declaration and attaching it to a Route or Ingress definition. This gives access to almost all the features & modules available in BIG-IP, as exhibited in the next picture.

 

 

Although Override AS3 ConfigMap eliminates the extensibility limitations of annotations, it shares the problem that these declarations are not validated by the Kubernetes API using the AS3 schema. Instead, they are validated by CIS, but note that ConfigMaps are not capable of reporting the status of the declaration. Thus, the ConfigMap declaration status can only be checked in the CIS logs.

 

Override AS3 ConfigMap declarations are meant to be applied to all the services published by the CIS instance. In other words, this mechanism is useful to apply a general policy or shared configuration across several services (e.g. WAF, APM, elaborate monitoring).

 

Full flexibility and advanced services with AS3 ConfigMap

 

The AS3 ConfigMap option is similar to Override AS3 ConfigMap, but it doesn't rely on having a pre-existing Ingress or Route resource. The whole BIG-IP configuration is set up in the ConfigMap. Using full AS3 ConfigMaps with the --hubmode CIS option allows defining the services in the DevOps teams' own namespaces and the VIP and associated configurations (i.e. TLS settings, IP intelligence, WAF policy, etc.) in a namespace owned by the NetOps team. This provides independence between the two teams.
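
A condensed sketch of such a ConfigMap is shown next (the tenant, application, namespace and address values are illustrative; the f5type/as3 labels and the data.template key are what CIS looks for). In hub mode, CIS then populates the pool members from Services in the DevOps namespaces that carry matching tenant/application/pool labels, as described in the CIS AS3 ConfigMap documentation.

    kind: ConfigMap
    apiVersion: v1
    metadata:
      name: team-a-as3
      namespace: netops-vips      # hypothetical namespace owned by the NetOps team (hub mode)
      labels:
        f5type: virtual-server
        as3: "true"
    data:
      template: |
        {
          "class": "AS3",
          "declaration": {
            "class": "ADC",
            "schemaVersion": "3.25.0",
            "team_a": {
              "class": "Tenant",
              "myapp": {
                "class": "Application",
                "template": "http",
                "serviceMain": {
                  "class": "Service_HTTP",
                  "virtualAddresses": ["10.192.75.101"],
                  "pool": "myapp_pool"
                },
                "myapp_pool": {
                  "class": "Pool",
                  "monitors": ["http"],
                  "members": [{ "servicePort": 8080, "serverAddresses": [] }]
                }
              }
            }
          }
        }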

 

Override AS3 ConfigMaps tend to be small because they are just used to patch, in other words extend, the Ingress- and Route-generated AS3 configuration. On the other hand, using full AS3 ConfigMaps requires creating a larger AS3 JSON declaration that Ingress/Route users are not used to.

 

Again, the AS3 definition within the ConfigMap is validated by BIG-IP and not by Kubernetes, which is a limitation because the status of the configuration can only be fully checked in the CIS logs.

 

F5 Custom Resource Definitions (CRDs)

 

Above we've seen the Kubernetes built-in resource types and their limitations in terms of advanced services and flexibility. We've also seen the swiss-army knife that AS3 ConfigMaps are, and the limitation of them not being Kubernetes schema-validated.

 

Kubernetes allows API augmentation through Custom Resource Definitions (CRDs), which define new resource types for any functionality needed.

 

F5 has created the following CRDs to provide the ease of use of the built-in resource types but with greater functionality, without requiring annotations. Each CRD is focused on different use cases:

 

  • IngressLink aims to simplify 2-tier deployments when using BIG-IP and NGINX+, by using the IngressLink CRD instead of a Service of type LoadBalancer. At present the IngressLink CRD provides the following features:
    • Proxy Protocol support or other customizations by using iRules.
    • Automatic health check monitoring of the NGINX+ readiness port in BIG-IP.
    • It is possible to link with NGINX+ using either NodePort or Cluster mode, in the latter case bypassing any kube-proxy/iptables indirection.
    • More to come...

 

When using IngressLink, both port 443 and port 80 are automatically exposed, sending the requests to the NGINX+ Ingress Controller.
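
A minimal IngressLink sketch (the address, namespace and iRule name are placeholders, and the selector is assumed to match the labels of the NGINX+ Ingress Controller Service) could look like this:

    apiVersion: cis.f5.com/v1
    kind: IngressLink
    metadata:
      name: nginx-ingresslink
      namespace: nginx-ingress
    spec:
      virtualServerAddress: "10.192.75.110"   # VIP created on the BIG-IP
      iRules:
      - /Common/Proxy_Protocol_iRule          # assumed to pre-exist on the BIG-IP
      selector:
        matchLabels:
          app: ingresslink                    # must match the NGINX+ Service labels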

 

  • TransportServer is meant to expose non-HTTP traffic; it can be any TCP or UDP traffic on any port, and it again offers several controls without requiring annotations.
  • VirtualServer has an L7 route-oriented approach analogous to the Ingress/Route resources, but provides advanced configurations whilst avoiding the use of annotations or Override AS3 ConfigMaps. It can be used in either a 1-tier or a 2-tier arrangement as well. In the latter case the BIG-IP takes the role of External Load Balancer for the in-cluster Ingress Controllers, yet still provides advanced L7 services.

 

All these new CRDs support IPAM.
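
For instance, a hedged VirtualServer sketch combining an L7 route with IPAM (the host, service name and ipamLabel are placeholders, and the TLSProfile reference assumes such a resource has been created separately):

    apiVersion: cis.f5.com/v1
    kind: VirtualServer
    metadata:
      name: myapp-vs
      namespace: team-a
    spec:
      host: myapp.example.com
      ipamLabel: team-a            # VIP address requested from the F5 IPAM Controller
      tlsProfileName: myapp-tls    # optional, references a separate TLSProfile resource
      pools:
      - path: /
        service: myapp
        servicePort: 80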

 

Summary of BIG-IP CIS Kubernetes resource types

 

So which resource types should be used? The next tables try to summarize their features, strengths and usability.

 

Ease of use

 

 

 

 

Network topology and overall suitability

 

 

Comparing CRDs, Ingress/Routes and ConfigMaps

 

Please note that the features of the different resources are continuously changing; please check the latest docs for more up-to-date information.

 

 

Installing Container Ingress Services (CIS) for Openshift & BIG-IP integration

 

CIS Installation can be performed in different ways:

 

  • Using Kubernetes resources (named manual in F5 clouddocs) - this is the most low-level approach and allows for ultimate customization.
  • Using the Helm chart. This provides life-cycle management of the CIS installation in any Kubernetes cluster.
  • Using the CIS Operator. Built on top of the Helm chart, it additionally provides Openshift-integrated management. In the screenshots below we can see how the Openshift Operator construct allows for automatic download and updates. We can also see the use of the F5BigIpCtlr resource type to configure the different instances.

 

At present, the IPAM controller installation is only done using Kubernetes resources.

 

After these components are created, the VXLAN configuration needs to be created in the BIG-IP; this can be automated using any of the BIG-IP automation tools, mainly Ansible and Terraform.
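
As a hedged Ansible sketch (the host/credential variables, tunnel names and addresses are placeholders, and the tmsh commands follow the OpenshiftSDN VXLAN guide referenced earlier, so check it for the authoritative values), the tunnel profile, tunnel and self-IP could be created with the f5networks.f5_modules collection:

    # assumes the f5networks.f5_modules Ansible collection is installed
    - name: Create the Openshift VXLAN configuration on the BIG-IP
      hosts: localhost
      connection: local
      gather_facts: false
      tasks:
      - name: Create VXLAN profile, tunnel and self-IP (values are placeholders)
        f5networks.f5_modules.bigip_command:
          commands:
          - create net tunnels vxlan vxlan-mp flooding-type multipoint
          - create net tunnels tunnel openshift_vxlan key 0 profile vxlan-mp local-address 10.192.125.60
          - create net self 10.131.4.10/14 allow-service all vlan openshift_vxlan
          provider:
            server: "{{ bigip_mgmt_address }}"
            user: "{{ bigip_user }}"
            password: "{{ bigip_password }}"
            validate_certs: false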

 

Conclusion

 

F5 BIG-IP provides several options for deployment in Openshift with unmatched functionality, whether used as an External Load Balancer or as an Ingress Controller in a single-tier setup.

 

Three components are used for this integration:

 

  • The F5 Container Ingress Services (CIS), for connecting the Kubernetes API with the BIG-IP.
  • The F5 CIS Openshift Operator, for installing and managing CIS.
  • The F5 IPAM controller.

 

Resource types are the API used to define how Services or Ingress Controllers are published in the F5 BIG-IP. These are constantly being updated, and it is recommended to check F5 clouddocs for up-to-date information.

 

We are driven by your requirements. If you have any, please provide feedback through this post's comments section, your sales engineer, or via our github repository.
