F5 BIG-IP deployment with OpenShift - multi-cluster architectures

This article is a continuation of the previous articles on OpenShift and BIG-IP in this series.

Introduction

During the last quarter of 2023, multi-cluster support was added to Container Ingress Services (CIS). This functionality makes it possible to load balance services that are spread across multiple clusters. It is important to remark that this functionality is Service oriented: load balancing decisions are made independently for each Service. Moreover, the different clusters don't have to be mirror copies of each other; applications can be spread heterogeneously across the clusters, and the clusters can even run different versions of OpenShift. This is shown in the next picture.



Please also note that although the picture might suggest a 1-tier deployment, this functionality can be used in either 1-tier or 2-tier deployments, in both cases in a Layer 7 (application/service oriented) manner.

You can watch an overview of this functionality in the next video.

 

Scenarios and use cases

This CIS multi-cluster feature is meant to be used in the following scenarios:

  • multiple clusters within the same data centre
  • multiple clusters in stretched data centres
  • multiple clusters in different Availability Zones of the same cloud region

This multi-cluster feature makes it possible to expose services from multiple clusters under a single VIP, regardless of where they are hosted. This allows:

  • OpenShift migrations (therefore this applies to single-cluster deployments too)
  • Increasing capacity
  • Increasing application availability, e.g. during OpenShift platform maintenance windows
  • Blue-green (A/B testing) across clusters
  • Route sharding across clusters (spreading the applications across the clusters arbitrarily)
  • Splitting large clusters into smaller ones

Per-service multi-cluster architecture

In the Kubernetes ecosystem, many external load balancers just load balance across the different Ingress Controller instances without any application awareness. This is not the case with BIG-IP. BIG-IP is Kubernetes Service aware and can make load balancing decisions (and apply BIG-IP functionalities) on a per-application basis, depending on the L7 route. This BIG-IP and CIS functionality also applies when using CIS in a multi-cluster deployment. This allows applications to be spread across the different clusters in a heterogeneous manner, and the BIG-IP can health check them individually.

It is also worth remarking that the CIS multi-cluster functionality is:

  • agnostic to whether the deployment uses a 1-tier or 2-tier approach
  • independent of the ingress controller(s) used; individual L7 routes can always be health checked

The next picture shows how CIS is deployed. Note that there are two CIS instances running in different clusters for redundancy purposes.


Notice in the picture that clusters other than 1 and 2 don't have a CIS instance in them. When using more than 2 clusters there is no need for additional CIS instances: the additional clusters will be externally managed by CIS.

Notice as well that the picture only reflects a single BIG-IP. When using a BIG-IP HA pair, each BIG-IP will have its own CIS instances. This means that a multi-cluster setup will usually use 4 CIS instances. In the CIS High Availability section we will see how this can be reduced to 2 instances.

Only the clusters with CIS instances will have the manifests that define the L7 routes (NextGen Routes or VirtualServer CRDs). On the other hand, all clusters that host application PODs will need to have the Service manifests; these will be used by CIS to discover the workloads.
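
As a reference, below is a minimal sketch of such a Service manifest. It would typically be created with the same name and namespace in every cluster that hosts the application's PODs so that CIS can discover the endpoints in each of them; the names, labels and ports shown are illustrative only.

```yaml
# Hypothetical application Service; create it in every cluster hosting the application PODs
apiVersion: v1
kind: Service
metadata:
  name: myapp
  namespace: shop
spec:
  type: ClusterIP          # NodePort is also supported by CIS multi-cluster
  selector:
    app: myapp             # must match the application POD labels in that cluster
  ports:
  - name: http
    port: 8080
    targetPort: 8080
```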

CIS multi-cluster support works with Services of type either ClusterIP or NodePort. This article prefers the ClusterIP type, as described in the article F5 BIG-IP deployment with OpenShift - platform and networking options. When using ClusterIP mode it is important to realize that the addressing of each cluster's POD network cannot overlap with that of any other cluster. This must be taken into account before building the OpenShift clusters, given that it is not possible to change the POD network of an OpenShift cluster after it is built.
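
For illustration, the POD network is defined in the networking section of each cluster's install-config.yaml at installation time. A sketch with example, non-overlapping CIDRs could look as follows; the CIDR values are placeholders and only the clusterNetwork needs to be unique across clusters for this purpose.

```yaml
# Cluster 1 - install-config.yaml (networking section only, example values)
networking:
  networkType: OVNKubernetes
  clusterNetwork:
  - cidr: 10.128.0.0/14     # POD network of cluster 1
    hostPrefix: 23
---
# Cluster 2 - a different, non-overlapping POD network
networking:
  networkType: OVNKubernetes
  clusterNetwork:
  - cidr: 10.132.0.0/14     # POD network of cluster 2
    hostPrefix: 23
```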

Multi-cluster load balancing modes

In order to adapt to all customer use cases, CIS can operate in the following modes:
  • Active-Active: services in all clusters are treated equally.
  • Ratio: clusters receive a different share of traffic, which can be changed on the fly.
  • Active-Standby: meant to be used when applications in the clusters cannot be simultaneously Active.

These are elaborated next.

Active-Active mode

In this mode a single pool is created for each application, regardless of which cluster it is in. In other words, each application has its own pool, whose members are all the endpoints of the application's Kubernetes Service in all clusters. Note in the next figures how all the PODs in all clusters are always used, regardless of the CIS HA state.

Ratio mode

As with Active-Active mode, in this mode all the PODs in all clusters are always used regardless of the CIS HA state. But unlike Active-Active mode, a given application will have a separate pool for each cluster, which is the key to this multi-cluster method. For a given application, the load balancing is a two-step process:

  1. The administrator-set ratio configured in CIS' global ConfigMap is used to select the Service of a given cluster (a pool) out of all clusters (the set of pools).

  2. The LTM load balancing algorithm specified for the application in the manifests is used to load balance within the selected pool.

In other words, multi-cluster Ratio load balancing applies the administrative decision first, and then a regular dynamic load balancing algorithm within the selected pool, for example round-robin, least connections or application response time.
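
As an illustration, the per-cluster ratios are declared in CIS' global (extended) ConfigMap. The sketch below follows the structure documented for CIS multi-cluster, but treat the exact field names as version dependent, and the cluster names, secrets, addresses and ratio values as hypothetical placeholders.

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: global-cm            # referenced by CIS, e.g. with --extended-spec-configmap
  namespace: kube-system
data:
  extendedSpec: |
    mode: ratio              # other mode names per the CIS docs: active-active, active-standby
    highAvailabilityCIS:
      primaryEndPoint: http://192.0.2.20:30480   # /ready endpoint of the Primary CIS (placeholder)
      probeInterval: 30
      retryInterval: 3
      primaryCluster:
        clusterName: cluster1
        secret: kube-system/kubeconfig-cluster1  # kubeconfig Secret to reach cluster1
        ratio: 3
      secondaryCluster:
        clusterName: cluster2
        secret: kube-system/kubeconfig-cluster2
        ratio: 2
    externalClustersConfig:                      # clusters without a CIS instance
    - clusterName: cluster3
      secret: kube-system/kubeconfig-cluster3
      ratio: 1
```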

Active-Standby mode

This mode requires two CIS instances for each BIG-IP (in the other modes only one CIS instance per BIG-IP is mandatory). In Active-Standby mode the workload PODs (applications) in the cluster where CIS is Standby are not considered. This mode can be used in Disaster Recovery deployments or when only the applications in one of the clusters can be Active at a given moment (single-master applications).

Although this mode is typically expected to be used in 2-cluster scenarios, it does support handling more than 2 clusters. In such a case, the workloads in the additional clusters are always considered Active workloads. This is useful when there are non-single-master applications as well (always-active or multi-master). The single-master applications would be placed in the clusters where the CIS instances are deployed, and the always-active applications would be placed in all other clusters. This is shown in the next picture.
 

CIS High Availability details

When using CIS in multi-cluster mode it is possible to run two CIS instances in HA mode instead of the typical single CIS instance used in single-cluster setups. These two CIS instances use the same image but have Primary and Secondary roles (specified with the --multi-cluster-mode=primary or =secondary parameter). Normally, the Primary CIS is the instance that writes the configuration in the BIG-IP. Only when the Primary CIS instance fails will the Secondary take over and assume the role of writing the configuration in the BIG-IP.
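
For reference, below is a minimal, hypothetical sketch of how the Primary CIS Deployment could look. Apart from --multi-cluster-mode, the flags shown are the usual CIS settings; all names, addresses, versions and the Secret/ServiceAccount are placeholders to adapt to your environment.

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: k8s-bigip-ctlr-primary
  namespace: kube-system
spec:
  replicas: 1
  selector:
    matchLabels:
      app: k8s-bigip-ctlr-primary
  template:
    metadata:
      labels:
        app: k8s-bigip-ctlr-primary
    spec:
      serviceAccountName: bigip-ctlr               # assumed to exist with the required RBAC
      containers:
      - name: k8s-bigip-ctlr
        image: f5networks/k8s-bigip-ctlr:latest    # pin a specific CIS version in practice
        args:
        - --bigip-url=https://192.0.2.10           # placeholder BIG-IP management address
        - --bigip-partition=openshift
        - --credentials-directory=/tmp/creds
        - --pool-member-type=cluster               # ClusterIP mode, as preferred in this article
        - --custom-resource-mode=true
        - --multi-cluster-mode=primary             # set to 'secondary' on the other CIS instance
        - --extended-spec-configmap=kube-system/global-cm
        volumeMounts:
        - name: bigip-creds
          mountPath: /tmp/creds
          readOnly: true
      volumes:
      - name: bigip-creds
        secret:
          secretName: bigip-login                  # contains 'username' and 'password' keys
```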

In order to detect CIS failures, two mechanisms are provided:

  • The /ready health check endpoint in CIS. This endpoint returns an HTTP 200 OK code if CIS can access the BIG-IP's AS3 API and the local cluster's Kubernetes API.
  • The primaryEndPoint parameter of the Secondary CIS, which is used to monitor the Primary CIS' /ready endpoint.

In order to expose the /ready endpoint, this article recommends the use of a NodePort type Service. This allows the /ready endpoint to be monitored regardless of which OpenShift node the CIS instances are running on. One Service for each CIS instance is required.
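
A hypothetical sketch of such a NodePort Service is shown below. The container port where CIS serves /ready and the POD labels are assumptions that must be matched to your actual CIS Deployment, and a second, analogous Service would be created for the other CIS instance.

```yaml
apiVersion: v1
kind: Service
metadata:
  name: cis-primary-ready
  namespace: kube-system
spec:
  type: NodePort
  selector:
    app: k8s-bigip-ctlr-primary   # must match the Primary CIS POD labels
  ports:
  - name: health
    port: 8080
    targetPort: 8080              # port where CIS exposes /ready (assumed, check your CIS version)
    nodePort: 30480               # example static NodePort, reachable on any node IP
```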

Note that the above mechanisms don't take into account whether the BIG-IPs are in Active or Standby mode.

The rest of this section describes an optional HA Group configuration in the BIG-IP. This configuration handles the corner case in which both CIS instances of a BIG-IP are unable to push the configuration to it.

The next pictures outline the resulting configuration with HA Groups when using redundant CIS instances for each BIG-IP:

In this setup, when the Primary CIS fails, the Secondary CIS takes over and is in charge of the configuration changes in the BIG-IP. The CIS instances of the peer BIG-IP work independently and continuously update their BIG-IP regardless of whether it is Active or Standby. If the two CIS instances of the Active BIG-IP fail, then the BIG-IP HA Group setup will fail over to the peer BIG-IP, provided it has a CIS instance available. This is achieved with a simple HA Group configuration in the BIG-IP, where each BIG-IP monitors its own CIS instances in a single pool, using the values indicated in the sample HA Group configuration table of the previous picture.

Notice that the screenshot above shows the condition where one of the CIS instances is down, reducing the HA score to 10; yet this doesn't produce a failover event thanks to the Active Bonus. More information on the BIG-IP HA Group feature can be found here.

The flexibility of the BIG-IP and CIS HA mechanisms allows alternative configurations, like monitoring a custom POD or using an external health endpoint to monitor the availability of a whole data centre. Another consideration is that when deploying CIS across different Availability Zones in a public cloud, the F5 BIG-IP Cloud Failover Extension should be used.

2-tier deployments

2-tier deployments were introduced in the article F5 BIG-IP deployment with OpenShift - platform and networking options. In this type of deployment, the BIG-IP sends the traffic to an ingress controller inside the cluster. This could be OpenShift's Router (HAProxy), NGINX+, Istio, or any other ingress such as an API manager, or a combination of these. BIG-IP can send the traffic to the appropriate ingress controller on a per-HTTP-request basis using a OneConnect profile. To effectively accomplish this and monitor each service individually, it is necessary to define the same L7 routes twice: in tier-2 (the ingress controllers) and in tier-1 (the BIG-IP). This is outlined in the next figure:

To accomplish the above, the L7 routes in the BIG-IP are defined using the F5 VirtualServer CRD (as an example), and a separate Service is also defined for each L7 route, even if the ingress controller is the same for all of them. This separate Service per L7 route results in a dedicated pool in the BIG-IP for each Service and, ultimately, its own monitoring.
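
An illustrative sketch of this pattern is shown below, assuming the OpenShift Router as the tier-2 ingress controller: one F5 VirtualServer CRD with a pool per L7 route, and one Service per route that selects the same Router PODs. All names, paths, labels, addresses and monitor settings are hypothetical and should be adapted to your environment.

```yaml
# Hypothetical VirtualServer: the L7 routes repeated in the BIG-IP (tier-1)
apiVersion: cis.f5.com/v1
kind: VirtualServer
metadata:
  name: shop-vs
  namespace: openshift-ingress
spec:
  host: shop.example.com
  virtualServerAddress: 192.0.2.100
  pools:
  - path: /cart
    service: router-cart           # dedicated Service -> dedicated BIG-IP pool and monitor
    servicePort: 80
    monitor:
      type: http
      send: "GET /cart HTTP/1.1\r\nHost: shop.example.com\r\n\r\n"
      interval: 10
      timeout: 31
  - path: /catalogue
    service: router-catalogue
    servicePort: 80
    monitor:
      type: http
      send: "GET /catalogue HTTP/1.1\r\nHost: shop.example.com\r\n\r\n"
      interval: 10
      timeout: 31
---
# One Service per L7 route; both select the same OpenShift Router PODs
apiVersion: v1
kind: Service
metadata:
  name: router-cart
  namespace: openshift-ingress
spec:
  selector:
    ingresscontroller.operator.openshift.io/deployment-ingresscontroller: default
  ports:
  - port: 80
    targetPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: router-catalogue
  namespace: openshift-ingress
spec:
  selector:
    ingresscontroller.operator.openshift.io/deployment-ingresscontroller: default
  ports:
  - port: 80
    targetPort: 80
```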

Using the CIS multi-cluster feature and next steps

To continue your journey with this unique feature, please check the official CIS documentation on this topic and the official examples, or the start-to-finish examples I created in this GitHub repository.

Conclusion and closing remarks

As more applications are moved into OpenShift, enterprises will need more clusters, and a flexible deployment model is required in which the cluster hosting a given application is transparent. The CIS multi-cluster feature is unique in the market and enables both complex scenarios, such as clusters in stretched data centres, and simplified OpenShift migrations with no application downtime, where upgrading in place is no longer required. Overall, this feature is here to stay and there is no better time than now to get ready.

 

 

Updated Feb 15, 2024
Version 3.0
