Welcome to week three of the Kubernetes and BIG-IP series. In the previous article we learned how easy it is to deploy complex applications into Kubernetes running on Google Container Engine (GKE). As you might imagine, that ease can quickly lead to large numbers of applications running in an environment. But what if you need application services for those applications? Suppose you want a centralized TLS policy for all applications, including those deployed into Kubernetes. What if you plan to implement DDoS protection at an operational level, rather than within the application? Suppose your organization intends to deploy applications using more sophisticated approaches, such as blue/green deployments or canary releases made possible by iRules. Perhaps you need other advanced traffic management capabilities. If only there were a way to bring the power of advanced application delivery controllers into Kubernetes, then Kubernetes applications could have the same assurances that you give on-premises and cloud applications. Until recently, blending BIG-IP with containers was impossible, but that ability is now available, and this article will walk you through it. We will deploy an application with multiple instances and then tie it into a BIG-IP for application delivery.


In order to perform the steps in this article, you will need a few things.

  • Access to Google Cloud and familiarity with using it (see the previous article for details)
  • A BIG-IP license

As long as you have the above two items, you are ready to go. The next section gives an overview of F5’s Container Connector.

Container Connector

Container Connector is a containerized application that you install into Kubernetes that enables a BIG-IP to control a pool of pods. Once configured, as pods are created and destroyed, the corresponding BIG-IP pool members are also created and destroyed. This allows the BIG-IP to manage traffic for the pods, while letting the developers continue to deploy applications into Kubernetes. In the next section you will walk through the deployment.


Deployment falls roughly into three sections: BIG-IP, Container Connector, and the actual application.

Deploy BIG-IP

To deploy BIG-IP in Google Cloud, go to the Google Cloud Launcher page and choose the “F5 BIG-IP ADC Best - BYOL” option.


Next, launch the BIG-IP.


The next page provides default settings for several virtual machine parameters. At the bottom of the page are some firewall defaults and a Deploy button. Click Deploy to deploy the BIG-IP.


It will take three or four minutes for the deployment to complete. Once the BIG-IP image boots, it will have a dynamic external IP address that changes on every reboot. In a real deployment we would take steps to obtain a static IP address but for this exercise, the external IP address is fine. Just be aware that the external address will change when the BIG-IP reboots. The next step is to set the admin password on the BIG-IP. To set the password, click on the SSH button.


You will see a message about Google Cloud trying to transfer keys to the VM.


After a few seconds you may see an error message.


Do nothing. Instead, wait for another 10 seconds or so and the SSH session will be established.

At the prompt, enter the command to modify the admin password.

modify auth password admin

You will be prompted for a new password and asked to confirm that password. Try to avoid characters that might create problems for Bash or other command shells, such as the bang (exclamation point) and the question mark. For this exercise, I have changed the password to “nimda5873” and will use that password below. Close the SSH browser tab.

The final step is to log into the BIG-IP instance and license it. Click on the instance name.


The next page shows details about the instance, including its external IP address. Make note of this address.


With the external IP address, log into the BIG-IP management interface by entering https://<external IP address>:8443 into the browser address bar.

Your browser may show a warning. Log into the device using the password you set above, then provide a license. Your BIG-IP is now provisioned and licensed. The final step is to create a partition called kubernetes. The name is case sensitive. Accept the default parameters.


Note that there are no virtual servers defined yet.


You’re all done with the BIG-IP. In the next section we will install the Container Connector.

Deploy Container Connector

This section installs and configures the Container Connector software that controls the BIG-IP. First, create a cluster as described in the previous article; all of the following commands are typed into the Google Cloud Shell. Deploying Container Connector involves two steps. The first step installs the software and configures communication with the BIG-IP. The second configures the software to interact with a particular Kubernetes service (app).

Install Container Connector Software

Allow the Google Cloud Shell to interact with Kubernetes.

gcloud container clusters get-credentials cluster-1 --zone us-central1-a

Next, create a Kubernetes secret that will hold the BIG-IP credentials in a secure fashion. Substitute your password for nimda5873 in the following command.

kubectl create secret generic bigip-login --namespace kube-system --from-literal=username=admin --from-literal=password=nimda5873
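Equivalently, the secret can be declared in a manifest and created with kubectl create -f. The sketch below uses the stringData field, which lets you supply the values in plain text (Kubernetes base64-encodes them on your behalf); the names bigip-login, username, and password match the command above.

```yaml
# Declarative equivalent of the "kubectl create secret" command above.
apiVersion: v1
kind: Secret
metadata:
  name: bigip-login
  namespace: kube-system
type: Opaque
stringData:            # plain-text values; stored base64-encoded by Kubernetes
  username: admin
  password: nimda5873  # substitute your own BIG-IP admin password
```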

Get a reference deployment file (f5-k8s-bigip-ctlr_image-secret.yaml) for the Container Connector.


Edit the file to change parameters. This is a YAML file and is sensitive to column position. In other words, do not alter the whitespace in front of parameters.

  • Change the bigip-url parameter to the external IP address of the BIG-IP followed by :8443 (for example, <external IP address>:8443).
  • Adjust the file to point to the new beta build (this is currently necessary but should be unnecessary soon):
image: "f5networks/k8s-bigip-ctlr:1.1.0-beta.1"
  • If you want more detailed logs, add a log-level argument in the same section as the bigip-url parameter.
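Putting those edits together, the container section of the edited file ends up looking roughly like the sketch below. The exact field layout depends on the reference file you downloaded, so treat this as an illustration rather than a drop-in replacement; the secretKeyRef entries pull the credentials from the bigip-login secret created earlier, and the partition name matches the kubernetes partition created on the BIG-IP.

```yaml
# Sketch of the relevant container section after editing (layout may differ
# slightly from your downloaded reference file).
containers:
- name: k8s-bigip-ctlr
  image: "f5networks/k8s-bigip-ctlr:1.1.0-beta.1"  # the beta build noted above
  env:
  - name: BIGIP_USERNAME
    valueFrom:
      secretKeyRef:            # credentials come from the bigip-login secret
        name: bigip-login
        key: username
  - name: BIGIP_PASSWORD
    valueFrom:
      secretKeyRef:
        name: bigip-login
        key: password
  args:
  - "--bigip-username=$(BIGIP_USERNAME)"
  - "--bigip-password=$(BIGIP_PASSWORD)"
  - "--bigip-url=<external IP address>:8443"  # your BIG-IP management address
  - "--bigip-partition=kubernetes"            # partition created earlier
```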

Save the edited file, then run the following command to install Container Connector.

kubectl create -f f5-k8s-bigip-ctlr_image-secret.yaml -n kube-system

You should see:

deployment "k8s-bigip-ctlr-deployment" created

The pod will be deployed within the kube-system namespace of Kubernetes. As a result, the pod is not normally visible, but you can monitor the status of all kube-system pods by typing the following command.

kubectl get $(kubectl get pods -o name -n kube-system | grep k8s-bigip-ctlr-deployment) -n kube-system -w

Within 30 seconds or so, you should see the status of the pod as Running. If the pod is crashing, you can see the logs with this command.

kubectl logs $(kubectl get pods -o name -n kube-system | grep k8s-bigip-ctlr-deployment) -n kube-system

If all is well, the logs will show that Container Connector has communicated with the BIG-IP and written no configuration (because no app has been deployed yet).

2017/06/08 23:21:59 [INFO] Wrote 0 Virtual Server configs

Next, we need to configure Container Connector to watch for an application.

Configure Container Connector to Watch for an Application

With Container Connector installed, we need to configure it to watch for an application. This step is done through a Kubernetes ConfigMap, which is a configuration file. You will have one ConfigMap per application.

First, download a reference ConfigMap (f5-resource-vs-example.configmap.yaml).


As with the deployment file, we need to edit this file. Make the following changes.

  • Set bindAddr to the internal (NOT external) IP address of the BIG-IP. The bindAddr line takes this form:
"bindAddr": "<internal IP address>"
    • Note: Using the internal IP address may seem counterintuitive since the browser will connect to the virtual server using the external address. Google uses SNAT to remap the destination address from the external address to the internal address before the BIG-IP sees the traffic. If the BIG-IP has a virtual server expecting traffic to the external address, it will never see that IP destination and will refuse the connections.
  • Change serviceName from myService to demo-app. This is the name of the Kubernetes service (app) to monitor.
  • Change servicePort from 3000 to 80. This is the port of the app where BIG-IP will send requests.
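After the edits, the data section of the ConfigMap looks roughly like this sketch. The schema version string and surrounding metadata come from the reference file and may differ in your copy; the bindAddr value is a placeholder for your BIG-IP's internal IP address, and the name k8s.vs matches the output shown below.

```yaml
# Sketch of the edited ConfigMap (schema version may differ in your file).
kind: ConfigMap
apiVersion: v1
metadata:
  name: k8s.vs
  namespace: default
  labels:
    f5type: virtual-server
data:
  schema: "f5schemadb://bigip-virtual-server_v0.1.2.json"
  data: |
    {
      "virtualServer": {
        "frontend": {
          "balance": "round-robin",
          "mode": "http",
          "partition": "kubernetes",
          "virtualAddress": {
            "bindAddr": "<internal IP address>",
            "port": 80
          }
        },
        "backend": {
          "serviceName": "demo-app",
          "servicePort": 80
        }
      }
    }
```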

Create the ConfigMap

kubectl create -f f5-resource-vs-example.configmap.yaml -n default

After the ConfigMap is successfully created, you should see:

configmap "k8s.vs" created

Your Container Connector is now fully configured, watching for a service named demo-app. The next section creates that service.

Deploy an App

Now all we need to do is deploy an app. The command below deploys a demo app listening on port 80 with two replicas.

kubectl run demo-app --replicas=2 --image f5devcentral/f5-demo-app --port=80
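For reference, the kubectl run command above corresponds roughly to a Deployment manifest like the following sketch. This assumes the run=demo-app label that kubectl run applies to its pods; on current clusters the apps/v1 API group is used.

```yaml
# Declarative sketch roughly equivalent to the "kubectl run" command above.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: demo-app
spec:
  replicas: 2                 # two instances of the demo app
  selector:
    matchLabels:
      run: demo-app           # kubectl run applies the run=<name> label
  template:
    metadata:
      labels:
        run: demo-app
    spec:
      containers:
      - name: demo-app
        image: f5devcentral/f5-demo-app
        ports:
        - containerPort: 80
```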

You can see the app running and that it has two replicas.

kubectl get pods

There is just one more step to deploy the app: we need to expose the pods as a service. A Kubernetes service is a component that encompasses the app regardless of which node hosts it or which pods are running. The service is the app boundary, while the node, IP, port, and pod can all change during the lifetime of the app.

kubectl expose deployment demo-app --port=80 --target-port=80 --type=NodePort
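The expose command is shorthand for a Service manifest along these lines (a sketch; the selector assumes the run=demo-app label that kubectl run applied to the pods):

```yaml
# Declarative sketch roughly equivalent to "kubectl expose deployment demo-app".
apiVersion: v1
kind: Service
metadata:
  name: demo-app
spec:
  type: NodePort              # exposes the service on a port of every node
  selector:
    run: demo-app             # matches the pods created by kubectl run
  ports:
  - port: 80                  # service port
    targetPort: 80            # container port
```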

Look at the BIG-IP. In the Network Map of the kubernetes partition, you can see that a virtual server, pool, and nodes have been created.

There are two things of note. First, all of the objects are in an unknown state. That is because health monitors have not been defined; in the interest of simplicity, health monitors are not covered in this article. The second thing you might notice is that there are three pool members, but only two pods. The reason is that the BIG-IP manages traffic to the nodes in the cluster, while the nodes themselves run a load balancer that balances between pods within a node. In the next article I will discuss the load balancer at the node level. For now, it is sufficient to know that traffic now moves through the BIG-IP, which handles all of the balancing across nodes. To recap, there are three nodes in the cluster, and that is what is listed on the BIG-IP. There is one more step before we can test this: Google has a firewall policy that by default does not allow port 80 traffic to anything, including the BIG-IP.

Update the Firewall Rules

The firewall needs to allow port 80 traffic to the BIG-IP. The simplest approach is to allow port 80 traffic to all external IP addresses. In our test environment we can do that, but in a production environment you would want to be more selective about which hosts are allowed to receive port 80 traffic.

gcloud beta compute firewall-rules create default-allow-http --allow tcp:80

Run the App

To see that this is actually working, point your browser to the external IP address of the BIG-IP.

If all goes well, you will see the demo-app splash page.


Notice the Server IP and Server Port. Refresh the page to see the values change as the requests are balanced across the nodes. The Server IP is a different address than we have seen before; that is because Kubernetes has a node-level load balancer (as noted above) that remaps the destination IP address and port to those expected by the container. The layers of virtualization involve several IP addresses, but the key point to remember is that all traffic for the application now goes through the BIG-IP. That means any advanced services are now available for this app. In front of the app you can put a Web Application Firewall, SSL offload, iRules, and anything else that can be placed on a BIG-IP. As the backend pods scale up and down and deployments change, the BIG-IP can still provide advanced application services.

Clean Up

Before shutting down this demonstration, some cleanup is in order. As before, delete the cluster. You should also either stop or delete the BIG-IP virtual machine. Finally, remove the firewall rule we added to allow port 80 access to the BIG-IP.

gcloud compute firewall-rules delete default-allow-http


This article started with the question of how to give Kubernetes the advanced application delivery services necessary for production workloads in the real world. You have successfully deployed a demo application running on a cluster and delivered it through a BIG-IP, making available the power of iRules, SSL offload, and many other capabilities. With this approach, you can continue to leverage skills for network operations in real time for Kubernetes workloads. The ability to deliver applications and make operational decisions can continue to be decoupled from the development cycle, ensuring that applications can remain available at all times. In the next (and final) article of this series, we will explore how to gain visibility into the traffic flowing between pods.

Series Index

Deploy an App into Kubernetes in less than 24 Minutes

Deploy an App into Kubernetes Even Faster (Than Last Week)

Deploy an App into Kubernetes Using Advanced Application Services

What's Happening Inside my Kubernetes Cluster?