Introduction

Containers are all the rage these days. Docker is a popular environment for running containers, and according to a recent survey (https://www.datadoghq.com/docker-adoption), Docker use increased by 40% over the past year. Orchestration systems manage those containers; one popular orchestration system, Kubernetes, was created by Google and is quickly growing in popularity. Systems like Kubernetes have enabled the rapid rise of containers by wrangling hundreds or even thousands of containers in a production setting. This article provides a high-level overview of containers and Kubernetes, followed by a real-world walkthrough of installing a simple application into Kubernetes through the Google Container Engine (GKE). Using GKE costs some money, but $20 should be sufficient to work through the examples listed here. I am assuming that you, the reader, have a background in traditional data center IT and are familiar with concepts such as virtual machines and basic IP networking. If so, this article will provide a gentle introduction to containers and Kubernetes, giving you a basic understanding of what is driving the hype. Later articles in this series will dive deeper into Kubernetes and its integration with BIG-IP.

Containers

Before talking about Kubernetes, it is important to define containers. If you are familiar with virtual machines, then containers will seem similar: they behave like lightweight virtual machines. Containers are not virtual machines, and are based on completely different technology, but they create the same illusion of an independent machine. The difference is that all containers on a node share a common kernel and set of device drivers. This architecture gives containers some advantages over virtual machines. Containers start and stop much faster than virtual machines can boot and shut down; some containers start in mere seconds. Containers also have no hypervisor and therefore do not suffer the performance overhead of a hypervisor. In addition, containers use RAM as needed instead of having RAM pre-allocated. As a result, containers consume fewer resources than equivalent virtual machines, using less hardware for the same workload. Docker is used and referenced in this article; the name refers to several things: a company, a set of technologies, and a command used to manage and run containers. (The Docker logo is a whale carrying cargo containers.) Containers form the foundation of Kubernetes.

Kubernetes Concepts

Kubernetes introduces several concepts. Some concepts are new to the data center IT professional, and some are older concepts used in a new way. What follows is a high-level discussion of the main concepts necessary to deploy a simple application.

Components

Several components of Kubernetes need definition. The definitions supplied here are generalizations and simplifications; virtually every definition will have some exceptions. In the interest of digestibility for the professional coming from a data center background, these concepts have been simplified, and as you become more familiar with Kubernetes you will find that the definitions below are not 100% true in all cases. Also, not every Kubernetes concept is listed. Even so, a rudimentary understanding will aid in becoming proficient with Kubernetes. Table 1 provides a glossary.

Table 1: Simplified Kubernetes Glossary

Concept        Definition
-----------    ----------------------------------------------------------------------
Cluster        A collection of nodes for use in Kubernetes, provided by software outside of Kubernetes
Node           A physical or virtual operating system instance
Pod            A collection of one or more containers that comprise an application, running on a single node
Container      A container instance of an operating system running part (or all) of an application
Deployment     A collection of Kubernetes components, such as pods and containers, that produce an application
Replica Set    A set of replicated pods

With the above glossary, we can discuss how the various concepts relate to each other. Two views of these relationships emerge: the cluster view and the deployment view. The cluster view examines the concepts from the perspective of the infrastructure components, while the deployment view examines them from the perspective of the application. The two views overlap.

Cluster View

The cluster view looks at the infrastructure components and how they relate. See Figure 1, where the Docker logo represents a running container. The highest-level concept is the cluster, which is provided to Kubernetes by other software. The cluster consists of one or more nodes, which are physical or virtual operating system instances. Each node runs zero or more pods, and a pod is a collection of containers that comprise an application.

Figure 1: Kubernetes Cluster View

If viewed from a bottom-up perspective, each application will require one or more containers, and those containers should all run on the same node. The collection of containers for the application can be treated as a single unit running on a node: a pod. Each pod runs on a single node and is never split across nodes, ensuring that inter-container networking within a pod stays on the same node. The collection of nodes forms the cluster.

This collection of infrastructure components makes an application available but does not define the application itself. The deployment view does that.

Deployment View

While the cluster view examines the infrastructure components, the deployment view examines the application. The high-level concept is a deployment, which describes all of the components necessary for an application. This description does not cover infrastructure minutiae, such as which node a container runs on; it is the job of Kubernetes to figure that out. Instead, the deployment describes higher-level application parameters: which containers make up a pod, and how many replicas of that pod to maintain (a replica set). Kubernetes then ensures that the pods are deployed on nodes within the cluster and that the proper number of replica pods is running. Should a pod (or a container within a pod) fail, Kubernetes will stop that pod and deploy another. That way, only fully functioning pods are running at any moment in time, and the correct number of pods is always running.
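As a concrete illustration, a deployment is usually written as a YAML manifest. The sketch below is hypothetical (the names, labels, and image are placeholders, and the field names reflect current Kubernetes API versions), but it shows how one object ties together the replica count and the containers in a pod:

```yaml
# Hypothetical deployment manifest: three replica pods,
# each running a single container built from the named image.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-node
spec:
  replicas: 3                  # Kubernetes keeps exactly 3 pods running
  selector:
    matchLabels:
      app: hello-node
  template:                    # the pod template
    metadata:
      labels:
        app: hello-node
    spec:
      containers:
      - name: hello-node
        image: gcr.io/example-project-164520/hello_world
        ports:
        - containerPort: 8080
```

Notice that nothing in the manifest says which nodes to use; scheduling pods onto nodes is left entirely to Kubernetes.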

Figure 2: Kubernetes Deployment View

Kubernetes concepts can be viewed from two perspectives: the cluster view and the deployment view. Both views demonstrate how the various Kubernetes components work together to run applications.

Simple Application

With the above concepts in mind, let’s deploy a simple application. This example uses the Google Container Engine: https://cloud.google.com/. This service costs money, but it is very inexpensive for experimenting; expect to spend perhaps $20 doing all of the examples in this article series. Since cloud providers regularly raise and lower prices, exact costs will vary.

Console

Let’s get started. Once at the Container Engine web site, click on the console link in the upper right-hand corner.


The console is where you can manage a large portion of the deployment. First, we need to create a project.

If this is a new account, you may need to enable some features.

  1. Enable billing for your project.
  2. Enable the Container Registry API.

Create Project

Projects create a level of isolation between various, well, projects. Click on the CREATE PROJECT link in the top center of the page:


Give it the simple name of “example-project” and click the CREATE button. It may take a few seconds for the project to be created. Watch the spinning notification icon in the upper right corner. The actual project name will have a dash and number suffix, for example my project was named example-project-164520. Remember the actual name of your project and substitute it for the project name in the examples below. Once done, the next step is to create a cluster.

Create Cluster

Choose the container clusters menu option from the menu on the left.


You might see a notice that the Container Engine is initializing. If so, just wait until it finishes.


Once initialized, click on the “Create a container cluster” button and name the new cluster “cluster-1”.


Take the defaults and click the “Create” button. It will take several minutes for the cluster to be created. From here on out, we can use the command line to deploy a quick application.

Google Cloud Shell

The next commands will be entered into the Google Cloud Shell. This provides a Linux command-line environment. Click on the Cloud Shell icon in the upper right corner.


It may take a few minutes for the shell to initialize.

Once the shell initializes, make a directory for this project.

mkdir hello_world

And change to it.

cd hello_world

Next get a hello world example written in JavaScript from Google’s GitHub account.

wget https://raw.githubusercontent.com/GoogleCloudPlatform/nodejs-docs-samples/master/containerengine/hello-world/server.js

Get the Dockerfile too (this tells Docker how to build the container).

wget https://raw.githubusercontent.com/GoogleCloudPlatform/nodejs-docs-samples/master/containerengine/hello-world/Dockerfile
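If you are curious, the Dockerfile is only a few lines. A sketch of what such a Dockerfile typically contains (the exact base image tag in Google's sample may differ):

```dockerfile
# Start from an official Node.js base image.
FROM node:6
# Copy the application into the image.
COPY server.js .
# Document the port the application listens on.
EXPOSE 8080
# Run the server when the container starts.
CMD node server.js
```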

Build the Docker image. Note the space and trailing dot at the end of the command; the dot tells Docker to use the current directory as the build context. Also note that the number in your project name will be different, so substitute your project name where shown.

docker build -t gcr.io/example-project-164520/hello_world .

The last two lines of the output should look similar to:

Removing intermediate container ea29143f9b9f
Successfully built bd714f81a698

Next we need to fetch the cluster credentials, which authorizes the command line to run kubectl commands.

gcloud container clusters get-credentials cluster-1 --zone us-central1-a

Next we need to push the image to the container registry so that the cluster’s nodes can access it. Note that the number in your project name will be different.

gcloud docker -- push gcr.io/example-project-164520/hello_world

Deploy the application. Note that the number in your project name will be different.

kubectl run hello-node --image=gcr.io/example-project-164520/hello_world --port=8080

Expose the app to a public IP.

kubectl expose deployment hello-node --type=LoadBalancer
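Behind the scenes, the kubectl expose command creates a Kubernetes service object. A rough sketch of the equivalent manifest is shown below; the selector label is an assumption based on the label that kubectl run typically applies to the hello-node deployment:

```yaml
# Approximate service object that "kubectl expose" generates:
# a cloud load balancer forwarding port 8080 to the hello-node pods.
apiVersion: v1
kind: Service
metadata:
  name: hello-node
spec:
  type: LoadBalancer
  selector:
    run: hello-node          # assumed label applied by "kubectl run"
  ports:
  - port: 8080
    targetPort: 8080
```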

List the EXTERNAL-IP. If the IP is pending, rerun the command until it appears.

kubectl get service hello-node

Note the value for EXTERNAL-IP and point a browser to http://EXTERNAL-IP:8080 (e.g., http://130.211.122.50:8080) to see the message.


Whew. All of that just for a simple message. There are several things you can see now using variants of the kubectl get command. Try the following commands.

kubectl get pods
kubectl get deployments
kubectl get nodes

Clean Up

There are several things that need to be removed, effectively in the reverse order of installing. Note that the number in your project name will be different.

kubectl delete service hello-node
kubectl delete deployment hello-node
gcloud beta container images delete gcr.io/example-project-164520/hello_world

Make sure to press Y when prompted.

docker rmi gcr.io/example-project-164520/hello_world
cd ..
rm -r hello_world/

Finally, through the GUI, remove the cluster.


You may have noticed that we left the project defined. The GUI seems to have problems when all projects are deleted, so leave that one there for now.

Conclusion

If you have made it this far, you now understand some of the concepts behind Kubernetes. In addition, you have deployed a simple application into containers in Kubernetes. At this point you might be wondering what the hype is all about, given the number of commands needed just to deploy an application. Fear not. In the next installment, you will learn about automation tools to simplify and automate complex deployments that have multiple containers per pod and multiple replicas.