Automate Application Delivery with F5 and HashiCorp Terraform and Consul

Written by HashiCorp guest author Lance Larsen

Today, more companies are adopting a DevOps approach and agile methodologies to streamline and automate the application delivery process. HashiCorp enables cloud infrastructure automation with a suite of DevOps tools that provide consistent workflows to provision, secure, connect, and run any infrastructure for any application. You may have heard of a few of them: Terraform, Vault, Consul, and Nomad.

In this article we will focus on HashiCorp Terraform and Consul, and how they accelerate application delivery by enabling network automation when used with F5 BIG-IP. Modern tooling, hybrid cloud computing, and agile methodologies have our applications iterating at an ever-increasing rate. The network, however, has largely lagged behind in infrastructure automation and remains one of the hardest bottlenecks to remove. Together, F5 and HashiCorp bring NetOps to your infrastructure, freeing your developers to tackle the increasing demands and scale of modern applications with a self-service, resilient network.

Terraform allows us to treat the BIG-IP platform “as code”, so we can provision network infrastructure automatically when deploying new services. Add Consul into the mix, and we can leverage its service registry to catalog our services and enable BIG-IP’s service discovery to update them in real time. As services scale up, scale down, or fail, BIG-IP will automatically update its configuration and route traffic only to available, healthy servers. No manual updates, no downtime. Good stuff!

When you’re done with this article, you should have a basic understanding of how Consul can provide dynamic updates to BIG-IP, and of how we can use Terraform for an “as-code” workflow. I’d encourage you to give this integration a try, whether in your own datacenter or in the cloud; HashiCorp tools go everywhere!

Note: This article uses sample IPs from my demo sandbox. Make sure to use IPs from your environment where appropriate.

What is Consul?

Consul is a service networking solution to connect and secure services across runtime platforms. We will be looking at Consul through the lens of its service discovery capabilities for this integration, but it’s also a fully fledged service mesh, as well as a dynamic configuration store. Head over to the HashiCorp learn portal for Consul if you want to learn more about these other use cases.

The architecture is a distributed, highly available system. Nodes that provide services to Consul run a Consul agent; a node could be a physical server, a VM, or a container. The agent is responsible for health checking the services it runs, as well as the node itself. Agents report this information to the Consul servers, giving us a catalog view of all running services.

Agents are mostly stateless and talk to one or more Consul servers, which are where data is stored and replicated. A cluster of Consul servers is recommended to balance availability and performance. A cluster usually serves a single low-latency network, but clusters can be joined across a WAN for multi-datacenter capability.

Let’s look at a simple health check for an Nginx web server. We’d typically run an agent in client mode on the web server node. Below is the check definition, in JSON, for that agent.

 

{
  "service": {
    "name": "nginx",
    "port": 80,
    "checks": [
      {
        "id": "nginx",
        "name": "nginx TCP Check",
        "tcp": "localhost:80",
        "interval": "5s",
        "timeout": "3s"
      }
    ]
  }
}

 

We can see we’ve got a simple TCP check on port 80, run every five seconds, for a service we’ve identified as nginx. If that web server were healthy, the Consul servers would reflect that in the catalog.
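For intuition, the agent’s “tcp” check boils down to attempting a TCP connection on an interval and reporting pass or fail. Here is a minimal Python sketch of that logic; it is an illustration of the idea, not Consul’s actual implementation:

```python
import socket

def tcp_check(host: str, port: int, timeout: float = 3.0) -> bool:
    """Mimic a Consul "tcp" check: a successful connect within the
    timeout is passing; a refusal or timeout is critical."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# The agent would run a check like this every "interval" (5s above)
# and report the result to the Consul servers.
```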

 

The above example is from a simple Consul datacenter that looks like this.

$ consul members
Node           Address          Status  Type    Build  Protocol  DC   Segment
consul         10.0.0.100:8301  alive   server  1.5.3  2         dc1  <all>
nginx          10.0.0.109:8301  alive   client  1.5.3  2         dc1  <default>

 

BIG-IP has an AS3 extension for Consul that allows it to query Consul’s catalog for healthy services and update its member pools accordingly. This is powerful because virtual servers can be declared ahead of an application deployment, and we do not need to provide a static set of IPs that may be ephemeral or become unhealthy over time. No more waiting, no ticket queues, no downtime. More on this AS3 functionality later.

Now, we’ll explore a little more below on how we can take this construct and apply it “as code”.

What about Terraform?

Terraform is an extremely popular tool for managing infrastructure. It lets us define infrastructure “as code” and manage its full lifecycle. Predictable changes and a consistent, repeatable workflow help you avoid mistakes and save time.

The Terraform ecosystem has over 25,000 commits, more than 1000 modules, and over 200 providers. F5 has excellent support for Terraform, and BIG-IP is no exception.

Remember that AS3 support for Consul we discussed earlier? Let’s take a look at an AS3 declaration for Consul with service discovery enabled. AS3 is declarative just like Terraform, and we can infer quite a bit from its definition. AS3 allows us to tell BIG-IP what we want it to look like, and it will figure out the best way to do it for us.

 

{
  "class": "ADC",
  "schemaVersion": "3.7.0",
  "id": "Consul_SD",
  "controls": {
    "class": "Controls",
    "trace": true,
    "logLevel": "debug"
  },
  "Consul_SD": {
    "class": "Tenant",
    "Nginx": {
      "class": "Application",
      "template": "http",
      "serviceMain": {
        "class": "Service_HTTP",
        "virtualPort": 8080,
        "virtualAddresses": [
          "10.0.0.200"
        ],
        "pool": "web_pool"
      },
      "web_pool": {
        "class": "Pool",
        "monitors": [
          "http"
        ],
        "members": [
          {
            "servicePort": 80,
            "addressDiscovery": "consul",
            "updateInterval": 5,
            "uri": "http://10.0.0.100:8500/v1/catalog/service/nginx"
          }
        ]
      }
    }
  }
}


 

We see this declaration creates a partition (tenant) named “Consul_SD”. In that partition we have a virtual server named “serviceMain”, and its pool members will be queried from Consul’s catalog using the List Nodes for Service API. The IP addresses of the virtual server and the Consul endpoint will be specific to your environment. In this example I’ve chosen to complement Consul’s health checking with some additional monitoring from F5, visible in the pool’s “http” monitor.

Now that we’ve learned a little bit about Consul and Terraform, let’s use them together for an end-to-end solution with BIG-IP.

Putting it all together

This section assumes you have an existing BIG-IP instance, and a Consul datacenter with a registered service. I use Nginx in this example. The HashiCorp getting started with Consul track can help you spin up a healthy Consul datacenter with a sample service.

Let’s revisit our AS3 declaration from earlier and apply it with Terraform. You can check out the full documentation for the BIG-IP provider here.

Below is our simple Terraform file. The “nginx.json” contains the declaration from above.

 

provider "bigip" {
  address  = "${var.address}"
  username = "${var.username}"
  password = "${var.password}"
}


resource "bigip_as3" "nginx" {
  as3_json    = "${file("nginx.json")}"
  tenant_name = "consul_sd"
}
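The provider block above references var.address, var.username, and var.password. A matching variables definition might look like the following sketch; the names mirror the snippet above, and real values should be supplied via a terraform.tfvars file or TF_VAR_* environment variables rather than hardcoded:

```hcl
# Hypothetical variables.tf to accompany the snippet above.
variable "address" {
  description = "Management address of the BIG-IP instance"
}

variable "username" {
  description = "BIG-IP admin username"
}

variable "password" {
  description = "BIG-IP admin password"
}
```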

 

If you are looking for a more secure way to store sensitive material, such as your BIG-IP provider credentials, you can check out Terraform Enterprise.

We can run a Terraform plan and validate our AS3 declaration before we apply it. 

 

$ terraform plan
Refreshing Terraform state in-memory prior to plan...
The refreshed state will be used to calculate this plan, but will not be
persisted to local or remote state storage.

------------------------------------------------------------------------

An execution plan has been generated and is shown below.
Resource actions are indicated with the following symbols:
  + create

Terraform will perform the following actions:

  # bigip_as3.nginx will be created
  + resource "bigip_as3" "nginx" {
      + as3_json    = jsonencode(
            {
              + Consul_SD     = {
                  + Nginx = {
                      + class       = "Application"
                      + serviceMain = {
                          + class            = "Service_HTTP"
                          + pool             = "web_pool"
                          + virtualAddresses = [
                              + "10.0.0.200",
                            ]
                          + virtualPort      = 8080
                        }
                      + template    = "http"
                      + web_pool    = {
                          + class    = "Pool"
                          + members  = [
                              + {
                                  + addressDiscovery = "consul"
                                  + servicePort      = 80
                                  + updateInterval   = 5
                                  + uri              = "http://10.0.0.100:8500/v1/catalog/service/nginx"
                                },
                            ]
                          + monitors = [
                              + "http",
                            ]
                        }
                    }
                  + class = "Tenant"
                }
              + class         = "ADC"
              + controls      = {
                  + class    = "Controls"
                  + logLevel = "debug"
                  + trace    = true
                }
              + id            = "Consul_SD"
              + schemaVersion = "3.7.0"
            }
        )
      + id          = (known after apply)
      + tenant_name = "consul_sd"
    }

Plan: 1 to add, 0 to change, 0 to destroy.

------------------------------------------------------------------------

Note: You didn't specify an "-out" parameter to save this plan, so Terraform
can't guarantee that exactly these actions will be performed if
"terraform apply" is subsequently run.

 

That output looks good. Let’s go ahead and apply it to our BIG-IP.

 

bigip_as3.nginx: Creating...
bigip_as3.nginx: Still creating... [10s elapsed]
bigip_as3.nginx: Still creating... [20s elapsed]
bigip_as3.nginx: Still creating... [30s elapsed]
bigip_as3.nginx: Creation complete after 35s [id=consul_sd]

Apply complete! Resources: 1 added, 0 changed, 0 destroyed.

 

Now we can check the Consul server and see if we are getting requests. We can see log entries for the Nginx service coming from BIG-IP below.

 

$ consul monitor -log-level=debug
2019/09/17 03:42:36 [DEBUG] http: Request GET /v1/catalog/service/nginx (104.222µs) from=10.0.0.200:43664
2019/09/17 03:42:41 [DEBUG] http: Request GET /v1/catalog/service/nginx (115.571µs) from=10.0.0.200:44072
2019/09/17 03:42:46 [DEBUG] http: Request GET /v1/catalog/service/nginx (133.711µs) from=10.0.0.200:44452
2019/09/17 03:42:50 [DEBUG] http: Request GET /v1/catalog/service/nginx (110.125µs) from=10.0.0.200:44780

 

Any client with access to the Consul API could make the same catalog request, so for illustration we can use cURL to produce the same response. Notice the address of the node running the service we are interested in; we will see this IP reflected in BIG-IP as our pool member.

 

$ curl http://10.0.0.100:8500/v1/catalog/service/nginx | jq
[
  {
    "ID": "1789c6d6-3ae6-c93b-9fb9-9e106b927b9c",
    "Node": "ip-10-0-0-109",
    "Address": "10.0.0.109",
    "Datacenter": "dc1",
    "TaggedAddresses": {
      "lan": "10.0.0.109",
      "wan": "10.0.0.109"
    },
    "NodeMeta": {
      "consul-network-segment": ""
    },
    "ServiceKind": "",
    "ServiceID": "nginx",
    "ServiceName": "nginx",
    "ServiceTags": [],
    "ServiceAddress": "",
    "ServiceWeights": {
      "Passing": 1,
      "Warning": 1
    },
    "ServiceMeta": {},
    "ServicePort": 80,
    "ServiceEnableTagOverride": false,
    "ServiceProxyDestination": "",
    "ServiceProxy": {},
    "ServiceConnect": {},
    "CreateIndex": 9,
    "ModifyIndex": 9
  }
]
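The mapping from a catalog entry to a pool member is straightforward: use ServiceAddress when it is set, otherwise fall back to the node’s Address, paired with ServicePort. A small Python sketch of that selection logic follows; it is an illustration, not BIG-IP’s actual code:

```python
import json

def pool_members(catalog_response: str) -> list:
    """Derive (address, port) pool members from a Consul
    /v1/catalog/service/<name> response body."""
    members = []
    for entry in json.loads(catalog_response):
        # ServiceAddress overrides the node Address when populated.
        address = entry.get("ServiceAddress") or entry["Address"]
        members.append((address, entry["ServicePort"]))
    return members

# Trimmed-down version of the response above:
sample = '[{"Address": "10.0.0.109", "ServiceAddress": "", "ServicePort": 80}]'
print(pool_members(sample))  # [('10.0.0.109', 80)]
```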

 

The network map of our BIG-IP instance should now reflect the dynamic pool.

Lastly, we should be able to verify that our virtual service actually works. Let’s try it out with a simple cURL request.

 

$ curl http://10.0.0.200:8080
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
    body {
        width: 35em;
        margin: 0 auto;
        font-family: Tahoma, Verdana, Arial, sans-serif;
    }
</style>
</head>
<body>
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p>


<p>For online documentation and support please refer to
<a href="http://nginx.org/">nginx.org</a>.<br/>
Commercial support is available at
<a href="http://nginx.com/">nginx.com</a>.</p>


<p><em>Thank you for using nginx.</em></p>
</body>
</html>

 

That’s it! Hello world from Nginx!

You’ve successfully registered your first dynamic BIG-IP pool member with Consul, all codified with Terraform! 

Summary

In this article we explored the power of service discovery with BIG-IP and Consul. We added Terraform to apply the workflow “as code” for an end-to-end solution.

Check out the resources below to dive deeper into this integration, and stay tuned for more awesome integrations with F5 and HashiCorp!

 

References

Published Sep 20, 2019
Version 1.0
