Ready to Go! Deploying F5 Infrastructure Using Terraform

This article describes how using Terraform enables you to rapidly deploy F5 infrastructure. Having something that is "ready to go" is what building infrastructure with Terraform is all about!

The article also describes how you can customize your Terraform code to meet your particular needs. Once you have your specific design pattern, you have an automated way of rapidly creating, modifying, or destroying the network/application infrastructure over and over again in minutes, rather than hours or days.

My Chosen Environment

I will be using:

  • Google Cloud Platform
  • Terraform
  • GitHub for source control
  • VS Code for editing Terraform

 

There are also templates in the same repository that will work just as easily in AWS and Azure.

What Is Terraform?

Terraform is a tool produced by HashiCorp. In HashiCorp's own words:

Terraform is a solution for building, changing, and versioning infrastructure safely and efficiently. Terraform can manage existing and popular service providers as well as custom in-house solutions. Configuration files describe to Terraform the components needed to run a single application or your entire datacenter.
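For example, a minimal configuration file looks something like this (a sketch; the project ID and resource name are illustrative, not part of the template used below):

provider "google" {
  project = "my-project-id" # illustrative project ID
  region  = "us-west1"
}

# One VPC network described declaratively; "terraform apply" makes it real
resource "google_compute_network" "example" {
  name                    = "example-net"
  auto_create_subnetworks = false
}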

How Can You Deploy F5 Using Terraform?

There are many articles about how to install Terraform. This article assumes you have already installed Terraform and are ready to start deploying F5 infrastructure.

In this article we will show you how easy it is to:

  • Deploy an Example F5 Terraform Template
  • Modify the Vanilla Terraform Template to Add a Jump-Box

 

Why Would You Want to Modify the Generic Template to Add a Jump-Box?

Well, don’t put your management interface on the internet. That is not a good idea.

The Terraform example that I will use sets the management interface up with direct access to the internet. There are ACLs that you can configure to only allow connections from specific source IP addresses, which you should definitely employ even if you don’t use a jump-box.  

An additional layer of security is to add a jump-box so that you have to connect to the jump-box prior to accessing the management interface. From there you could also go ahead and smart card enable your jump-box or provide other two-factor authentication in order to further increase the security of the environment.

Using a jump-box is a good best practice, period. For example, CVE-2020-5902 is a critical vulnerability that attackers actively exploited on F5 management interfaces to do things like install coin-miners and malware or gain administrative access to the hacked device. If your management interface had been internet-facing, it is safe to assume that you would have been breached.

There were also reports from the FBI that state-sponsored actors were trying to exploit this flaw:

https://www.securityweek.com/iranian-hackers-target-critical-vulnerability-f5s-big-ip

By using a jump-box you are not placing your F5 management interfaces directly on the internet; you have to access them via an RDP connection to the jump-box. Note that you should also harden the jump-box itself and implement ACLs and two-factor authentication, as it presents a means of access. In this article we build the jump-box; further hardening (which could also be implemented in Terraform) is a best practice to make access to your management infrastructure more secure.
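As a sketch of what that hardening could look like in Terraform (the rule name, network, and source range below are illustrative, not part of the template), a firewall rule can restrict RDP access to the jump-box to a trusted source range:

resource "google_compute_firewall" "jumpbox_rdp" {
  name          = "allow-rdp-to-jumpbox"   # illustrative name
  network       = "my-net-mgmt"            # illustrative management network
  source_ranges = ["203.0.113.0/24"]       # replace with your trusted range
  allow {
    protocol = "tcp"
    ports    = ["3389"]
  }
}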

Deploy a Terraform Example That Deploys F5 Infrastructure

1) Fork the template

In this example, my starting point is to fork the templates published by a fellow F5er, Jeff Giroux. This way I keep my own copy and can make changes as appropriate for my environment.


2) Use git clone to make a local copy of the Terraform code.

git clone https://github.com/dudesweet/f5_terraform.git

 

This uses the "git" command to clone your forked version of the template from GitHub.

3) Explore the code with VS Code.

I am using VS Code as my local editor.

You can see that the template has directories for Azure, AWS, and GCP, each with different implementations of high availability, plus auto-scale use cases. Your design pattern of choice will depend upon your requirements. In my case I am going to choose HA via load balancing.
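At a high level, the repository layout looks roughly like this (abridged):

f5_terraform/
├── AWS/
├── Azure/
└── GCP/
    ├── Infrastructure-only/   # VPC networks, subnets, and firewall rules
    ├── HA_via_lb/             # the design pattern used in this article
    └── ...                    # other HA and auto-scale patterns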

4) Build your network infrastructure, as per the readme.

This solution uses a Terraform template to launch a new networking stack. It will create three VPC networks with one subnet each: mgmt, external, internal. Use this Terraform template to create your Google VPC infrastructure, and then head back to the BIG-IP GCP Terraform folder to get started!

So navigate to the directory below:

~/f5_terraform/GCP/Infrastructure-only

 

You will want to customize the terraform.tfvars.example file and then rename that file to terraform.tfvars.
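From a shell in that directory, keeping the example file as a reference, that is just:

cp terraform.tfvars.example terraform.tfvars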

Fill this out according to your specific environment. The values are mostly self-explanatory, but:

  • prefix - used to prefix the names of the created infrastructure.
  • adminSrcAddr - this one is your friend. This is how you restrict management access from the internet.
  • gcp_project_id - your Google project identifier.
  • gcp_region - the region where you would like the infrastructure to be built.
  • gcp_zone - the zone where you would like the infrastructure to be built.

 

# Google Environment
prefix         = "mydemo123"
adminSrcAddr   = "0.0.0.0/0"
gcp_project_id = "xxxxx"
gcp_region     = "us-west1"
gcp_zone       = "us-west1-b"

 

Also, in variables.tf you can customize the subnets to your own requirements; in this case you need three VPCs (this is GCP, so we have three VPCs with one subnet per VPC).
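For illustration, a subnet range in variables.tf might be driven by a variable along these lines (the variable name and CIDR here are mine; check the actual file for the real names):

variable "mgmt_subnet_cidr" {
  type        = string
  default     = "10.1.1.0/24" # illustrative range
  description = "CIDR range for the management subnet"
}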

And then build out the network infrastructure.

In the Infrastructure-only directory:

~/f5_terraform/GCP/Infrastructure-only

 

Run the following command:

terraform plan

"terraform plan" will show you the changes that are going to be made.

 

And then run the command:

terraform apply

"terraform apply" will build the network infrastructure.

"terraform apply" will prompt you with a yes/no to confirm if you want to go ahead and make the changes.

 

Once you have built out your network infrastructure, you should be able to see the infrastructure that you have created inside of Google.

Once you have built your networks and firewall rules etc., you can go ahead and build out your F5 infrastructure.

5) Build your F5 infrastructure.

As mentioned before, the Terraform template that we are using allows access to the management interfaces from the internet, although you can limit that access by source IP.

In my case, I want to add an additional layer of security by adding a jump-box. So I need to add a separate file with a few lines of Terraform code to instantiate the jump-box in the following directory:

~/f5_terraform/GCP/HA_via_lb

 

I created a file called jumpbox.tf and added the following code to create a jump-box instance and associate it with the management subnet.

# Creates a static external IPv4 address to associate with the jump-box interface
resource "google_compute_address" "static" {
  name = "ipv4-address"
}

# Define the type of instance that you want. I am choosing a Windows server.
resource "google_compute_instance" "jumphost" {
  count                     = 1
  name                      = "myjumphost1"
  project                   = var.gcp_project_id
  machine_type              = "n1-standard-8"
  zone                      = var.gcp_zone
  allow_stopping_for_update = true
  boot_disk {
    initialize_params {
      image = "windows-server-2016-dc-v20200714"
    }
  }

  # Define the network interface and associate the static IP with it.
  network_interface {
    network    = "${var.prefix}-net-mgmt"
    subnetwork = "${var.prefix}-subnet-mgmt"
    subnetwork_project = var.gcp_project_id
    network_ip         = var.jumphost_private_ip
    access_config {
      nat_ip = google_compute_address.static.address
    }
  }

  # Service account and scopes (how much access the instance's service account has to Google's APIs and metadata service).
  service_account {
    scopes = ["cloud-platform", "compute-rw", "storage-ro", "service-management", "service-control", "logging-write", "monitoring"]
  }
  
}
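Note that the network_interface block above references var.jumphost_private_ip, which is not part of the original template, so declare it in variables.tf along these lines (the default address is illustrative and must fall inside your management subnet):

variable "jumphost_private_ip" {
  type        = string
  default     = "10.1.1.10" # illustrative; must be inside the mgmt subnet
  description = "Static private IP for the jump-box management NIC"
}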

 

Then I will need to modify the terraform.tfvars.example file to suit my environment and rename it to terraform.tfvars.

# BIG-IP Environment
uname          = "admin"
usecret        = "my-secret"
gceSshPubKey   = "ssh-rsa xxxxx"
prefix         = "mydemo123"
adminSrcAddr   = "0.0.0.0/0"
mgmtVpc        = "xxxxx-net-mgmt"
extVpc         = "xxxxx-net-ext"
intVpc         = "xxxxx-net-int"
mgmtSubnet     = "xxxxx-subnet-mgmt"
extSubnet      = "xxxxx-subnet-ext"
intSubnet      = "xxxxx-subnet-int"
dns_suffix     = "example.com"

# BIG-IQ Environment
bigIqUsername  = "admin"

# Google Environment
gcp_project_id = "xxxxx"
gcp_region     = "us-west1"
gcp_zone       = "us-west1-b"
svc_acct       = "xxxxx@xxxxx.iam.gserviceaccount.com"
privateKeyId   = "abcdcba123321"
ksecret        = "svc-acct-secret"

 

I also added a line to the file called outputs.tf.

output "JumpBoxIP" { value = google_compute_instance.jumphost.0.network_interface.0.access_config.0.nat_ip}

 

This line will print out the jump-box IP address that I will use to RDP to the jump-box after a "terraform apply".
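If you need that address again later, "terraform output" re-prints any output value without re-running an apply:

terraform output JumpBoxIP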

Note that these templates rely on Google's Secret Manager to store the admin password.

You will need to create a secret, which by default is called "my-secret" (but you can call it anything you want); this is where the Terraform code will pull the admin password from. Using a vault or secrets manager to store sensitive values is a good security best practice, as your code only references the secrets vault and not the literal values themselves.
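As an illustration of the pattern (the data source label "admin_pw" is mine; the template's actual code may differ), Terraform can read a Secret Manager secret like this and reference its secret_data attribute rather than a hard-coded password:

data "google_secret_manager_secret_version" "admin_pw" {
  secret = "my-secret" # the secret name you created in Secret Manager
}

# Referenced elsewhere as:
#   data.google_secret_manager_secret_version.admin_pw.secret_data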

Then build out the F5 infrastructure that will use the network infrastructure you created earlier.

In the HA_via_lb directory:

~/f5_terraform/GCP/HA_via_lb

 

Run the following command:

terraform plan

"terraform plan" will show you the changes that are going to be made.

 

And then:

terraform apply

"terraform apply" will add the F5 infrastructure and the jump-boxes.

"terraform apply" will prompt you with a yes/no to confirm that you want to go ahead and make the changes.

Remove Access to Port 443 on the Management Plane

Because this Terraform template uses F5 Declarative Onboarding (DO) and AS3 to:

  • place the BIG-IPs in an active/standby pair, and
  • create an example application on the BIG-IP,

the example declarations in the Terraform rely on access to the management interface on port 443, as they POST the declarations to the BIG-IP in order to create the configuration.

In your case this may present too much of a risk; but if you use the source IP-based filtering mechanism properly and use a very strong admin password for the management interface, you can mitigate this risk for the brief period that the management interface is exposed to the internet during infrastructure creation.

Again, I deny port 443 after creating the infrastructure. If you can’t do this, you could build a jump-box first and then run the Terraform code from the jump-box.

That being said, in my case I go back into the "Infrastructure Only" section and remove port 443 under allowed ports.

You can simply edit the networks.tf file in the "Infrastructure Only" directory and re-run the template again.

This is the stanza for the firewall rules on the management VPC:

resource "google_compute_firewall" "mgmt" {
  name          = "${var.prefix}-allow-mgmt"
  network       = google_compute_network.vpc_mgmt.name
  source_ranges = [var.adminSrcAddr]
  allow {
    protocol = "icmp"
  }
  allow {
    protocol = "tcp"
    # Remove access to port 443 here and re-apply
    ports    = ["22","3389"]
  }
}

 

When you run "terraform apply" again, you will note that changes are only made to the objects that were modified. Terraform maintains state: it keeps a record of what has been deployed and therefore only changes the objects that require changes.
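You can see exactly what Terraform is tracking with its state commands:

terraform state list
terraform show

"terraform state list" lists every resource in the state file, and "terraform show" prints the recorded attributes of each.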

Ready to Go!

When this is all done, you will have a pair of BIG-IPs clustered (active/standby) in Google GCP, configured with three NICs: one for management, one for the "external" traffic interface, and one for the "internal" traffic interface. Traffic will ingress from the Google Load Balancer to the BIG-IP VE, which will then process traffic to the applications residing on the "internal" traffic side.

There is now a jump-box that will be used to access the management interfaces to make changes to the BIG-IP configuration. You could also place further DevOps infrastructure on the jump-box in order to automate your application delivery configuration.

From here you should be able to:

  • Navigate to your jump-box. In my case, I set a strong password on the jump-box from the Google console. No doubt this could also be automated in the Terraform.
  • Access your infrastructure via the jump-box. From there you will be able to reach the BIG-IP management interface on its internal IP address (NIC1).


