When cloud-native meets monolithic

According to CNCF’s Cloud Native Survey 2020, published on 17 Nov 2020, container use in production has jumped 300% since the first survey in 2016, reaching 92% in 2020, up from 84% in 2019 (https://www.cncf.io/cncf-cloud-native-survey-2020). In addition, according to F5's 2020 State of Application Services Report (https://www.f5.com/state-of-application-services-report#get-the-report), 80% of organisations are executing on digital transformation, and these organisations are more likely to deploy modern application architectures and application services, and at a higher rate. Cloud-native, modern application architecture is clearly gaining momentum, with the majority of organisations embracing and pivoting toward cloud-native technologies. Cloud-native provides a multitude of benefits – which is not the subject of this article. F5’s BIG-IP (a.k.a. classic BIG-IP) is not cloud-native. So how can F5’s classic BIG-IP stay relevant in the cloud-native world? This article demonstrates how cloud-native meets, and needs, classic BIG-IP (monolithic).


FYI: F5’s BIG-IP SPK (Service Proxy for Kubernetes) is BIG-IP delivered in a containerized form factor. It is cloud-native (https://www.f5.com/products/service-proxy-for-kubernetes). BIG-IP SPK will be discussed in a future article.


Why do they need each other?

Typically, it takes years for an organisation embracing cloud-native to move to fully cloud-native technologies and infrastructure. There are use cases where modern cloud-native applications need to integrate with traditional or existing monolithic applications. Modern apps living alongside traditional apps and infrastructure is the norm for most enterprises. F5’s classic BIG-IP can bridge those gaps. This article describes use cases that we solved in one of our customer environments, showing how we leveraged classic BIG-IP to bridge the gap between cloud-native and monolithic apps.


First, let’s be clear on what cloud-native really means. To set the record straight, cloud-native doesn’t just mean running workloads in the cloud, although that is partially true. There are many definitions of and perspectives on what cloud-native really means. For the sake of this article, I will base it on the official definition from the CNCF (Cloud Native Computing Foundation), which states:


“Cloud native technologies empower organisations to build and run scalable applications in modern, dynamic environments such as public, private, and hybrid clouds. Containers, service meshes, microservices, immutable infrastructure, and declarative APIs exemplify this approach.

These techniques enable loosely coupled systems that are resilient, manageable, and observable. Combined with robust automation, they allow engineers to make high-impact changes frequently and predictably with minimal toil.”


My takeaways (characteristics of cloud-native):

  • Scalable apps designed to run in dynamic environments (public, private and hybrid clouds).
  • Typically delivered in the form of microservices/containers as loosely coupled systems.
  • Easily adapted and integrated into automation systems.
  • CI/CD is part of the ecosystem – frequent release, patch and update cycles.
  • Immutable (cattle service model) instead of mutable (pets service model).
  • Declarative APIs.

Kubernetes is one example of a cloud-native technology.


What is the Problem Statement?

Uniquely identify apps/workloads/containers deployed on a Kubernetes platform and apply appropriate external security controls (e.g. a network/application firewall) when containerized apps communicate with existing legacy applications deployed outside of Kubernetes (egress from Kubernetes).


What are those challenges?

  • Traffic from containers that egresses a Kubernetes cluster is, by design, source network address translated (SNAT) to the Kubernetes node addresses. An external security control such as a network firewall may not be able to identify the real source app because it is hidden behind NAT (see the quick illustration after this list). How do we ensure that only authorised apps deployed in the Kubernetes environment can access critical legacy apps (e.g. billing or financial systems) protected by a network/application firewall?
  • In a multi-tenant environment with multiple namespaces in Kubernetes, how do we ensure that pods or namespaces have a unique identity and enforce access control to egress endpoints (outside of Kubernetes)?
  • A unique workload identity is important for end-to-end correlation, audit and traceability. How do we provide an end-to-end, correlated view from source to target apps?
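
As a quick illustration of the first challenge: a pod calling out of the cluster shows up at the far end with a node (or NAT gateway) address rather than its own identity. A minimal sketch, assuming a throwaway test pod and any external service that echoes the caller's source IP (the curlimages/curl image and httpbin.org are examples only, not part of the original environment):

# Launch a throwaway pod and ask an external echo service what source IP it sees
$ kubectl run nettest --rm -it --restart=Never --image=curlimages/curl -- curl -s https://httpbin.org/ip
{
  "origin": "203.0.113.45"
}
# 203.0.113.45 (illustrative) is the node/NAT address - the pod's own identity is lost,
# so an upstream firewall cannot tell which workload actually made the call.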


How F5 solved this with classic BIG-IP ADC and Aspen Mesh Service Mesh

Architecture Overview


This solution article is an extension of the original article, “Expanding Service Mesh without Envoy”, published by my colleague Eric Chen. For details of that article, please refer to https://aspenmesh.io/expanding-service-mesh-without-envoy/


Aspen Mesh, an innovation from F5, is an enterprise-ready service mesh built on Istio. It is a tested and hardened distribution of Istio, fully supported by F5. For details, please refer to https://aspenmesh.io. For the purpose of this solution, Aspen Mesh and Istio are used interchangeably.


Solution in a nutshell

  1. Each pod has its own workload identity, a native capability of Aspen Mesh (AM). The identity is in the form of a client certificate managed by AM (istiod/Citadel) and issued from an organisation intermediate CA loaded onto the Istio control plane.
  2. BIG-IP is on-boarded with its own workload identity (client certificate), signed by the same organisation intermediate CA (or root CA). This client certificate is NOT managed by AM.
  3. An F5 Virtual Server (VS) is configured with a client-side SSL profile to perform mutual TLS (mTLS).
  4. The F5 VS is registered onto AM, so the service can be discovered from the internal service registry.
  5. On egress, the pod performs mTLS with the F5 VS. As the F5 client certificate is issued from the same organisation intermediate CA, both parties negotiate, mutually trust each other and establish the mTLS session.
  6. An optional iRule can be implemented on BIG-IP to inspect the pod identity (certificate SAN) upon successful mTLS and permit/reject the request.
  7. BIG-IP applies SNAT and presents a unique network identifier (e.g. an IP address) to the network firewall.


Environment

  • BIG-IP LTM (v14.x)
  • Aspen Mesh - v1.16.x
  • Kubernetes 1.18.x


Use Case

Permit microservices apps (e.g. bookinfo) to use the organisation's forward proxy (tinyproxy), which sits behind the enterprise network firewall, to reach the Internet, and reject all other microservices apps on the same Kubernetes cluster.


Classic BIG-IP

Only the vs_aspenmesh-bookinfo-proxy-srv-mtls-svc configuration is demonstrated below. A similar configuration can be applied to other virtual servers.


F5 Virtual Server configuration


F5's VS client profile configuration.

"Client Certificate = require" require pods deployed inside AM present a valid trusted client certificate. An optional iRule to only permit pods from bookinfo namespace.


The optional iRule irule_bookinfo_spiffee permits bookinfo apps and rejects all other apps.

when CLIENTSSL_CLIENTCERT {
  # Grab the client certificate presented by the pod (Istio sidecar)
  set client_cert [SSL::cert 0]

  if { $client_cert ne "" } {
    # Log each X509v3 extension of the client certificate for troubleshooting
    foreach item [split [X509::extensions $client_cert] \n] {
      log local0. "$item"
    }

    # Extract the SPIFFE identity (URI SAN) from the Subject Alternative Name extension
    set santemp [findstr [X509::extensions $client_cert] "Subject Alternative Name" 43 " "]
    set spiffe [findstr $santemp "URI" 4]
    log local0. "Source SPIFFE-->$spiffe"

    # Permit only workloads from the bookinfo namespace; reject everything else
    if { $spiffe starts_with "spiffe://cluster.local/ns/bookinfo/" } {
      log local0. "Aspen Mesh mTLS: PERMITTED==>$spiffe"
      # Allow; traffic is SNATed from the defined SNAT pool
    } else {
      log local0. "Aspen Mesh mTLS: REJECTED==>$spiffe"
      reject
    }
  }
}

Note:

As of Istio 1.x, the client-side Envoy (Istio sidecar) starts an mTLS handshake with the server-side BIG-IP VS (F5's client-side SSL profile). During the handshake, the client-side Envoy also performs a secure naming check to verify that the service account presented in the server certificate is authorised to run the target service. Only then do the client-side Envoy and the server-side BIG-IP establish the mTLS connection. Hence, the certificate generated and loaded onto BIG-IP has to conform to the secure naming information, which maps server identities to service names.


For details on secure naming, please refer to https://istio.io/latest/docs/concepts/security/#secure-naming


Example of generating a SPIFFE-friendly certificate

openssl req -new -out bookinfo.istio-spiffee-req.pem -subj "/C=AU/ST=Victoria/L=Melbourne/O=F5/OU=SE/CN=bookinfo.spiffie" -keyout bookinfo.istio-spiffee-key.pem  -nodes


cat > v3.ext <<-EOF
authorityKeyIdentifier=keyid,issuer
basicConstraints=CA:FALSE
keyUsage = digitalSignature, nonRepudiation, keyEncipherment, dataEncipherment
extendedKeyUsage = serverAuth
subjectAltName = @alt_names

[alt_names]
DNS.1=spiffe://cluster.local/ns/bookinfo/sa/default
EOF


openssl x509 -req -sha512 -days 365 \
    -extfile v3.ext \
    -CA ../ca1/ca-cert.pem -CAkey ../ca1/ca-key.pem  -CAcreateserial \
    -in bookinfo.istio-spiffee-req.pem \
    -out bookinfo.istio-spiffee-cert.pem

where ca1 is the intermediate CA used for Aspen Mesh.
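
Before loading the certificate onto BIG-IP, it is worth confirming that it carries the expected SPIFFE identity in its SAN. A quick check, using the file names above:

# Inspect the Subject Alternative Name of the generated certificate
openssl x509 -in bookinfo.istio-spiffee-cert.pem -noout -text | grep -A1 "Subject Alternative Name"
# Expect the SPIFFE identity defined in v3.ext, e.g.
#   DNS:spiffe://cluster.local/ns/bookinfo/sa/default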


Aspen Mesh

Pods and Services before registration of the F5 VS

$ kubectl -n bookinfo get pod,svc
NAME                                                READY   STATUS    RESTARTS   AGE
pod/details-v1-78d78fbddf-4vmdr                     2/2     Running   0          4d1h
pod/productpage-v1-85b9bf9cd7-f6859                 2/2     Running   0          4d1h
pod/ratings-v1-6c9dbf6b45-9ld6f                     2/2     Running   0          4d1h
pod/reviews-v1-564b97f875-bjx2r                     2/2     Running   0          4d1h
pod/reviews-v2-568c7c9d8f-zzn8r                     2/2     Running   0          4d1h
pod/reviews-v3-67b4988599-pdk25                     2/2     Running   0          4d1h
pod/traffic-generator-productpage-fc97f5595-pdhvv   2/2     Running   0          6d11h


NAME                                    TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)    AGE
service/details                         ClusterIP   10.235.14.186   <none>        9080/TCP   6d11h
service/productpage                     ClusterIP   10.235.37.112   <none>        9080/TCP   6d11h
service/ratings                         ClusterIP   10.235.40.239   <none>        9080/TCP   6d11h
service/reviews                         ClusterIP   10.235.1.21     <none>        9080/TCP   6d11h
service/traffic-generator-productpage   ClusterIP   10.235.17.158   <none>        80/TCP     6d11h


Register bigip-proxy-svc onto Aspen Mesh

$ istioctl register -n bookinfo bigip-proxy-svc 10.4.0.201 3128 --labels apps=bigip-proxy
 
2020-12-15T23:14:33.286854Z      warn     Got 'services "bigip-proxy-svc" not found' looking up svc 'bigip-proxy-svc' in namespace 'bookinfo', attempting to create it

2020-12-15T23:14:33.305890Z      warn     Got 'endpoints "bigip-proxy-svc" not found' looking up endpoints for 'bigip-proxy-svc' in namespace 'bookinfo', attempting to create them


$ kubectl -n bookinfo get svc
NAME                            TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)    AGE
bigip-proxy-svc                 ClusterIP   10.235.45.250   <none>        3128/TCP   26s
details                         ClusterIP   10.235.14.186   <none>        9080/TCP   6d11h
productpage                     ClusterIP   10.235.37.112   <none>        9080/TCP   6d11h
ratings                         ClusterIP   10.235.40.239   <none>        9080/TCP   6d11h
reviews                         ClusterIP   10.235.1.21     <none>        9080/TCP   6d11h
traffic-generator-productpage   ClusterIP   10.235.17.158   <none>        80/TCP     6d11h

$ kubectl -n bookinfo describe svc bigip-proxy-svc
Name:              bigip-proxy-svc
Namespace:         bookinfo
Labels:            apps=bigip-proxy
Annotations:       alpha.istio.io/kubernetes-serviceaccounts: default
Selector:          <none>
Type:              ClusterIP
IP:                10.235.45.250
Port:              3128  3128/TCP
TargetPort:        3128/TCP
Endpoints:         10.4.0.201:3128
Session Affinity:  None
Events:            <none>
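
Note: the istioctl register command used above is not available in newer Istio releases. If your istioctl lacks it, a roughly equivalent Service and Endpoints pointing at the BIG-IP VS can be created directly. A sketch, reusing the names, label and annotation from the describe output above (the port name tcp-3128 is an assumption):

kubectl apply -f - <<EOF
apiVersion: v1
kind: Service
metadata:
  name: bigip-proxy-svc
  namespace: bookinfo
  labels:
    apps: bigip-proxy
  annotations:
    alpha.istio.io/kubernetes-serviceaccounts: default
spec:
  ports:
  - name: tcp-3128
    port: 3128
    protocol: TCP
---
apiVersion: v1
kind: Endpoints
metadata:
  name: bigip-proxy-svc
  namespace: bookinfo
subsets:
- addresses:
  - ip: 10.4.0.201
  ports:
  - name: tcp-3128
    port: 3128
    protocol: TCP
EOF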


To test egress from a bookinfo pod to the external forward proxy (tinyproxy):

Run "curl" accessing to Internet (www.f5.com) pointing to bigip-proxy-svc registered on Aspen Mesh. Example below shown executing curl binary inside "traffic-generator-productpage" pod.

$ kubectl -n bookinfo exec -it $(kubectl -n bookinfo get pod -l app=traffic-generator-productpage -o jsonpath={.items..metadata.name}) -c traffic-generator  -- curl -Ikx bigip-proxy-svc:3128 https://www.f5.com

HTTP/1.0 200 Connection established
Proxy-agent: tinyproxy/1.8.3

HTTP/1.1 200 OK
Content-Type: text/html;charset=utf-8
Content-Length: 132986
Connection: keep-alive
Accept-Ranges: bytes
Cache-Control: no-cache="set-cookie"
Content-Security-Policy: frame-ancestors 'self' *.cybersource.com *.salesforce.com *.force.com ; form-action *.cybersource.com *.salesforce.com *.force.com 'self'
Date: Wed, 16 Dec 2020 06:19:48 GMT
ETag: "2077a-5b68b3c0c5be0"
Last-Modified: Wed, 16 Dec 2020 02:00:07 GMT
Strict-Transport-Security: max-age=16070400;
X-Content-Type-Options: nosniff
X-Dispatcher: dispatcher1uswest2
X-Frame-Options: SAMEORIGIN
X-Vhost: publish
Via: 1.1 sin1-bit21, 1.1 24194e89802a1a492c5f1b22dc744e71.cloudfront.net (CloudFront)
Vary: Accept-Encoding
X-Cache: Hit from cloudfront
X-Amz-Cf-Pop: MEL50-C2
X-Amz-Cf-Id: 7gE6sEaBP9WonZ0KjngDsr90dahHWFyDG0MwbuGn91uF7EkEJ_wdrQ==
Age: 15713


Logs shown on classic BIG-IP

Classic BIG-IP successfully authenticates bookinfo with mTLS and permits access.


Logs shown on forward proxy (tinyproxy).

The source IP is SNATed to the IP configured on classic BIG-IP. This IP is also allowed on the network firewall.


From another namespace (e.g. sm-apigw-a), try to access bigip-proxy-svc. The attempt is rejected by classic BIG-IP. The example below executes the curl binary inside the "nettools" pod.

$ kubectl -n sm-apigw-a get pod
NAME                           READY   STATUS    RESTARTS   AGE
httpbin-api-78bdd794bd-hfwkj   2/2     Running   2          22d
nettools-9497dcc86-nhqmr       2/2     Running   2          22d
podinfo-bbb7bf7c-j6wcs         2/2     Running   2          22d
sm-apigw-a-85696f7455-rs9zh    3/3     Running   0          7d21h

$ kubectl -n sm-apigw-a exec -it $(kubectl -n sm-apigw-a get pod -l app=nettools -o jsonpath={.items..metadata.name}) -c nettools  -- curl -kIx bigip-proxy-svc.bookinfo.svc.cluster.local:3128 https://devcentral.f5.com
curl: (56) Recv failure: Connection reset by peer
command terminated with exit code 56


Classic BIG-IP Logs

Classic BIG-IP rejects the sm-apigw-a namespace from using the bigip-proxy-svc service.


Summary

Aspen Mesh is a cloud-native, enterprise-ready Istio service mesh. Classic BIG-IP is a feature-rich application delivery controller (ADC). With Aspen Mesh, microservices are securely authenticated to classic BIG-IP with mTLS. Classic BIG-IP can securely authenticate microservices apps and deliver application services based on your business and security requirements.


This article addresses egress use cases. What about ingress to the Kubernetes cluster? How can classic BIG-IP or cloud-native SPK work coherently with Aspen Mesh to provide secure and consistent multi-cloud, multi-cluster application delivery services to your Kubernetes environment? This will be shared in a future article. Stay tuned.

Published Jan 21, 2021
Version 1.0


2 Comments

  • Thank you Foo-Bang, this was a great read. Looking forward to the next episode with ingress. I have a question: does the service mesh have to be Aspen Mesh, or could a customer use a different service mesh, still Istio-based?

  • Thanks. Preparing for the ingress episode. The service mesh doesn't need to be Aspen Mesh. Any Istio-based service mesh will work. Technically, it is mTLS between the pod and F5, so in theory (I haven't tested it) other, non-Istio service mesh technologies should work as well.