Kubernetes Deployment Guide

ThreatX WAF sensors can be easily deployed in Kubernetes environments. This guide uses Google Kubernetes Engine (GKE) on Google Cloud Platform (GCP) to illustrate the design options, but any managed Kubernetes provider or on-premises Kubernetes deployment can be used.

Two options for Kubernetes deployment will be described:

  1. Adding application security as another Kubernetes service
  2. Adding application security using a sidecar pattern within the web server pods

Figure 1: Security Layer Design
Figure 2: Sidecar Pattern Design

Note: The sidecar pattern deployment requires the ThreatX WAF Sensor container and the application container to be listening on different ports within the pod.

A simple web application shown in Figure 3 will be used as an example.

Figure 3: Simple Web Application in a Kubernetes Cluster

In this design, Google Cloud DNS is providing name resolution to www.threatx.guru, which points to a public IP address of a Google Load Balancer that front-ends the web application. The web application is a simple Kubernetes Deployment of Pods exposed as a Service.

Here is the YAML file (web-app.yaml) that describes the Deployment and Service:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: web-app
spec:
  replicas: 3
  selector:
    matchLabels:
      app: webapp
  template:
    metadata:
      labels:
        app: webapp
    spec:
      containers:
        - name: myip
          image: cloudnativelabs/whats-my-ip
          ports:
            - containerPort: 8080
---
apiVersion: v1
kind: Service
metadata:
  labels:
    app: webapp
  name: web-app
spec:
  externalTrafficPolicy: Local
  ports:
    - port: 80
      targetPort: 8080
  selector:
    app: webapp
  sessionAffinity: None
  type: LoadBalancer

We are using a small web application from cloudnativelabs called whats-my-ip, hosted on Docker Hub at https://hub.docker.com/r/cloudnativelabs/whats-my-ip/. The application simply returns the hostname and IP of the web server and listens on port 8080 within the container.
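
If you want to try the container before deploying it to the cluster, a quick local smoke test with Docker (assuming Docker is installed on your workstation) looks like this:

$ docker run --rm -d -p 8080:8080 cloudnativelabs/whats-my-ip
$ curl localhost:8080
HOSTNAME:<container-id> IP:<container-ip>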

Our deployment spins up three Pods running this application container and then exposes the application as a LoadBalancer Service listening on port 80.

Notice we use externalTrafficPolicy: Local in the Service configuration so the original source IP address is preserved when traffic is sent to the web application.
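
Without this setting, requests forwarded between nodes are SNATed to a node IP, so the backend (and later the Sensors) would see cluster-internal addresses instead of real client IPs. If you already have a LoadBalancer Service running with the default Cluster policy, you can switch it in place rather than redeploying; a minimal sketch:

$ kubectl patch service web-app \
      -p '{"spec":{"externalTrafficPolicy":"Local"}}'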

Our three Pods are up and running:

$ kubectl get pods
NAME                       READY  STATUS   RESTARTS  AGE
web-app-75f8f55c6c-dqgkx   1/1    Running  0         8m
web-app-75f8f55c6c-m5tq7   1/1    Running  0         8m
web-app-75f8f55c6c-m9g9s   1/1    Running  0         8m

Our LoadBalancer Service is up and running:

$ kubectl get service
NAME         TYPE           CLUSTER-IP     EXTERNAL-IP    PORT(S)        AGE
kubernetes   ClusterIP      10.31.240.1    <none>         443/TCP        1d
web-app      LoadBalancer   10.31.244.56   35.233.226.3   80:31953/TCP   9m

Our Google Cloud DNS is configured to point www.threatx.guru to the Load Balancer IP:

$ gcloud dns record-sets list -z=threatx-guru
NAME                TYPE    TTL    DATA
<snip>
www.threatx.guru.   A       60     35.233.226.3

When we run multiple requests against the site we can see them being load balanced across the web servers:

$ for i in {1..10}; do curl www.threatx.guru; done
HOSTNAME:web-app-75f8f55c6c-m9g9s IP:10.28.0.11
HOSTNAME:web-app-75f8f55c6c-m9g9s IP:10.28.0.11
HOSTNAME:web-app-75f8f55c6c-dqgkx IP:10.28.0.10
HOSTNAME:web-app-75f8f55c6c-m9g9s IP:10.28.0.11
HOSTNAME:web-app-75f8f55c6c-m9g9s IP:10.28.0.11
HOSTNAME:web-app-75f8f55c6c-dqgkx IP:10.28.0.10
HOSTNAME:web-app-75f8f55c6c-m5tq7 IP:10.28.0.9
HOSTNAME:web-app-75f8f55c6c-m5tq7 IP:10.28.0.9
HOSTNAME:web-app-75f8f55c6c-m5tq7 IP:10.28.0.9
HOSTNAME:web-app-75f8f55c6c-dqgkx IP:10.28.0.10

In the next two sections we’ll see two different methods we can use to add an application security layer to this existing web application.

Application Security Service

In this option a web application security layer is added to the Kubernetes cluster as a new Deployment and Service. The new LoadBalancer Service sends requests to www.threatx.guru to the ThreatX WAF Sensor Deployment instead of sending them directly to the whats-my-ip web application. Google Cloud DNS is configured to resolve www.threatx.guru to the newly created Google Cloud Load Balancer external IP. The ThreatX WAF Sensors then reverse proxy the connections to the backend application via the internal Service DNS name of myip-app.default.svc.cluster.local. Figure 4 illustrates this design.

Figure 4: Kubernetes Application Security Service

Configuration Steps

In this section we will perform the steps necessary to create the application security layer and insert the layer into the existing cluster with no downtime. The steps include:

  1. Create the Authentication Secrets: Use kubectl to create the encrypted secrets for the Sensor CUSTOMER and API_KEY environment variables.
  2. Create the New Deployments and Services: Create and deploy a new YAML configuration file that describes the new external txwaf and internal whats-my-ip Services and Deployments. We’ll make sure to use different names for the Deployment and Service attributes so there is no conflict with the existing ones.
  3. Test Connectivity to the ThreatX WAF Sensors: We’ll use the external IP of the protected-app service to test connectivity to the Sensors.
  4. Create the Site in the ThreatX Dashboard UI: Now that the txwaf and whats-my-ip Deployments and Services are up and running we can configure the Site to allow traffic to proxy through the Sensors to the backend.
  5. Test Traffic Through the Sensors: Perform an end-to-end test on a laptop to make sure the entire path is routing the traffic correctly. This is done by adding a host header to the curl request.
  6. Modify the Public DNS Record: After the site testing runs clean, the Google Cloud DNS Resource Record can be changed to point to the new protected-app load balancer. This will effectively migrate the site to the secured environment. It will take some time for the migration to complete while distributed DNS caches expire, but there will be no downtime as new connections are migrated.
  7. Delete the Original Deployment and Services: After all traffic is migrated to the secured environment, the old external Load Balancer, Service, and Deployment can be removed. With this step the setup and migration are complete.

1. Create the Authentication Secrets

First, we will create the secret authentication information for the ThreatX WAF Sensor containers using
kubectl. Activate the Cloud Shell from within the Google Cloud web interface and run the following command using the appropriate values for the CUSTOMER and API_KEY environment variables:

$ kubectl create secret generic txauth \
--from-literal=customer=lab \
--from-literal=apikey=1234567890abcdefg
secret "txauth" created
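
To confirm the secret was stored correctly, you can read a key back out (Secret values are base64-encoded, so decode them to view):

$ kubectl get secret txauth -o jsonpath='{.data.customer}' | base64 --decode
lab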

2. Create the New Deployments and Services

Next, create a new file called protected-app.yaml with the following content within your Google Cloud Shell to create the new Deployments and Services:

# Backend whats-my-ip web application Deployment
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myip-app
spec:
  replicas: 3
  selector:
    matchLabels:
      app: myip
  template:
    metadata:
      labels:
        app: myip
    spec:
      containers:
        - name: myip
          image: cloudnativelabs/whats-my-ip
          ports:
            - containerPort: 8080
---
# Frontend ThreatX WAF Sensor Deployment
apiVersion: apps/v1
kind: Deployment
metadata:
  name: threatx
spec:
  replicas: 3
  selector:
    matchLabels:
      app: txwaf
  template:
    metadata:
      labels:
        app: txwaf
    spec:
      containers:
        - name: threatx
          image: threatx/txwaf
          ports:
            - containerPort: 80
          env:
            - name: CUSTOMER
              valueFrom:
                secretKeyRef:
                  name: txauth
                  key: customer
            - name: API_KEY
              valueFrom:
                secretKeyRef:
                  name: txauth
                  key: apikey
            - name: RESOLVER
              value: local
            - name: SENSOR_TAGS
              value: k8s,myip-app
---
# Backend whats-my-ip web application Service
apiVersion: v1
kind: Service
metadata:
  labels:
    app: myip
  name: myip-app
spec:
  ports:
    - port: 80
      targetPort: 8080
  selector:
    app: myip
  sessionAffinity: None
---
# Frontend ThreatX WAF Sensor Service
apiVersion: v1
kind: Service
metadata:
  labels:
    app: txwaf
  name: protected-app
spec:
  externalTrafficPolicy: Local
  ports:
    - port: 80
      targetPort: 80
  selector:
    app: txwaf
  sessionAffinity: None
  type: LoadBalancer

Some highlights from this configuration file:

  • Different names for the Deployment and Service attributes were used so there is no conflict with the existing one.
  • The txwaf deployment is configured to use the previously set secrets for authentication and local DNS for name resolution within the cluster.
  • The new web application internal Service is the default ClusterIP type and will have an internal DNS name of myip-app.default.svc.cluster.local (see the Kubernetes documentation for more details on how internal DNS names are automatically provisioned). We’ll use this as the backend in the Site configuration for the Sensors in an upcoming step; a quick way to verify the name resolves is shown after this list.
  • The new Service created for the ThreatX WAF Sensors is an external LoadBalancer type and will be assigned a public IP address.
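
To verify that the internal Service name resolves inside the cluster, one option is a throwaway busybox Pod (a sketch; run it after the Service below has been created):

$ kubectl run dns-test --rm -it --image=busybox --restart=Never -- \
      nslookup myip-app.default.svc.cluster.local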

Apply the configuration file and verify the new Deployments and Services are up and running in Google Cloud Shell:

$ kubectl create -f protected-app.yaml
deployment "myip-app" created
deployment "threatx" created
service "myip-app" created
service "protected-app" created

To view the Pods:

$ kubectl get pods
NAME                         READY   STATUS    RESTARTS   AGE
myip-app-75f8f55c6c-96v5h    1/1     Running   0          16m
myip-app-75f8f55c6c-hvgxq    1/1     Running   0          16m
myip-app-75f8f55c6c-qgs7v    1/1     Running   0          16m
threatx-567d94b45-f58qg      1/1     Running   0          16m
threatx-567d94b45-mh9lg      1/1     Running   0          16m
threatx-567d94b45-zvsgs      1/1     Running   0          16m
web-app-646b48c885-dmrcs     1/1     Running   0          28m
web-app-646b48c885-ptghg     1/1     Running   0          28m
web-app-646b48c885-vlcnr     1/1     Running   0          28m

To view the Deployments:

$ kubectl get deploy
NAME       DESIRED  CURRENT  UP-TO-DATE  AVAILABLE  AGE
myip-app   3        3        3           3          18m
threatx    3        3        3           3          18m
web-app    3        3        3           3          30m

To view the Services:

$ kubectl get services
NAME           TYPE           CLUSTER-IP      EXTERNAL-IP    PORT(S)        AGE
kubernetes     ClusterIP      10.31.240.1     <none>         443/TCP        1d
myip-app       ClusterIP      10.31.245.83    <none>         80/TCP         20m
protected-app  LoadBalancer   10.31.248.190   35.197.37.55   80:30700/TCP   20m
web-app        LoadBalancer   10.31.249.38    35.233.226.3   80:30900/TCP   32m

3. Test Connectivity to the ThreatX WAF Sensors

Now that the Deployments and Services are up, let’s test connectivity to the ThreatX WAF Sensors with the curl command. We’ll use the External-IP of the new external LoadBalancer Service. A request to the Sensor that doesn’t match a configured Site will return a “Nothing to see here” message:

$ for i in {1..10}; do curl 35.197.37.55; done
<html><head></head><body>Nothing to see here</body></html><html><head></head><body>Nothing to see
here</body></html><html><head></head><body>Nothing to see here</body></html><html><head></head><bo
dy>Nothing to see here</body></html><html><head></head><body>Nothing to see here</body></html><htm
l><head></head><body>Nothing to see here</body></html><html><head></head><body>Nothing to see here
</body></html><html><head></head><body>Nothing to see here</body></html><html><head></head><body>N
othing to see here</body></html><html><head></head><body>Nothing to see here</body></html> 

Now that we have verified connectivity to the Sensors we can continue on to migrating the site.

4. Create the Site in the ThreatX Dashboard UI

Log into the ThreatX Dashboard and configure the Site under Settings > Sites > Add Site. We’ll enter www.threatx.guru into the Hostname field and myip-app.default.svc.cluster.local into the Backends field.

5. Test Traffic Through the Sensors

To test end-to-end from our laptop, we’ll craft a curl command with the correct host header so the Sensor will route the traffic to the backend:

$ for i in {1..10}; do curl -H "host: www.threatx.guru" 35.197.37.55; done
HOSTNAME:myip-app-75f8f55c6c-sxwql IP:10.28.1.13
HOSTNAME:myip-app-75f8f55c6c-sxwql IP:10.28.1.13
HOSTNAME:myip-app-75f8f55c6c-9qkk8 IP:10.28.0.24
HOSTNAME:myip-app-75f8f55c6c-2r2vw IP:10.28.0.23
HOSTNAME:myip-app-75f8f55c6c-9qkk8 IP:10.28.0.24
HOSTNAME:myip-app-75f8f55c6c-sxwql IP:10.28.1.13
HOSTNAME:myip-app-75f8f55c6c-9qkk8 IP:10.28.0.24
HOSTNAME:myip-app-75f8f55c6c-9qkk8 IP:10.28.0.24
HOSTNAME:myip-app-75f8f55c6c-sxwql IP:10.28.1.13
HOSTNAME:myip-app-75f8f55c6c-2r2vw IP:10.28.0.23

The web application is testing successfully through the sensor, so now we can move on to the migration.

6. Modify the Public DNS Record

Now we’ll reconfigure the www.threatx.guru DNS record in Google Cloud DNS so the traffic path will change from going to the old Service and Deployment to the new ones.

First, we’ll open a terminal window and run an infinite loop of curl requests against the site so we can verify the site does not lose availability during the DNS record change:

$ while true; do curl www.threatx.guru; done
HOSTNAME:web-app-646b48c885-vlcnr IP:10.28.0.14
HOSTNAME:web-app-646b48c885-ptghg IP:10.28.0.15
HOSTNAME:web-app-646b48c885-ptghg IP:10.28.0.15
HOSTNAME:web-app-646b48c885-vlcnr IP:10.28.0.14
HOSTNAME:web-app-646b48c885-dmrcs IP:10.28.0.16
HOSTNAME:web-app-646b48c885-ptghg IP:10.28.0.15
...

Now we can change the DNS Record Set for www.threatx.guru from the old Service to the new one using the Google Cloud Shell:

Find the current A record for www.threatx.guru:

$ gcloud dns record-sets list --name www.threatx.guru. -z=threatx-guru
NAME                TYPE    TTL    DATA
www.threatx.guru.   A       60     35.233.226.3

Get the external load balancer IP for the new protected-app service:

$ kubectl get services protected-app
NAME            TYPE           CLUSTER-IP     EXTERNAL-IP    PORT(S)        AGE
protected-app   LoadBalancer   10.31.253.49   35.197.37.55   80:30031/TCP   54m

Start the record-set transaction:

$ gcloud dns record-sets transaction start -z=threatx-guru
Transaction started [transaction.yaml].

Remove the original record for www.threatx.guru:

$ gcloud dns record-sets transaction remove -z=threatx-guru \
>         --name="www.threatx.guru." \
>         --type=A \
>         --ttl=60 \
>         "35.233.226.3"
Record removal appended to transaction at [transaction.yaml].

Add the new record for www.threatx.guru:

$ gcloud dns record-sets transaction add -z=threatx-guru \
>         --name="www.threatx.guru." \
>         --type=A \
>         --ttl=60 \
>         "35.197.37.55"
Record addition appended to transaction at [transaction.yaml].

Execute the record set change:

$ gcloud dns record-sets transaction execute -z=threatx-guru
Executed transaction [transaction.yaml] for managed-zone [threatx-guru].
Created
[https://www.googleapis.com/dns/v1/projects/txlab-216104/managedZones/threat-guru/changes/6].
ID   START_TIME                 STATUS
6    2018-09-13T00:46:09.776Z   pending

View the changed record set:

$ gcloud dns record-sets list --name www.threatx.guru. -z=threatx-guru
NAME                TYPE    TTL    DATA
www.threatx.guru.   A       60     35.197.37.55

In another terminal window, ping www.threatx.guru a few times until you notice the IP address change. The terminal running the curl loop should show no interruption in connectivity to the website:

$ ping www.threatx.guru
PING www.threatx.guru (35.233.226.3): 56 data bytes
64 bytes from 35.233.226.3: icmp_seq=0 ttl=44 time=40.020 ms
64 bytes from 35.233.226.3: icmp_seq=1 ttl=44 time=40.921 ms
^C
$ ping www.threatx.guru
PING www.threatx.guru (35.233.226.3): 56 data bytes
64 bytes from 35.233.226.3: icmp_seq=0 ttl=44 time=39.500 ms
64 bytes from 35.233.226.3: icmp_seq=1 ttl=44 time=40.419 ms
^C
$ ping www.threatx.guru
PING www.threatx.guru (35.197.37.55): 56 data bytes
64 bytes from 35.197.37.55: icmp_seq=0 ttl=44 time=39.926 ms
64 bytes from 35.197.37.55: icmp_seq=1 ttl=44 time=39.049 ms
^C

In the other terminal window running the curl loop you should notice the traffic flip to the new protected app after the DNS information changes. There should be no traffic interruption to the site during the transition:

HOSTNAME:web-app-646b48c885-vlcnr IP:10.28.0.14
HOSTNAME:web-app-646b48c885-ptghg IP:10.28.0.15
HOSTNAME:web-app-646b48c885-ptghg IP:10.28.0.15
HOSTNAME:web-app-646b48c885-ptghg IP:10.28.0.15
HOSTNAME:web-app-646b48c885-vlcnr IP:10.28.0.14
HOSTNAME:myip-app-75f8f55c6c-9qkk8 IP:10.28.0.24
HOSTNAME:myip-app-75f8f55c6c-2r2vw IP:10.28.0.23
HOSTNAME:myip-app-75f8f55c6c-9qkk8 IP:10.28.0.24
HOSTNAME:myip-app-75f8f55c6c-9qkk8 IP:10.28.0.24
HOSTNAME:myip-app-75f8f55c6c-9qkk8 IP:10.28.0.24

7. Delete the Original Deployment and Service

At this point the application is successfully migrated and we can delete the original Deployment and Service to clean up the cluster in Cloud Shell:

$ kubectl delete -f web-app.yaml
deployment "web-app" deleted
service "web-app" deleted

We can confirm that only the new Deployments and Services are running:

$ kubectl get deploy
NAME       DESIRED  CURRENT  UP-TO-DATE  AVAILABLE  AGE
myip-app   3        3        3           3          1h
threatx    3        3        3           3          1h

$ kubectl get services
NAME            TYPE          CLUSTER-IP     EXTERNAL-IP   PORT(S)       AGE
kubernetes      ClusterIP     10.31.240.1    <none>        443/TCP       2d
myip-app        ClusterIP     10.31.252.51   <none>        80/TCP        1h
protected-app   LoadBalancer  10.31.253.49   35.197.37.55  80:30031/TCP  1h

Sidecar Pattern

In this option the ThreatX WAF Sensor container and the web application container are installed in the same Pod. The ThreatX Sensors are configured to reverse proxy user traffic destined for www.threatx.guru to the web application via localhost (e.g. localhost:8080). Figure 5 illustrates this design.

Figure 5: Sidecar Service Pattern Design

Note: The sidecar pattern deployment requires the ThreatX WAF Sensor container and the application container to be listening on different ports within the pod.

A nice benefit of the sidecar pattern design is that the security layer and the application layer are coupled, so they scale in sync. There is no need to manage a separate security layer Deployment and Service.
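
Because both containers live in the same Pod template, a single scaling action grows or shrinks the security and application layers together. For example, once the tx-myip-app Deployment from the next section is running:

$ kubectl scale deployment tx-myip-app --replicas=5

Each new Pod starts with both the application container and the Sensor container (READY 2/2).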

Sidecar Configuration Steps

In this section we will perform the steps necessary to implement the sidecar pattern with no downtime. The steps include:

  1. Create the Authentication Secrets: Use kubectl to create the encrypted secrets for the Sensor CUSTOMER and API_KEY environment variables.
  2. Create the new Deployment and Service: Create and deploy a new YAML configuration file that describes the new txwaf and whats-my-ip Deployment and exposes the Deployment as a LoadBalancer Service. We’ll make sure to use different names for the Deployment and Service attributes so there is no conflict with the existing one.
  3. Test Connectivity to the ThreatX WAF Sensors: We’ll use the external IP of the protected-app service to test connectivity to the Sensors.
  4. Create the Site in the ThreatX Dashboard UI: Now that the txwaf and whats-my-ip Deployment and Service is up and running we can configure the Site to allow traffic to proxy through the Sensors to the backend.
  5. Test Traffic Through the Sensors: Perform an end-to-end test on a laptop to make sure the entire path is routing the traffic correctly. This is done by adding a host header to the curl request.
  6. Modify the Public DNS Record: After the site testing runs clean, the Google Cloud DNS Resource Record can be changed to point to the new protected-app load balancer. This will effectively migrate the site to the secured environment. It will take some time for the migration to complete while distributed DNS caches expire, but there will be no downtime as new connections are migrated.
  7. Delete the Original Deployment and Service: After all traffic is migrated to the secured environment, the old external Load Balancer, Service, and Deployment can be removed. With this step the setup and migration are complete.

1. Create the Authentication Secrets

First, we will create the secret authentication information for the ThreatX WAF Sensor containers using
kubectl. Activate the Cloud Shell from within the Google Cloud web interface and run the following command using the appropriate values for the CUSTOMER and API_KEY environment variables:

$ kubectl create secret generic txauth \
      --from-literal=customer=lab \
      --from-literal=apikey=1234567890abcdefg

secret "txauth" created

2. Create the New Deployment and Service

Create a new file called sidecar-app.yaml with the following content within your Google Cloud Shell to create the new Deployment and Service.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: tx-myip-app
spec:
  replicas: 3
  selector:
    matchLabels:
      app: tx-myip
  template:
    metadata:
      labels:
        app: tx-myip
    spec:
      containers:
        - name: myip
          image: cloudnativelabs/whats-my-ip
        - name: threatx
          image: threatx/txwaf
          ports:
            - containerPort: 80
          env:
            - name: CUSTOMER
              valueFrom:
                secretKeyRef:
                  name: txauth
                  key: customer
            - name: API_KEY
              valueFrom:
                secretKeyRef:
                  name: txauth
                  key: apikey
            - name: SENSOR_TAGS
              value: k8s,myip-app
---
apiVersion: v1
kind: Service
metadata:
  labels:
    app: tx-myip
  name: protected-app
spec:
  externalTrafficPolicy: Local
  ports:
    - port: 80
      targetPort: 80
  selector:
    app: tx-myip
  sessionAffinity: None
  type: LoadBalancer

Some highlights from this configuration file:

  • Different names for the Deployment and Service attributes were used so there is no conflict with the existing one.
  • There are two containers in the Deployment Pod spec. This is how we create the sidecar pattern. Since containers in the same Pod can communicate over localhost, the ThreatX WAF Sensor container will be able to easily reverse proxy to the web application container. Applications in both containers within the pod must be listening on different ports so there are no conflicts.
  • We are not exposing any ports on the web application container, so the Service routes external traffic only to the Sensor container. (Note that omitting containerPort is informational: other Pods on the cluster network can still reach the application’s port directly, so add a NetworkPolicy if you need to enforce that isolation.)
  • The new Service created for the Deployment is an external LoadBalancer type and will be assigned a public IP address.

Apply the configuration file and verify the new Deployment and Service are up and running in Google Cloud Shell:

$ kubectl create -f sidecar-app.yaml
deployment "tx-myip-app" created
service "protected-app" created

To view the Pods:

$ kubectl get pods
NAME                           READY  STATUS    RESTARTS  AGE
tx-myip-app-67ccf4d6c4-5rt94   2/2    Running   0         49s
tx-myip-app-67ccf4d6c4-7jrnk   2/2    Running   0         49s
tx-myip-app-67ccf4d6c4-z8tp6   2/2    Running   0         49s
web-app-646b48c885-9ckgt       1/1    Running   0         12m
web-app-646b48c885-j92sc       1/1    Running   0         12m
web-app-646b48c885-n969w       1/1    Running   0         12m
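
The READY column shows 2/2 because each tx-myip-app Pod runs two containers. You can list the container names to confirm (using one of the Pod names from the output above):

$ kubectl get pod tx-myip-app-67ccf4d6c4-5rt94 \
      -o jsonpath='{.spec.containers[*].name}'
myip threatx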

To view the Deployments:

$ kubectl get deployments
NAME          DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
tx-myip-app   3         3         3            3           1m
web-app       3         3         3            3           13m

To view the Services:

$ kubectl get services
NAME            TYPE           CLUSTER-IP      EXTERNAL-IP    PORT(S)        AGE
kubernetes      ClusterIP      10.31.240.1     <none>         443/TCP        2d
protected-app   LoadBalancer   10.31.241.249   35.197.37.55   80:32758/TCP   2m
web-app         LoadBalancer   10.31.245.163   35.233.226.3   80:30610/TCP   14m

3. Test Connectivity to the ThreatX WAF Sensors

Now that the Deployment and Service are up, let’s test connectivity to the ThreatX WAF Sensors with the curl command. We’ll use the External-IP of the new external LoadBalancer Service. A request to the Sensor that doesn’t match a configured Site will return a “Nothing to see here” message:

$ for i in {1..10}; do curl 35.197.37.55; done
<html><head></head><body>Nothing to see here</body></html><html><head></head><body>Nothing to see
here</body></html><html><head></head><body>Nothing to see here</body></html><html><head></head><bo
dy>Nothing to see here</body></html><html><head></head><body>Nothing to see here</body></html><htm
l><head></head><body>Nothing to see here</body></html><html><head></head><body>Nothing to see here
</body></html><html><head></head><body>Nothing to see here</body></html><html><head></head><body>N
othing to see here</body></html><html><head></head><body>Nothing to see here</body></html>

4. Create the Site in the ThreatX Dashboard UI

Log into the ThreatX Dashboard UI and configure the Site under Settings > Sites > Add Site. We’ll enter www.threatx.guru into the Hostname field and 127.0.0.1 into the Backends field and set the HTTP Backend Port to 8080.

5. Test Traffic Through the Sensors

To test end-to-end from our laptop, we’ll craft a curl command with the correct host header so the Sensor will route the traffic to the backend:

$ for i in {1..10}; do curl -H "host: www.threatx.guru" 35.197.37.55; done
HOSTNAME:tx-myip-app-67ccf4d6c4-z8tp6 IP:10.28.1.15
HOSTNAME:tx-myip-app-67ccf4d6c4-5rt94 IP:10.28.0.28
HOSTNAME:tx-myip-app-67ccf4d6c4-7jrnk IP:10.28.2.9
HOSTNAME:tx-myip-app-67ccf4d6c4-7jrnk IP:10.28.2.9
HOSTNAME:tx-myip-app-67ccf4d6c4-7jrnk IP:10.28.2.9
HOSTNAME:tx-myip-app-67ccf4d6c4-z8tp6 IP:10.28.1.15
HOSTNAME:tx-myip-app-67ccf4d6c4-5rt94 IP:10.28.0.28
HOSTNAME:tx-myip-app-67ccf4d6c4-5rt94 IP:10.28.0.28
HOSTNAME:tx-myip-app-67ccf4d6c4-7jrnk IP:10.28.2.9
HOSTNAME:tx-myip-app-67ccf4d6c4-z8tp6 IP:10.28.1.15

The web application is testing successfully through the sensor, so now we can move on to the migration.

6. Modify the Public DNS Record

Now we’ll reconfigure the www.threatx.guru DNS record in Google Cloud DNS so the traffic path will change from going to the old Service and Deployment to the new one.

First, we’ll open a terminal window and run an infinite loop of curl requests against the site so we can verify the site does not lose availability during the DNS record change:

$ while true; do curl www.threatx.guru; done
HOSTNAME:web-app-646b48c885-n969w IP:10.28.0.26
HOSTNAME:web-app-646b48c885-n969w IP:10.28.0.26
HOSTNAME:web-app-646b48c885-j92sc IP:10.28.0.27
HOSTNAME:web-app-646b48c885-n969w IP:10.28.0.26
HOSTNAME:web-app-646b48c885-9ckgt IP:10.28.0.25
HOSTNAME:web-app-646b48c885-9ckgt IP:10.28.0.25
HOSTNAME:web-app-646b48c885-n969w IP:10.28.0.26
...

Now we can change the DNS Record Set for www.threatx.guru from the old Service to the new one using the Google Cloud Shell.

Find the current A record for www.threatx.guru:

$ gcloud dns record-sets list --name www.threatx.guru. -z=threatx-guru
NAME                TYPE    TTL    DATA
www.threatx.guru.   A       60     35.233.226.3

Get the external load balancer IP for the new protected-app service:

$ kubectl get services protected-app
NAME            TYPE           CLUSTER-IP     EXTERNAL-IP    PORT(S)        AGE
protected-app   LoadBalancer   10.31.253.49   35.197.37.55   80:30031/TCP   54m

Start the record-set transaction:

$ gcloud dns record-sets transaction start -z=threatx-guru
Transaction started [transaction.yaml].

Remove the original record for www.threatx.guru:

$ gcloud dns record-sets transaction remove -z=threatx-guru \
> --name="www.threatx.guru." \
> --type=A \
> --ttl=60 \
> "35.233.226.3"
Record removal appended to transaction at [transaction.yaml].

Add the new record for www.threatx.guru:

$ gcloud dns record-sets transaction add -z=threatx-guru \
> --name="www.threatx.guru." \
> --type=A \
> --ttl=60 \
> "35.197.37.55"
Record addition appended to transaction at [transaction.yaml].

Execute the record set change:

$ gcloud dns record-sets transaction execute -z=threatx-guru
Executed transaction [transaction.yaml] for managed-zone [threatx-guru].
Created
[https://www.googleapis.com/dns/v1/projects/txlab-216104/managedZones/threat-guru/changes/6].
ID   START_TIME                 STATUS
6    2018-09-13T00:46:09.776Z   pending

View the changed record set:

$ gcloud dns record-sets list --name www.threatx.guru. -z=threatx-guru
NAME                TYPE    TTL    DATA
www.threatx.guru.   A       60     35.197.37.55

In another terminal window, ping www.threatx.guru a few times until you notice the IP address change. The terminal running the curl loop should show no interruption in connectivity to the website:

$ ping www.threatx.guru
PING www.threatx.guru (35.233.226.3): 56 data bytes
64 bytes from 35.233.226.3: icmp_seq=0 ttl=44 time=40.020 ms
64 bytes from 35.233.226.3: icmp_seq=1 ttl=44 time=40.921 ms
^C
$ ping www.threatx.guru
PING www.threatx.guru (35.233.226.3): 56 data bytes
64 bytes from 35.233.226.3: icmp_seq=0 ttl=44 time=39.500 ms
64 bytes from 35.233.226.3: icmp_seq=1 ttl=44 time=40.419 ms
^C
$ ping www.threatx.guru
PING www.threatx.guru (35.197.37.55): 56 data bytes
64 bytes from 35.197.37.55: icmp_seq=0 ttl=44 time=39.926 ms
64 bytes from 35.197.37.55: icmp_seq=1 ttl=44 time=39.049 ms
^C

In the other terminal window running the curl loop you should notice the traffic flip to the new protected app after the DNS information changes. There should be no traffic interruption to the site during the transition:

HOSTNAME:web-app-646b48c885-n969w IP:10.28.0.26
HOSTNAME:web-app-646b48c885-j92sc IP:10.28.0.27
HOSTNAME:web-app-646b48c885-n969w IP:10.28.0.26
HOSTNAME:web-app-646b48c885-9ckgt IP:10.28.0.25
HOSTNAME:web-app-646b48c885-n969w IP:10.28.0.26
HOSTNAME:web-app-646b48c885-n969w IP:10.28.0.26
HOSTNAME:tx-myip-app-67ccf4d6c4-7jrnk IP:10.28.2.9
HOSTNAME:tx-myip-app-67ccf4d6c4-7jrnk IP:10.28.2.9
HOSTNAME:tx-myip-app-67ccf4d6c4-7jrnk IP:10.28.2.9
HOSTNAME:tx-myip-app-67ccf4d6c4-7jrnk IP:10.28.2.9
HOSTNAME:tx-myip-app-67ccf4d6c4-5rt94 IP:10.28.0.28
HOSTNAME:tx-myip-app-67ccf4d6c4-7jrnk IP:10.28.2.9
HOSTNAME:tx-myip-app-67ccf4d6c4-5rt94 IP:10.28.0.28
HOSTNAME:tx-myip-app-67ccf4d6c4-z8tp6 IP:10.28.1.15

7. Delete the Original Deployment and Service

At this point the application is successfully migrated and we can delete the original Deployment and Service to clean up the cluster in Cloud Shell:

$ kubectl delete -f web-app.yaml
deployment "web-app" deleted
service "web-app" deleted

We can confirm that only the new Deployments and Services are running:

$ kubectl get deploy
NAME          DESIRED  CURRENT  UP-TO-DATE  AVAILABLE  AGE
tx-myip-app   3        3        3           3          1h
$ kubectl get services
NAME            TYPE           CLUSTER-IP      EXTERNAL-IP    PORT(S)       AGE
kubernetes      ClusterIP      10.31.240.1     <none>         443/TCP       2d
protected-app   LoadBalancer   10.31.241.249   35.197.37.55   80:32758/TCP  1h

Appendix

Using Local Certificates

Kubernetes provides functionality for securely storing secrets such as API keys and passwords, and we used this feature in the configurations above. When protecting SSL/TLS-encrypted sites you may also want to use Kubernetes Secrets to store the certificates and keys locally instead of transferring them to ThreatX.

To do this you will need three files for each site: a PEM-encoded certificate bundle, a PEM-encoded private key, and a small text configuration snippet. These files are added to a Kubernetes Secret that is made available to the Sensor as a local volume.

  1. The PEM certificate bundle filename will follow the format of <site-name>.crt.
  2. The PEM private key filename will follow the format of <site-name>.key.
  3. The configuration snippet filename will follow the format of <site-name>.conf. The configuration snippet will have the following content (don’t forget the trailing semicolons):
ssl_certificate /etc/threatx/localcerts/<site-name>.crt;
ssl_certificate_key /etc/threatx/localcerts/<site-name>.key;
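
For lab testing, a self-signed certificate and key pair can be generated with openssl (the site name www.site.com below is only an example):

$ openssl req -x509 -nodes -newkey rsa:2048 -days 365 \
      -subj "/CN=www.site.com" \
      -keyout www.site.com.key \
      -out www.site.com.crt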

Once the files are created, you can add them to a Kubernetes secret. You can add multiple site files to a single secret:

$ kubectl create secret generic txcerts \
--from-file=./www.site.com.crt \
--from-file=./www.site.com.key \
--from-file=./www.site.com.conf

The ThreatX sensor will expect to find the files under the /etc/threatx/localcerts/ directory. Add the following to the YAML Deployment configuration to make the site certificates and keys available to the sensor container:

spec:
  containers:
    - name: threatx
      image: threatx/txwaf
      volumeMounts:
        - name: txcerts-vol
          mountPath: /etc/threatx/localcerts
          readOnly: true
  volumes:
    - name: txcerts-vol
      secret:
        secretName: txcerts
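
After updating the Deployment, you can confirm the files are visible to the Sensor container (substitute one of your actual Pod names):

$ kubectl exec <threatx-pod-name> -c threatx -- ls /etc/threatx/localcerts
www.site.com.conf
www.site.com.crt
www.site.com.key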

Note: You will need to work with the ThreatX SOC to enable the local certificates feature.

Recommended Memory and CPU Resources

The Sensor Deployment Guide recommends a minimum of 2 GB of RAM and 2 CPU cores per Sensor. You can request these minimums within the Pod container spec:

spec:
  containers:
    - name: threatx
      image: threatx/txwaf
      resources:
        requests:
          memory: 2Gi
          cpu: 2
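
If your cluster enforces resource quotas, or you want to cap the Sensor’s footprint, you can also set limits alongside the requests. The limit values below are illustrative, not a ThreatX recommendation:

spec:
  containers:
    - name: threatx
      image: threatx/txwaf
      resources:
        requests:
          memory: 2Gi
          cpu: 2
        limits:
          memory: 4Gi
          cpu: 4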

Note: The RAM resource recommendation is based on having all features enabled. A smaller RAM footprint is possible if certain features are disabled. Please consult the ThreatX SOC if you would like to optimize the memory requirements of the Sensor container.

Using a Custom Default Configuration

There are some use cases for modifying the sensor’s default configuration, including:

  • Forwarding request logs from unconfigured sites to STDOUT so they can be logged within Kubernetes.
  • Allowing open access to the back-end applications until the sensor has successfully authenticated and pulled the site configurations. (Applicable to the Sidecar Pattern only)

Within Kubernetes, you can override the default configuration file by using a ConfigMap.

The sensor’s default configuration file is stored at /etc/nginx/sites-enabled/default. To replace this file with a custom configuration, you can create a ConfigMap entry with the new file and then add that ConfigMap to the ThreatX WAF sensor container spec in the YAML configuration.

1: Create the ConfigMap from a modified default config file:

$ kubectl create configmap txdefault --from-file=./default

2: Reference the ConfigMap in the ThreatX WAF container spec:

spec:
  containers:
    - name: threatx
      image: threatx/txwaf
      volumeMounts:
        - name: txdefault-vol
          mountPath: /etc/nginx/sites-enabled/default
          subPath: default
  volumes:
    - name: txdefault-vol
      configMap:
        name: txdefault
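
Note that a file mounted from a ConfigMap via subPath is not refreshed automatically when the ConfigMap changes. After editing the ConfigMap, restart the Sensor Pods to pick up the new file (requires kubectl 1.15 or later; substitute your Deployment name):

$ kubectl rollout restart deployment threatx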

Here is the default configuration file with some options to support the use cases above:

server {
    listen 80 default_server;
    server_name _;
    root /usr/share/nginx/html;
    index index.html index.htm;

    location / {
        add_header Content-Type text/html;
        return 200 "<html><head></head><body>Nothing to see here</body></html>";
    }
    # Change to this to forward to the app before the WAF connects to gateway
    # when using a sidecar pattern. Modify the protocol and port as needed:
    #
    # location / {
    # if (-f "/tmp/file_hashes") {
    # add_header Content-Type text/html;
    # return 200 "<html><head></head><body>Nothing to see here</body></html>";
    # }
    # proxy_pass http://127.0.0.1:8080;
    # }

    location /tx_pagespeed_stats { allow 127.0.0.1; deny all; }

    error_log /var/log/nginx/error.log error;
    access_log /var/log/nginx/access.log main;

    # Change to this to log to STDOUT:
    #
    # error_log /proc/1/fd/1 error;
    # access_log /proc/1/fd/1 main;

}

server {
    listen 443 ssl default_server;
    server_name _;

    ssl_certificate /etc/ssl/threatx/default.crt;
    ssl_certificate_key /etc/ssl/threatx/default.key;

    # Change to your default cert and key at /etc/threatx/localcerts if the
    # Kubernetes secret is configured:
    #
    # ssl_certificate /etc/threatx/localcerts/www.example.com.crt;
    # ssl_certificate_key /etc/threatx/localcerts/www.example.com.key;

    ssl_prefer_server_ciphers on;
    ssl_protocols TLSv1.1 TLSv1.2;
    ssl_ciphers "EECDH+AESGCM:EDH+AESGCM:ECDHE-RSA-AES128-GCM-SHA256:AES256+EECDH:DHE-RSA-AES128-GCM-SHA256:AES256+EDH:ECDHE-RSA-AES256-GCM-SHA384:DHE-RSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-SHA384:ECDHE-RSA-AES128-SHA256:ECDHE-RSA-AES256-SHA:ECDHE-RSA-AES128-SHA:DHE-RSA-AES256-SHA256:DHE-RSA-AES128-SHA256:DHE-RSA-AES256-SHA:DHE-RSA-AES128-SHA:AES256-GCM-SHA384:AES128-GCM-SHA256:AES256-SHA256:AES128-SHA256:AES256-SHA:AES128-SHA:HIGH:!aNULL:!eNULL:!EXPORT:!3DES:!DES:!MD5:!PSK:!RC4";
    ssl_dhparam /etc/nginx/dhparam.pem;
    root /usr/share/nginx/html;
    index index.html index.htm;

    location / {
        add_header Content-Type text/html;
        return 200 "<html><head></head><body>Nothing to see here</body></html>";
    }

    # Change to this to forward to the app before the WAF connects to gateway
    # when using a sidecar pattern. Modify the protocol and port as needed:
    #
    # location / {
    # if (-f "/tmp/file_hashes") {
    # add_header Content-Type text/html;
    # return 200 "<html><head></head><body>Nothing to see here</body></html>";
    # }
    # proxy_pass http://127.0.0.1:8080;
    # }

    location /tx_pagespeed_stats { allow 127.0.0.1; deny all; }

    error_log /var/log/nginx/default_error.log error;
    access_log /var/log/nginx/default_ssl.access.log;

    # Change to this to log to STDOUT:
    #
    # error_log /proc/1/fd/1 error;
    # access_log /proc/1/fd/1 main;

}

Last Updated 2023-03-09
