How to switch from an NGINX Ingress Regional Load Balancer to a Global HTTPS Load Balancer on GCP

If you are using GitLab and you set up your cloud environment on GCP with the cluster-management project from GitLab, you will sooner or later notice that there are a few things missing which you may need. Here we will focus on the missing implementation to set up a Global HTTPS Load Balancer instead of a Regional HTTPS Load Balancer in order to use Cloud Armor on the cloud project. The first step is to adjust the settings of the NGINX ingress controller (for example in the Helm values used by the cluster-management project) as follows:

controller:
  stats:
    enabled: true
  podAnnotations:
    prometheus.io/scrape: "true"
    prometheus.io/port: "10254"
  service:
    type: ClusterIP
    annotations:
      cloud.google.com/neg: '{"exposed_ports": {"80":{"name": "ingress-nginx-80-neg"}}}'

By applying these settings, we override the service type of the ingress controller to ClusterIP (it was LoadBalancer by default before) and we also annotate that service with a NEG (Network Endpoint Group). The NEG will serve as the backend of the new Global Load Balancer which we need to create. Let the pipeline run, so the changes are deployed in your cluster. The ingress controller service should now be deployed with the type ClusterIP.
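
Once the pipeline has run, you can verify the result. A minimal sketch, assuming the controller service is called ingress-nginx-controller and lives in the ingress-nginx namespace (adjust the name and namespace to your installation):

# Check that the controller service is now of type ClusterIP (name/namespace are assumptions)
kubectl -n ingress-nginx get svc ingress-nginx-controller

# Check that the NEG from the annotation has been created
gcloud compute network-endpoint-groups list --filter="name=ingress-nginx-80-neg"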

Change the necessary stuff on the GCP side

First of all, you need to connect to your GCP project and make sure you are on the right context if you have multiple; you can check with kubectl config get-contexts (the current context is marked with a star). Then you need to set some global variables. They can look like this:

ZONE=europe-west6-c
CLUSTER_NAME=my-gke-cluster
HEALTH_CHECK_NAME=nginx-ingress-controller-health-check
NETWORK_NAME=my-review-default-vpc
NETWORK_TAGS=$(gcloud compute instances describe \
    $(kubectl get nodes -o jsonpath='{.items[0].metadata.name}') \
    --zone=$ZONE --format="value(tags.items[0])")

Please note that most (maybe all?) of the following configurations can also be done via the GCP frontend. However, I did not test that and stuck to the command-line way.

Create a static IP

If you don’t have your own IP address, you need to create a static IP address with the following command:

gcloud compute addresses create ${CLUSTER_NAME}-loadbalancer-ip \
    --global \
    --ip-version IPV4

Create Firewall Rule

You need to create a firewall rule to allow our new Global Load Balancer to access and communicate with our cluster. The IP ranges which we are permitting are used for the health checks and are the addresses of the Google Front Ends (GFEs) that connect to the backend. Without them, the health checks for the NEG (the backend of the load balancer) will not succeed. Use the following command to create the needed firewall rule:

gcloud compute firewall-rules create ${CLUSTER_NAME}-allow-tcp-loadbalancer \
    --allow tcp:80 \
    --source-ranges 130.211.0.0/22,35.191.0.0/16 \
    --target-tags $NETWORK_TAGS \
    --network $NETWORK_NAME  

Create the Health Check

We need to create a health check for the upcoming backend service to see if everything is still fine in our system.

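A minimal sketch of creating the health check, assuming the NGINX ingress controller answers health probes on its HTTP port 80 under /healthz (adjust port and request path if your setup differs):

# Health check for the ingress controller endpoints (path /healthz is an assumption)
gcloud compute health-checks create http $HEALTH_CHECK_NAME \
    --port 80 \
    --request-path /healthz \
    --global

Create the Backend Service

Next, create the backend service of the upcoming load balancer and attach the health check to it:
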
gcloud compute backend-services create ${CLUSTER_NAME}-backend-service \
    --load-balancing-scheme=EXTERNAL \
    --protocol=HTTP \
    --port-name=http \
    --health-checks=$HEALTH_CHECK_NAME \
    --global  

Add the ingress (NEG) to the Backend-Service

The backend service of the upcoming load balancer will now get the ingress attached, which is now exposed as a NEG and not as a LoadBalancer service anymore. Use the following command:

gcloud compute backend-services add-backend ${CLUSTER_NAME}-backend-service \
  --network-endpoint-group=ingress-nginx-80-neg \
  --network-endpoint-group-zone=$ZONE \
  --balancing-mode=RATE \
  --capacity-scaler=1.0 \
  --max-rate-per-endpoint=100 \
  --global

Create the new Global HTTPS Load Balancer

You can create the new load balancer by using the following command. But be patient, since this may take a few minutes to complete or to become visible in the GCP frontend.

gcloud compute url-maps create ${CLUSTER_NAME}-loadbalancer \
    --default-service ${CLUSTER_NAME}-backend-service 

Create a self-managed certificate

I will dive deeper into the certificate topic on the load balancer in the next blog post. Since we use an HTTPS Load Balancer, we need to have some kind of certificate. I created a self-managed certificate via OpenSSL (see the sketch below) and uploaded the contents of the files to GCP via the frontend. However, you can also do the upload from the command line, as shown after the sketch.
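
A minimal sketch of generating such a self-signed certificate with OpenSSL, assuming the file names used in the upload command below and a placeholder domain (replace example.com with your actual domain):

# Self-signed certificate, only for testing; clients will not trust it
openssl req -x509 -newkey rsa:2048 -nodes -days 365 \
    -keyout my-cert-key.pem -out my-cert.pem \
    -subj "/CN=example.com"

To upload the certificate via gcloud instead of the frontend, use the following command (set $CERTIFICATE_NAME to a name of your choice):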

gcloud compute ssl-certificates create $CERTIFICATE_NAME \
    --certificate=my-cert.pem \
    --private-key=my-cert-key.pem \
    --global

Please do not use this in production, since the certificate will not be trusted by clients and the connection will not be properly secured. Refer to the next blog post on how to handle certificates on the load balancer.

Edit the Loadbalancer in the GCP Frontend

The load balancer should now be visible. Select it and click on edit. Everything should be configured except for the frontend configuration. Do the following configurations (a rough gcloud equivalent is sketched after the list):
– Select Add Frontend IP and Port
– Name it and select HTTPS
– If you reserved a static IP address, use it in the IP address field, otherwise use Ephemeral
– Select the created certificate (you can also use a Google-managed one)
– If you have a static IP address, you can enable the HTTP to HTTPS redirect. There will be a new “loadbalancer” / “mapping” created without a backend; I’m pretty sure (not 100%) it’s more or less a forwarding rule
– Save the load balancer
– Check that everything is healthy and works
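
If you prefer the command line instead of the GCP frontend, this frontend configuration roughly corresponds to a target HTTPS proxy plus a global forwarding rule. A minimal sketch, assuming the resources created above (the proxy and forwarding-rule names are only examples):

# Tie the URL map (the load balancer) to the uploaded certificate
gcloud compute target-https-proxies create ${CLUSTER_NAME}-https-proxy \
    --url-map=${CLUSTER_NAME}-loadbalancer \
    --ssl-certificates=$CERTIFICATE_NAME

# Create the frontend: a global forwarding rule on port 443 using the reserved static IP
gcloud compute forwarding-rules create ${CLUSTER_NAME}-https-forwarding-rule \
    --load-balancing-scheme=EXTERNAL \
    --address=${CLUSTER_NAME}-loadbalancer-ip \
    --target-https-proxy=${CLUSTER_NAME}-https-proxy \
    --ports=443 \
    --global
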
Congratulations, your load balancer is up and running 🙂

Additional Information on how this works

How is the NEG updated?

The load balancer / the NEG gets notified when the deployment of the ingress controller changes its pods. Therefore it is also possible to scale up the ingress controller itself; the NEG updates itself whenever something changes. This is configured by default in the deployment of the ingress controller with --publish-service.
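
For example, you can scale the ingress controller and watch the endpoints of the NEG follow. A small sketch, assuming the deployment is called ingress-nginx-controller in the ingress-nginx namespace (adjust to your installation):

# Scale the ingress controller (deployment name/namespace are assumptions)
kubectl -n ingress-nginx scale deployment ingress-nginx-controller --replicas=3

# Shortly afterwards, the NEG should list three endpoints
gcloud compute network-endpoint-groups list-network-endpoints ingress-nginx-80-neg \
    --zone=$ZONE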

Firewall Rule specific network tag

The firewall rule for the connection to the backend only applies to a specific network tag, which looks like a node name, for example “gke-staging-456b4340-node”. However, this is a network tag which is present on every Compute Engine instance of the cluster. Therefore the health checks keep working even if new nodes are added or existing ones change.
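
If you want to double-check this, you can list the network tags of all cluster nodes. A small sketch, assuming your node names start with gke- (adjust the filter to your cluster):

# Show the network tags of the cluster's compute instances
gcloud compute instances list \
    --filter="name~'^gke-'" \
    --format="table(name, tags.items)"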

Kai Müller
Software Engineer