24 Jun 2020

Adding Autoscaling to HPCC Systems On the Google Cloud Platform

Page Overview

This page will cover the scaling of an HPCC Kubernetes Cluster along with Vertical and Horizontal Pod Scaling.

Cluster Autoscaler

What does Cluster Autoscaler do?

Cluster Autoscaler automatically resizes the number of nodes in your cluster based on the demands of your workloads. Cluster Autoscaler removes the need for cloud admins to manually add/remove nodes or over-provision nodes.


(If you are not familiar with the Cloud SDK, I recommend you use the Cloud Console. The GUI is intuitive and will accomplish the same things as the Cloud SDK.)

Create a cluster with cluster autoscaling enabled:

For this tutorial, we will be creating an HPCC Cluster called “hpcc-cluster” (very original :))

Run the command below with your own parameters:

gcloud container clusters create hpcc-cluster \
  --zone <zone> \
  --node-locations <location(s)> \
  --num-nodes <num-nodes> --enable-autoscaling --min-nodes <min-nodes> --max-nodes <max-nodes>
  • --zone Specify a compute zone. Ex: us-east1-b
  • --node-locations Specify the zone(s) to run the nodes in. Ex: us-east1-b,us-east1-c,us-east1-d (Multiple zones must be split by comma)
  • --num-nodes The number of nodes in a pool in a zonal cluster. If the cluster is multi-zonal, num-nodes is the number of nodes per zone.
  • --enable-autoscaling Enables cluster autoscaling for the default node pool.
  • --min-nodes Minimum node pool size.
  • --max-nodes Maximum node pool size.
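As a concrete illustration, here is the command filled in with hypothetical values (the zone, node locations, and pool sizes are examples only; substitute your own). The leading echo prints the command so you can review it first; remove the echo to actually create the cluster:

```shell
# Hypothetical parameters -- replace with your own before running for real.
# The leading "echo" prints the command for review; delete it to execute.
echo gcloud container clusters create hpcc-cluster \
  --zone us-east1-b \
  --node-locations us-east1-b,us-east1-c \
  --num-nodes 2 --enable-autoscaling --min-nodes 1 --max-nodes 5
```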

Add cluster autoscaler to an existing cluster

Run the command below with your own parameters:

gcloud container clusters update cluster-name --enable-autoscaling \
    --min-nodes 1 --max-nodes 10 --zone <zone> --node-pool default-pool
  • cluster-name The name of your existing cluster.
  • --zone Specify the compute zone of the cluster. Ex: us-east1-b
  • --node-pool Specify the node pool. If there is only one node pool, use "default-pool".
  • --enable-autoscaling Enables cluster autoscaling for the node pool.
  • --min-nodes Minimum node pool size.
  • --max-nodes Maximum node pool size.

Manual Scaling of an HPCC Application:

What is Google Kubernetes Engine (GKE) scaling and what does it do?

When applications are deployed in GKE, you can determine how many replicas you want to run of each application. When GKE applications are scaled, you either increase or decrease the number of replicas.

Ex: Say your GKE application is using more resources than you require. Decreasing the number of replicas frees those resources, thus saving money.


Scaling the Application:

For this example, we will be using the kubectl apply command, as it is easier to control/edit through config files. There are other options you can choose from, which are listed in the Kubernetes documentation.

If the HPCC Systems deployment already exists, you may be better off using kubectl scale or the Google Cloud Console.

Kubectl Scale:

Kubectl scale lets you instantly change the number of replicas for a deployment. Steps:

  • Run kubectl get deployments to get the controller deployment names. You should get something like this:
    NAME                                  READY   UP-TO-DATE   AVAILABLE   AGE
    hthor                                 1/1     1            1           6d6h
    mydali                                1/1     1            1           6d6h
    myeclccserver                         1/1     1            1           6d6h
    myesp                                 1/1     1            1           6d6h
    nfsstorage-hpcc-nfs                   1/1     1            1           6d7h
    roxie                                 1/1     1            1           6d6h
    roxie-cluster-slave-1                 1/1     1            1           6d6h
    roxie-cluster-slave-2                 1/1     1            1           6d6h
    roxie-cluster-toposerver              1/1     1            1           6d6h
    thor-agent                            1/1     1            1           6d6h
    thor-thoragent                        1/1     1            1           6d6h
  • Run kubectl scale deployments <my-app> --replicas 4 to set the replica count to 4. This value can be changed. <my-app> represents one deployment out of the list. If you want to, you can do this for every single deployment.
  • Finally, verify that the replica count has been updated with kubectl get deployments <my-app>. The result should be:
    NAME                  READY   UP-TO-DATE   AVAILABLE   AGE
    my-app                4/4     4            4           15m
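If you want to apply the same replica count to several of the deployments from the listing above in one go, a small shell loop helps. The echo prints each command for review; remove it to actually apply the changes:

```shell
# Preview a scale command for each deployment name (remove "echo" to apply).
for d in roxie hthor myesp; do
  echo kubectl scale deployments "$d" --replicas 2
done
```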

Enable GKE Scaling before deployment:

The HPCC deployments do come with a predefined number of replicas in their respective yaml files. You can change these values to whatever you would like by either creating or editing spec: replicas.

The replica structure may look a little confusing for HPCC Systems because the replica counts are occasionally defined in other files. For example, in the helm/hpcc/values.yaml file, you can set the replica counts for roxie, myeclccserver, myesp, and hthor. So my advice would be to use kubectl scale after deployment.
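For reference, here is a minimal sketch of where spec: replicas sits in a generic Kubernetes deployment manifest (the names and image are placeholders, not taken from the actual HPCC charts):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: <deployment-name>
spec:
  replicas: 2              # the predefined replica count you can edit
  selector:
    matchLabels:
      app: <deployment-name>
  template:
    metadata:
      labels:
        app: <deployment-name>
    spec:
      containers:
      - name: <container-name>
        image: <image>
```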

Horizontal Autoscaling:

Similar to the manual scaling above, Horizontal Autoscaling increases or decreases the number of replicas, but (as the name implies) automatically.


Enable Horizontal Autoscaling:

For this Horizontal Autoscaling tutorial, we will be using kubectl apply.

  • First, in the deployment yaml file, specify requests in the resources section of the container spec:
    resources:
      requests:
        # based on CPU utilization
        cpu: "250m"

    (This will have to be changed in each deployment yaml file)

  • Create a file called <deployment>-hpa.yaml and place it in the same file path as the original deployment yaml file. Change <deployment> to the name of the deployment file.
  • Add this code to your new hpa yaml file:
    apiVersion: autoscaling/v1
    kind: HorizontalPodAutoscaler
    metadata:
      name: <deployment-name>
    spec:
      scaleTargetRef:
        apiVersion: apps/v1
        kind: Deployment
        name: <deployment-name>
      minReplicas: 1
      maxReplicas: 10
      targetCPUUtilizationPercentage: 50
  • Deploy using this command: kubectl apply -f <deployment>-hpa.yaml

(You can change the minReplicas, maxReplicas, and targetCPUUtilizationPercentage to whatever you like. However, make sure your cluster has enough resources to handle any changes you make.)

  • Finally, verify that the HPA is working by typing this command: kubectl get hpa.

    (Expected Result)

    NAME   REFERENCE         TARGETS   MINPODS   MAXPODS   REPLICAS   AGE
    name   Deployment/name   0%/50%    1         10        3          61s
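To build intuition for what targetCPUUtilizationPercentage does: the HPA control loop computes, roughly, desiredReplicas = ceil(currentReplicas × currentUtilization / targetUtilization), clamped between minReplicas and maxReplicas. A quick shell sketch of that arithmetic (the numbers are hypothetical):

```shell
# HPA rule of thumb: desired = ceil(current * utilization / target), clamped.
current=3; utilization=100; target=50; min=1; max=10
desired=$(( (current * utilization + target - 1) / target ))  # integer ceiling
if [ "$desired" -lt "$min" ]; then desired=$min; fi
if [ "$desired" -gt "$max" ]; then desired=$max; fi
echo "$desired"  # 3 replicas at 100% CPU against a 50% target scale out to 6
```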

Vertical Autoscaling:

What does Vertical Autoscaling do?

Vertical Autoscaler analyzes and adjusts your containers’ CPU and memory requests. It can be configured to only give recommendations or to make automatic changes to those requests.


How to enable Vertical Autoscaling for a cluster:

Creating a new cluster with vertical autoscaling is very straightforward. Just enter the command below into the Cloud SDK.

gcloud container clusters create cluster-name \
    --enable-vertical-pod-autoscaling --cluster-version=1.14.7

Of course, change “cluster-name” to your HPCC cluster’s name.

Enabling vertical autoscaling on an existing cluster is also very simple. Enter the command below into the Cloud SDK.

gcloud container clusters update cluster-name --enable-vertical-pod-autoscaling

Getting resource recommendations:

  • Create a new yaml file called <deployment>-vpa.yaml in the same file path as the deployment you are trying to enable Vertical Autoscaler on.
  • Copy and paste this code with your own parameters into that yaml file:
    apiVersion: autoscaling.k8s.io/v1
    kind: VerticalPodAutoscaler
    metadata:
      name: <deployment>-vpa
    spec:
      targetRef:
        apiVersion: "apps/v1"
        kind:       Deployment
        name:       <deployment>
      updatePolicy:
        updateMode: "Off"
  • Create the VPA object with the command kubectl create -f <deployment>-vpa.yaml

When updateMode is set to “Off”, VPA does not take action on its resource recommendations. It still analyzes CPU and memory usage and outputs its results in the “status” field.

When updateMode is set to “Auto”, VPA has the ability to automatically scale pods based on its own recommendations. If you want VPA to take action on resource recommendations, change “Off” to “Auto”.
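As a sketch, the spec section of the VPA manifest above with automatic updates enabled would read (everything else unchanged):

```yaml
spec:
  targetRef:
    apiVersion: "apps/v1"
    kind:       Deployment
    name:       <deployment>
  updatePolicy:
    updateMode: "Auto"   # VPA may evict and recreate pods to apply new requests
```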

You can access the recommendations through the command kubectl get vpa <deployment>-vpa --output yaml. (Expected Results)

    recommendation:
      containerRecommendations:
      - containerName: <my-container>
        lowerBound:
          cpu: 25m
          memory: 262144k
        target:
          cpu: 25m
          memory: 262144k
        upperBound:
          cpu: 7931m
          memory: 8291500k

Next Steps

After finishing this tutorial on autoscaling, the next step would be introducing yaml files into the HPCC ecosystem that manage autoscaling through code. Compared with running individual command-line commands, this simplifies the process considerably.



Thank you for reading my blog! Stay tuned for more!