
Helm · GCP GKE

This guide covers deploying a TBMQ cluster using the official Helm chart on Google Cloud Platform (GCP) using Google Kubernetes Engine (GKE).

To deploy a TBMQ cluster using Helm on GKE, you need the following tools installed on your local machine: the gcloud CLI, kubectl, and Helm.

See the before you begin guide for more info.
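
To quickly confirm the tools are installed, you can check their versions:

Terminal window
gcloud version
kubectl version --client
helm version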

Create a new GCP project (recommended) or choose an existing one. Make sure you have selected the correct project:

Terminal window
gcloud init

Then enable the Kubernetes Engine API, which is required to create GKE clusters:

Terminal window
gcloud services enable container.googleapis.com

Define environment variables used in various commands throughout this guide. Execute the following commands (Linux):

Terminal window
export GCP_PROJECT=$(gcloud config get-value project)
export GCP_REGION=us-central1
export GCP_ZONE1=us-central1-a
export GCP_ZONE2=us-central1-b
export GCP_ZONE3=us-central1-c
export GCP_NETWORK=default
export TB_CLUSTER_NAME=tbmq-cluster
export TB_DATABASE_NAME=tbmq-db
echo "You have selected project: $GCP_PROJECT, region: $GCP_REGION, gcp zones: $GCP_ZONE1,$GCP_ZONE2,$GCP_ZONE3, network: $GCP_NETWORK, cluster: $TB_CLUSTER_NAME, database: $TB_DATABASE_NAME"

Where:

  • $GCP_PROJECT — your current GCP project ID (fetched automatically via gcloud)
  • us-central1 — one of the available compute regions, referenced as $GCP_REGION
  • us-central1-a, us-central1-b, us-central1-c — three zones within that region, referenced as $GCP_ZONE1, $GCP_ZONE2, and $GCP_ZONE3
  • default — the default GCP network name, referenced as $GCP_NETWORK
  • tbmq-cluster — the cluster name, referenced as $TB_CLUSTER_NAME
  • tbmq-db — the database instance name, referenced as $TB_DATABASE_NAME

Create a regional cluster distributed across 3 zones. The example below provisions one e2-standard-4 node per zone (three nodes total), but you can adjust --machine-type and --num-nodes to suit your workload requirements. For a full list of available machine types, refer to the GCP machine types documentation.
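
You can also list the machine types available in your zones from the terminal, for example:

Terminal window
gcloud compute machine-types list --filter="zone:$GCP_ZONE1"

Then create the cluster: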

Terminal window
gcloud container clusters create $TB_CLUSTER_NAME \
--release-channel stable \
--region $GCP_REGION \
--network=$GCP_NETWORK \
--node-locations $GCP_ZONE1,$GCP_ZONE2,$GCP_ZONE3 \
--enable-ip-alias \
--num-nodes=1 \
--node-labels=role=main \
--machine-type=e2-standard-4

Alternatively, use this guide for a custom cluster setup.

Connect kubectl to the newly created cluster:

Terminal window
gcloud container clusters get-credentials $TB_CLUSTER_NAME --region $GCP_REGION
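
To verify that kubectl now points at the new cluster, list the nodes; with the settings above you should see three, one per zone:

Terminal window
kubectl get nodes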

Before installing the chart, add the TBMQ Helm repository to your local Helm client:

Terminal window
helm repo add tbmq-helm-chart https://helm.thingsboard.io/tbmq
helm repo update

Create a dedicated namespace for your TBMQ cluster deployment:

Terminal window
kubectl create namespace tbmq
Terminal window
kubectl config set-context --current --namespace=tbmq

This sets tbmq as the default namespace for your current context, so you don’t need to pass --namespace to every command.
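
You can verify the active namespace of the current context with:

Terminal window
kubectl config view --minify | grep namespace: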

To customize your TBMQ deployment, download the default values.yaml from the chart:

Terminal window
helm show values tbmq-helm-chart/tbmq-cluster > values.yaml

Starting with chart version 2.0.0, the TBMQ Helm chart no longer bundles PostgreSQL, Kafka, or a Redis-compatible cache. This step assumes you have already deployed all three so they’re reachable from the tbmq namespace. On GCP, the common choices are managed services: Cloud SQL for PostgreSQL, Google Managed Service for Apache Kafka, and Memorystore for Valkey. You can also self-host any of them inside the GKE cluster with operators like CrunchyData PGO, Strimzi, or Valkey. For a Cloud SQL provisioning walkthrough, first enable the required GCP services, then follow the Cloud SQL provisioning instructions.
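
If you opt for the managed services, their APIs must be enabled in the project first. A sketch of the command (the API names for Cloud SQL, Managed Kafka, and Memorystore are assumptions here; verify them against the current GCP documentation):

Terminal window
gcloud services enable sqladmin.googleapis.com managedkafka.googleapis.com memorystore.googleapis.com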

Point the chart at your instances by setting the connection values in values.yaml. At a minimum, set:

postgresql:
  host: "10.0.0.3"
  port: 5432
  database: "thingsboard_mqtt_broker"
  username: "postgres"
  existingSecret: "my-pg-secret"
  existingSecretPasswordKey: "password"

kafka:
  bootstrapServers: "bootstrap.my-kafka.us-central1.managedkafka.my-project.cloud.goog:9092"

redis:
  connectionType: "standalone"
  host: "10.0.0.4"
  port: 6379
  existingSecret: "my-redis-secret"
  existingSecretPasswordKey: "redis-password"

Replace the hostnames, credentials, and redis.connectionType (standalone or cluster) with values that match your actual deployment. For the full set of supported keys, see the Infrastructure Configuration section of the chart documentation.
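
The existingSecret entries above assume those Secrets already exist in the tbmq namespace. A minimal sketch of creating them, with names and keys matching the example values.yaml (substitute your real passwords):

Terminal window
kubectl create secret generic my-pg-secret -n tbmq --from-literal=password='YOUR_PG_PASSWORD'
kubectl create secret generic my-redis-secret -n tbmq --from-literal=redis-password='YOUR_REDIS_PASSWORD'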

Configure license and broker images

The chart defaults point at the open-source broker and Integration Executor images. Switch both to the PE variants and provide a license.

In your values.yaml, update the tbmq.image and tbmq-ie.image blocks:

tbmq:
  image:
    repository: thingsboard/tbmq-pe-node
    tag: 2.3.0PE

tbmq-ie:
  image:
    repository: thingsboard/tbmq-pe-integration-executor
    tag: 2.3.0PE

Pre-create a Kubernetes Secret holding the license value in the namespace you plan to install into, and reference it from values.yaml:

Terminal window
kubectl create secret generic my-tbmq-license -n tbmq \
--from-literal=license-key='YOUR_LICENSE_VALUE'

Then, in values.yaml:

license:
  existingSecret: my-tbmq-license

By default, the Helm chart deploys a standard NGINX Ingress Controller for HTTP and MQTT traffic. Since you are deploying on GCP GKE, change the load balancer type:

loadbalancer:
  type: "gcp"

This automatically configures:

  • Plain HTTP traffic exposed via HTTP Load Balancer
  • Plain MQTT traffic exposed via TCP Load Balancer

The process of configuring the load balancer using Google-managed SSL certificates is described in the official GKE documentation. Make sure you read the prerequisites carefully before proceeding.

Reserve a static global IP address:

Terminal window
gcloud compute addresses create tbmq-http-lb-address --global

Get the reserved static IP address:

Terminal window
gcloud compute addresses describe tbmq-http-lb-address --global --format="get(address)"

Configure your DNS — you must have at least one fully qualified domain name (FQDN) pointing to the reserved static IP address. This is required for the managed certificate to be issued successfully.
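
If the zone is hosted in Cloud DNS, the A record can be created from the terminal. A sketch, assuming a managed zone named <your-dns-zone> and the address returned by the describe command above:

Terminal window
gcloud dns record-sets create <your-domain-name>. --zone="<your-dns-zone>" --type="A" --ttl=300 --rrdatas="<reserved-static-ip>"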

Update values.yaml:

loadbalancer:
  type: "gcp"
  http:
    enabled: true
    ssl:
      enabled: true
      # Name of the ManagedCertificate resource automatically created by the Helm chart.
      certificateRef: "<your-managed-certificate-resource-name>"
      domains:
        # Must point to the reserved static IP.
        - <your-domain-name>
      # Static IP address name for the GCP HTTP(S) load balancer.
      staticIP: "tbmq-http-lb-address"

This will automatically issue and manage an SSL certificate via the ManagedCertificate resource created by the Helm chart and expose TBMQ securely over HTTPS.

GCP Load Balancer does not support TLS termination for MQTT traffic. To secure MQTT communication, configure mutual TLS (mTLS) directly in TBMQ. Refer to the TBMQ Helm chart documentation for details on configuring mTLS.

Make sure you’re in the same directory as your customized values.yaml file, then install TBMQ:

Terminal window
helm install my-tbmq-cluster tbmq-helm-chart/tbmq-cluster \
-f values.yaml \
--set installation.installDbSchema=true

Once the deployment completes, you should see output similar to:

NAME: my-tbmq-cluster
LAST DEPLOYED: Wed Mar 26 17:42:49 2025
NAMESPACE: tbmq
STATUS: deployed
REVISION: 1
TEST SUITE: None
NOTES:
TBMQ Cluster my-tbmq-cluster will be deployed in few minutes.
Info:
Namespace: tbmq

Get the address of the load balancer:

Terminal window
kubectl get ingress

Expected output:

NAME                      CLASS   HOSTS   ADDRESS         PORTS   AGE
my-tbmq-cluster-http-lb   gce     *       34.111.24.134   80      3d1h

Use the ADDRESS field of my-tbmq-cluster-http-lb to open the TBMQ web interface in your browser.

You should see the TBMQ login page. Use the default System Administrator credentials:

Username: sysadmin@thingsboard.org

Password: sysadmin

On first login, you are prompted to change the default password and re-login with the new credentials.

Check that your domain name resolves to the reserved static IP:

Terminal window
dig <your-domain-name>

Once the DNS record is confirmed, wait for the Google-managed certificate to finish provisioning. This can take up to 60 minutes. Check certificate status with:

Terminal window
kubectl describe managedcertificate <your-managed-certificate-resource-name>

The certificate will be provisioned once the domain records are configured correctly. Use <your-domain-name> to connect to the cluster.

The service my-tbmq-cluster-mqtt-lb is the LoadBalancer used for MQTT communication. Retrieve its EXTERNAL-IP with:

Terminal window
kubectl get services

Expected output:

NAME                      TYPE           CLUSTER-IP       EXTERNAL-IP   PORT(S)                                                        AGE
my-tbmq-cluster-mqtt-lb   LoadBalancer   10.100.119.170   *.*.*.*       1883:30308/TCP,8084:30309/TCP,8883:31609/TCP,8085:31610/TCP   6m58s

Use the EXTERNAL-IP field to connect to the cluster via MQTT.
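
For a quick connectivity check, you can publish a test message with the Mosquitto CLI client (assumes mosquitto-clients is installed locally and MQTT client credentials exist in TBMQ; all names below are placeholders):

Terminal window
mosquitto_pub -d -h <EXTERNAL-IP> -p 1883 -u '<mqtt-username>' -P '<mqtt-password>' -t 'test/topic' -m 'Hello TBMQ'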

To examine service logs for errors, view TBMQ logs with:

Terminal window
kubectl logs -f my-tbmq-cluster-tbmq-node-0

Check the state of all StatefulSets:

Terminal window
kubectl get statefulsets

See the kubectl Cheat Sheet for more details.
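
To spot scheduling or readiness problems, list the pods and inspect events on any pod that is not Running, e.g.:

Terminal window
kubectl get pods
kubectl describe pod my-tbmq-cluster-tbmq-node-0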

Chart version 2.0.0 is a breaking change from the 1.x line: the Bitnami PostgreSQL, Kafka, and Redis subcharts that earlier versions bundled have been removed. Upgrades from chart 1.x require either keeping the existing in-cluster Bitnami stack (annotated with helm.sh/resource-policy: keep) or provisioning a fresh third-party stack (PostgreSQL 17, Apache Kafka 4.0.0, Valkey 8.0+). Same-edition upgrades and CE → PE cross-edition migration on the same TBMQ version are also supported via the chart’s pre-upgrade hook.

For the full procedure, refer to the Upgrading section of the TBMQ Helm Chart documentation on Artifact Hub. It covers the chart 1.x → 2.0.0 migration paths, the upgrade.upgradeDbSchema and upgrade.fromVersion flags, and the CE → PE migration steps.
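
For a same-edition upgrade, the command typically looks like the sketch below; treat it as an outline and confirm the exact flags, including upgrade.upgradeDbSchema and upgrade.fromVersion, against the chart documentation:

Terminal window
helm repo update
helm upgrade my-tbmq-cluster tbmq-helm-chart/tbmq-cluster \
-f values.yaml \
--set upgrade.upgradeDbSchema=true \
--set upgrade.fromVersion=<previous-version>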

To uninstall the TBMQ Helm chart:

Terminal window
helm delete my-tbmq-cluster

This removes all TBMQ components associated with the release from the current namespace.

The helm delete command removes only the logical resources of the TBMQ cluster. Per-pod PVCs created from the broker’s volumeClaimTemplate are intentionally not deleted by helm uninstall and carry no app= label. Drop them explicitly by name pattern:

Terminal window
kubectl get pvc -n tbmq -o name | grep tbmq-node-data | xargs -I {} kubectl delete -n tbmq {}

Cloud SQL, Managed Kafka, and Memorystore resources are owned by GCP and are not affected by helm delete. Delete them through their own console or IaC tooling if you no longer need them.

To delete the GKE cluster:

Terminal window
gcloud container clusters delete $TB_CLUSTER_NAME --region=$GCP_REGION