Helm · GCP GKE
This guide covers deploying a TBMQ cluster using the official Helm chart on Google Cloud Platform (GCP) using Google Kubernetes Engine (GKE).
Prerequisites
To deploy a TBMQ cluster using Helm on GKE, you need the following tools installed on your local machine:
Configure your Kubernetes environment
Configure GCP tools
See the before you begin guide for more info.
Create a new GCP project (recommended) or choose an existing one. Make sure you have selected the correct project:
```shell
gcloud init
```

Enable GKE service

```shell
gcloud services enable container.googleapis.com
```

Define environment variables
Define environment variables used in various commands throughout this guide. Execute the following command (Linux):
```shell
export GCP_PROJECT=$(gcloud config get-value project)
export GCP_REGION=us-central1
export GCP_ZONE1=us-central1-a
export GCP_ZONE2=us-central1-b
export GCP_ZONE3=us-central1-c
export GCP_NETWORK=default
export TB_CLUSTER_NAME=tbmq-cluster
export TB_DATABASE_NAME=tbmq-db
echo "You have selected project: $GCP_PROJECT, region: $GCP_REGION, gcp zones: $GCP_ZONE1,$GCP_ZONE2,$GCP_ZONE3, network: $GCP_NETWORK, cluster: $TB_CLUSTER_NAME, database: $TB_DATABASE_NAME"
```

Where:

- `$GCP_PROJECT` - your current GCP project ID (fetched automatically via `gcloud`)
- `us-central1` - one of the available compute regions, referenced as `$GCP_REGION`
- `us-central1-a/b/c` - three availability zones within that region, referenced as `$GCP_ZONE1`, `$GCP_ZONE2`, and `$GCP_ZONE3`
- `default` - the default GCP network name, referenced as `$GCP_NETWORK`
- `tbmq-cluster` - the cluster name, referenced as `$TB_CLUSTER_NAME`
- `tbmq-db` - the database name, referenced as `$TB_DATABASE_NAME`
Configure and create GKE cluster
Section titled “Configure and create GKE cluster”Create a regional cluster distributed across 3 zones. The example below provisions one e2-standard-4 node per zone
(three nodes total), but you can adjust --machine-type and --num-nodes to suit your workload requirements.
For a full list of available machine types, refer to the GCP machine types documentation.
```shell
gcloud container clusters create $TB_CLUSTER_NAME \
  --release-channel stable \
  --region $GCP_REGION \
  --network=$GCP_NETWORK \
  --node-locations $GCP_ZONE1,$GCP_ZONE2,$GCP_ZONE3 \
  --enable-ip-alias \
  --num-nodes=1 \
  --node-labels=role=main \
  --machine-type=e2-standard-4
```

Alternatively, use this guide for a custom cluster setup.
Update the context of kubectl
Connect kubectl to the newly created cluster:

```shell
gcloud container clusters get-credentials $TB_CLUSTER_NAME --region $GCP_REGION
```

Add the TBMQ Cluster Helm repository
Before installing the chart, add the TBMQ Helm repository to your local Helm client:

```shell
helm repo add tbmq-helm-chart https://helm.thingsboard.io/tbmq
helm repo update
```

Create namespace
Create a dedicated namespace for your TBMQ cluster deployment:

```shell
kubectl create namespace tbmq
kubectl config set-context --current --namespace=tbmq
```

This sets tbmq as the default namespace for your current context, so you don’t need to pass --namespace to every command.
Modify default chart values
To customize your TBMQ deployment, download the default values.yaml from the chart:

```shell
helm show values tbmq-helm-chart/tbmq-cluster > values.yaml
```

Deploy and connect to dependencies
Starting with chart version 2.0.0, the TBMQ Helm chart no longer bundles PostgreSQL, Kafka, or a Redis-compatible cache.
This step assumes you have already deployed all three so they’re reachable from the tbmq namespace. On GCP, the
common choices are managed services: Cloud SQL for PostgreSQL, Google Managed Service for Apache Kafka, and
Memorystore for Valkey. You can also self-host any of them inside the GKE cluster with operators such as CrunchyData PGO,
Strimzi, or a Valkey operator. For a Cloud SQL provisioning walkthrough, first
enable the required GCP services,
then follow the Cloud SQL provisioning instructions.
Point the chart at your instances by setting the connection values in values.yaml. At a minimum, set:
```yaml
postgresql:
  host: "10.0.0.3"
  port: 5432
  database: "thingsboard_mqtt_broker"
  username: "postgres"
  existingSecret: "my-pg-secret"
  existingSecretPasswordKey: "password"
```

```yaml
kafka:
  bootstrapServers: "bootstrap.my-kafka.us-central1.managedkafka.my-project.cloud.goog:9092"
```

```yaml
redis:
  connectionType: "standalone"
  host: "10.0.0.4"
  port: 6379
  existingSecret: "my-redis-secret"
  existingSecretPasswordKey: "redis-password"
```

Replace the hostnames, credentials, and redis.connectionType (standalone or cluster) with values that match
your actual deployment. For the full set of supported keys, see the Infrastructure Configuration section of the chart documentation.
Configure license and broker images
The chart defaults point at the open-source broker and Integration Executor images. Switch both to the PE variants and provide a license.
In your values.yaml, update the tbmq.image and tbmq-ie.image blocks:
```yaml
tbmq:
  image:
    repository: thingsboard/tbmq-pe-node
    tag: 2.3.0PE
```

```yaml
tbmq-ie:
  image:
    repository: thingsboard/tbmq-pe-integration-executor
    tag: 2.3.0PE
```

Pre-create a Kubernetes Secret holding the license value in the namespace you plan to install into, and reference it from values.yaml:

```shell
kubectl create secret generic my-tbmq-license -n tbmq \
  --from-literal=license-key='YOUR_LICENSE_VALUE'
```

```yaml
license:
  existingSecret: my-tbmq-license
```

Load balancer configuration
By default, the Helm chart deploys a standard NGINX Ingress Controller for HTTP and MQTT traffic. Since you are deploying on GCP GKE, change the load balancer type:

```yaml
loadbalancer:
  type: "gcp"
```

This automatically configures:
- Plain HTTP traffic exposed via HTTP Load Balancer
- Plain MQTT traffic exposed via TCP Load Balancer
HTTPS access
The process of configuring the load balancer using Google-managed SSL certificates is described in the official GKE documentation. Make sure you read the prerequisites carefully before proceeding.
Reserve a static global IP address:
```shell
gcloud compute addresses create tbmq-http-lb-address --global
```

Get the reserved static IP address:

```shell
gcloud compute addresses describe tbmq-http-lb-address --global --format="get(address)"
```

Configure your DNS: you must have at least one fully qualified domain name (FQDN) pointing to the reserved static IP address. This is required for the managed certificate to be issued successfully.
Update values.yaml:
```yaml
loadbalancer:
  type: "gcp"
  http:
    enabled: true
    ssl:
      enabled: true
      # Name of the ManagedCertificate resource automatically created by the Helm chart.
      certificateRef: "<your-managed-certificate-resource-name>"
      domains:
        # Must point to the reserved static IP.
        - <your-domain-name>
      # Static IP address name for the GCP HTTP(S) load balancer.
      staticIP: "tbmq-http-lb-address"
```

This will automatically issue and manage an SSL certificate via the ManagedCertificate resource
created by the Helm chart and expose TBMQ securely over HTTPS.
MQTTS access
GCP Load Balancer does not support TLS termination for MQTT traffic. To secure MQTT communication, configure mutual TLS (mTLS) directly in TBMQ. Refer to the TBMQ Helm chart documentation for details on configuring mTLS.
Install the TBMQ Helm chart
Make sure you’re in the same directory as your customized values.yaml file, then install TBMQ:

```shell
helm install my-tbmq-cluster tbmq-helm-chart/tbmq-cluster \
  -f values.yaml \
  --set installation.installDbSchema=true
```

Once the deployment completes, you should see output similar to:
```text
NAME: my-tbmq-cluster
LAST DEPLOYED: Wed Mar 26 17:42:49 2025
NAMESPACE: tbmq
STATUS: deployed
REVISION: 1
TEST SUITE: None
NOTES:
TBMQ Cluster my-tbmq-cluster will be deployed in few minutes.
Info:
Namespace: tbmq
```

Validate HTTP access
Get the address of the load balancer:

```shell
kubectl get ingress
```

Expected output:

```text
NAME                      CLASS   HOSTS   ADDRESS         PORTS   AGE
my-tbmq-cluster-http-lb   gce     *       34.111.24.134   80      3d1h
```

Use the ADDRESS field of my-tbmq-cluster-http-lb to open the TBMQ web interface in your browser.
You should see the TBMQ login page. Use the default System Administrator credentials:

- Username:
- Password: sysadmin

On first login, you are prompted to change the default password and re-login with the new credentials.
Validate HTTPS access (if configured)
Check that your domain name resolves to the reserved static IP:

```shell
dig <your-domain-name>
```

Once the DNS record is confirmed, wait for the Google-managed certificate to finish provisioning. This can take up to 60 minutes. Check certificate status with:

```shell
kubectl describe managedcertificate <your-managed-certificate-resource-name>
```

The certificate will be provisioned once the domain records are configured correctly.
Once the certificate is provisioned, use https://<your-domain-name> to connect to the cluster.
Validate MQTT access
The service my-tbmq-cluster-mqtt-lb is the LoadBalancer used for MQTT communication. Retrieve its EXTERNAL-IP with:

```shell
kubectl get services
```

Expected output:

```text
NAME                      TYPE           CLUSTER-IP       EXTERNAL-IP   PORT(S)                                                       AGE
my-tbmq-cluster-mqtt-lb   LoadBalancer   10.100.119.170   *.*.*.*       1883:30308/TCP,8084:30309/TCP,8883:31609/TCP,8085:31610/TCP   6m58s
```

Use the EXTERNAL-IP field to connect to the cluster via MQTT.
Troubleshooting
To examine service logs for errors, view TBMQ logs with:

```shell
kubectl logs -f my-tbmq-cluster-tbmq-node-0
```

Check the state of all StatefulSets:

```shell
kubectl get statefulsets
```

See the kubectl Cheat Sheet for more details.
Upgrading
Chart version 2.0.0 is a breaking change from the 1.x line: the Bitnami PostgreSQL, Kafka, and Redis subcharts that
earlier versions bundled have been removed. Upgrades from chart 1.x require either keeping the existing in-cluster
Bitnami stack (annotated with helm.sh/resource-policy: keep) or provisioning a fresh third-party stack
(PostgreSQL 17, Apache Kafka 4.0.0, Valkey 8.0+). Same-edition upgrades and CE → PE cross-edition migration on the
same TBMQ version are also supported via the chart’s pre-upgrade hook.
For the full procedure, refer to the Upgrading section of the TBMQ Helm Chart documentation on Artifact Hub. It covers the chart 1.x → 2.0.0 migration paths, the
upgrade.upgradeDbSchema and upgrade.fromVersion flags, and the CE → PE migration steps.
Uninstalling TBMQ Helm chart
To uninstall the TBMQ Helm chart:

```shell
helm delete my-tbmq-cluster
```

This removes all TBMQ components associated with the release from the current namespace.
The helm delete command removes only the logical resources of the TBMQ cluster. Per-pod PVCs created from the
broker’s volumeClaimTemplate are intentionally not deleted by helm uninstall and carry no app= label.
Drop them explicitly by name pattern:
```shell
kubectl get pvc -n tbmq -o name | grep tbmq-node-data | xargs -I {} kubectl delete -n tbmq {}
```

Cloud SQL, Managed Kafka, and Memorystore resources are owned by GCP and are not affected by helm delete. Drop
them through their own console or IaC tooling if you no longer need them.
Delete Kubernetes cluster
To delete the GKE cluster:

```shell
gcloud container clusters delete $TB_CLUSTER_NAME --region=$GCP_REGION
```