
GCP

This guide covers setting up TBMQ in cluster mode on Google Kubernetes Engine (GKE).

Install kubectl, helm, and gcloud tools. See the before you begin guide for more info.

Create a new Google Cloud Platform project (recommended) or select an existing one. Make sure the correct project is active:

Terminal window
gcloud init

Enable the required GCP services:

Terminal window
gcloud services enable container.googleapis.com sql-component.googleapis.com sqladmin.googleapis.com
Clone the TBMQ repository and switch to the GCP deployment directory:

Terminal window
git clone -b release-2.3.0 https://github.com/thingsboard/tbmq.git
cd tbmq/k8s/gcp

Define environment variables used throughout this guide. Execute the following commands on Linux:

Terminal window
export GCP_PROJECT=$(gcloud config get-value project)
export GCP_REGION=us-central1
export GCP_ZONE1=us-central1-a
export GCP_ZONE2=us-central1-b
export GCP_ZONE3=us-central1-c
export GCP_NETWORK=default
export TB_CLUSTER_NAME=tbmq-cluster
export TB_DATABASE_NAME=tbmq-db
echo "Project: $GCP_PROJECT, region: $GCP_REGION, zones: $GCP_ZONE1,$GCP_ZONE2,$GCP_ZONE3, network: $GCP_NETWORK, cluster: $TB_CLUSTER_NAME, database: $TB_DATABASE_NAME"

Where:

  • $GCP_PROJECT — fetched from your current gcloud config.
  • us-central1 — one of the available compute regions ($GCP_REGION).
  • default — default GCP network name ($GCP_NETWORK).
  • tbmq-cluster — cluster name ($TB_CLUSTER_NAME).
  • tbmq-db — database server name ($TB_DATABASE_NAME).
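Because every later command depends on these variables, it can help to fail fast if any of them is unset. The helper below is a hypothetical sketch, not part of the official scripts; the example values are placeholders:

```shell
# Hypothetical pre-flight helper: verify that every variable the rest of
# this guide relies on is set and non-empty before running gcloud commands.
check_env() {
  local var missing=0
  for var in "$@"; do
    if [ -z "${!var}" ]; then
      echo "Missing: $var" >&2
      missing=1
    fi
  done
  return "$missing"
}

# Example usage with two of the variables exported above (placeholder values):
export GCP_PROJECT=my-project GCP_REGION=us-central1
check_env GCP_PROJECT GCP_REGION && echo "Environment looks complete"
```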

Create a regional cluster distributed across 3 zones. The example below provisions one e2-standard-4 node per zone (3 nodes total). Adjust --machine-type and --num-nodes as needed. See GCP machine types for options.

Terminal window
gcloud container clusters create $TB_CLUSTER_NAME \
--release-channel stable \
--region $GCP_REGION \
--network=$GCP_NETWORK \
--node-locations $GCP_ZONE1,$GCP_ZONE2,$GCP_ZONE3 \
--enable-ip-alias \
--num-nodes=1 \
--node-labels=role=main \
--machine-type=e2-standard-4

Alternatively, follow the GKE regional cluster guide for a custom setup.

Terminal window
gcloud container clusters get-credentials $TB_CLUSTER_NAME --region $GCP_REGION

Provision Google Cloud SQL (PostgreSQL) instance

Enable service networking to allow your K8S cluster to connect to the DB instance:

Terminal window
gcloud services enable servicenetworking.googleapis.com --project=$GCP_PROJECT
gcloud compute addresses create google-managed-services-$GCP_NETWORK \
--global \
--purpose=VPC_PEERING \
--prefix-length=16 \
--network=projects/$GCP_PROJECT/global/networks/$GCP_NETWORK
gcloud services vpc-peerings connect \
--service=servicenetworking.googleapis.com \
--ranges=google-managed-services-$GCP_NETWORK \
--network=$GCP_NETWORK \
--project=$GCP_PROJECT

Create a PostgreSQL 17 instance with the following recommendations:

  • Same region as your GKE cluster ($GCP_REGION)
  • Same VPC network as your GKE cluster
  • Private IP only (no public IP)
  • High availability for production; single-zone for development
  • At least 2 vCPUs and 7.5 GB RAM
Terminal window
gcloud beta sql instances create $TB_DATABASE_NAME \
--database-version=POSTGRES_17 \
--region=$GCP_REGION --availability-type=regional \
--no-assign-ip --network=projects/$GCP_PROJECT/global/networks/$GCP_NETWORK \
--cpu=2 --memory=7680MB --edition=ENTERPRISE

Alternatively, follow the Cloud SQL quickstart guide.

Note the PRIVATE_ADDRESS from the output — this is your YOUR_DB_IP_ADDRESS.

Terminal window
gcloud sql users set-password postgres \
--instance=$TB_DATABASE_NAME \
--password=secret

Replace secret with a strong password. This will be YOUR_DB_PASSWORD.
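If you need a strong random value, one option (not prescribed by the guide) is to generate it locally with openssl before passing it to gcloud:

```shell
# Generate a 24-byte random value, base64-encoded (32 characters, no padding).
DB_PASSWORD=$(openssl rand -base64 24)
echo "${#DB_PASSWORD}"   # → 32
```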

Terminal window
gcloud sql databases create thingsboard_mqtt_broker --instance=$TB_DATABASE_NAME

Replace YOUR_DB_IP_ADDRESS, YOUR_DB_PASSWORD, and YOUR_DB_NAME (thingsboard_mqtt_broker, created above) in the configmap:

Terminal window
nano tbmq-db-configmap.yml
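For illustration only, the substituted values might look like the fragment below. The key names are assumptions based on TBMQ's Spring datasource environment variables; keep the keys already present in your copy of the file and only swap in your own values:

```yaml
# Hypothetical fragment of tbmq-db-configmap.yml after substitution.
data:
  SPRING_DATASOURCE_URL: "jdbc:postgresql://YOUR_DB_IP_ADDRESS:5432/thingsboard_mqtt_broker"
  SPRING_DATASOURCE_USERNAME: "postgres"
  SPRING_DATASOURCE_PASSWORD: "YOUR_DB_PASSWORD"
```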
Create the TBMQ namespace and make it the default for kubectl:

Terminal window
kubectl apply -f tbmq-namespace.yml
kubectl config set-context $(kubectl config current-context) --namespace=thingsboard-mqtt-broker

TBMQ relies on Valkey to store messages for DEVICE persistent clients and to reduce database load during authentication. Without caching, every new connection triggers a database query, which can overload the database under high connection rates.

Use Google Cloud Memorystore for Valkey; refer to the Memorystore documentation for provisioning instructions.

Once your Valkey instance is ready, update tbmq-cache-configmap.yml:

For standalone mode:

REDIS_CONNECTION_TYPE: "standalone"
REDIS_HOST: "YOUR_VALKEY_ENDPOINT_URL_WITHOUT_PORT"
#REDIS_PASSWORD: "YOUR_REDIS_PASSWORD"

For cluster mode:

REDIS_CONNECTION_TYPE: "cluster"
REDIS_NODES: "COMMA_SEPARATED_LIST_OF_NODES"
#REDIS_PASSWORD: "YOUR_REDIS_PASSWORD"
# Recommended in Kubernetes for handling dynamic IPs and failover:
#REDIS_LETTUCE_CLUSTER_TOPOLOGY_REFRESH_ENABLED: "true"
#REDIS_JEDIS_CLUSTER_TOPOLOGY_REFRESH_ENABLED: "true"

Run the installation script:

Terminal window
./k8s-install-tbmq.sh

After completion, you should see:

Installation finished successfully!

TBMQ requires a running Kafka cluster. Choose one of the following options:

Option 1. Deploy Kafka as a Kubernetes StatefulSet

Runs as a StatefulSet with 3 pods in KRaft dual-role mode (each node acts as both controller and broker). Suitable for a lightweight, self-managed Kafka setup.

See the full deployment guide.

Quick steps:

Terminal window
kubectl apply -f kafka/tbmq-kafka.yml

In tbmq.yml and tbmq-ie.yml, uncomment the section marked:

# Uncomment the following lines to connect to Apache Kafka

Option 2. Deploy a Kafka cluster with the Strimzi Operator

Uses the Strimzi Cluster Operator for easier upgrades, scaling, and operational management.

See the full deployment guide.

Install the Strimzi operator:

Terminal window
helm install tbmq-kafka -f kafka/operator/values-strimzi-kafka-operator.yaml oci://quay.io/strimzi-helm/strimzi-kafka-operator --version 0.47.0

Deploy the Kafka cluster:

Terminal window
kubectl apply -f kafka/operator/kafka-cluster.yaml

In tbmq.yml and tbmq-ie.yml, uncomment the section marked:

# Uncomment the following lines to connect to Strimzi

Deploy TBMQ:

Terminal window
./k8s-deploy-tbmq.sh

After a few minutes, check pod status:

Terminal window
kubectl get pods

You should see the tbmq-0 and tbmq-1 pods, each showing 1/1 in the READY column.
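As a sketch of how the READY column can be checked programmatically, the snippet below parses sample output; against a live cluster you would pipe `kubectl get pods` into the same awk filter instead:

```shell
# Sample output standing in for `kubectl get pods` (no cluster needed here).
sample='NAME     READY   STATUS    RESTARTS   AGE
tbmq-0   1/1     Running   0          2m
tbmq-1   1/1     Running   0          2m'

# Count data rows whose READY column is not 1/1.
not_ready=$(printf '%s\n' "$sample" | awk 'NR>1 && $2!="1/1" {c++} END {print c+0}')
echo "$not_ready"   # → 0
```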

You have two options:

  • HTTP — no HTTPS support. Suitable for development only.
  • HTTPS — SSL termination with Google-managed certificates. Recommended for production.
Deploy the HTTP load balancer:

Terminal window
kubectl apply -f receipts/http-load-balancer.yml

Check provisioning status:

Terminal window
kubectl get ingress

Once ready:

NAME                     CLASS    HOSTS   ADDRESS         PORTS   AGE
tbmq-http-loadbalancer   <none>   *       34.111.24.134   80      7m25s

HTTPS uses Google-managed SSL certificates. Read the prerequisites carefully before proceeding.

Reserve a static IP address:

Terminal window
gcloud compute addresses create tbmq-http-lb-address --global

Replace PUT_YOUR_DOMAIN_HERE with a valid domain name in the HTTPS load balancer config:

Terminal window
nano receipts/https-load-balancer.yml

Deploy:

Terminal window
kubectl apply -f receipts/https-load-balancer.yml

Check provisioning status:

Terminal window
kubectl get ingress

Once provisioned:

NAME                      CLASS   HOSTS   ADDRESS         PORTS   AGE
tbmq-https-loadbalancer   gce     *       34.111.24.134   80      7m25s

Assign your domain name to the load balancer IP address shown above. Verify DNS propagation:

Terminal window
dig YOUR_DOMAIN_NAME

Then wait for the Google-managed certificate to finish provisioning (up to 60 minutes). Check status:

Terminal window
kubectl describe managedcertificate managed-cert

Create a TCP load balancer that forwards traffic on ports 1883 and 8883:

Terminal window
kubectl apply -f receipts/mqtt-load-balancer.yml

Follow this guide to create a .pem certificate file. Save it as server.pem in the working directory.

Create a ConfigMap from your PEM files:

Terminal window
kubectl create configmap tbmq-mqtts-config \
--from-file=server.pem=YOUR_PEM_FILENAME \
--from-file=mqttserver_key.pem=YOUR_PEM_KEY_FILENAME \
-o yaml --dry-run=client | kubectl apply -f -

Where:

  • YOUR_PEM_FILENAME — path to your server certificate file
  • YOUR_PEM_KEY_FILENAME — path to your server certificate private key file
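For a quick test setup (not production), a self-signed pair can be generated locally with openssl. This is a sketch, not the guide's procedure; the CN value is a placeholder, and the file names match those used in the ConfigMap command above:

```shell
# Generate a self-signed certificate and private key for testing only.
openssl req -x509 -newkey rsa:2048 -nodes \
  -keyout mqttserver_key.pem -out server.pem \
  -days 365 -subj "/CN=mqtt.example.com"
```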

Uncomment all sections marked with “Uncomment the following lines to enable two-way MQTTS” in tbmq.yml:

Terminal window
kubectl apply -f tbmq.yml

Open the TBMQ web interface using the DNS name of the load balancer:

Terminal window
kubectl get ingress
NAME                     CLASS    HOSTS   ADDRESS         PORTS   AGE
tbmq-http-loadbalancer   <none>   *       34.111.24.134   80      3d1h

Use the ADDRESS of tbmq-http-loadbalancer to access the UI.

You should see the TBMQ login page. Use the default System Administrator credentials:

Username: sysadmin@thingsboard.org

Password: sysadmin

On first login, you are prompted to change the default password and re-login with the new credentials.

The service tbmq-mqtt-loadbalancer is the LoadBalancer used for MQTT communication. Retrieve its EXTERNAL-IP with:

Terminal window
kubectl get services
NAME                     TYPE           CLUSTER-IP       EXTERNAL-IP   PORT(S)                         AGE
tbmq-mqtt-loadbalancer   LoadBalancer   10.100.119.170   *******       1883:30308/TCP,8883:31609/TCP   6m58s

Use the EXTERNAL-IP field to connect to the cluster via MQTT.
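If you need the address in a script, the sketch below extracts the EXTERNAL-IP column from sample output (the IP shown is documentation-range sample data, not your cluster's address):

```shell
# Sample output standing in for `kubectl get services` (no cluster needed).
sample='NAME                     TYPE           CLUSTER-IP       EXTERNAL-IP    PORT(S)                         AGE
tbmq-mqtt-loadbalancer   LoadBalancer   10.100.119.170   203.0.113.10   1883:30308/TCP,8883:31609/TCP   6m58s'

# Extract the EXTERNAL-IP column for the MQTT load balancer row.
mqtt_ip=$(printf '%s\n' "$sample" | awk '$1=="tbmq-mqtt-loadbalancer" {print $4}')
echo "$mqtt_ip"   # → 203.0.113.10
```

Against a live cluster, `kubectl get service tbmq-mqtt-loadbalancer -o jsonpath='{.status.loadBalancer.ingress[0].ip}'` returns the address directly.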

View TBMQ pod logs:

Terminal window
kubectl logs -f tbmq-0

Check the state of all StatefulSets:

Terminal window
kubectl get statefulsets

See the kubectl Cheat Sheet for more details.

To upgrade an existing TBMQ installation:

  1. Check the version-specific notes below for any preparation your target version requires.
  2. Back up your database (optional but recommended).
  3. Run the upgrade commands.

For full version history and supported upgrade paths, see the upgrade instructions page. If the documentation does not cover your specific upgrade path, contact us for guidance.

If there are no version-specific notes for your upgrade path, skip directly to Run upgrade.

Backing up your PostgreSQL database before upgrading is highly recommended but optional. For guidance, follow the Cloud SQL backup and recovery documentation.

This release migrates all third-party components from Bitnami images to official open-source alternatives. Review the third-party component updates for full details.

Then proceed with the upgrade.

This release migrates MQTT authentication from YAML/env configuration into the database.

The upgrade script reads from database-setup.yml. Variables from tbmq.yml are not applied during the upgrade — only the values in database-setup.yml are used. Ensure this file reflects your active configuration.

Supported variables in database-setup.yml

  • SECURITY_MQTT_BASIC_ENABLED (true|false)
  • SECURITY_MQTT_SSL_ENABLED (true|false)
  • SECURITY_MQTT_SSL_SKIP_VALIDITY_CHECK_FOR_CLIENT_CERT (true|false) — usually false

Once the file is verified, proceed with the upgrade.

Pull the latest changes from the release branch:

Terminal window
git pull origin release-2.3.0

Note: Make sure any custom changes are not lost during the merge.

After pulling, run the upgrade script:

Terminal window
./k8s-upgrade-tbmq.sh

Delete TBMQ nodes:

Terminal window
./k8s-delete-tbmq.sh

Delete all TBMQ nodes, ConfigMaps, and load balancers:

Terminal window
./k8s-delete-all.sh

Delete the GKE cluster:

Terminal window
gcloud container clusters delete $TB_CLUSTER_NAME --region=$GCP_REGION