GCP
This guide covers setting up TBMQ in cluster mode on Google Kubernetes Engine (GKE).
Prerequisites
Install kubectl, helm, and gcloud tools. See the before you begin guide for more info.
Create a new Google Cloud Platform project (recommended) or select an existing one. Make sure the correct project is active:
```shell
gcloud init
```

Enable the required GCP services:
```shell
gcloud services enable container.googleapis.com sql-component.googleapis.com sqladmin.googleapis.com
```

Clone TBMQ K8S repository
```shell
git clone -b release-2.3.0 https://github.com/thingsboard/tbmq-pe-k8s.git
cd tbmq-pe-k8s/gcp
```

Define environment variables
Define environment variables used throughout this guide. Execute the following command on Linux:
```shell
export GCP_PROJECT=$(gcloud config get-value project)
export GCP_REGION=us-central1
export GCP_ZONE1=us-central1-a
export GCP_ZONE2=us-central1-b
export GCP_ZONE3=us-central1-c
export GCP_NETWORK=default
export TB_CLUSTER_NAME=tbmq-cluster
export TB_DATABASE_NAME=tbmq-db
echo "Project: $GCP_PROJECT, region: $GCP_REGION, zones: $GCP_ZONE1,$GCP_ZONE2,$GCP_ZONE3, network: $GCP_NETWORK, cluster: $TB_CLUSTER_NAME, database: $TB_DATABASE_NAME"
```

Where:
- $GCP_PROJECT — fetched from your current gcloud config.
- us-central1 — one of the available compute regions ($GCP_REGION).
- default — default GCP network name ($GCP_NETWORK).
- tbmq-cluster — cluster name ($TB_CLUSTER_NAME).
- tbmq-db — database server name ($TB_DATABASE_NAME).
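Because every command that follows interpolates these variables, a variable that was never exported is silently substituted as an empty string. A small preflight check in the same shell session can catch that early; a minimal sketch, with the variable list copied from the exports above:

```shell
# Report any of the guide's variables that are unset or empty before proceeding.
required_vars="GCP_PROJECT GCP_REGION GCP_ZONE1 GCP_ZONE2 GCP_ZONE3 GCP_NETWORK TB_CLUSTER_NAME TB_DATABASE_NAME"
missing=""
for var in $required_vars; do
  eval "value=\${$var}"          # indirect lookup of the variable named in $var
  if [ -z "$value" ]; then
    missing="$missing $var"
  fi
done
if [ -n "$missing" ]; then
  echo "Unset variables:$missing" >&2
else
  echo "All variables set"
fi
```

Run this before the cluster-creation step; an "Unset variables" message means one of the exports above was skipped.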
Configure and create GKE cluster
Create a regional cluster distributed across 3 zones. The example below provisions one e2-standard-4 node per zone (3 nodes total). Adjust --machine-type and --num-nodes as needed. See GCP machine types for options.
```shell
gcloud container clusters create $TB_CLUSTER_NAME \
  --release-channel stable \
  --region $GCP_REGION \
  --network=$GCP_NETWORK \
  --node-locations $GCP_ZONE1,$GCP_ZONE2,$GCP_ZONE3 \
  --enable-ip-alias \
  --num-nodes=1 \
  --node-labels=role=main \
  --machine-type=e2-standard-4
```

Alternatively, follow the GKE regional cluster guide for a custom setup.
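Once the cluster is created (and kubectl is pointed at it, see the next step), you can sanity-check that all three zone nodes registered and are Ready. The parsing below is a sketch over `kubectl get nodes --no-headers` output; the sample node names and versions are made up for illustration:

```shell
# count_ready prints how many nodes in `kubectl get nodes --no-headers`
# output (read from stdin) report a STATUS of Ready.
count_ready() {
  awk '$2 == "Ready" { n++ } END { print n + 0 }'
}

# Live usage: kubectl get nodes --no-headers | count_ready
# Illustrative sample with one node per zone:
count_ready <<'EOF'
gke-tbmq-cluster-pool-a1   Ready   <none>   5m   v1.30.5-gke.1
gke-tbmq-cluster-pool-b1   Ready   <none>   5m   v1.30.5-gke.1
gke-tbmq-cluster-pool-c1   Ready   <none>   5m   v1.30.5-gke.1
EOF
```

With `--num-nodes=1` across three zones, the expected count is 3.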
Update kubectl context
```shell
gcloud container clusters get-credentials $TB_CLUSTER_NAME --region $GCP_REGION
```

Provision Google Cloud SQL (PostgreSQL) instance
Section titled “Provision Google Cloud SQL (PostgreSQL) instance”Prerequisites
Enable service networking to allow your K8S cluster to connect to the DB instance:
```shell
gcloud services enable servicenetworking.googleapis.com --project=$GCP_PROJECT

gcloud compute addresses create google-managed-services-$GCP_NETWORK \
  --global \
  --purpose=VPC_PEERING \
  --prefix-length=16 \
  --network=projects/$GCP_PROJECT/global/networks/$GCP_NETWORK

gcloud services vpc-peerings connect \
  --service=servicenetworking.googleapis.com \
  --ranges=google-managed-services-$GCP_NETWORK \
  --network=$GCP_NETWORK \
  --project=$GCP_PROJECT
```

Create database instance
Create a PostgreSQL 17 instance with the following recommendations:
- Same region as your GKE cluster ($GCP_REGION)
- Same VPC network as your GKE cluster
- Private IP only (no public IP)
- High availability for production; single-zone for development
- At least 2 vCPUs and 7.5 GB RAM
```shell
gcloud beta sql instances create $TB_DATABASE_NAME \
  --database-version=POSTGRES_17 \
  --region=$GCP_REGION --availability-type=regional \
  --no-assign-ip --network=projects/$GCP_PROJECT/global/networks/$GCP_NETWORK \
  --cpu=2 --memory=7680MB --edition=ENTERPRISE
```

Alternatively, follow the Cloud SQL quickstart guide.
Note the PRIVATE_ADDRESS from the output — this is your YOUR_DB_IP_ADDRESS.
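If you want to capture that address in a script rather than copy it by hand, the table output can be parsed. The column layout below is an assumption based on the default gcloud table format, so verify it against your actual output; the sample row is illustrative, not real:

```shell
# extract_private_address reads the table printed by `gcloud sql instances create`
# (or `gcloud sql instances list`) from stdin and prints the PRIVATE_ADDRESS
# column of the first data row.
extract_private_address() {
  awk 'NR == 1 { for (i = 1; i <= NF; i++) if ($i == "PRIVATE_ADDRESS") col = i }
       NR == 2 { print $col }'
}

# Illustrative sample output (all values made up); prints 10.66.0.3
extract_private_address <<'EOF'
NAME     DATABASE_VERSION  LOCATION       TIER              PRIMARY_ADDRESS  PRIVATE_ADDRESS  STATUS
tbmq-db  POSTGRES_17       us-central1-a  db-custom-2-7680  -                10.66.0.3        RUNNABLE
EOF
```

Locating the column by header name rather than position keeps the parse working if gcloud adds or reorders columns.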
Set database password
```shell
gcloud sql users set-password postgres \
  --instance=$TB_DATABASE_NAME \
  --password=secret
```

Replace secret with a strong password. This will be YOUR_DB_PASSWORD.
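To avoid inventing a password by hand, you can draw one from `/dev/urandom`; a minimal sketch:

```shell
# Generate a 24-character alphanumeric password suitable for YOUR_DB_PASSWORD.
# LC_ALL=C keeps tr operating on raw bytes regardless of locale.
DB_PASSWORD=$(LC_ALL=C tr -dc 'A-Za-z0-9' < /dev/urandom | head -c 24)
echo "Generated a ${#DB_PASSWORD}-character password"
```

Then pass `--password="$DB_PASSWORD"` to the set-password command above instead of the literal `secret`, and keep the value somewhere safe for the configmap step below.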
Create the database
```shell
gcloud sql databases create thingsboard_mqtt_broker --instance=$TB_DATABASE_NAME
```

Edit database settings
Replace YOUR_DB_IP_ADDRESS, YOUR_DB_PASSWORD, and YOUR_DB_NAME in the configmap:
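The substitution can also be scripted instead of done in an editor. The sketch below runs it on a stand-in file, because the exact key names in your copy of tbmq-db-configmap.yml should be checked first; against the repository you would run the same `sed` line on the real file:

```shell
# Stand-in for tbmq-db-configmap.yml; the keys here are hypothetical.
cat > /tmp/db-configmap-demo.yml <<'EOF'
url: "jdbc:postgresql://YOUR_DB_IP_ADDRESS:5432/YOUR_DB_NAME"
password: "YOUR_DB_PASSWORD"
EOF

# Substitute all three placeholders in place (example values shown).
sed -i \
  -e 's/YOUR_DB_IP_ADDRESS/10.66.0.3/' \
  -e 's/YOUR_DB_NAME/thingsboard_mqtt_broker/' \
  -e 's/YOUR_DB_PASSWORD/my-strong-password/' \
  /tmp/db-configmap-demo.yml

# Verify nothing was missed.
grep 'YOUR_' /tmp/db-configmap-demo.yml || echo "all placeholders replaced"
```

The final `grep` is a cheap guard: any surviving `YOUR_` placeholder is printed, otherwise the confirmation message appears.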
```shell
nano tbmq-db-configmap.yml
```

Create namespace
```shell
kubectl apply -f tbmq-namespace.yml
kubectl config set-context $(kubectl config current-context) --namespace=thingsboard-mqtt-broker
```

Provision Valkey cluster
TBMQ relies on Valkey to store messages for DEVICE persistent clients and to reduce database load during authentication. Without caching, every new connection triggers a database query, which can overload the database under high connection rates.
Use Google Cloud Memorystore for Valkey. Refer to the following resources:
- Create Memorystore for Valkey instances — provision Cluster Mode Enabled/Disabled instances, including networking prerequisites.
- Product overview — architecture, shards, endpoints, and supported Valkey versions (including 8.0).
- Networking requirements — Private Service Connect and service connection policy setup.
- Instance and node specification — choosing node types (e.g., standard-small, highmem-medium).
- Cluster vs Standalone — comparing horizontal scaling and feature support.
- High availability and replicas — multi-zone deployment and replica best practices.
- Best practices — memory management, eviction policies, and scaling guidance.
Once your Valkey instance is ready, update tbmq-cache-configmap.yml:
For standalone mode:
```yaml
REDIS_CONNECTION_TYPE: "standalone"
REDIS_HOST: "YOUR_VALKEY_ENDPOINT_URL_WITHOUT_PORT"
#REDIS_PASSWORD: "YOUR_REDIS_PASSWORD"
```

For cluster mode:
```yaml
REDIS_CONNECTION_TYPE: "cluster"
REDIS_NODES: "COMMA_SEPARATED_LIST_OF_NODES"
#REDIS_PASSWORD: "YOUR_REDIS_PASSWORD"
# Recommended in Kubernetes for handling dynamic IPs and failover:
#REDIS_LETTUCE_CLUSTER_TOPOLOGY_REFRESH_ENABLED: "true"
#REDIS_JEDIS_CLUSTER_TOPOLOGY_REFRESH_ENABLED: "true"
```

Installation
Run the installation script:
```shell
./k8s-install-tbmq.sh
```

After completion, you should see:
```text
Installation finished successfully!
```

Get the license key
Before proceeding, ensure you have an active TBMQ license. If you don't have one yet, visit the Pricing page, choose a pay-as-you-go subscription or a perpetual license, and use the calculator to size your deployment — session and throughput limits, production and development instances, and any add-ons — to obtain your license key.
Configure the license key
Create a Kubernetes secret with your license key:
```shell
export TBMQ_LICENSE_KEY=YOUR_LICENSE_KEY_HERE
kubectl create -n thingsboard-mqtt-broker secret generic tbmq-license --from-literal=license-key=$TBMQ_LICENSE_KEY
```

Provision Kafka
TBMQ requires a running Kafka cluster. Choose one of the following options:
Option 1. Deploy an Apache Kafka cluster
Runs as a StatefulSet with 3 pods in KRaft dual-role mode (each node acts as both controller and broker). Suitable for a lightweight, self-managed Kafka setup.
See the full deployment guide.
Quick steps:
```shell
kubectl apply -f kafka/tbmq-kafka.yml
```

In tbmq.yml and tbmq-ie.yml, uncomment the section marked:
```yaml
# Uncomment the following lines to connect to Apache Kafka
```

Option 2. Deploy a Kafka cluster with the Strimzi Operator
Uses the Strimzi Cluster Operator for easier upgrades, scaling, and operational management.
See the full deployment guide.
Install the Strimzi operator:
```shell
helm install tbmq-kafka -f kafka/operator/values-strimzi-kafka-operator.yaml oci://quay.io/strimzi-helm/strimzi-kafka-operator --version 0.47.0
```

Deploy the Kafka cluster:
```shell
kubectl apply -f kafka/operator/kafka-cluster.yaml
```

In tbmq.yml and tbmq-ie.yml, uncomment the section marked:
```yaml
# Uncomment the following lines to connect to Strimzi
```

Start TBMQ
Deploy TBMQ:
```shell
./k8s-deploy-tbmq.sh
```

After a few minutes, check pod status:
```shell
kubectl get pods
```

You should see tbmq-0 and tbmq-1 pods, each in the READY state.
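Instead of re-running the command by hand, the readiness check can be scripted. The parsing below is a sketch over `kubectl get pods --no-headers` output; the sample rows are illustrative:

```shell
# all_ready reads `kubectl get pods --no-headers` output from stdin and prints
# "yes" when every pod reports all of its containers ready (e.g. 1/1).
all_ready() {
  awk '{ split($2, r, "/"); if (r[1] != r[2]) bad = 1 } END { print bad ? "no" : "yes" }'
}

# Live polling would look like:
#   until kubectl get pods --no-headers | all_ready | grep -q yes; do sleep 5; done
all_ready <<'EOF'
tbmq-0   1/1   Running   0   2m
tbmq-1   1/1   Running   0   2m
EOF
```

A `kubectl wait --for=condition=Ready pod -l <selector>` invocation achieves the same on a live cluster, given a label selector that matches the TBMQ pods.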
Configure load balancers
Section titled “Configure load balancers”Configure HTTP(S) load balancer
You have two options:
- HTTP — no HTTPS support. Suitable for development only.
- HTTPS — SSL termination with Google-managed certificates. Recommended for production.
HTTP load balancer
```shell
kubectl apply -f receipts/http-load-balancer.yml
```

Check provisioning status:
```shell
kubectl get ingress
```

Once ready:
```text
NAME                     CLASS    HOSTS   ADDRESS         PORTS   AGE
tbmq-http-loadbalancer   <none>   *       34.111.24.134   80      7m25s
```

HTTPS load balancer
HTTPS uses Google-managed SSL certificates. Read the prerequisites carefully before proceeding.
Reserve a static IP address:
```shell
gcloud compute addresses create tbmq-http-lb-address --global
```

Replace PUT_YOUR_DOMAIN_HERE with a valid domain name in the HTTPS load balancer config:
```shell
nano receipts/https-load-balancer.yml
```

Deploy:
```shell
kubectl apply -f receipts/https-load-balancer.yml
```

Check provisioning status:
```shell
kubectl get ingress
```

Once provisioned:
```text
NAME                      CLASS   HOSTS   ADDRESS         PORTS   AGE
tbmq-https-loadbalancer   gce     *       34.111.24.134   80      7m25s
```

Assign your domain name to the load balancer IP address shown above. Verify DNS propagation:
```shell
dig YOUR_DOMAIN_NAME
```

Then wait for the Google-managed certificate to finish provisioning (up to 60 minutes). Check status:
```shell
kubectl describe managedcertificate managed-cert
```

Configure MQTT load balancer
Create a TCP load balancer that forwards traffic on ports 1883 and 8883:
```shell
kubectl apply -f receipts/mqtt-load-balancer.yml
```

MQTT over SSL
Follow this guide to create a .pem certificate file.
Save it as server.pem in the working directory.
Create a ConfigMap from your PEM files:
```shell
kubectl create configmap tbmq-mqtts-config \
  --from-file=server.pem=YOUR_PEM_FILENAME \
  --from-file=mqttserver_key.pem=YOUR_PEM_KEY_FILENAME \
  -o yaml --dry-run=client | kubectl apply -f -
```

Where:
- YOUR_PEM_FILENAME — path to your server certificate file
- YOUR_PEM_KEY_FILENAME — path to your server certificate private key file
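Before loading the files into the ConfigMap, it is worth confirming that the certificate and private key actually belong together. A common check compares their RSA moduli with openssl; the sketch below generates a throwaway self-signed pair so it can run anywhere, but against your real files you would point the two comparison lines at YOUR_PEM_FILENAME and YOUR_PEM_KEY_FILENAME:

```shell
# Generate a throwaway key + self-signed cert purely for demonstration.
openssl req -x509 -newkey rsa:2048 -nodes -days 1 -subj "/CN=demo" \
  -keyout /tmp/demo_key.pem -out /tmp/demo_cert.pem 2>/dev/null

# A cert and key match when they carry the same RSA modulus.
cert_mod=$(openssl x509 -noout -modulus -in /tmp/demo_cert.pem)
key_mod=$(openssl rsa -noout -modulus -in /tmp/demo_key.pem)
if [ "$cert_mod" = "$key_mod" ]; then
  echo "certificate and key match"
else
  echo "MISMATCH - do not deploy this pair" >&2
fi
```

This modulus comparison applies to RSA keys; for EC keys you would compare the public keys (`openssl x509 -noout -pubkey` vs `openssl pkey -pubout`) instead.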
Uncomment all sections marked with “Uncomment the following lines to enable two-way MQTTS” in tbmq.yml, then apply the changes:
```shell
kubectl apply -f tbmq.yml
```

Validate the setup
Open the TBMQ web interface using the DNS name of the load balancer:
```shell
kubectl get ingress
```

```text
NAME                     CLASS    HOSTS   ADDRESS         PORTS   AGE
tbmq-http-loadbalancer   <none>   *       34.111.24.134   80      3d1h
```

Use the ADDRESS of tbmq-http-loadbalancer to access the UI.
You should see the TBMQ login page. Use the default System Administrator credentials:
- Username: sysadmin@thingsboard.org
- Password: sysadmin

On first login, you are prompted to change the default password and log in again with the new credentials.
Validate MQTT access
The service tbmq-mqtt-loadbalancer is the LoadBalancer used for MQTT communication. Retrieve its EXTERNAL-IP with:
```shell
kubectl get services
```

```text
NAME                     TYPE           CLUSTER-IP       EXTERNAL-IP   PORT(S)                         AGE
tbmq-mqtt-loadbalancer   LoadBalancer   10.100.119.170   *******       1883:30308/TCP,8883:31609/TCP   6m58s
```

Use the EXTERNAL-IP field to connect to the cluster via MQTT.
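Scripted clients can pull the MQTT endpoint straight out of the service listing. A sketch of the parsing, run here against illustrative sample output (the IP below is made up):

```shell
# external_ip prints the EXTERNAL-IP column for the named service, reading
# `kubectl get services` output from stdin.
external_ip() {
  awk -v svc="$1" '$1 == svc { print $4 }'
}

# Live usage: kubectl get services | external_ip tbmq-mqtt-loadbalancer
external_ip tbmq-mqtt-loadbalancer <<'EOF'
NAME                     TYPE           CLUSTER-IP       EXTERNAL-IP    PORT(S)                         AGE
tbmq-mqtt-loadbalancer   LoadBalancer   10.100.119.170   34.72.110.15   1883:30308/TCP,8883:31609/TCP   6m58s
EOF
```

On a live cluster, `kubectl get service tbmq-mqtt-loadbalancer -o jsonpath='{.status.loadBalancer.ingress[0].ip}'` returns the same value without any text parsing.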
Troubleshooting
View TBMQ pod logs:
```shell
kubectl logs -f tbmq-0
```

Check the state of all StatefulSets:
```shell
kubectl get statefulsets
```

See the kubectl Cheat Sheet for more details.
Upgrading
- Check the version-specific notes below for any preparation your target version requires.
- Back up your database (optional but recommended).
- Run the upgrade commands.
For full version history and supported upgrade paths, see the upgrade instructions page. If the documentation does not cover your specific upgrade path, contact us for guidance.
If there are no version-specific notes for your upgrade path, skip directly to Run upgrade.
Backup and restore (optional)
Backing up your PostgreSQL database before upgrading is highly recommended but optional. For guidance, follow the Cloud SQL backup and recovery documentation.
Upgrade to 2.3.0
This is a standard upgrade from v2.2.0. No third-party component changes are required — the official images have been in use since v2.2.0.
Proceed with the upgrade.
Upgrade from TBMQ to TBMQ PE
CE-to-PE migration is supported for the same version only. If you are on an earlier CE version, upgrade TBMQ CE to the latest version first. For all supported paths, see the upgrade instructions.
Before upgrading, merge your current configuration with the latest TBMQ PE K8S scripts. Don't forget to configure the license key.
Run the following commands to stop TBMQ, migrate the database, and redeploy:
```shell
./k8s-delete-tbmq.sh
./k8s-upgrade-tbmq.sh --fromVersion=ce
./k8s-deploy-tbmq.sh
```

Run upgrade
Pull the latest changes from the release branch:
```shell
git pull origin release-2.3.0
```

Note: Make sure any custom changes are not lost during the merge.
After pulling, run the upgrade script:
```shell
./k8s-upgrade-tbmq.sh
```

Cluster deletion
Delete TBMQ nodes:
```shell
./k8s-delete-tbmq.sh
```

Delete all TBMQ nodes, ConfigMaps, and load balancers:
```shell
./k8s-delete-all.sh
```

Delete the GKE cluster:
```shell
gcloud container clusters delete $TB_CLUSTER_NAME --region=$GCP_REGION
```
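Cluster deletion is irreversible, so a small confirmation guard around the destructive commands can prevent accidents. A sketch; here the guard only echoes, with the real command left as a comment:

```shell
# confirm_delete asks the operator to retype the cluster name before allowing
# a destructive command to proceed. Prompt goes to stderr so stdout stays clean.
confirm_delete() {
  expected="$1"
  printf "Type the cluster name to confirm deletion: " >&2
  read -r answer
  if [ "$answer" = "$expected" ]; then
    echo "confirmed"
    # gcloud container clusters delete "$expected" --region="$GCP_REGION"
  else
    echo "aborted" >&2
    return 1
  fi
}

# Non-interactive demonstration: feed the name on stdin.
echo "tbmq-cluster" | confirm_delete "tbmq-cluster"
```

Interactively, you would call `confirm_delete "$TB_CLUSTER_NAME"` and type the name at the prompt; any other input aborts without running the delete.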