Deploy TBMQ PE Cluster on GCP with Kubernetes

This guide will help you set up TBMQ PE on GKE (Google Kubernetes Engine).

Prerequisites

Install and configure tools

To deploy TBMQ PE on a GKE cluster, you'll need to install the kubectl and gcloud tools. See the before you begin guide for more info.
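
Before proceeding, you can confirm both tools are available on your PATH. These are standard version checks; the exact output will vary with your installed versions:

kubectl version --client
gcloud version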

Create a new Google Cloud Platform project (recommended) or choose the existing one.

Make sure you have selected the correct project by executing the following command:

gcloud init
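
To double-check which project is currently active without re-running the interactive setup, you can use:

gcloud config get-value project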

Enable GCP services

Enable the GKE and SQL services for your project by executing the following command:

gcloud services enable container.googleapis.com sql-component.googleapis.com sqladmin.googleapis.com

Clone TBMQ PE K8S repository

git clone -b release-2.2.0 https://github.com/thingsboard/tbmq-pe-k8s.git
cd tbmq-pe-k8s/gcp

Define environment variables

Define environment variables that you will use in various commands later in this guide.

We assume you are using Linux. Execute the following command:

export GCP_PROJECT=$(gcloud config get-value project)
export GCP_REGION=us-central1
export GCP_ZONE=us-central1
export GCP_ZONE1=us-central1-a
export GCP_ZONE2=us-central1-b
export GCP_ZONE3=us-central1-c
export GCP_NETWORK=default
export TB_CLUSTER_NAME=tbmq-cluster
export TB_DATABASE_NAME=tbmq-db
echo "You have selected project: $GCP_PROJECT, region: $GCP_REGION, gcp zones: $GCP_ZONE1,$GCP_ZONE2,$GCP_ZONE3, network: $GCP_NETWORK, cluster: $TB_CLUSTER_NAME, database: $TB_DATABASE_NAME"

where:

  • the first line uses the gcloud command to fetch your current GCP project id. We will refer to it later in this guide as $GCP_PROJECT;
  • us-central1 is one of the available compute regions. We will refer to it later in this guide as $GCP_REGION;
  • default is the default GCP network name. We will refer to it later in this guide as $GCP_NETWORK;
  • tbmq-cluster is the name of your cluster. You may input a different name. We will refer to it later in this guide as $TB_CLUSTER_NAME;
  • tbmq-db is the name of your database server. You may input a different name. We will refer to it later in this guide as $TB_DATABASE_NAME.

Configure and create GKE cluster

Create a regional cluster distributed across 3 zones with nodes of your preferred machine type. The example below provisions one e2-standard-4 node per zone (three nodes total), but you can modify the --machine-type and --num-nodes to suit your workload requirements. For a full list of available machine types and their specifications, refer to the GCP machine types documentation.

Execute the following command (recommended):

gcloud container clusters create $TB_CLUSTER_NAME \
--release-channel stable \
--region $GCP_REGION \
--network=$GCP_NETWORK \
--node-locations $GCP_ZONE1,$GCP_ZONE2,$GCP_ZONE3 \
--enable-ip-alias \
--num-nodes=1 \
--node-labels=role=main \
--machine-type=e2-standard-4

Alternatively, you may use this guide for custom cluster setup.
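
Once the cluster is created, you can verify that it is up and see its basic parameters (the exact columns and versions will differ in your output):

gcloud container clusters list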

Update the context of kubectl

Update the context of kubectl using command:

gcloud container clusters get-credentials $TB_CLUSTER_NAME --region $GCP_REGION
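
You can confirm that kubectl now points to the new cluster and can reach its nodes:

kubectl config current-context
kubectl get nodes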

Provision Google Cloud SQL (PostgreSQL) Instance

Prerequisites

Enable service networking to allow your K8S cluster to connect to the DB instance:

gcloud services enable servicenetworking.googleapis.com --project=$GCP_PROJECT

gcloud compute addresses create google-managed-services-$GCP_NETWORK \
--global \
--purpose=VPC_PEERING \
--prefix-length=16 \
--network=projects/$GCP_PROJECT/global/networks/$GCP_NETWORK

gcloud services vpc-peerings connect \
--service=servicenetworking.googleapis.com \
--ranges=google-managed-services-$GCP_NETWORK \
--network=$GCP_NETWORK \
--project=$GCP_PROJECT
    

Create database server instance

Create the PostgreSQL instance with database version “PostgreSQL 17” and the following recommendations:

  • use the same region ($GCP_REGION) where your K8S cluster is located;
  • use the same VPC network ($GCP_NETWORK) that your K8S cluster uses;
  • use a private IP address to connect to your instance and disable the public IP address;
  • use a highly available DB instance for production and a single-zone instance for development clusters;
  • use at least 2 vCPUs and 7.5 GB RAM, which is sufficient for most workloads. You may scale it up later if needed.

Execute the following command:

gcloud beta sql instances create $TB_DATABASE_NAME \
--database-version=POSTGRES_17 \
--region=$GCP_REGION --availability-type=regional \
--no-assign-ip --network=projects/$GCP_PROJECT/global/networks/$GCP_NETWORK \
--cpu=2 --memory=7680MB --edition=ENTERPRISE

Alternatively, you may follow this guide to configure your database.

Note your IP address (YOUR_DB_IP_ADDRESS) from the command output. Successful command output should look similar to this:

Created [https://sqladmin.googleapis.com/sql/v1beta4/projects/YOUR_PROJECT_ID/instances/$TB_DATABASE_NAME].
NAME                        DATABASE_VERSION  LOCATION       TIER              PRIMARY_ADDRESS  PRIVATE_ADDRESS  STATUS
$TB_DATABASE_NAME           POSTGRES_17       us-central1-f  db-custom-2-7680  35.192.189.68    -                RUNNABLE
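
If the address you need is not visible in the table, you can query it directly. This sketch assumes the first entry in the instance's ipAddresses list is the one to use; run a full describe and check each entry's type field if in doubt:

gcloud sql instances describe $TB_DATABASE_NAME --format="value(ipAddresses[0].ipAddress)"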

Set database password

Set the password for your new database server instance:

gcloud sql users set-password postgres \
--instance=$TB_DATABASE_NAME \
--password=secret

where:

  • instance is the name of your database server instance;
  • secret is the password. You should input a different password. We will refer to it later in this guide as YOUR_DB_PASSWORD.

Create the database

Create the “thingsboard_mqtt_broker” database inside your PostgreSQL database server instance:

gcloud sql databases create thingsboard_mqtt_broker --instance=$TB_DATABASE_NAME

where thingsboard_mqtt_broker is the name of your database. You may input a different name. We will refer to it later in this guide as YOUR_DB_NAME.

Edit database settings

Replace YOUR_DB_IP_ADDRESS, YOUR_DB_PASSWORD and YOUR_DB_NAME with the correct values:

nano tbmq-db-configmap.yml
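
For reference, the database connection entries in that file typically look similar to the sketch below. This is illustrative only: keep the exact key names already present in your copy of the file and substitute your own values.

# Illustrative values - the key names may differ in your version of the file
SPRING_DATASOURCE_URL: "jdbc:postgresql://YOUR_DB_IP_ADDRESS:5432/YOUR_DB_NAME"
SPRING_DATASOURCE_USERNAME: "postgres"
SPRING_DATASOURCE_PASSWORD: "YOUR_DB_PASSWORD"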

Create Namespace

Let’s create a dedicated namespace for our TBMQ cluster deployment to ensure better resource isolation and management.

kubectl apply -f tbmq-namespace.yml
kubectl config set-context $(kubectl config current-context) --namespace=thingsboard-mqtt-broker
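
You can verify that the namespace exists and is now the default for subsequent kubectl commands:

kubectl get namespace thingsboard-mqtt-broker
kubectl config view --minify --output 'jsonpath={..namespace}'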

Provision Valkey cluster

TBMQ relies on Valkey to store messages for DEVICE persistent clients. The cache also improves performance by reducing the number of direct database reads, especially when authentication is enabled and multiple clients connect at once. Without caching, every new connection triggers a database query to validate MQTT client credentials, which can cause unnecessary load under high connection rates.

To set up Valkey in Google Cloud, refer to the Google Memorystore for Valkey documentation:

  • Create Memorystore for Valkey instances: Instructions to provision both Cluster Mode Enabled and Cluster Mode Disabled instances, including prerequisites like service connection policies and networking setup. (Google Cloud)

  • General overview: Details on the managed Valkey service, architecture, and key concepts such as shards, endpoints, and supported Valkey versions (including 8.0). (Google Cloud)

  • Networking requirements: Guidance on Private Service Connect and service connection policy setup necessary for secure connectivity. (Google Cloud)

  • Instance & node sizing: Recommendations for choosing node types according to workload (e.g., standard-small, highmem-medium), memory capacity, and performance characteristics. (Google Cloud)

  • Cluster vs Standalone (Cluster Mode Enabled vs Disabled): Comparison of horizontal scaling, throughput, and feature support—helpful in choosing the appropriate mode for your use case. (Google Cloud)

  • High Availability & Replicas: Best practices for multi-zone deployment, replica usage for read scaling, and resilience in production scenarios. (Google Cloud)

  • Best practices & scaling guidance: Advice on memory management, eviction policies, when to scale, and how to handle growing workloads effectively. (Google Cloud)

Once your Valkey cluster is ready, update the cache configuration in tbmq-cache-configmap.yml with the correct endpoint values:

  • For standalone Valkey: Uncomment and set the following values. Make sure the REDIS_HOST value does not include the port (:6379).

    REDIS_CONNECTION_TYPE: "standalone"
    REDIS_HOST: "YOUR_VALKEY_ENDPOINT_URL_WITHOUT_PORT"
    #REDIS_PASSWORD: "YOUR_REDIS_PASSWORD"
    
  • For Valkey cluster: Provide a comma-separated list of “host:port” node endpoints to bootstrap from.

    REDIS_CONNECTION_TYPE: "cluster"
    REDIS_NODES: "COMMA_SEPARATED_LIST_OF_NODES"
    #REDIS_PASSWORD: "YOUR_REDIS_PASSWORD"
    # Recommended in Kubernetes for handling dynamic IPs and failover:
    #REDIS_LETTUCE_CLUSTER_TOPOLOGY_REFRESH_ENABLED: "true"
    #REDIS_JEDIS_CLUSTER_TOPOLOGY_REFRESH_ENABLED: "true"
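
In either mode, you can run a quick connectivity check from inside the cluster. This is a sketch that assumes your Valkey endpoint is reachable from the pod network; it uses the Redis-compatible redis-cli shipped in the public redis image and should print PONG on success:

kubectl run valkey-check --rm -it --image=redis:7 --restart=Never -- redis-cli -h YOUR_VALKEY_ENDPOINT ping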
    

Installation

Execute the following command to run the installation:

./k8s-install-tbmq.sh

After this command finishes, you should see the following line in the console:

Installation finished successfully!

Otherwise, please check if you set the PostgreSQL URL and PostgreSQL password in the tbmq-db-configmap.yml correctly.

Get the license key

Before proceeding, make sure you’ve selected your subscription plan or chosen to purchase a perpetual license. If you haven’t done this yet, please visit the Pricing page to compare available options and obtain your license key.

Note: Throughout this guide, we’ll refer to your license key as YOUR_LICENSE_KEY_HERE.

Configure the license key

Create a k8s secret with your license key:

export TBMQ_LICENSE_KEY=YOUR_LICENSE_KEY_HERE 
kubectl create -n thingsboard-mqtt-broker secret generic tbmq-license --from-literal=license-key=$TBMQ_LICENSE_KEY

Don’t forget to replace YOUR_LICENSE_KEY_HERE with the value of your license key.
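
You can confirm the secret was created; the license key itself is stored base64-encoded and is not printed by this command:

kubectl get secret tbmq-license -n thingsboard-mqtt-broker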

Provision Kafka

TBMQ requires a running Kafka cluster. You can set up Kafka in two ways:

  • Deploy a self-managed Apache Kafka cluster
  • Deploy a managed Kafka cluster with the Strimzi Operator

Choose the option that best fits your environment and operational needs.

Option 1. Deploy an Apache Kafka Cluster

  • Runs as a StatefulSet with 3 pods in KRaft dual-role mode (each node acts as both controller and broker).
  • Suitable if you want a lightweight, self-managed Kafka setup.

  • See the full deployment guide here.

Quick steps:

kubectl apply -f kafka/tbmq-kafka.yml
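
You can watch the Kafka pods start; this assumes the pod names created by the manifest contain "kafka":

kubectl get pods | grep kafka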

Update TBMQ configuration files (tbmq.yml and tbmq-ie.yml) and uncomment the section marked:

# Uncomment the following lines to connect to Apache Kafka

Option 2. Deploy a Kafka Cluster with the Strimzi Operator

Quick steps:

Install the Strimzi operator:

helm install tbmq-kafka -f kafka/operator/values-strimzi-kafka-operator.yaml oci://quay.io/strimzi-helm/strimzi-kafka-operator --version 0.47.0

Deploy the Kafka cluster:

kubectl apply -f kafka/operator/kafka-cluster.yaml

Update TBMQ configuration files (tbmq.yml and tbmq-ie.yml) and uncomment the section marked:

# Uncomment the following lines to connect to Strimzi
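
With Strimzi, you can track the cluster's readiness through its Kafka custom resource; once the operator finishes reconciling, the READY column should report True:

kubectl get kafka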

Starting

Execute the following command to deploy the broker:

./k8s-deploy-tbmq.sh

After a few minutes, you may execute the following command to check the state of all pods:

kubectl get pods

If everything went fine, you should be able to see tbmq-0 and tbmq-1 pods. Every pod should be in the READY state.
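
Illustrative output for the broker pods only (READY container counts, ages, and the full pod list will differ with your configuration):

NAME     READY   STATUS    RESTARTS   AGE
tbmq-0   1/1     Running   0          2m
tbmq-1   1/1     Running   0          2m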

Configure Load Balancers

Configure HTTP(S) Load Balancer

Configure an HTTP(S) Load Balancer to access the web interface of your TBMQ instance. There are two possible configuration options:

  • http — Load Balancer without HTTPS support. Recommended for development. The only advantages are simple configuration and minimal cost. May be a good option for a development server, but definitely not suitable for production.
  • https — Load Balancer with HTTPS support. Recommended for production. Acts as an SSL termination point. You may easily configure it to issue and maintain a valid SSL certificate. Automatically redirects all non-secure (HTTP) traffic to secure (HTTPS) port.

See links/instructions below on how to configure each of the suggested options.

HTTP Load Balancer

Execute the following command to deploy the plain HTTP load balancer:

kubectl apply -f receipts/http-load-balancer.yml

The process of load balancer provisioning may take some time. You may periodically check the status of the load balancer using the following command:

kubectl get ingress

Once provisioned, you should see output similar to this:

NAME                     CLASS    HOSTS   ADDRESS         PORTS   AGE
tbmq-http-loadbalancer   <none>   *       34.111.24.134   80      7m25s

HTTPS Load Balancer

The process of configuring the load balancer using Google-managed SSL certificates is described on the official documentation page. The instructions below are extracted from the official documentation. Make sure you read the prerequisites carefully before proceeding.

First, reserve a global static IP address for the load balancer:

gcloud compute addresses create tbmq-http-lb-address --global

Replace the PUT_YOUR_DOMAIN_HERE with a valid domain name in the https-load-balancer.yml file:

nano receipts/https-load-balancer.yml

Execute the following command to deploy the secure HTTPS load balancer:

kubectl apply -f receipts/https-load-balancer.yml

The process of load balancer provisioning may take some time. You may periodically check the status of the load balancer using the following command:

kubectl get ingress

Once provisioned, you should see output similar to this:

NAME                      CLASS    HOSTS   ADDRESS         PORTS   AGE
tbmq-https-loadbalancer   gce      *       34.111.24.134   80      7m25s

Now, assign the domain name you have used to the load balancer IP address (the one you see instead of 34.111.24.134 in the command output).

Check that the domain name is configured correctly using dig:

dig YOUR_DOMAIN_NAME

Sample output:

; <<>> DiG 9.11.3-1ubuntu1.16-Ubuntu <<>> YOUR_DOMAIN_NAME
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 12513
;; flags: qr rd ra; QUERY: 1, ANSWER: 1, AUTHORITY: 0, ADDITIONAL: 1

;; OPT PSEUDOSECTION:
; EDNS: version: 0, flags:; udp: 65494
;; QUESTION SECTION:
;YOUR_DOMAIN_NAME.	IN	A

;; ANSWER SECTION:
YOUR_DOMAIN_NAME. 36 IN	A	34.111.24.134

;; Query time: 0 msec
;; SERVER: 127.0.0.53#53(127.0.0.53)
;; WHEN: Fri Nov 19 13:00:00 EET 2021
;; MSG SIZE  rcvd: 74

Once assigned, wait for the Google-managed certificate to finish provisioning. This might take up to 60 minutes. You can check the status of the certificate using the following command:

kubectl describe managedcertificate managed-cert
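
In the describe output, look for the certificate status; it should eventually change to Active. Illustrative excerpt (the exact field layout may vary by GKE version):

Status:
  Certificate Status:  Active
  Domain Status:
    Domain:  YOUR_DOMAIN_NAME
    Status:  Active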

The certificate will eventually be provisioned if you have configured the domain records properly.

Once provisioned, you may use your domain name to access the Web UI (over HTTPS).

Configure MQTT Load Balancer

Configure the MQTT load balancer to be able to connect devices using the MQTT protocol.

Create the TCP load balancer using the following command:

kubectl apply -f receipts/mqtt-load-balancer.yml

The load balancer will forward all TCP traffic for ports 1883 and 8883.

MQTT over SSL

Follow this guide to create a .pem file with the SSL certificate. Store the file as server.pem in the working directory.

You'll need to create a config map with your PEM files. You can do this by executing the following command:

kubectl create configmap tbmq-mqtts-config \
 --from-file=server.pem=YOUR_PEM_FILENAME \
 --from-file=mqttserver_key.pem=YOUR_PEM_KEY_FILENAME \
 -o yaml --dry-run=client | kubectl apply -f -

where:

  • YOUR_PEM_FILENAME is the name of your server certificate file;
  • YOUR_PEM_KEY_FILENAME is the name of your server certificate private key file.

Then, uncomment all sections in the ‘tbmq.yml’ file that are marked with “Uncomment the following lines to enable two-way MQTTS”.

Execute command to apply changes:

kubectl apply -f tbmq.yml

Validate the setup

Now you can open the TBMQ web interface in your browser using the DNS name of the load balancer.

You can get the DNS name of the load balancers using the following command:

kubectl get ingress

You should see output similar to this:

NAME                     CLASS    HOSTS   ADDRESS         PORTS   AGE
tbmq-http-loadbalancer   <none>   *       34.111.24.134   80      3d1h

Use the ADDRESS field of the tbmq-http-loadbalancer to connect to the cluster.

You should see the TBMQ login page. Use the following default credentials for the System Administrator:

Username:

sysadmin@thingsboard.org

Password:

sysadmin

On the first login, you will be asked to change the default password to a preferred one and then re-login using the new credentials.

Validate MQTT access

To connect to the cluster via MQTT, you will need to get the corresponding service IP. You can do this with the command:

kubectl get services

You should see output similar to this:

NAME                     TYPE           CLUSTER-IP       EXTERNAL-IP              PORT(S)                         AGE
tbmq-mqtt-loadbalancer   LoadBalancer   10.100.119.170   *******                  1883:30308/TCP,8883:31609/TCP   6m58s

Use the EXTERNAL-IP field of the load balancer to connect to the cluster via the MQTT protocol.
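
For a quick smoke test, you can publish a message with the mosquitto_pub client. This sketch assumes the mosquitto clients package is installed locally and that valid MQTT credentials, if authentication is enabled on your broker, are supplied via the -u and -P flags:

mosquitto_pub -h YOUR_EXTERNAL_IP -p 1883 -t test/topic -m "hello" -u YOUR_MQTT_USERNAME -P YOUR_MQTT_PASSWORD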

Troubleshooting

In case of any issues, you can examine service logs for errors. For example, to see TBMQ logs, execute the following command:

kubectl logs -f tbmq-0

Use the following command to see the state of all statefulsets:

kubectl get statefulsets

See the kubectl Cheat Sheet command reference for more details.

Upgrading

Review the release notes and upgrade instructions for detailed information on the latest changes.

If the documentation does not cover the specific upgrade instructions for your case, please contact us so we can provide further guidance.

Backup and restore (Optional)

While backing up your PostgreSQL database before the upgrade is highly recommended, it is optional. For further guidance, follow these instructions.

Upgrade from TBMQ CE to TBMQ PE (v2.2.0)

To upgrade your existing TBMQ Community Edition (CE) to TBMQ Professional Edition (PE), ensure you are running the latest TBMQ CE 2.2.0 version before starting the process. Merge your current configuration with the latest TBMQ PE K8S scripts. Do not forget to configure the license key.

Run the following commands, including the upgrade script to migrate PostgreSQL database data from CE to PE:

./k8s-delete-tbmq.sh
./k8s-upgrade-tbmq.sh --fromVersion=ce
./k8s-deploy-tbmq.sh

Cluster deletion

Execute the following command to delete TBMQ nodes:

./k8s-delete-tbmq.sh

Execute the following command to delete all TBMQ nodes and configmaps, load balancers, etc.:

./k8s-delete-all.sh

Execute the following command to delete the GKE cluster:

gcloud container clusters delete $TB_CLUSTER_NAME --region=$GCP_REGION
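
If you also want to remove the other resources created in this guide, you can delete the database instance and the reserved peering address range. Note that deleting the SQL instance permanently destroys the database and its data:

gcloud sql instances delete $TB_DATABASE_NAME
gcloud compute addresses delete google-managed-services-$GCP_NETWORK --global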

Next steps