
Cluster setup using GCP infrastructure

This guide will help you set up TBMQ in microservices mode on GKE.

Prerequisites

Install and configure tools

To deploy TBMQ on a GKE cluster you'll need to install the kubectl and gcloud tools. See the Before you begin guide for more info.

Create a new Google Cloud Platform project (recommended) or choose an existing one.

Make sure you have selected the correct project by executing the following command:

gcloud init
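
If you later need to verify or switch the active project without re-running the full init flow, gcloud config can do it directly (a minimal sketch; my-tbmq-project is a placeholder for your actual project id):

gcloud config set project my-tbmq-project   # select the project
gcloud config get-value project             # print the active project id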

Enable GCP services

Enable the GKE and SQL services for your project by executing the following command:

gcloud services enable container.googleapis.com sql-component.googleapis.com sqladmin.googleapis.com
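
You can confirm that the services were enabled by listing them (a quick check; grep simply filters the output):

gcloud services list --enabled | grep -E 'container|sql'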

Step 1. Clone TBMQ K8S scripts repository

git clone -b release-2.0.1 https://github.com/thingsboard/tbmq.git
cd tbmq/k8s/gcp

Step 2. Define environment variables

Define environment variables that you will use in various commands later in this guide.

We assume you are using Linux. Execute the following commands:

export GCP_PROJECT=$(gcloud config get-value project)
export GCP_REGION=us-central1
export GCP_ZONE=us-central1
export GCP_ZONE1=us-central1-a
export GCP_ZONE2=us-central1-b
export GCP_ZONE3=us-central1-c
export GCP_NETWORK=default
export TB_CLUSTER_NAME=tbmq-cluster
export TB_DATABASE_NAME=tbmq-db
echo "You have selected project: $GCP_PROJECT, region: $GCP_REGION, gcp zones: $GCP_ZONE1,$GCP_ZONE2,$GCP_ZONE3, network: $GCP_NETWORK, cluster: $TB_CLUSTER_NAME, database: $TB_DATABASE_NAME"

where:

  • the first line uses the gcloud command to fetch your current GCP project id. We will refer to it later in this guide as $GCP_PROJECT;
  • us-central1 is one of the available compute regions. We will refer to it later in this guide as $GCP_REGION;
  • default is the default GCP network name. We will refer to it later in this guide as $GCP_NETWORK;
  • tbmq-cluster is the name of your cluster. You may input a different name. We will refer to it later in this guide as $TB_CLUSTER_NAME;
  • tbmq-db is the name of your database server. You may input a different name. We will refer to it later in this guide as $TB_DATABASE_NAME.
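
If you work through this guide in more than one terminal session, you may want to persist these variables instead of re-exporting them each time (an optional sketch; it appends a few of the values to ~/.bashrc, and you can add the remaining variables the same way):

cat >> ~/.bashrc <<'EOF'
export GCP_PROJECT=$(gcloud config get-value project)
export GCP_REGION=us-central1
export TB_CLUSTER_NAME=tbmq-cluster
export TB_DATABASE_NAME=tbmq-db
EOF
source ~/.bashrc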

Step 3. Configure and create GKE cluster

Create a regional cluster distributed across 3 zones with nodes of your preferred machine type. The example below provisions one e2-standard-4 node per zone (three nodes total), but you can modify the --machine-type and --num-nodes to suit your workload requirements. For a full list of available machine types and their specifications, refer to the GCP machine types documentation.

Execute the following command (recommended):

gcloud container clusters create $TB_CLUSTER_NAME \
--release-channel stable \
--region $GCP_REGION \
--network=$GCP_NETWORK \
--node-locations $GCP_ZONE1,$GCP_ZONE2,$GCP_ZONE3 \
--enable-ip-alias \
--num-nodes=1 \
--node-labels=role=main \
--machine-type=e2-standard-4

Alternatively, you may use this guide for a custom cluster setup.

Step 4. Update the context of kubectl

Update the context of kubectl using the following command:

gcloud container clusters get-credentials $TB_CLUSTER_NAME --region $GCP_REGION
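
To confirm that kubectl now points at the new cluster, you can run a quick sanity check:

kubectl config current-context   # should reference your GKE cluster name
kubectl get nodes                # should list three nodes, one per zone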

Step 5. Provision Google Cloud SQL (PostgreSQL) Instance

5.1 Prerequisites

Enable service networking to allow your K8S cluster to connect to the DB instance:

gcloud services enable servicenetworking.googleapis.com --project=$GCP_PROJECT

gcloud compute addresses create google-managed-services-$GCP_NETWORK \
--global \
--purpose=VPC_PEERING \
--prefix-length=16 \
--network=projects/$GCP_PROJECT/global/networks/$GCP_NETWORK

gcloud services vpc-peerings connect \
--service=servicenetworking.googleapis.com \
--ranges=google-managed-services-$GCP_NETWORK \
--network=$GCP_NETWORK \
--project=$GCP_PROJECT
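
You can verify that the peering was established before moving on (the reserved range should be listed against servicenetworking.googleapis.com):

gcloud services vpc-peerings list --network=$GCP_NETWORK --project=$GCP_PROJECT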

5.2 Create database server instance

Create the PostgreSQL instance with database version "PostgreSQL 15", following these recommendations:

  • use the same region ($GCP_REGION) where your K8S cluster is located;
  • use the same VPC network ($GCP_NETWORK) that your K8S cluster uses;
  • use a private IP address to connect to your instance and disable the public IP address;
  • use a highly available DB instance for production and a single-zone instance for development clusters;
  • use at least 2 vCPUs and 7.5 GB RAM, which is sufficient for most workloads. You may scale it later if needed.

Execute the following command:

gcloud beta sql instances create $TB_DATABASE_NAME \
--database-version=POSTGRES_15 \
--region=$GCP_REGION --availability-type=regional \
--no-assign-ip --network=projects/$GCP_PROJECT/global/networks/$GCP_NETWORK \
--cpu=2 --memory=7680MB

Alternatively, you may follow this guide to configure your database.

Note your database IP address (YOUR_DB_IP_ADDRESS) from the command output. Successful command output should look similar to this:

Created [https://sqladmin.googleapis.com/sql/v1beta4/projects/YOUR_PROJECT_ID/instances/$TB_DATABASE_NAME].
NAME                        DATABASE_VERSION  LOCATION       TIER              PRIMARY_ADDRESS  PRIVATE_ADDRESS  STATUS
$TB_DATABASE_NAME           POSTGRES_15       us-central1-f  db-custom-2-7680  35.192.189.68    -                RUNNABLE
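
If you need to look up the instance IP address again later (for example, when editing tb-broker-db-configmap.yml in step 5.5), you can query it with a gcloud format projection:

gcloud sql instances describe $TB_DATABASE_NAME --format="value(ipAddresses[0].ipAddress)"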

5.3 Set database password

Set the password for your new database server instance:

gcloud sql users set-password postgres \
--instance=$TB_DATABASE_NAME \
--password=secret

where:

  • instance is the name of your database server instance;
  • secret is the password. You should input a different password. We will refer to it later in this guide as YOUR_DB_PASSWORD.
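
Rather than typing a literal password, you can generate a random one and keep it in a shell variable for the rest of the session (an optional sketch using openssl):

export YOUR_DB_PASSWORD=$(openssl rand -base64 18)   # generate a random password
gcloud sql users set-password postgres \
--instance=$TB_DATABASE_NAME \
--password=$YOUR_DB_PASSWORD
echo $YOUR_DB_PASSWORD   # note it down; you will need it in step 5.5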

5.4 Create database

Create the "thingsboard_mqtt_broker" database inside your PostgreSQL database server instance:

gcloud sql databases create thingsboard_mqtt_broker --instance=$TB_DATABASE_NAME

where thingsboard_mqtt_broker is the name of your database. You may input a different name. We will refer to it later in this guide as YOUR_DB_NAME.

5.5 Edit database settings

Replace YOUR_DB_IP_ADDRESS, YOUR_DB_PASSWORD and YOUR_DB_NAME with the correct values:

nano tb-broker-db-configmap.yml
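
If you prefer a non-interactive edit, the placeholders can be substituted with sed instead (a sketch; the IP address and password below are illustrative, and the command assumes the placeholders appear verbatim in the file):

sed -i "s/YOUR_DB_IP_ADDRESS/10.23.0.3/g; s/YOUR_DB_PASSWORD/secret/g; s/YOUR_DB_NAME/thingsboard_mqtt_broker/g" tb-broker-db-configmap.yml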

Step 6. Create Namespace

Let’s create a dedicated namespace for our TBMQ cluster deployment to ensure better resource isolation and management.

kubectl apply -f tb-broker-namespace.yml
kubectl config set-context $(kubectl config current-context) --namespace=thingsboard-mqtt-broker
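
You can verify that the namespace exists and is now the default for your kubectl context:

kubectl get namespace thingsboard-mqtt-broker
kubectl config view --minify --output "jsonpath={..namespace}"   # should print thingsboard-mqtt-broker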

Step 7. Provision Redis cluster

We recommend deploying Bitnami Redis Cluster from Helm. For that, review the redis folder.

ls redis/

There you can find the default-values-redis.yml file (the default values downloaded from Bitnami ArtifactHub) and the values-redis.yml file with modified values. We recommend keeping the first file untouched and making changes only to the second one. This way, the upgrade to the next version will go more smoothly, as it will be possible to see the diff.

To add the Bitnami helm repo:

helm repo add bitnami https://charts.bitnami.com/bitnami
helm repo update

To install Bitnami Redis cluster, execute the following command:

helm install redis -f redis/values-redis.yml bitnami/redis-cluster --version 10.2.5

Once deployed, you should see information about the deployment state, followed by the command to get your REDIS_PASSWORD:

NAME: redis
LAST DEPLOYED: Tue Apr  8 11:22:44 2025
NAMESPACE: thingsboard-mqtt-broker
STATUS: deployed
REVISION: 1
TEST SUITE: None
NOTES:
CHART NAME: redis-cluster
CHART VERSION: 10.2.5
APP VERSION: 7.2.5

** Please be patient while the chart is being deployed **


To get your password run:
    export REDIS_PASSWORD=$(kubectl get secret --namespace "thingsboard-mqtt-broker" redis-redis-cluster -o jsonpath="{.data.redis-password}" | base64 -d)

Let’s modify this command to print the password to the terminal:

echo $(kubectl get secret --namespace "thingsboard-mqtt-broker" redis-redis-cluster -o jsonpath="{.data.redis-password}" | base64 -d)

You need to copy the output and paste it into the tb-broker-cache-configmap.yml file, replacing YOUR_REDIS_PASSWORD.

nano tb-broker-cache-configmap.yml

The value of REDIS_NODES in tb-broker-cache-configmap.yml is set to "redis-redis-cluster-headless:6379" by default. The host name is based on the release name (redis) and the default naming conventions of the Bitnami chart. If you modify the nameOverride or fullnameOverride fields in your Redis values file, or change the release name during installation, you must update this value accordingly to match the actual headless service name created by the chart.
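
Before moving on, you can wait for all Redis pods to become ready (a convenience sketch; the label assumes the standard Bitnami chart labels and the redis release name used above):

kubectl wait --for=condition=ready pod -l app.kubernetes.io/name=redis-cluster --timeout=300s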

Step 8. Installation

Execute the following command to run the installation:

./k8s-install-tbmq.sh

After this command finishes, you should see the following line in the console:

Installation finished successfully!

Otherwise, please check whether you set the PostgreSQL URL and password in tb-broker-db-configmap.yml correctly.

Step 9. Provision Kafka

We recommend deploying Bitnami Kafka from Helm. For that, review the kafka folder.

ls kafka/

There you can find the default-values-kafka.yml file (the default values downloaded from Bitnami ArtifactHub) and the values-kafka.yml file with modified values. We recommend keeping the first file untouched and making changes only to the second one. This way, the upgrade to the next version will go more smoothly, as it will be possible to see the diff.

To add the Bitnami helm repo:

helm repo add bitnami https://charts.bitnami.com/bitnami
helm repo update

To install Bitnami Kafka, execute the following command:

helm install kafka -f kafka/values-kafka.yml bitnami/kafka --version 25.3.3

Wait up to several minutes until Kafka pods are up and running.
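
Instead of polling manually, you can block until the Kafka pods report ready (a sketch; the label assumes the standard Helm release label and the kafka release name used above):

kubectl wait --for=condition=ready pod -l app.kubernetes.io/instance=kafka --timeout=600s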

Step 10. Starting

Execute the following command to deploy the broker:

./k8s-deploy-tbmq.sh

After a few minutes, you may execute the following command to check the state of all pods:

kubectl get pods

If everything went fine, you should be able to see tb-broker-0 and tb-broker-1 pods. Every pod should be in the READY state.
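
You can also let kubectl block until the rollout completes (assuming the broker is deployed as the tb-broker statefulset, which matches the pod names above):

kubectl rollout status statefulset/tb-broker --timeout=600s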

Step 11. Configure Load Balancers

11.1 Configure HTTP(S) Load Balancer

Configure the HTTP(S) Load Balancer to access the web interface of your TBMQ instance. You have two possible configuration options:

  • http - a load balancer without HTTPS support. Recommended for development. Its only advantages are simple configuration and minimal cost. It may be a good option for a development server but is definitely not suitable for production.
  • https - a load balancer with HTTPS support. Recommended for production. It acts as an SSL termination point; you may easily configure it to issue and maintain a valid SSL certificate, and it automatically redirects all non-secure (HTTP) traffic to the secure (HTTPS) port.

See links/instructions below on how to configure each of the suggested options.

HTTP Load Balancer

Execute the following command to deploy the plain HTTP load balancer:

kubectl apply -f receipts/http-load-balancer.yml

The process of load balancer provisioning may take some time. You may periodically check the status of the load balancer using the following command:

kubectl get ingress

Once provisioned, you should see similar output:

NAME                          CLASS    HOSTS   ADDRESS         PORTS   AGE
tb-broker-http-loadbalancer   <none>   *       34.111.24.134   80      7m25s

HTTPS Load Balancer

The process of configuring the load balancer using Google-managed SSL certificates is described on the official documentation page. The instructions below are extracted from it. Make sure you read the prerequisites carefully before proceeding. First, reserve a global static IP address for the load balancer:

gcloud compute addresses create tbmq-http-lb-address --global

Replace PUT_YOUR_DOMAIN_HERE with a valid domain name in the https-load-balancer.yml file:

nano receipts/https-load-balancer.yml

Execute the following command to deploy the secure HTTPS load balancer:

kubectl apply -f receipts/https-load-balancer.yml

The process of load balancer provisioning may take some time. You may periodically check the status of the load balancer using the following command:

kubectl get ingress

Once provisioned, you should see similar output:

NAME                           CLASS    HOSTS   ADDRESS         PORTS   AGE
tb-broker-https-loadbalancer   gce      *       34.111.24.134   80      7m25s

Now, assign the domain name you have used to the load balancer IP address (the one you see instead of 34.111.24.134 in the command output).

Check that the domain name is configured correctly using dig:

dig YOUR_DOMAIN_NAME

Sample output:

; <<>> DiG 9.11.3-1ubuntu1.16-Ubuntu <<>> YOUR_DOMAIN_NAME
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 12513
;; flags: qr rd ra; QUERY: 1, ANSWER: 1, AUTHORITY: 0, ADDITIONAL: 1

;; OPT PSEUDOSECTION:
; EDNS: version: 0, flags:; udp: 65494
;; QUESTION SECTION:
;YOUR_DOMAIN_NAME.	IN	A

;; ANSWER SECTION:
YOUR_DOMAIN_NAME. 36 IN	A	34.111.24.134

;; Query time: 0 msec
;; SERVER: 127.0.0.53#53(127.0.0.53)
;; WHEN: Fri Nov 19 13:00:00 EET 2021
;; MSG SIZE  rcvd: 74

Once assigned, wait for the Google-managed certificate to finish provisioning. This might take up to 60 minutes. You can check the status of the certificate using the following command:

kubectl describe managedcertificate managed-cert

The certificate will eventually be provisioned if you have configured the domain records properly. Once provisioned, you may use your domain name to access the Web UI (over HTTPS).
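
For a quick status check without the full describe output, you can query just the certificate state (the status.certificateStatus field of the GKE ManagedCertificate resource reports Provisioning until it becomes Active):

kubectl get managedcertificate managed-cert -o jsonpath="{.status.certificateStatus}"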

11.2 Configure MQTT Load Balancer

Configure the MQTT load balancer to be able to connect devices via the MQTT protocol.

Create the TCP load balancer using the following command:

kubectl apply -f receipts/mqtt-load-balancer.yml

The load balancer will forward all TCP traffic for ports 1883 and 8883.
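
Provisioning the external IP may take a few minutes. You can watch the service until the EXTERNAL-IP column is populated (the service name matches the output shown in Step 12):

kubectl get service tb-broker-mqtt-loadbalancer -w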

MQTT over SSL

Follow this guide to create a .pem file with the SSL certificate. Store the file as server.pem in the working directory.

You'll need to create a config map with your PEM files. You can do it with the following command:

kubectl create configmap tbmq-mqtts-config \
 --from-file=server.pem=YOUR_PEM_FILENAME \
 --from-file=mqttserver_key.pem=YOUR_PEM_KEY_FILENAME \
 -o yaml --dry-run=client | kubectl apply -f -

where:

  • YOUR_PEM_FILENAME is the name of your server certificate file;
  • YOUR_PEM_KEY_FILENAME is the name of your server certificate private key file.

Then, uncomment all sections in the ‘tb-broker.yml’ file that are marked with “Uncomment the following lines to enable two-way MQTTS”.

Execute the following command to apply the changes:

kubectl apply -f tb-broker.yml

Step 12. Validate the setup

Now you can open the TBMQ web interface in your browser using the DNS name of the load balancer.

You can get the address of the load balancer using the following command:

kubectl get ingress

You should see output similar to this:

NAME                          CLASS    HOSTS   ADDRESS         PORTS   AGE
tb-broker-http-loadbalancer   <none>   *       34.111.24.134   80      3d1h

Use the ADDRESS field of the tb-broker-http-loadbalancer to connect to the cluster.

You should see the TBMQ login page. Use the following default credentials for the System Administrator:

Username:

sysadmin@thingsboard.org

Password:

sysadmin

On the first log-in you will be asked to change the default password to a preferred one and then re-login using the new credentials.

Validate MQTT access

To connect to the cluster via MQTT, you will need to get the corresponding service IP. You can do this with the following command:

kubectl get services

You should see output similar to this:

NAME                          TYPE           CLUSTER-IP       EXTERNAL-IP              PORT(S)                         AGE
tb-broker-mqtt-loadbalancer   LoadBalancer   10.100.119.170   *******                  1883:30308/TCP,8883:31609/TCP   6m58s

Use the EXTERNAL-IP field of the load balancer to connect to the cluster via the MQTT protocol.
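
For a quick smoke test you can use the Mosquitto CLI clients, if installed (a sketch; EXTERNAL_IP, YOUR_MQTT_USER, and YOUR_MQTT_PASSWORD are placeholders for your load balancer IP and the MQTT client credentials configured in TBMQ):

# subscribe in one terminal
mosquitto_sub -h EXTERNAL_IP -p 1883 -u YOUR_MQTT_USER -P YOUR_MQTT_PASSWORD -t demo/topic
# publish from another terminal; the message should appear in the subscriber
mosquitto_pub -h EXTERNAL_IP -p 1883 -u YOUR_MQTT_USER -P YOUR_MQTT_PASSWORD -t demo/topic -m "hello"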

Troubleshooting

In case of any issues, you can examine the service logs for errors. For example, to see TBMQ logs, execute the following command:

kubectl logs -f tb-broker-0

Use the following command to see the state of all statefulsets:

kubectl get statefulsets
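
If a pod is stuck, describing it and checking recent cluster events usually reveals the cause:

kubectl describe pod tb-broker-0
kubectl get events --sort-by=.metadata.creationTimestamp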

See kubectl Cheat Sheet command reference for more details.

Upgrading

Review the release notes and upgrade instructions for detailed information on the latest changes.

Backup and restore (Optional)

Backing up your PostgreSQL database before the upgrade is highly recommended, though optional. For further guidance, follow these instructions.

Upgrade to 2.0.0

For the TBMQ v2.0.0 upgrade, if you haven’t installed Redis yet, please follow step 7 to complete the installation. Only then can you proceed with the upgrade.

Upgrade to 1.3.0

In TBMQ version 1.3.0, the installation scripts were updated to include a new port, 8084, for MQTT over WebSockets. This is needed for the WebSocket client page to work correctly.

Please pull the v1.3.0 configuration files or modify your existing ones to include the new port entry. For more details, please visit the following link.

Once the required changes are made, you should be able to connect the MQTT client on the WebSocket client page. Otherwise, please contact us so we can answer any questions and provide help if needed.

Run upgrade

In case you would like to upgrade, please pull the recent changes from the latest release branch:

git pull origin release-2.0.1

Note: Make sure any custom changes of yours are not lost during the merge process.

If you encounter conflicts during the merge process that are not related to your changes, we recommend accepting all the new changes from the remote branch.

You can revert the merge by executing the following:

git merge --abort

Then repeat the merge, accepting "theirs" changes:

git pull origin release-2.0.1 -X theirs

There are several useful options for the default merge strategy:

  • -X ours - this option forces conflicting hunks to be auto-resolved cleanly by favoring our version.
  • -X theirs - this is the opposite of ours. See more details here.

After that, execute the following commands:

./k8s-delete-tbmq.sh
./k8s-upgrade-tbmq.sh --fromVersion=FROM_VERSION
./k8s-deploy-tbmq.sh

where FROM_VERSION is the version from which the upgrade should be started. See the Upgrade Instructions for valid fromVersion values.

Cluster deletion

Execute the following command to delete TBMQ nodes:

./k8s-delete-tbmq.sh

Execute the following command to delete all TBMQ nodes and configmaps, load balancers, etc.:

./k8s-delete-all.sh

Execute the following command to delete the GKE cluster:

gcloud container clusters delete $TB_CLUSTER_NAME --region=$GCP_REGION

Next steps