- Prerequisites
- Configure your Kubernetes environment
- Add the TBMQ Cluster Helm repository
- Modify default chart values
- Create namespace
- Install the TBMQ Helm chart
- Validate HTTP Access
- Validate MQTT Access
- Troubleshooting
- Upgrading
- Uninstalling TBMQ Helm chart
- Delete Kubernetes Cluster
- Next steps
This guide will help you set up a TBMQ Cluster using the official Helm chart on Google Cloud Platform (GCP) with Google Kubernetes Engine (GKE).
Prerequisites
To deploy TBMQ Cluster using Helm on a GKE cluster, you need the following tools installed on your local machine:
- gcloud CLI (part of the Google Cloud SDK)
- kubectl
- helm
Configure your Kubernetes environment
Configure GCP tools
See the before you begin guide for more info.
Create a new Google Cloud Platform project (recommended) or choose an existing one. Make sure you have selected the correct project by executing the following command:
gcloud init
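If gcloud init selected the wrong project, you can switch to the correct one explicitly (the project ID placeholder below is illustrative):
# Set and verify the active project
gcloud config set project <your-project-id>
gcloud config get-value project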
Enable GKE service
gcloud services enable container.googleapis.com
Define environment variables
Define environment variables that you will use in various commands later in this guide.
We assume you are using Linux. Execute the following command:
export GCP_PROJECT=$(gcloud config get-value project)
export GCP_REGION=us-central1
export GCP_ZONE1=us-central1-a
export GCP_ZONE2=us-central1-b
export GCP_ZONE3=us-central1-c
export GCP_NETWORK=default
export TB_CLUSTER_NAME=tbmq-cluster
export TB_DATABASE_NAME=tbmq-db
echo "You have selected project: $GCP_PROJECT, region: $GCP_REGION, gcp zones: $GCP_ZONE1,$GCP_ZONE2,$GCP_ZONE3, network: $GCP_NETWORK, cluster: $TB_CLUSTER_NAME, database: $TB_DATABASE_NAME"
where:
- the first line uses the gcloud command to fetch your current GCP project ID. We will refer to it later in this guide using $GCP_PROJECT;
- us-central1 is one of the available compute regions. We will refer to it later in this guide using $GCP_REGION;
- default is the default GCP network name. We will refer to it later in this guide using $GCP_NETWORK;
- tbmq-cluster is the name of your cluster. You may input a different name. We will refer to it later in this guide using $TB_CLUSTER_NAME;
Configure and create GKE cluster
Create a regional cluster distributed across 3 zones with nodes of your preferred machine type.
The example below provisions one e2-standard-4 node per zone (three nodes total), but you can modify the --machine-type and --num-nodes parameters to suit your workload requirements.
For a full list of available machine types and their specifications, refer to the GCP machine types documentation.
Execute the following command (recommended):
gcloud container clusters create $TB_CLUSTER_NAME \
--release-channel stable \
--region $GCP_REGION \
--network=$GCP_NETWORK \
--node-locations $GCP_ZONE1,$GCP_ZONE2,$GCP_ZONE3 \
--enable-ip-alias \
--num-nodes=1 \
--node-labels=role=main \
--machine-type=e2-standard-4
Alternatively, you may use this guide for custom cluster setup.
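As a quick sanity check (not part of the official setup), you can confirm the cluster reached the RUNNING status before proceeding:
# List clusters in the current project and inspect the new cluster's status
gcloud container clusters list
gcloud container clusters describe $TB_CLUSTER_NAME --region $GCP_REGION --format="get(status)"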
Update the context of kubectl
Update the context of kubectl using the following command:
gcloud container clusters get-credentials $TB_CLUSTER_NAME --region $GCP_REGION
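If the credentials were fetched successfully, kubectl should now see the cluster nodes (with the example above, expect three nodes in the Ready state):
kubectl get nodes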
Add the TBMQ Cluster Helm repository
Before installing the chart, add the TBMQ Helm repository to your local Helm client:
helm repo add tbmq-helm-chart https://helm.thingsboard.io/tbmq
helm repo update
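You can verify that the repository was added and check which chart versions are available:
helm search repo tbmq-helm-chart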
Modify default chart values
To customize your TBMQ deployment, first download the default values.yaml file from the chart:
helm show values tbmq-helm-chart/tbmq-cluster > values.yaml
External PostgreSQL
By default, the chart installs Bitnami PostgreSQL as a sub-chart:
# This section will bring bitnami/postgresql (https://artifacthub.io/packages/helm/bitnami/postgresql) into this chart.
# If you want to add some extra configuration parameters, you can put them under the `postgresql` key, and they will be passed to bitnami/postgresql chart
postgresql:
  # @param enabled If enabled is set to true, externalPostgresql configuration will be ignored
  enabled: true
This provisions a single-node instance with configurable storage, backups, and monitoring options.
For users with an existing PostgreSQL instance, TBMQ can be configured to connect externally.
To do this, disable the built-in PostgreSQL by setting postgresql.enabled: false and specify the connection details in the externalPostgresql section.
# If you're deploying PostgreSQL externally, configure this section
externalPostgresql:
  # @param host - External PostgreSQL server host
  host: ""
  # @param port - External PostgreSQL server port
  ##
  port: 5432
  # @param username - PostgreSQL username
  ##
  username: "postgres"
  # @param password - PostgreSQL user password
  ##
  password: "postgres"
  # @param database - PostgreSQL database name for TBMQ
  ##
  database: "thingsboard_mqtt_broker"
If you're deploying on GCP GKE and plan to use a Google Cloud SQL (PostgreSQL) instance, make sure to first enable the required GCP services, then follow these instructions to provision and configure your PostgreSQL instance.
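Alternatively, these values can be overridden on the command line when you run helm install later in this guide, rather than editing values.yaml. A minimal sketch, assuming a hypothetical external PostgreSQL instance reachable at 10.20.0.5 (substitute your real host and credentials):
# Disable the bundled PostgreSQL and point TBMQ at an external instance
helm install my-tbmq-cluster tbmq-helm-chart/tbmq-cluster \
  --set postgresql.enabled=false \
  --set externalPostgresql.host=10.20.0.5 \
  --set externalPostgresql.port=5432 \
  --set externalPostgresql.username=postgres \
  --set externalPostgresql.password=<your-db-password> \
  --set externalPostgresql.database=thingsboard_mqtt_broker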
Load Balancer configuration
By default, the Helm chart deploys a standard NGINX Ingress Controller for HTTP and MQTT traffic when installing TBMQ on Kubernetes.
loadbalancer:
  type: "nginx"
However, since you are deploying TBMQ Cluster on GCP GKE, you need to change this value to:
loadbalancer:
  type: "gcp"
This will automatically configure:
- Plain HTTP traffic to be exposed via HTTP Load Balancer.
- Plain MQTT traffic to be exposed via TCP Load Balancer.
HTTPS access
The process of configuring the load balancer using Google-managed SSL certificates is described on the official documentation page. The instructions below are extracted from the official documentation. Make sure you read prerequisites carefully before proceeding.
- Reserve a static global IP address:
gcloud compute addresses create tbmq-http-lb-address --global
- Get the reserved static IP address:
gcloud compute addresses describe tbmq-http-lb-address --global --format="get(address)"
- Configure your DNS:
You must have at least one fully qualified domain name (FQDN) configured to point to the reserved static IP address. This is required for the managed certificate to be issued successfully.
- Update the values.yaml file:
loadbalancer:
  type: "gcp"
  http:
    enabled: true
    ssl:
      enabled: true
      # This will be the name of the ManagedCertificate resource automatically created by the Helm chart.
      certificateRef: "<your-managed-certificate-resource-name>"
      domains:
        # Must point to the reserved static IP.
        - <your-domain-name>
      # Static IP address for the GCP HTTP(S) load balancer.
      staticIP: "tbmq-http-lb-address"
This will automatically issue and manage an SSL certificate via the ManagedCertificate resource created by the Helm chart and expose TBMQ securely over HTTPS.
MQTTS access
GCP Load Balancer does not support TLS termination for MQTT traffic. If you want to secure MQTT communication, you must configure Two-Way TLS (Mutual TLS or mTLS) directly on the application level (TBMQ side). Please refer to the TBMQ Helm chart documentation for details on configuring Two-Way TLS.
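As a starting point for Two-Way TLS, you will need a CA certificate plus server (and client) certificates signed by it. A generic openssl sketch with illustrative file names (refer to the TBMQ documentation for the exact files and configuration the broker expects):
# Generate a CA key and a self-signed CA certificate
openssl req -x509 -newkey rsa:4096 -nodes -keyout ca.key -out ca.crt -days 365 -subj "/CN=tbmq-ca"
# Generate a server key and a certificate signing request
openssl req -newkey rsa:4096 -nodes -keyout server.key -out server.csr -subj "/CN=<your-domain-name>"
# Sign the server certificate with the CA
openssl x509 -req -in server.csr -CA ca.crt -CAkey ca.key -CAcreateserial -out server.crt -days 365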
Create namespace
It’s a good practice to create a dedicated namespace for your TBMQ cluster deployment:
kubectl create namespace tbmq
kubectl config set-context --current --namespace=tbmq
This sets tbmq as the default namespace for your current context, so you don't need to pass --namespace to every command.
Install the TBMQ Helm chart
Now you’re ready to install TBMQ using the Helm chart.
Make sure you're in the same directory as your customized values.yaml file.
helm install my-tbmq-cluster tbmq-helm-chart/tbmq-cluster \
-f values.yaml \
--set installation.installDbSchema=true
Once the deployment process is completed, you should see output similar to the following:
NAME: my-tbmq-cluster
LAST DEPLOYED: Wed Mar 26 17:42:49 2025
NAMESPACE: tbmq
STATUS: deployed
REVISION: 1
TEST SUITE: None
NOTES:
TBMQ Cluster my-tbmq-cluster will be deployed in few minutes.
Info:
Namespace: tbmq
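You can watch the pods start while the cluster comes up; all of them should eventually reach the Running state:
kubectl get pods -w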
Validate HTTP Access
Now you can open the TBMQ web interface in your browser using the DNS name of the load balancer.
You can get the DNS name of the load balancer using the following command:
kubectl get ingress
You should see output similar to this:
NAME CLASS HOSTS ADDRESS PORTS AGE
my-tbmq-cluster-http-lb <none> * <your-domain-name> 80 3d1h
Use the ADDRESS field of my-tbmq-cluster-http-lb to connect to the cluster.
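Before opening the browser, you can optionally probe HTTP access from the command line (replace the placeholder with the ADDRESS value from the output above):
# Fetch response headers only; a 200 or a redirect indicates the load balancer is serving traffic
curl -I http://<load-balancer-address>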
You should see TBMQ login page. Use the following default credentials for System Administrator:
Username: sysadmin@thingsboard.org
Password: sysadmin
On first log-in, you will be asked to change the default password to one of your choosing and then log in again using the new credentials.
Validate HTTPS access (if configured)
Check that the domain name is configured correctly using dig:
dig <your-domain-name>
Sample output:
; <<>> DiG 9.11.3-1ubuntu1.16-Ubuntu <<>> <your-domain-name>
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 12513
;; flags: qr rd ra; QUERY: 1, ANSWER: 1, AUTHORITY: 0, ADDITIONAL: 1
;; OPT PSEUDOSECTION:
; EDNS: version: 0, flags:; udp: 65494
;; QUESTION SECTION:
;YOUR_DOMAIN_NAME. IN A
;; ANSWER SECTION:
YOUR_DOMAIN_NAME. 36 IN A 34.111.24.134
;; Query time: 0 msec
;; SERVER: 127.0.0.53#53(127.0.0.53)
;; WHEN: Fri Nov 19 13:00:00 EET 2021
;; MSG SIZE rcvd: 74
Once assigned, wait for the Google-managed certificate to finish provisioning. This might take up to 60 minutes. You can check the status of the certificate using the following command:
kubectl describe managedcertificate <your-managed-certificate-resource-name>
The certificate will eventually be provisioned if you have configured the domain records properly. Use <your-domain-name> to connect to the cluster.
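Once the certificate status becomes Active, you can verify HTTPS end to end with a simple probe:
# Fetch headers over HTTPS; a successful TLS handshake confirms the managed certificate works
curl -I https://<your-domain-name>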
Validate MQTT Access
To connect to the cluster via MQTT, you will need to get the corresponding service IP. You can do this with the following command:
kubectl get services
You should see output similar to this:
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
my-tbmq-cluster-mqtt-lb LoadBalancer 10.100.119.170 ******* 1883:30308/TCP,8883:31609/TCP 6m58s
Use the EXTERNAL-IP field of the load balancer to connect to the cluster via the MQTT protocol.
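For a quick end-to-end check you can publish a test message with a command-line client such as mosquitto_pub (illustrative only; the credentials, if any, depend on how MQTT client authentication is configured in your TBMQ instance):
# Publish a single message to the broker via the TCP load balancer
mosquitto_pub -h <load-balancer-external-ip> -p 1883 -t 'demo/topic' -m 'Hello TBMQ'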
Troubleshooting
In case of any issues, you can examine service logs for errors. For example, to see TBMQ logs, execute the following command:
kubectl logs -f my-tbmq-cluster-tbmq-node-0
Use the following command to see the state of all statefulsets:
kubectl get statefulsets
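If a pod is stuck in a non-Running state, describing it usually reveals scheduling, storage, or image-pull problems:
kubectl get pods
kubectl describe pod <pod-name>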
See kubectl Cheat Sheet command reference for more details.
Upgrading
Helm support was introduced with the TBMQ 2.1.0 release. Upgrade options were not included in the initial version of the Helm chart and will be provided alongside a future TBMQ release. This section will be updated once a new version of TBMQ and its Helm chart become available.
Uninstalling TBMQ Helm chart
To uninstall the TBMQ Helm chart, run the following command:
helm delete my-tbmq-cluster
This command removes all TBMQ components associated with the release from the namespace set in your current Kubernetes context.
The helm delete command removes only the logical resources of the TBMQ cluster.
To fully clean up all persistent data, you may also need to manually delete the associated Persistent Volume Claims (PVCs) after uninstallation:
kubectl delete pvc -l app.kubernetes.io/instance=my-tbmq-cluster
Delete Kubernetes Cluster
Execute the following command to delete the GKE cluster:
gcloud container clusters delete $TB_CLUSTER_NAME --region=$GCP_REGION
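If you reserved a static global IP address for HTTPS access earlier in this guide, remember to release it as well, since reserved but unused addresses continue to incur charges:
gcloud compute addresses delete tbmq-http-lb-address --global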
Next steps
- Getting started guide - This guide provides a quick overview of TBMQ.
- Security guide - Learn how to enable authentication and authorization of MQTT clients.
- Configuration guide - Learn about TBMQ configuration files and parameters.
- MQTT client type guide - Learn about TBMQ client types.
- Integration with ThingsBoard - Learn how to integrate TBMQ with ThingsBoard.