- Prerequisites
- Configure your Kubernetes environment
- Add the TBMQ Cluster Helm repository
- Modify default chart values
- Create namespace
- Install the TBMQ Helm chart
- Validate HTTP Access
- Validate MQTT Access
- Troubleshooting
- Upgrading
- Uninstalling TBMQ Helm chart
- Delete Kubernetes Cluster
- Next steps
This guide will help you set up a TBMQ Cluster using the official Helm chart. Minikube is used as the reference environment for the self-hosted Kubernetes deployment. If you’re deploying TBMQ in a self-managed cluster without cloud-specific load balancer integrations, Minikube provides a simple way to test the setup end-to-end.
Prerequisites
To deploy TBMQ Cluster using Helm in Minikube, you need to have the following tools installed on your local machine: minikube, kubectl, and Helm.
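You can quickly verify that all three tools are on your PATH with a small shell check (a sketch; the tool list simply matches the commands used in this guide):

```shell
# Sanity check: report whether each required CLI tool is installed.
check_tools() {
  for tool in minikube kubectl helm; do
    if command -v "$tool" >/dev/null 2>&1; then
      printf '%s: found\n' "$tool"
    else
      printf '%s: NOT FOUND\n' "$tool"
    fi
  done
}
check_tools
```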
Configure your Kubernetes environment
Start Minikube
minikube start
Install NGINX Ingress Controller
To expose HTTP(S) services in a generic Kubernetes environment like Minikube, you need to install the NGINX Ingress Controller. This example installs it using Helm and configures it with a LoadBalancer service type:
helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx
helm repo update
helm install nginx-ingress ingress-nginx/ingress-nginx \
--set controller.admissionWebhooks.enabled=false \
--set controller.service.type=LoadBalancer
This will deploy the NGINX ingress controller in the default namespace and configure it to expose traffic externally via a LoadBalancer service. Before continuing, make sure the ingress controller pod is running and ready:
kubectl get pods -n default
You should see something like:
NAME READY STATUS RESTARTS AGE
nginx-ingress-ingress-nginx-controller-xxxxx 1/1 Running 0 1m
Start Minikube Tunnel
Since Minikube doesn’t natively support external LoadBalancer services, you need to create a tunnel to expose them outside the cluster. This is required for accessing services like the NGINX Ingress Controller and TBMQ’s MQTT LoadBalancer.
Run the following command in a separate terminal:
minikube tunnel
This command requires administrative privileges and may prompt for your password. It will create a network route on your machine and assign an external IP to the NGINX LoadBalancer service.
After starting the tunnel, verify that the NGINX Ingress Controller received an EXTERNAL-IP:
kubectl get svc -n default
Example output:
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
nginx-ingress-ingress-nginx-controller LoadBalancer 10.101.102.99 192.168.49.2 80:32023/TCP,443:32144/TCP 2m
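If you script your setup, the same check can be automated. Below is a hedged sketch of a helper that polls a LoadBalancer service until it receives an external IP (the service name in the example matches the one used in this guide):

```shell
# Sketch: poll a LoadBalancer service until an external IP is assigned,
# then print it. Arguments: service name, optional namespace (default: default).
wait_for_external_ip() {
  svc="$1"; ns="${2:-default}"
  while true; do
    ip=$(kubectl get svc "$svc" -n "$ns" \
      -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
    if [ -n "$ip" ]; then
      echo "$ip"
      return 0
    fi
    sleep 2
  done
}
# Example: wait_for_external_ip nginx-ingress-ingress-nginx-controller default
```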
Add the TBMQ Cluster Helm repository
Before installing the chart, add the TBMQ Helm repository to your local Helm client:
helm repo add tbmq-helm-chart https://helm.thingsboard.io/tbmq
helm repo update
Modify default chart values
To customize your TBMQ deployment, first download the default values.yaml file from the chart:
helm show values tbmq-helm-chart/tbmq-cluster > values.yaml
External PostgreSQL
By default, the chart installs Bitnami PostgreSQL as a sub-chart:
# This section will bring bitnami/postgresql (https://artifacthub.io/packages/helm/bitnami/postgresql) into this chart.
# If you want to add some extra configuration parameters, you can put them under the `postgresql` key, and they will be passed to the bitnami/postgresql chart
postgresql:
  # @param enabled If enabled is set to true, externalPostgresql configuration will be ignored
  enabled: true
This provisions a single-node instance with configurable storage, backup, and monitoring options.
For users with an existing PostgreSQL instance, TBMQ can be configured to connect externally.
To do this, disable the built-in PostgreSQL by setting postgresql.enabled: false and specify the connection details in the externalPostgresql section.
# If you're deploying PostgreSQL externally, configure this section
externalPostgresql:
  # @param host - External PostgreSQL server host
  host: ""
  # @param port - External PostgreSQL server port
  ##
  port: 5432
  # @param username - PostgreSQL username
  ##
  username: "postgres"
  # @param password - PostgreSQL user password
  ##
  password: "postgres"
  # @param database - PostgreSQL database name for TBMQ
  ##
  database: "thingsboard_mqtt_broker"
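As an illustration, a hypothetical values.yaml fragment pointing TBMQ at an existing PostgreSQL server might look like this (the host, username, and password below are placeholders, not defaults):

```yaml
# Hypothetical example: disable the bundled PostgreSQL and connect externally.
postgresql:
  enabled: false
externalPostgresql:
  host: "pg.internal.example.com"   # placeholder hostname
  port: 5432
  username: "tbmq_user"             # placeholder user
  password: "change-me"             # placeholder password
  database: "thingsboard_mqtt_broker"
```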
Load Balancer configuration
By default, the Helm chart deploys a standard NGINX Ingress Controller for HTTP and MQTT traffic when installing TBMQ on Kubernetes.
loadbalancer:
  type: "nginx"
This configuration is suitable for Minikube and other generic Kubernetes environments.
HTTPS access
Currently, HTTPS termination at the load balancer level is not implemented for the NGINX Ingress Controller. This functionality may be added in a future release.
MQTTS access
The NGINX Ingress Controller does not support TLS termination for TCP-based protocols like MQTT. If you want to secure MQTT communication, you must configure Two-Way TLS (Mutual TLS or mTLS) directly on the application level (TBMQ side). Please refer to the TBMQ Helm chart documentation for details on configuring Two-Way TLS.
Create namespace
It’s a good practice to create a dedicated namespace for your TBMQ cluster deployment:
kubectl create namespace tbmq
Then switch your current context to the new namespace:
kubectl config set-context --current --namespace=tbmq
This sets tbmq as the default namespace for your current context, so you don’t need to pass --namespace to every command.
Install the TBMQ Helm chart
Now you’re ready to install TBMQ using the Helm chart.
Make sure you’re in the same directory as your customized values.yaml file.
helm install my-tbmq-cluster tbmq-helm-chart/tbmq-cluster \
-f values.yaml \
--set installation.installDbSchema=true
Once the deployment process is completed, you should see output similar to the following:
NAME: my-tbmq-cluster
LAST DEPLOYED: Wed Mar 26 17:42:49 2025
NAMESPACE: tbmq
STATUS: deployed
REVISION: 1
TEST SUITE: None
NOTES:
TBMQ Cluster my-tbmq-cluster will be deployed in few minutes.
Info:
Namespace: tbmq
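Before validating access, you can wait for the TBMQ pods to finish rolling out. A minimal sketch, assuming the StatefulSet name follows the release name used above:

```shell
# Sketch: block until the named StatefulSet finishes its rollout
# (relies on the tbmq namespace being set as the current context default).
check_rollout() {
  kubectl rollout status statefulset/"$1" --timeout=300s
}
# Example: check_rollout my-tbmq-cluster-tbmq-node
```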
Validate HTTP Access
Check that the HTTP ingress was created:
kubectl get ingress my-tbmq-cluster-http-lb
You should see output similar to the following:
NAME CLASS HOSTS ADDRESS PORTS AGE
my-tbmq-cluster-http-lb nginx * 10.111.137.85 80 47m
Use the ADDRESS field of the my-tbmq-cluster-http-lb ingress to connect to the cluster.
You should see the TBMQ login page. Use the following default credentials for the System Administrator:
Username: sysadmin@thingsboard.org
Password: sysadmin
On first login, you will be asked to change the default password to one of your choosing and then log in again using the new credentials.
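If you prefer to validate HTTP access from the command line, you can try logging in via the REST API. This is a hypothetical sketch: the /api/auth/login endpoint is assumed to follow the ThingsBoard-style authentication API and is not confirmed by this guide; replace the address argument with your ingress ADDRESS value.

```shell
# Hypothetical sketch: request a JWT token from the TBMQ REST API.
# Argument: the ingress ADDRESS (IP or hostname) of the HTTP load balancer.
get_token() {
  curl -s -X POST "http://$1/api/auth/login" \
    -H 'Content-Type: application/json' \
    -d '{"username":"sysadmin@thingsboard.org","password":"sysadmin"}'
}
# Example: get_token 10.111.137.85
```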
Validate MQTT Access
If minikube tunnel is running, you should notice that a new service appears in the list, exposing MQTT traffic externally:
Status:
machine: minikube
pid: 35528
route: 10.96.0.0/12 -> 192.168.49.2
minikube: Running
services: [nginx-ingress-ingress-nginx-controller, my-tbmq-cluster-mqtt-lb]
The service my-tbmq-cluster-mqtt-lb is the LoadBalancer used for MQTT communication. You can retrieve its EXTERNAL-IP with:
kubectl get svc my-tbmq-cluster-mqtt-lb
You should see output similar to the following:
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
my-tbmq-cluster-mqtt-lb LoadBalancer 10.101.27.40 ******* 1883:31041/TCP,8084:30151/TCP,8883:30188/TCP,8085:32706/TCP 41m
Use the EXTERNAL-IP field of the load balancer to connect to the cluster via the MQTT protocol.
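To smoke-test the MQTT endpoint from a script, you can fetch the LoadBalancer IP and publish a message with an MQTT client such as mosquitto_pub. A sketch; the username in the usage comment is a placeholder for MQTT client credentials configured in your TBMQ instance:

```shell
# Helper (a sketch): extract the external IP of a LoadBalancer service
# via jsonpath. Argument: service name.
get_mqtt_host() {
  kubectl get svc "$1" -o jsonpath='{.status.loadBalancer.ingress[0].ip}'
}

# Usage (requires mosquitto-clients; <your-mqtt-username> is a placeholder
# for credentials you have configured in TBMQ):
#   mosquitto_pub -d -h "$(get_mqtt_host my-tbmq-cluster-mqtt-lb)" -p 1883 \
#     -t sensors/temperature -u <your-mqtt-username> -m 'hello'
```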
Troubleshooting
In case of any issues, you can examine service logs for errors. For example, to see TBMQ logs, execute the following command:
kubectl logs -f my-tbmq-cluster-tbmq-node-0
Use the following command to see the state of all StatefulSets:
kubectl get statefulsets
See the kubectl Cheat Sheet for more command references.
Upgrading
Helm support was introduced with the TBMQ 2.1.0 release. Upgrade options were not included in the initial version of the Helm chart and will be provided alongside a future TBMQ release. This section will be updated once a new version of TBMQ and its Helm chart become available.
Uninstalling TBMQ Helm chart
To uninstall the TBMQ Helm chart, run the following command:
helm delete my-tbmq-cluster
This command removes all TBMQ components associated with the release from the namespace set in your current Kubernetes context.
The helm delete command removes only the logical resources of the TBMQ cluster.
To fully clean up all persistent data, you may also need to manually delete the associated Persistent Volume Claims (PVCs) after uninstallation:
kubectl delete pvc -l app.kubernetes.io/instance=my-tbmq-cluster
Delete Kubernetes Cluster
Execute the following command to delete the Minikube cluster:
minikube delete
Next steps
- Getting started guide - This guide provides a quick overview of TBMQ.
- Security guide - Learn how to enable authentication and authorization of MQTT clients.
- Configuration guide - Learn about TBMQ configuration files and parameters.
- MQTT client type guide - Learn about TBMQ client types.
- Integration with ThingsBoard - Learn how to integrate TBMQ with ThingsBoard.