Helm · Minikube
This guide covers deploying a TBMQ cluster using the official Helm chart. Minikube is used as the reference environment for self-hosted Kubernetes deployments. If you’re deploying TBMQ in a self-managed cluster without cloud-specific load balancer integrations, Minikube provides a practical way to validate the setup end-to-end.
Prerequisites
To deploy a TBMQ cluster using Helm in Minikube, you need the following tools installed on your local machine: minikube, kubectl, and Helm.
Configure your Kubernetes environment
Start Minikube

```shell
minikube start
```

Install NGINX Ingress Controller
Section titled “Install NGINX Ingress Controller”To expose HTTP(S) services in Minikube, install the NGINX Ingress Controller using Helm
with a LoadBalancer service type:
```shell
helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx
helm repo update
helm install nginx-ingress ingress-nginx/ingress-nginx \
  --set controller.admissionWebhooks.enabled=false \
  --set controller.service.type=LoadBalancer
```

This deploys the NGINX Ingress Controller in the default namespace and configures it to expose traffic externally via a LoadBalancer service.
Before continuing, verify that the ingress controller pod is running and ready:
```shell
kubectl get pods -n default
```

Expected output:

```
NAME                                           READY   STATUS    RESTARTS   AGE
nginx-ingress-ingress-nginx-controller-xxxxx   1/1     Running   0          1m
```

Start Minikube Tunnel
Since Minikube doesn’t natively support external LoadBalancer services, you need to create a tunnel to expose them outside the cluster. This is required for accessing both the NGINX Ingress Controller and the TBMQ MQTT LoadBalancer.
Run the following command in a separate terminal:
```shell
minikube tunnel
```

After starting the tunnel, verify that the NGINX Ingress Controller received an EXTERNAL-IP:

```shell
kubectl get svc -n default
```

Expected output:

```
NAME                                     TYPE           CLUSTER-IP      EXTERNAL-IP    PORT(S)                      AGE
nginx-ingress-ingress-nginx-controller   LoadBalancer   10.101.102.99   192.168.49.2   80:32023/TCP,443:32144/TCP   2m
```

Add the TBMQ Cluster Helm repository
Before installing the chart, add the TBMQ Helm repository to your local Helm client:
```shell
helm repo add tbmq-helm-chart https://helm.thingsboard.io/tbmq
helm repo update
```

Create namespace
Create a dedicated namespace for your TBMQ cluster deployment:
```shell
kubectl create namespace tbmq
kubectl config set-context --current --namespace=tbmq
```

This sets tbmq as the default namespace for your current context, so you don’t need to pass --namespace to every command.
Modify default chart values
To customize your TBMQ deployment, download the default values.yaml from the chart:
```shell
helm show values tbmq-helm-chart/tbmq-cluster > values.yaml
```

Deploy and connect to dependencies
Starting with chart version 2.0.0, the TBMQ Helm chart no longer bundles PostgreSQL, Kafka, or a Redis-compatible cache. This step assumes you have already deployed all three to your Minikube cluster using whatever fits your environment: an operator like CrunchyData PGO, Strimzi, or Valkey Operator, a managed service, or raw manifests. A worked end-to-end example using CrunchyData PGO, Strimzi Kafka, and Valkey is published in the chart’s Minikube deployment guide.
Point the chart at your instances by setting the connection values in values.yaml. At a minimum, set:
```yaml
postgresql:
  host: "my-postgres.thingsboard-mqtt-broker.svc"
  port: 5432
  database: "thingsboard_mqtt_broker"
  username: "postgres"
  existingSecret: "my-pg-secret"
  existingSecretPasswordKey: "password"

kafka:
  bootstrapServers: "my-kafka-bootstrap.thingsboard-mqtt-broker.svc:9092"

redis:
  connectionType: "standalone"
  host: "my-valkey.thingsboard-mqtt-broker.svc"
  port: 6379
  usePassword: false
```

Replace the hostnames, credentials, and redis.connectionType (standalone or cluster) with values that match your actual deployment. For the full set of supported keys, see the Infrastructure Configuration section of the chart documentation.
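If the PostgreSQL password secret referenced by existingSecret does not already exist (some operators and managed services create one for you), you can provide it with a plain Kubernetes Secret. A minimal sketch, assuming the secret name and key from the example values above:

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: my-pg-secret        # must match postgresql.existingSecret
  namespace: tbmq
type: Opaque
stringData:
  password: "change-me"     # key must match postgresql.existingSecretPasswordKey
```

Apply it with kubectl apply -f before installing the chart; stringData lets you write the password in plain text and have Kubernetes base64-encode it on write.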
Load balancer configuration
The default NGINX Ingress Controller load balancer type is already suitable for Minikube and other generic Kubernetes environments — no changes are required:
```yaml
loadbalancer:
  type: "nginx"
```

HTTPS access
HTTPS termination at the load balancer level is not currently implemented for the NGINX Ingress Controller. This functionality may be added in a future release.
MQTTS access
The NGINX Ingress Controller does not support TLS termination for TCP-based protocols like MQTT. To secure MQTT communication, configure mutual TLS (mTLS) directly in TBMQ. Refer to the TBMQ Helm chart documentation for details on configuring mTLS.
Install the TBMQ Helm chart
Make sure you’re in the same directory as your customized values.yaml file, then install TBMQ:
```shell
helm install my-tbmq-cluster tbmq-helm-chart/tbmq-cluster \
  -f values.yaml \
  --set installation.installDbSchema=true
```

Once the deployment completes, you should see output similar to:

```
NAME: my-tbmq-cluster
LAST DEPLOYED: Wed Mar 26 17:42:49 2025
NAMESPACE: tbmq
STATUS: deployed
REVISION: 1
TEST SUITE: None
NOTES:
TBMQ Cluster my-tbmq-cluster will be deployed in few minutes.
Info:
Namespace: tbmq
```

Validate HTTP access
Get the address of the HTTP load balancer:
```shell
kubectl get ingress my-tbmq-cluster-http-lb
```

Expected output:

```
NAME                      CLASS   HOSTS   ADDRESS         PORTS   AGE
my-tbmq-cluster-http-lb   nginx   *       10.111.137.85   80      47m
```

Use the ADDRESS field of my-tbmq-cluster-http-lb to open the TBMQ web interface in your browser.
You should see the TBMQ login page. Use the default System Administrator credentials:
Username:
Password: sysadmin

On first login, you are prompted to change the default password and re-login with the new credentials.
Validate MQTT access
If minikube tunnel is running, you should see the MQTT service appear in the tunnel status output:
```
Status:
    machine: minikube
    pid: 35528
    route: 10.96.0.0/12 -> 192.168.49.2
    minikube: Running
    services: [nginx-ingress-ingress-nginx-controller, my-tbmq-cluster-mqtt-lb]
```

The service my-tbmq-cluster-mqtt-lb is the LoadBalancer used for MQTT communication. Retrieve its EXTERNAL-IP with:
```shell
kubectl get svc my-tbmq-cluster-mqtt-lb
```

Expected output:

```
NAME                      TYPE           CLUSTER-IP     EXTERNAL-IP   PORT(S)                                                       AGE
my-tbmq-cluster-mqtt-lb   LoadBalancer   10.101.27.40   *******       1883:31041/TCP,8084:30151/TCP,8883:30188/TCP,8085:32706/TCP   41m
```

Use the EXTERNAL-IP field to connect to the cluster via MQTT.
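If you want to script against the MQTT endpoint, you can extract the EXTERNAL-IP column from the kubectl output directly. A minimal sketch, demonstrated here on sample output so it runs anywhere — against a real cluster, pipe `kubectl get svc my-tbmq-cluster-mqtt-lb` in instead, or use `kubectl get svc my-tbmq-cluster-mqtt-lb -o jsonpath='{.status.loadBalancer.ingress[0].ip}'` (the IP shown is illustrative):

```shell
# Sample `kubectl get svc` output; replace with the real command's output.
svc_output='NAME                      TYPE           CLUSTER-IP     EXTERNAL-IP    PORT(S)          AGE
my-tbmq-cluster-mqtt-lb   LoadBalancer   10.101.27.40   192.168.49.2   1883:31041/TCP   41m'

# EXTERNAL-IP is the 4th whitespace-separated column on the data row.
echo "$svc_output" | awk 'NR==2 {print $4}'
```

The jsonpath form is more robust for automation, since it reads the field from the API object rather than parsing column positions.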
Troubleshooting
To examine service logs for errors, view TBMQ logs with:
```shell
kubectl logs -f my-tbmq-cluster-tbmq-node-0
```

Check the state of all StatefulSets:

```shell
kubectl get statefulsets
```

See the kubectl Cheat Sheet for more details.
Upgrading
Chart version 2.0.0 is a breaking change from the 1.x line: the Bitnami PostgreSQL, Kafka, and Redis subcharts that earlier versions bundled have been removed. Upgrades from chart 1.x require either keeping the existing in-cluster Bitnami stack (annotated with helm.sh/resource-policy: keep) or provisioning a fresh third-party stack (PostgreSQL 17, Apache Kafka 4.0.0, Valkey 8.0+). Same-edition upgrades and CE → PE cross-edition migration on the same TBMQ version are also supported via the chart’s pre-upgrade hook.
For the full procedure, refer to the
Upgrading
section of the TBMQ Helm Chart documentation on Artifact Hub. It covers the chart 1.x → 2.0.0 migration paths, the
upgrade.upgradeDbSchema and upgrade.fromVersion flags, and the CE → PE migration steps.
Uninstalling TBMQ Helm chart
To uninstall the TBMQ Helm chart:
```shell
helm delete my-tbmq-cluster
```

This removes all TBMQ components associated with the release from the current namespace.
The helm delete command removes only the logical resources of the TBMQ cluster. Per-pod PVCs created from the
broker’s volumeClaimTemplate are intentionally not deleted by helm uninstall and carry no app= label.
Drop them explicitly by name pattern:
```shell
kubectl get pvc -n tbmq -o name | grep tbmq-node-data | xargs -I {} kubectl delete -n tbmq {}
```

External PostgreSQL, Kafka, and Redis are owned by whatever you deployed alongside TBMQ and are not affected by
helm delete. Drop them through their own tooling if you no longer need them.
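Before running the PVC deletion, you can sanity-check which claims the name pattern will match. A small sketch using sample `kubectl get pvc -o name` output (the full PVC names are illustrative; only the tbmq-node-data pattern comes from the command above):

```shell
# Sample `kubectl get pvc -n tbmq -o name` output; pipe the real command in instead.
pvcs='persistentvolumeclaim/tbmq-node-data-my-tbmq-cluster-tbmq-node-0
persistentvolumeclaim/tbmq-node-data-my-tbmq-cluster-tbmq-node-1
persistentvolumeclaim/some-unrelated-claim'

# Only the broker data PVCs match; unrelated claims are left alone.
echo "$pvcs" | grep tbmq-node-data
```

Running the pipeline with grep alone (no xargs kubectl delete) is a safe dry run before the destructive step.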
Delete Kubernetes cluster
To delete the Minikube cluster:

```shell
minikube delete
```