
Helm · Minikube

This guide covers deploying a TBMQ cluster using the official Helm chart. Minikube is used as the reference environment for self-hosted Kubernetes deployments. If you’re deploying TBMQ in a self-managed cluster without cloud-specific load balancer integrations, Minikube provides a practical way to validate the setup end-to-end.

To deploy a TBMQ cluster using Helm in Minikube, you need the following tools installed on your local machine:

minikube
kubectl
Helm

Once the tools are installed, start Minikube:

Terminal window
minikube start

To expose HTTP(S) services in Minikube, install the NGINX Ingress Controller using Helm with a LoadBalancer service type:

Terminal window
helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx
helm repo update
helm install nginx-ingress ingress-nginx/ingress-nginx \
  --set controller.admissionWebhooks.enabled=false \
  --set controller.service.type=LoadBalancer

This deploys the NGINX Ingress Controller in the default namespace and configures it to expose traffic externally via a LoadBalancer service.

Before continuing, verify that the ingress controller pod is running and ready:

Terminal window
kubectl get pods -n default

Expected output:

NAME                                           READY   STATUS    RESTARTS   AGE
nginx-ingress-ingress-nginx-controller-xxxxx   1/1     Running   0          1m

Since Minikube doesn’t natively support external LoadBalancer services, you need to create a tunnel to expose them outside the cluster. This is required for accessing both the NGINX Ingress Controller and the TBMQ MQTT LoadBalancer.

Run the following command in a separate terminal:

Terminal window
minikube tunnel

After starting the tunnel, verify that the NGINX Ingress Controller received an EXTERNAL-IP:

Terminal window
kubectl get svc -n default

Expected output:

NAME                                     TYPE           CLUSTER-IP      EXTERNAL-IP    PORT(S)                      AGE
nginx-ingress-ingress-nginx-controller   LoadBalancer   10.101.102.99   192.168.49.2   80:32023/TCP,443:32144/TCP   2m
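If you script against this check, the EXTERNAL-IP column can be pulled out with a small helper instead of being read by eye. A minimal sketch, assuming the default `kubectl get svc` column order (NAME, TYPE, CLUSTER-IP, EXTERNAL-IP, ...); the demo runs against the sample output above, and the commented line shows live usage:

```shell
# extract_external_ip: print the EXTERNAL-IP (4th column) for a named
# service from `kubectl get svc` tabular output read on stdin.
extract_external_ip() {
  awk -v svc="$1" '$1 == svc { print $4 }'
}

# Demo against the sample output shown above:
sample='NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
nginx-ingress-ingress-nginx-controller LoadBalancer 10.101.102.99 192.168.49.2 80:32023/TCP,443:32144/TCP 2m'

printf '%s\n' "$sample" | extract_external_ip nginx-ingress-ingress-nginx-controller
# In a live cluster:
#   kubectl get svc -n default | extract_external_ip nginx-ingress-ingress-nginx-controller
```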

Before installing the chart, add the TBMQ Helm repository to your local Helm client:

Terminal window
helm repo add tbmq-helm-chart https://helm.thingsboard.io/tbmq
helm repo update

Create a dedicated namespace for your TBMQ cluster deployment:

Terminal window
kubectl create namespace tbmq
Terminal window
kubectl config set-context --current --namespace=tbmq

This sets tbmq as the default namespace for your current context, so you don’t need to pass --namespace to every command.

To customize your TBMQ deployment, download the default values.yaml from the chart:

Terminal window
helm show values tbmq-helm-chart/tbmq-cluster > values.yaml

Starting with chart version 2.0.0, the TBMQ Helm chart no longer bundles PostgreSQL, Kafka, or a Redis-compatible cache. This step assumes you have already deployed all three to your Minikube cluster using whatever fits your environment: an operator like CrunchyData PGO, Strimzi, or Valkey Operator, a managed service, or raw manifests. A worked end-to-end example using CrunchyData PGO, Strimzi Kafka, and Valkey is published in the chart’s Minikube deployment guide.

Point the chart at your instances by setting the connection values in values.yaml. At a minimum, set:

postgresql:
  host: "my-postgres.thingsboard-mqtt-broker.svc"
  port: 5432
  database: "thingsboard_mqtt_broker"
  username: "postgres"
  existingSecret: "my-pg-secret"
  existingSecretPasswordKey: "password"

kafka:
  bootstrapServers: "my-kafka-bootstrap.thingsboard-mqtt-broker.svc:9092"

redis:
  connectionType: "standalone"
  host: "my-valkey.thingsboard-mqtt-broker.svc"
  port: 6379
  usePassword: false

Replace the hostnames, credentials, and redis.connectionType (standalone or cluster) with values that match your actual deployment. For the full set of supported keys, see the Infrastructure Configuration section of the chart documentation.

Configure license and broker images


The chart defaults point at the open-source broker and Integration Executor images. Switch both to the PE variants and provide a license.

In your values.yaml, update the tbmq.image and tbmq-ie.image blocks:

tbmq:
  image:
    repository: thingsboard/tbmq-pe-node
    tag: 2.3.0PE

tbmq-ie:
  image:
    repository: thingsboard/tbmq-pe-integration-executor
    tag: 2.3.0PE

Pre-create a Kubernetes Secret holding the license value in the namespace you plan to install into, and reference it from values.yaml:

Terminal window
kubectl create secret generic my-tbmq-license -n tbmq \
  --from-literal=license-key='YOUR_LICENSE_VALUE'

In values.yaml:

license:
  existingSecret: my-tbmq-license

The default NGINX Ingress Controller load balancer type is already suitable for Minikube and other generic Kubernetes environments — no changes are required:

loadbalancer:
  type: "nginx"

HTTPS termination at the load balancer level is not currently implemented for the NGINX Ingress Controller. This functionality may be added in a future release.

The NGINX Ingress Controller does not support TLS termination for TCP-based protocols like MQTT. To secure MQTT communication, configure mutual TLS (mTLS) directly in TBMQ. Refer to the TBMQ Helm chart documentation for details on configuring mTLS.

Make sure you’re in the same directory as your customized values.yaml file, then install TBMQ:

Terminal window
helm install my-tbmq-cluster tbmq-helm-chart/tbmq-cluster \
  -f values.yaml \
  --set installation.installDbSchema=true

Once the deployment completes, you should see output similar to:

NAME: my-tbmq-cluster
LAST DEPLOYED: Wed Mar 26 17:42:49 2025
NAMESPACE: tbmq
STATUS: deployed
REVISION: 1
TEST SUITE: None
NOTES:
TBMQ Cluster my-tbmq-cluster will be deployed in few minutes.
Info:
Namespace: tbmq

Get the address of the HTTP load balancer:

Terminal window
kubectl get ingress my-tbmq-cluster-http-lb

Expected output:

NAME                      CLASS   HOSTS   ADDRESS         PORTS   AGE
my-tbmq-cluster-http-lb   nginx   *       10.111.137.85   80      47m

Use the ADDRESS field of my-tbmq-cluster-http-lb to open the TBMQ web interface in your browser.

You should see the TBMQ login page. Use the default System Administrator credentials:

Username: sysadmin@thingsboard.org

Password: sysadmin

On first login, you are prompted to change the default password and log in again with the new credentials.

If minikube tunnel is running, you should see the MQTT service appear in the tunnel status output:

Status:
  machine: minikube
  pid: 35528
  route: 10.96.0.0/12 -> 192.168.49.2
  minikube: Running
  services: [nginx-ingress-ingress-nginx-controller, my-tbmq-cluster-mqtt-lb]

The service my-tbmq-cluster-mqtt-lb is the LoadBalancer used for MQTT communication. Retrieve its EXTERNAL-IP with:

Terminal window
kubectl get svc my-tbmq-cluster-mqtt-lb

Expected output:

NAME                      TYPE           CLUSTER-IP     EXTERNAL-IP   PORT(S)                                                       AGE
my-tbmq-cluster-mqtt-lb   LoadBalancer   10.101.27.40   *******       1883:31041/TCP,8084:30151/TCP,8883:30188/TCP,8085:32706/TCP   41m

Use the EXTERNAL-IP field to connect to the cluster via MQTT.
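Each entry in the PORT(S) column has the form servicePort:nodePort/protocol, so MQTT port 1883 above is also exposed on node port 31041. If you need a node port programmatically, it can be parsed out locally; a sketch using the sample PORT(S) value from the output above:

```shell
# PORT(S) entries are servicePort:nodePort/protocol, comma-separated.
# Print the nodePort that backs MQTT service port 1883.
ports='1883:31041/TCP,8084:30151/TCP,8883:30188/TCP,8085:32706/TCP'
printf '%s\n' "$ports" | tr ',' '\n' | awk -F'[:/]' '$1 == 1883 { print $2 }'
# → 31041
```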

To examine service logs for errors, view TBMQ logs with:

Terminal window
kubectl logs -f my-tbmq-cluster-tbmq-node-0

Check the state of all StatefulSets:

Terminal window
kubectl get statefulsets

See the kubectl Cheat Sheet for more details.

Chart version 2.0.0 is a breaking change from the 1.x line: the Bitnami PostgreSQL, Kafka, and Redis subcharts that earlier versions bundled have been removed. Upgrades from chart 1.x require either keeping the existing in-cluster Bitnami stack (annotated with helm.sh/resource-policy: keep) or provisioning a fresh third-party stack (PostgreSQL 17, Apache Kafka 4.0.0, Valkey 8.0+). Same-edition upgrades and CE → PE cross-edition migration on the same TBMQ version are also supported via the chart’s pre-upgrade hook.

For the full procedure, refer to the Upgrading section of the TBMQ Helm Chart documentation on Artifact Hub. It covers the chart 1.x → 2.0.0 migration paths, the upgrade.upgradeDbSchema and upgrade.fromVersion flags, and the CE → PE migration steps.

To uninstall the TBMQ Helm chart:

Terminal window
helm delete my-tbmq-cluster

This removes all TBMQ components associated with the release from the current namespace.

The helm delete command removes only the logical resources of the TBMQ cluster. Per-pod PVCs created from the broker’s volumeClaimTemplate are intentionally not deleted by helm uninstall and carry no app= label. Drop them explicitly by name pattern:

Terminal window
kubectl get pvc -n tbmq -o name | grep tbmq-node-data | xargs -I {} kubectl delete -n tbmq {}
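Because this pipeline deletes whatever the pattern matches, it is worth previewing the match locally before running the delete. A sketch with illustrative PVC names (substitute the real `kubectl get pvc -o name` output from your namespace):

```shell
# Dry-run of the name filter: show which PVCs the pattern would select.
# The PVC names below are illustrative, not taken from a real cluster.
sample='persistentvolumeclaim/tbmq-node-data-my-tbmq-cluster-tbmq-node-0
persistentvolumeclaim/data-some-other-app-0'
printf '%s\n' "$sample" | grep tbmq-node-data
# Only the first (matching) name is printed; the other PVC is left alone.
```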

External PostgreSQL, Kafka, and Redis are owned by whatever you deployed alongside TBMQ and are not affected by helm delete. Drop them through their own tooling if you no longer need them.

To delete the Minikube cluster:

Terminal window
minikube delete