
Helm · Azure AKS

This guide covers deploying a TBMQ cluster on Azure Kubernetes Service (AKS) using the official Helm chart.

To deploy a TBMQ cluster using Helm on AKS, you need the Azure CLI (az), kubectl, and Helm installed on your local machine.

After installation, log in to the Azure CLI:

az login

Define environment variables used in various commands throughout this guide. Execute the following command (Linux):

export AKS_RESOURCE_GROUP=TBMQResources
export AKS_LOCATION=eastus
export AKS_GATEWAY=tbmq-gateway
export TB_CLUSTER_NAME=tbmq-cluster
echo "Your variables are ready to create resource group $AKS_RESOURCE_GROUP in location $AKS_LOCATION
and cluster $TB_CLUSTER_NAME in it"

Where:

  • TBMQResources — a logical group in which Azure resources are deployed and managed, referenced as $AKS_RESOURCE_GROUP
  • eastus — the Azure region for the resource group, referenced as $AKS_LOCATION (see all regions with az account list-locations)
  • tbmq-gateway — the name of the Azure Application Gateway, referenced as $AKS_GATEWAY
  • tbmq-cluster — the cluster name, referenced as $TB_CLUSTER_NAME

Create the Azure Resource Group:

az group create --name $AKS_RESOURCE_GROUP --location $AKS_LOCATION

For more details on az group, see the Azure CLI documentation.

Create the AKS cluster:

az aks create --resource-group $AKS_RESOURCE_GROUP \
--name $TB_CLUSTER_NAME \
--generate-ssh-keys \
--enable-addons ingress-appgw \
--appgw-name $AKS_GATEWAY \
--appgw-subnet-cidr "10.225.0.0/24" \
--node-vm-size Standard_D4s_v6 \
--node-count 3

Key parameters:

  • --node-count — number of nodes in the node pool; can be changed later with az aks scale (default: 3)
  • --enable-addons — enables the Application Gateway add-on, used as the HTTP load balancer for TBMQ
  • --node-vm-size — size of Virtual Machines for Kubernetes nodes (default: Standard_DS2_v2)
  • --generate-ssh-keys — generates SSH key files if missing, stored in ~/.ssh

For the full list of az aks create options, see the Azure CLI reference. Alternatively, use the Azure portal quickstart for a guided setup.

Connect kubectl to the newly created cluster:

az aks get-credentials --resource-group $AKS_RESOURCE_GROUP --name $TB_CLUSTER_NAME

Verify the connection:

kubectl get nodes

You should see the cluster’s node list.

Before installing the chart, add the TBMQ Helm repository to your local Helm client:

helm repo add tbmq-helm-chart https://helm.thingsboard.io/tbmq
helm repo update

Create a dedicated namespace for your TBMQ cluster deployment:

kubectl create namespace tbmq
kubectl config set-context --current --namespace=tbmq

This sets tbmq as the default namespace for your current context, so you don’t need to pass --namespace to every command.

To customize your TBMQ deployment, download the default values.yaml from the chart:

helm show values tbmq-helm-chart/tbmq-cluster > values.yaml

Starting with chart version 2.0.0, the TBMQ Helm chart no longer bundles PostgreSQL, Kafka, or a Redis-compatible cache. This step assumes you have already deployed all three so they’re reachable from the tbmq namespace. On Azure, the common choices are managed services: Azure Database for PostgreSQL, Azure Event Hubs with the Kafka protocol enabled, and Azure Cache for Redis. You can also self-host any of them inside the AKS cluster with operators like CrunchyData PGO, Strimzi, or Valkey. For an Azure Database for PostgreSQL provisioning walkthrough, see the Azure cluster setup guide.
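As an illustration, a managed PostgreSQL instance can be provisioned with the Azure CLI. The server name, admin credentials, and SKU below are placeholders for this sketch, not values required by the chart:

```shell
# Hypothetical example: provision Azure Database for PostgreSQL (Flexible Server)
az postgres flexible-server create \
  --resource-group $AKS_RESOURCE_GROUP \
  --name my-tbmq-db \
  --location $AKS_LOCATION \
  --admin-user postgres \
  --admin-password '<strong-password>' \
  --version 17 \
  --tier GeneralPurpose \
  --sku-name Standard_D2ds_v4

# Create the database TBMQ expects
az postgres flexible-server db create \
  --resource-group $AKS_RESOURCE_GROUP \
  --server-name my-tbmq-db \
  --database-name thingsboard_mqtt_broker
```

Make sure the server's firewall or private networking allows connections from the AKS cluster.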

Point the chart at your instances by setting the connection values in values.yaml. At a minimum, set:

postgresql:
  host: "my-tbmq-db.postgres.database.azure.com"
  port: 5432
  database: "thingsboard_mqtt_broker"
  username: "postgres"
  existingSecret: "my-pg-secret"
  existingSecretPasswordKey: "password"

kafka:
  bootstrapServers: "my-eventhubs.servicebus.windows.net:9093"

redis:
  connectionType: "standalone"
  host: "my-tbmq-cache.redis.cache.windows.net"
  port: 6379
  existingSecret: "my-redis-secret"
  existingSecretPasswordKey: "redis-password"

Replace the hostnames, credentials, and redis.connectionType (standalone or cluster) with values that match your actual deployment. For the full set of supported keys, see the Infrastructure Configuration section of the chart documentation.
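The Secrets referenced by existingSecret must exist in the tbmq namespace before the chart is installed. A minimal sketch, assuming the secret names and keys from the example values.yaml above (my-pg-secret/password and my-redis-secret/redis-password):

```shell
# Database password for the postgresql.existingSecret reference
kubectl create secret generic my-pg-secret -n tbmq \
  --from-literal=password='<your-postgres-password>'

# Cache access key for the redis.existingSecret reference
kubectl create secret generic my-redis-secret -n tbmq \
  --from-literal=redis-password='<your-redis-access-key>'
```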

Configure license and broker images

Section titled Configure license and broker images

The chart defaults point at the open-source broker and Integration Executor images. Switch both to the PE variants and provide a license.

In your values.yaml, update the tbmq.image and tbmq-ie.image blocks:

tbmq:
  image:
    repository: thingsboard/tbmq-pe-node
    tag: 2.3.0PE

tbmq-ie:
  image:
    repository: thingsboard/tbmq-pe-integration-executor
    tag: 2.3.0PE

Pre-create a Kubernetes Secret holding the license value in the namespace you plan to install into, and reference it from values.yaml:

kubectl create secret generic my-tbmq-license -n tbmq \
--from-literal=license-key='YOUR_LICENSE_VALUE'

license:
  existingSecret: my-tbmq-license

By default, the Helm chart deploys a standard NGINX Ingress Controller for HTTP and MQTT traffic. Since you are deploying on Azure AKS, change the load balancer type:

loadbalancer:
  type: "azure"

This automatically configures:

  • Plain HTTP traffic exposed via Azure Application Gateway
  • Plain MQTT traffic exposed via Azure Load Balancer

To enable TLS for HTTP traffic, set loadbalancer.http.ssl.enabled to true and update loadbalancer.http.ssl.certificateRef with the name of the SSL certificate already configured in your Azure Application Gateway:

loadbalancer:
  type: "azure"
  http:
    enabled: true
    ssl:
      enabled: true
      certificateRef: "<your-appgw-ssl-certificate-name>"
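If no certificate is configured on the gateway yet, one way to upload a PFX certificate is via the Azure CLI. Note that the gateway created by the AGIC add-on typically lives in the AKS node resource group rather than $AKS_RESOURCE_GROUP; the certificate name and file path below are placeholders:

```shell
# Look up the node resource group where the add-on placed the gateway
NODE_RG=$(az aks show --resource-group $AKS_RESOURCE_GROUP \
  --name $TB_CLUSTER_NAME --query nodeResourceGroup -o tsv)

# Upload a PFX certificate to the Application Gateway
az network application-gateway ssl-cert create \
  --resource-group $NODE_RG \
  --gateway-name $AKS_GATEWAY \
  --name my-appgw-ssl-cert \
  --cert-file ./tbmq.pfx \
  --cert-password '<pfx-password>'
```

The --name given here is what goes into loadbalancer.http.ssl.certificateRef.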

Azure Load Balancer does not support TLS termination for MQTT traffic. To secure MQTT communication, configure mutual TLS (mTLS) directly in TBMQ. Refer to the TBMQ Helm chart documentation for details on configuring mTLS.

Make sure you’re in the same directory as your customized values.yaml file, then install TBMQ:

helm install my-tbmq-cluster tbmq-helm-chart/tbmq-cluster \
-f values.yaml \
--set installation.installDbSchema=true

Once the deployment completes, you should see output similar to:

NAME: my-tbmq-cluster
LAST DEPLOYED: Wed Mar 26 17:42:49 2025
NAMESPACE: tbmq
STATUS: deployed
REVISION: 1
TEST SUITE: None
NOTES:
TBMQ Cluster my-tbmq-cluster will be deployed in few minutes.
Info:
Namespace: tbmq
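To follow the rollout while pods start, you can watch the namespace; the StatefulSet name below assumes the my-tbmq-cluster release name used throughout this guide:

```shell
# Watch pods come up; Ctrl+C to stop
kubectl get pods -w

# Or block until the broker StatefulSet is fully rolled out
kubectl rollout status statefulset/my-tbmq-cluster-tbmq-node
```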

Get the DNS name of the load balancers:

kubectl get ingress

Expected output:

NAME                      CLASS    HOSTS   ADDRESS        PORTS   AGE
my-tbmq-cluster-http-lb   <none>   *       20.123.45.67   80      3d1h

Use the ADDRESS field of my-tbmq-cluster-http-lb to open the TBMQ web interface in your browser.
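You can also verify HTTP reachability from the command line before opening a browser; replace the IP with the ADDRESS value from your own output:

```shell
# A 200 or a redirect status code indicates the web UI is being served
curl -s -o /dev/null -w '%{http_code}\n' http://20.123.45.67/
```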

You should see the TBMQ login page. Use the default System Administrator credentials:

Username: sysadmin@thingsboard.org

Password: sysadmin

On first login, you are prompted to change the default password and re-login with the new credentials.

The service my-tbmq-cluster-mqtt-lb is the LoadBalancer used for MQTT communication. Retrieve its EXTERNAL-IP with:

kubectl get services

Expected output:

NAME                      TYPE           CLUSTER-IP       EXTERNAL-IP   PORT(S)                                                        AGE
my-tbmq-cluster-mqtt-lb   LoadBalancer   10.100.119.170   *******       1883:30308/TCP,8084:30309/TCP,8883:31609/TCP,8085:31610/TCP   6m58s

Use the EXTERNAL-IP field to connect to the cluster via MQTT.
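A quick connectivity check is possible with the Mosquitto CLI client, assuming your broker's authentication settings permit the connection; the topic is arbitrary and the IP is a placeholder for your EXTERNAL-IP:

```shell
# Publish a test message over plain MQTT (adjust -u/-P flags to your auth config)
mosquitto_pub -h <EXTERNAL-IP> -p 1883 -t 'demo/topic' -m 'hello tbmq'
```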

To examine service logs for errors, view TBMQ logs with:

kubectl logs -f my-tbmq-cluster-tbmq-node-0

Check the state of all StatefulSets:

kubectl get statefulsets

See the kubectl Cheat Sheet for more details.

Chart version 2.0.0 is a breaking change from the 1.x line: the Bitnami PostgreSQL, Kafka, and Redis subcharts that earlier versions bundled have been removed. Upgrades from chart 1.x require either keeping the existing in-cluster Bitnami stack (annotated with helm.sh/resource-policy: keep) or provisioning a fresh third-party stack (PostgreSQL 17, Apache Kafka 4.0.0, Valkey 8.0+). Same-edition upgrades and CE → PE cross-edition migration on the same TBMQ version are also supported via the chart’s pre-upgrade hook.

For the full procedure, refer to the Upgrading section of the TBMQ Helm Chart documentation on Artifact Hub. It covers the chart 1.x → 2.0.0 migration paths, the upgrade.upgradeDbSchema and upgrade.fromVersion flags, and the CE → PE migration steps.

To uninstall the TBMQ Helm chart:

helm delete my-tbmq-cluster

This removes all TBMQ components associated with the release from the current namespace.

The helm delete command removes only the logical resources of the TBMQ cluster. Per-pod PVCs created from the broker’s volumeClaimTemplate are intentionally not deleted by helm uninstall and carry no app= label. Drop them explicitly by name pattern:

kubectl get pvc -n tbmq -o name | grep tbmq-node-data | xargs -I {} kubectl delete -n tbmq {}

Azure Database for PostgreSQL, Event Hubs, and Managed Redis resources are owned by Azure and are not affected by helm delete. Drop them through their own portal or IaC tooling if you no longer need them.
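For example, assuming the placeholder resource names used earlier in this guide, the managed services could be removed with:

```shell
# Delete the managed PostgreSQL server (and its databases)
az postgres flexible-server delete --resource-group $AKS_RESOURCE_GROUP --name my-tbmq-db --yes

# Delete the Event Hubs namespace used for Kafka traffic
az eventhubs namespace delete --resource-group $AKS_RESOURCE_GROUP --name my-eventhubs

# Delete the Redis cache
az redis delete --resource-group $AKS_RESOURCE_GROUP --name my-tbmq-cache --yes
```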

To delete the AKS cluster:

az aks delete --resource-group $AKS_RESOURCE_GROUP --name $TB_CLUSTER_NAME