
Azure

This guide covers setting up TBMQ in cluster mode on Azure AKS.

Install kubectl, helm, and az tools.

Then log in to the Azure CLI:

Terminal window
az login

Clone TBMQ K8S repository

Terminal window
git clone -b release-2.3.0 https://github.com/thingsboard/tbmq-pe-k8s.git
cd tbmq-pe-k8s/azure

Define environment variables used throughout this guide. Execute the following command on Linux:

Terminal window
export AKS_RESOURCE_GROUP=TBMQResources
export AKS_LOCATION=eastus
export AKS_GATEWAY=tbmq-gateway
export TB_CLUSTER_NAME=tbmq-cluster
export TB_DATABASE_NAME=tbmq-db
echo "Variables ready to create resource group $AKS_RESOURCE_GROUP in location $AKS_LOCATION
and cluster $TB_CLUSTER_NAME with database $TB_DATABASE_NAME"

Where:

  • TBMQResources — a logical group in which Azure resources are deployed and managed (AKS_RESOURCE_GROUP).
  • eastus — the region for the resource group (AKS_LOCATION). List all regions with az account list-locations.
  • tbmq-gateway — the name of the Azure Application Gateway.
  • tbmq-cluster — the cluster name (TB_CLUSTER_NAME).
  • tbmq-db — the database server name (TB_DATABASE_NAME).
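
The steps below assume these variables stay set in your shell. A small guard like the following (a sketch in portable sh) can fail fast if any of them is missing:

```shell
# Sketch: verify that every named environment variable is set and non-empty.
require_vars() {
  for v in "$@"; do
    eval "val=\${$v:-}"          # portable indirect lookup
    if [ -z "$val" ]; then
      echo "Missing required variable: $v" >&2
      return 1
    fi
  done
  echo "All required variables are set"
}

# Example with the variables from this guide:
export AKS_RESOURCE_GROUP=TBMQResources AKS_LOCATION=eastus \
       AKS_GATEWAY=tbmq-gateway TB_CLUSTER_NAME=tbmq-cluster TB_DATABASE_NAME=tbmq-db
require_vars AKS_RESOURCE_GROUP AKS_LOCATION AKS_GATEWAY TB_CLUSTER_NAME TB_DATABASE_NAME
# prints "All required variables are set"
```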

Create the Azure Resource Group:

Terminal window
az group create --name $AKS_RESOURCE_GROUP --location $AKS_LOCATION

Create the AKS cluster:

Terminal window
az aks create --resource-group $AKS_RESOURCE_GROUP \
--name $TB_CLUSTER_NAME \
--generate-ssh-keys \
--enable-addons ingress-appgw \
--appgw-name $AKS_GATEWAY \
--appgw-subnet-cidr "10.225.0.0/24" \
--node-vm-size Standard_D4s_v6 \
--node-count 3

Key parameters:

  • --node-count — number of nodes (default: 3; adjust with az aks scale later).
  • --enable-addons ingress-appgw — enables the Azure Application Gateway ingress controller.
  • --node-vm-size — VM size for cluster nodes (default: Standard_DS2_v2).
  • --generate-ssh-keys — generates SSH keys stored in ~/.ssh/.

Full parameter list: az aks create reference.

Alternatively, use the Azure portal quickstart guide.

Connect kubectl to the new cluster:

Terminal window
az aks get-credentials --resource-group $AKS_RESOURCE_GROUP --name $TB_CLUSTER_NAME

Verify:

Terminal window
kubectl get nodes

You should see the cluster’s node list.

Set up PostgreSQL on Azure following the official guide with these requirements:

  • PostgreSQL version 17.x
  • Database accessible from the TBMQ cluster
  • Initial database name: thingsboard_mqtt_broker
  • Enable High availability

Alternatively, use the az CLI (replace POSTGRESS_USER and POSTGRESS_PASS):

Terminal window
az postgres flexible-server create --location $AKS_LOCATION --resource-group $AKS_RESOURCE_GROUP \
--name $TB_DATABASE_NAME --admin-user POSTGRESS_USER --admin-password POSTGRESS_PASS \
--public-access 0.0.0.0 --storage-size 32 \
--version 17 -d thingsboard_mqtt_broker

Key parameters:

  • --location — region (from az account list-locations)
  • --admin-user / --admin-password — credentials (password: 8–128 chars with uppercase, lowercase, numbers, special chars)
  • --public-access 0.0.0.0 — allows access from all Azure resources; set to None to restrict
  • --storage-size — 32 GiB minimum, 16 TiB maximum
  • --high-availability — Disabled or Enabled (only set at creation time)

Full parameter reference: az postgres flexible-server create.
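
To sanity-check a candidate password against the quoted rules before calling az, a small helper along these lines can be used (a sketch; Azure performs its own validation server-side):

```shell
# Sketch: check a candidate admin password against the rules quoted above
# (8-128 characters, with uppercase, lowercase, digit, and special chars).
check_pg_password() {
  p=$1
  len=${#p}
  [ "$len" -ge 8 ] && [ "$len" -le 128 ] || { echo invalid; return 1; }
  case $p in *[A-Z]*) ;; *) echo invalid; return 1;; esac
  case $p in *[a-z]*) ;; *) echo invalid; return 1;; esac
  case $p in *[0-9]*) ;; *) echo invalid; return 1;; esac
  case $p in *[!a-zA-Z0-9]*) ;; *) echo invalid; return 1;; esac
  echo valid
}

check_pg_password 'Str0ng!Passw0rd'   # prints "valid"
check_pg_password 'short'             # prints "invalid"
```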

Example response:

{
  "host": "tbmq-db.postgres.database.azure.com",
  "databaseName": "thingsboard_mqtt_broker",
  "username": "postgres",
  "version": "17"
}

Note the host value. Edit tbmq-db-configmap.yml and replace YOUR_AZURE_POSTGRES_ENDPOINT_URL with the host, and set YOUR_AZURE_POSTGRES_USER and YOUR_AZURE_POSTGRES_PASSWORD accordingly:

Terminal window
nano tbmq-db-configmap.yml
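
For reference, the endpoint usually ends up in a JDBC URL of the following shape (a sketch using the example host from the response above; the exact key names are visible inside tbmq-db-configmap.yml):

```shell
# Sketch: assemble the JDBC URL from the host returned by Azure.
POSTGRES_HOST=tbmq-db.postgres.database.azure.com
JDBC_URL="jdbc:postgresql://${POSTGRES_HOST}:5432/thingsboard_mqtt_broker"
echo "$JDBC_URL"
# prints jdbc:postgresql://tbmq-db.postgres.database.azure.com:5432/thingsboard_mqtt_broker
```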

Create a dedicated namespace for the TBMQ cluster:

Terminal window
kubectl apply -f tbmq-namespace.yml
kubectl config set-context $(kubectl config current-context) --namespace=thingsboard-mqtt-broker

TBMQ relies on Valkey to store messages for DEVICE persistent clients and to reduce database load during authentication. Without caching, every new connection triggers a database query, which can overload the database under high connection rates.

Choose one of the following options:

Once your cache is ready, update tbmq-cache-configmap.yml:

For standalone mode:

REDIS_CONNECTION_TYPE: "standalone"
REDIS_HOST: "YOUR_VALKEY_ENDPOINT_URL_WITHOUT_PORT"
#REDIS_PASSWORD: "YOUR_REDIS_PASSWORD"

For cluster mode:

REDIS_CONNECTION_TYPE: "cluster"
REDIS_NODES: "COMMA_SEPARATED_LIST_OF_NODES"
#REDIS_PASSWORD: "YOUR_REDIS_PASSWORD"
# Recommended in Kubernetes for handling dynamic IPs and failover:
#REDIS_LETTUCE_CLUSTER_TOPOLOGY_REFRESH_ENABLED: "true"
#REDIS_JEDIS_CLUSTER_TOPOLOGY_REFRESH_ENABLED: "true"

The official Azure guide assumes a fresh environment. Since you’ve already set up your resources, adapt as follows:

  • Skip az group create and az aks create — already done.
  • Azure Key Vault (AKV) and Container Registry (ACR) — optional; you can skip them for simplicity.
  • Node pools — a dedicated Valkey pool is optional. You can use your existing node pool.
  • Namespace — deploy Valkey into thingsboard-mqtt-broker to keep all components together.

If you skip Azure Key Vault, create the Kubernetes secret manually:

Terminal window
VALKEY_PASSWORD=$(openssl rand -base64 32)
echo "Generated Password: $VALKEY_PASSWORD"
kubectl create secret generic valkey-password \
--namespace thingsboard-mqtt-broker \
--from-literal=valkey-password-file.conf=$'requirepass '"$VALKEY_PASSWORD"$'\nprimaryauth '"$VALKEY_PASSWORD"
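
To see exactly what lands in the secret's valkey-password-file.conf key, the same literal can be rendered to a local file first (a sketch, assuming openssl is available):

```shell
# Sketch: render the secret payload locally and verify it contains the
# two expected directives, each carrying the generated password.
VALKEY_PASSWORD=$(openssl rand -base64 32)
printf 'requirepass %s\nprimaryauth %s\n' "$VALKEY_PASSWORD" "$VALKEY_PASSWORD" \
  > valkey-password-file.conf
grep -F -c "$VALKEY_PASSWORD" valkey-password-file.conf   # prints 2
```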

When creating the ConfigMap and StatefulSets (primaries and replicas), adapt the Azure examples:

  • Namespace: use thingsboard-mqtt-broker
  • Affinity: remove nodeSelector/nodeAffinity for dedicated pools if using a shared pool; use podAntiAffinity to spread pods
  • Image: use valkey/valkey:8.0 (avoid :latest in production)
  • Secret volume: replace the CSI/Key Vault driver config with a standard Kubernetes secret reference

Then:

  1. Create headless services and a Pod Disruption Budget (PDB).
  2. Run the Valkey cluster creation commands to join the nodes.
  3. Verify pod roles and replication status.

Set these values in your TBMQ configuration:

  • REDIS_NODES: headless service DNS, e.g. valkey-cluster:6379
  • REDIS_PASSWORD: the password generated above
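
Put together, the cluster-mode section of tbmq-cache-configmap.yml would look roughly like this (illustrative values only; the service name and password come from your own Valkey deployment):

```yaml
REDIS_CONNECTION_TYPE: "cluster"
REDIS_NODES: "valkey-cluster:6379"   # headless service DNS (example value)
REDIS_PASSWORD: "<password generated above>"
# Recommended in Kubernetes for handling dynamic IPs and failover:
REDIS_LETTUCE_CLUSTER_TOPOLOGY_REFRESH_ENABLED: "true"
```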

Run the installation script (provisions DB tables, indexes, etc.):

Terminal window
./k8s-install-tbmq.sh

After completion, you should see:

INFO o.t.m.b.i.ThingsboardMqttBrokerInstallService - Installation finished successfully!

Before proceeding, ensure you have an active TBMQ license. If you don't have one yet, visit the Pricing page, choose a pay-as-you-go subscription or a perpetual license, and use the calculator to size your deployment — session and throughput limits, production and development instances, and any add-ons — to obtain your license key.

Configure the license key


Create a Kubernetes secret with your license key:

Terminal window
export TBMQ_LICENSE_KEY=YOUR_LICENSE_KEY_HERE
kubectl create -n thingsboard-mqtt-broker secret generic tbmq-license --from-literal=license-key=$TBMQ_LICENSE_KEY

TBMQ requires a running Kafka cluster. Choose one of the following options:

Option 1. Deploy Kafka as a StatefulSet

Runs as a StatefulSet with 3 pods in KRaft dual-role mode (each node acts as both controller and broker). Suitable for a lightweight, self-managed Kafka setup.

See the full deployment guide.

Quick steps:

Terminal window
kubectl apply -f kafka/tbmq-kafka.yml

In tbmq.yml and tbmq-ie.yml, uncomment the section marked:

# Uncomment the following lines to connect to Apache Kafka

Option 2. Deploy a Kafka cluster with the Strimzi Operator


Uses the Strimzi Cluster Operator for easier upgrades, scaling, and operational management.

See the full deployment guide.

Install the Strimzi operator:

Terminal window
helm install tbmq-kafka -f kafka/operator/values-strimzi-kafka-operator.yaml oci://quay.io/strimzi-helm/strimzi-kafka-operator --version 0.47.0

Deploy the Kafka cluster:

Terminal window
kubectl apply -f kafka/operator/kafka-cluster.yaml

In tbmq.yml and tbmq-ie.yml, uncomment the section marked:

# Uncomment the following lines to connect to Strimzi

Deploy TBMQ:

Terminal window
./k8s-deploy-tbmq.sh

After a few minutes, check pod status:

Terminal window
kubectl get pods

You should see tbmq-0 and tbmq-1 pods, each in the READY state.

You have two options:

  • HTTP — no HTTPS support. Suitable for development only.
  • HTTPS — SSL termination. Recommended for production. Automatically redirects HTTP to HTTPS.
Deploy the HTTP load balancer:

Terminal window
kubectl apply -f receipts/http-load-balancer.yml

Check provisioning status:

Terminal window
kubectl get ingress

Once ready:

NAME CLASS HOSTS ADDRESS PORTS AGE
tbmq-http-loadbalancer <none> * 34.111.24.134 80 7m25s

Add a certificate to the Azure Application Gateway:

Terminal window
az network application-gateway ssl-cert create \
--resource-group $(az aks show --name $TB_CLUSTER_NAME --resource-group $AKS_RESOURCE_GROUP --query nodeResourceGroup | tr -d '"') \
--gateway-name $AKS_GATEWAY \
--name TBMQHTTPSCert \
--cert-file YOUR_CERT \
--cert-password YOUR_CERT_PASS

Deploy the HTTPS load balancer:

Terminal window
kubectl apply -f receipts/https-load-balancer.yml

Create a TCP load balancer that forwards traffic on ports 1883 and 8883:

Terminal window
kubectl apply -f receipts/mqtt-load-balancer.yml

Follow this guide to create a .pem certificate file. Save it as server.pem in the working directory.

Create a ConfigMap from your PEM files:

Terminal window
kubectl create configmap tbmq-mqtts-config \
--from-file=server.pem=YOUR_PEM_FILENAME \
--from-file=mqttserver_key.pem=YOUR_PEM_KEY_FILENAME \
-o yaml --dry-run=client | kubectl apply -f -

Where:

  • YOUR_PEM_FILENAME — path to your server certificate file
  • YOUR_PEM_KEY_FILENAME — path to your server certificate private key file
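
Before creating the ConfigMap, it is worth sanity-checking the certificate and key with openssl. The demo below generates a throwaway self-signed pair so the commands are runnable as-is; substitute your real server.pem and mqttserver_key.pem paths:

```shell
# Local demo: create a throwaway self-signed pair, then run the same
# sanity checks you would run on your real server.pem and
# mqttserver_key.pem before building the ConfigMap.
openssl req -x509 -newkey rsa:2048 -nodes -days 1 -subj "/CN=demo" \
  -keyout demo_key.pem -out demo_cert.pem 2>/dev/null
openssl x509 -in demo_cert.pem -noout -subject -enddate
openssl rsa -in demo_key.pem -check -noout    # prints "RSA key ok"
```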

Uncomment all sections marked with “Uncomment the following lines to enable two-way MQTTS” in tbmq.yml:

Terminal window
kubectl apply -f tbmq.yml

Open the TBMQ web interface using the DNS name of the load balancer:

Terminal window
kubectl get ingress
NAME CLASS HOSTS ADDRESS PORTS AGE
tbmq-http-loadbalancer <none> * 34.111.24.134 80 3d1h

Use the ADDRESS of tbmq-http-loadbalancer to access the UI.

You should see the TBMQ login page. Use the default System Administrator credentials:

Username: sysadmin@thingsboard.org

Password: sysadmin
On first login, you are prompted to change the default password and re-login with the new credentials.

The service tbmq-mqtt-loadbalancer is the LoadBalancer used for MQTT communication. Retrieve its EXTERNAL-IP with:

Terminal window
kubectl get services
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
tbmq-mqtt-loadbalancer LoadBalancer 10.100.119.170 ******* 1883:30308/TCP,8883:31609/TCP 6m58s

Use the EXTERNAL-IP field to connect to the cluster via MQTT.

View TBMQ pod logs:

Terminal window
kubectl logs -f tbmq-0

Check the state of all StatefulSets:

Terminal window
kubectl get statefulsets

See the kubectl Cheat Sheet for more details.

  1. Check the version-specific notes below for any preparation your target version requires.
  2. Back up your database (optional but recommended).
  3. Run the upgrade commands.

For full version history and supported upgrade paths, see the upgrade instructions page. If the documentation does not cover your specific upgrade path, contact us for guidance.

If there are no version-specific notes for your upgrade path, skip directly to Run upgrade.

Backing up your PostgreSQL database before upgrading is highly recommended but optional. For guidance, follow the Azure PostgreSQL backup and restore instructions.

This is a standard upgrade from v2.2.0. No third-party component changes are required — the official images are already in use since v2.2.0.

Proceed with the upgrade.

Upgrade from TBMQ to TBMQ PE


CE-to-PE migration is supported for the same version only. If you are on an earlier CE version, upgrade TBMQ CE to the latest version first. For all supported paths, see the upgrade instructions.

Before upgrading, merge your current configuration with the latest TBMQ PE K8S scripts. Don't forget to configure the license key.

Run the following commands to stop TBMQ, migrate the database, and redeploy:

Terminal window
./k8s-delete-tbmq.sh
./k8s-upgrade-tbmq.sh --fromVersion=ce
./k8s-deploy-tbmq.sh

Pull the latest changes from the release branch:

Terminal window
git pull origin release-2.3.0

Note: Make sure any custom changes are not lost during the merge.

After pulling, run the upgrade script:

Terminal window
./k8s-upgrade-tbmq.sh

Delete TBMQ nodes:

Terminal window
./k8s-delete-tbmq.sh

Delete all TBMQ nodes, ConfigMaps, and load balancers:

Terminal window
./k8s-delete-all.sh

Delete the AKS cluster:

Terminal window
az aks delete --resource-group $AKS_RESOURCE_GROUP --name $TB_CLUSTER_NAME