Azure
This guide covers setting up TBMQ in cluster mode on Azure AKS.
Prerequisites
Install the kubectl, helm, and az command-line tools.
Then log in to the Azure CLI:
```bash
az login
```
Clone TBMQ repository
Clone the TBMQ repository and switch to the Azure deployment scripts directory:
```bash
git clone -b release-2.3.0 https://github.com/thingsboard/tbmq.git
cd tbmq/k8s/azure
```
Define environment variables
Define environment variables used throughout this guide. Execute the following command on Linux:
```bash
export AKS_RESOURCE_GROUP=TBMQResources
export AKS_LOCATION=eastus
export AKS_GATEWAY=tbmq-gateway
export TB_CLUSTER_NAME=tbmq-cluster
export TB_DATABASE_NAME=tbmq-db
echo "Variables ready to create resource group $AKS_RESOURCE_GROUP in location $AKS_LOCATION
and cluster $TB_CLUSTER_NAME with database $TB_DATABASE_NAME"
```
Where:
- `TBMQResources` — a logical group in which Azure resources are deployed and managed (AKS_RESOURCE_GROUP).
- `eastus` — the region for the resource group (AKS_LOCATION). List all regions with `az account list-locations`.
- `tbmq-gateway` — the name of the Azure Application Gateway (AKS_GATEWAY).
- `tbmq-cluster` — the cluster name (TB_CLUSTER_NAME).
- `tbmq-db` — the database server name (TB_DATABASE_NAME).
Configure and create AKS cluster
Create the Azure Resource Group:
```bash
az group create --name $AKS_RESOURCE_GROUP --location $AKS_LOCATION
```
Create the AKS cluster:
```bash
az aks create --resource-group $AKS_RESOURCE_GROUP \
  --name $TB_CLUSTER_NAME \
  --generate-ssh-keys \
  --enable-addons ingress-appgw \
  --appgw-name $AKS_GATEWAY \
  --appgw-subnet-cidr "10.225.0.0/24" \
  --node-vm-size Standard_D4s_v6 \
  --node-count 3
```
Key parameters:
- `--node-count` — number of nodes (default: 3; adjust with `az aks scale` later).
- `--enable-addons ingress-appgw` — enables the Azure Application Gateway ingress controller.
- `--node-vm-size` — VM size for cluster nodes (default: `Standard_DS2_v2`).
- `--generate-ssh-keys` — generates SSH keys stored in `~/.ssh/`.
Full parameter list: az aks create reference.
Alternatively, use the Azure portal quickstart guide.
Update kubectl context
Connect kubectl to the new cluster:
```bash
az aks get-credentials --resource-group $AKS_RESOURCE_GROUP --name $TB_CLUSTER_NAME
```
Verify:
```bash
kubectl get nodes
```
You should see the cluster’s node list.
Provision PostgreSQL DB
Set up PostgreSQL on Azure following the official guide with these requirements:
- PostgreSQL version 17.x
- Database accessible from the TBMQ cluster
- Initial database name: `thingsboard_mqtt_broker`
- Enable high availability
Alternatively, use the az CLI (replace POSTGRES_USER and POSTGRES_PASS with your credentials):
```bash
az postgres flexible-server create --location $AKS_LOCATION --resource-group $AKS_RESOURCE_GROUP \
  --name $TB_DATABASE_NAME --admin-user POSTGRES_USER --admin-password POSTGRES_PASS \
  --public-access 0.0.0.0 --storage-size 32 \
  --version 17 -d thingsboard_mqtt_broker
```
Key parameters:
- `--location` — region (from `az account list-locations`)
- `--admin-user` / `--admin-password` — credentials (password: 8–128 characters with uppercase, lowercase, numbers, and special characters)
- `--public-access 0.0.0.0` — allows access from all Azure resources; set to `None` to restrict
- `--storage-size` — 32 GiB minimum, 16 TiB maximum
- `--high-availability` — `Disabled` or `Enabled` (can only be set at creation time)
Full parameter reference: az postgres flexible-server create.
Example response:
```json
{
  "host": "tbmq-db.postgres.database.azure.com",
  "databaseName": "thingsboard_mqtt_broker",
  "username": "postgres",
  "version": "17"
}
```
Note the host value. Edit tbmq-db-configmap.yml and replace YOUR_AZURE_POSTGRES_ENDPOINT_URL with the host,
and set YOUR_AZURE_POSTGRES_USER and YOUR_AZURE_POSTGRES_PASSWORD accordingly:
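For orientation, the edited data section might look roughly like the sketch below. This is an illustration only: the ConfigMap name and key names here are assumptions (standard Spring datasource variables), so keep the actual structure of the file in the repository and substitute only your values.

```yaml
# Illustrative sketch; keep the real structure of tbmq-db-configmap.yml.
# ConfigMap name and key names are assumptions, not taken from the repository.
apiVersion: v1
kind: ConfigMap
metadata:
  name: tbmq-db-config
  namespace: thingsboard-mqtt-broker
data:
  SPRING_DATASOURCE_URL: jdbc:postgresql://tbmq-db.postgres.database.azure.com:5432/thingsboard_mqtt_broker
  SPRING_DATASOURCE_USERNAME: postgres
  SPRING_DATASOURCE_PASSWORD: your-password
```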
```bash
nano tbmq-db-configmap.yml
```
Create namespace
Create a dedicated namespace for the TBMQ cluster:
```bash
kubectl apply -f tbmq-namespace.yml
kubectl config set-context $(kubectl config current-context) --namespace=thingsboard-mqtt-broker
```
Azure Cache for Valkey
TBMQ relies on Valkey to store messages for DEVICE persistent clients and to reduce database load during authentication. Without caching, every new connection triggers a database query, which can overload the database under high connection rates.
Choose one of the following options:
Once your cache is ready, update tbmq-cache-configmap.yml:
For standalone mode:
```yaml
REDIS_CONNECTION_TYPE: "standalone"
REDIS_HOST: "YOUR_VALKEY_ENDPOINT_URL_WITHOUT_PORT"
#REDIS_PASSWORD: "YOUR_REDIS_PASSWORD"
```
For cluster mode:
```yaml
REDIS_CONNECTION_TYPE: "cluster"
REDIS_NODES: "COMMA_SEPARATED_LIST_OF_NODES"
#REDIS_PASSWORD: "YOUR_REDIS_PASSWORD"
# Recommended in Kubernetes for handling dynamic IPs and failover:
#REDIS_LETTUCE_CLUSTER_TOPOLOGY_REFRESH_ENABLED: "true"
#REDIS_JEDIS_CLUSTER_TOPOLOGY_REFRESH_ENABLED: "true"
```
Deploying a Valkey cluster on AKS
The official Azure guide assumes a fresh environment. Since you’ve already set up your resources, adapt as follows:
- Skip `az group create` and `az aks create` — already done.
- Azure Key Vault (AKV) and Container Registry (ACR) — optional; you can skip them for simplicity.
- Node pools — a dedicated Valkey pool is optional. You can use your existing node pool.
- Namespace — deploy Valkey into `thingsboard-mqtt-broker` to keep all components together.
Create the secret
If you skip Azure Key Vault, create the Kubernetes secret manually:
```bash
VALKEY_PASSWORD=$(openssl rand -base64 32)
echo "Generated Password: $VALKEY_PASSWORD"

kubectl create secret generic valkey-password \
  --namespace thingsboard-mqtt-broker \
  --from-literal=valkey-password-file.conf=$'requirepass '"$VALKEY_PASSWORD"$'\nprimaryauth '"$VALKEY_PASSWORD"
```
Deploy StatefulSets
When creating the ConfigMap and StatefulSets (primaries and replicas), adapt the Azure examples:
- Namespace: use `thingsboard-mqtt-broker`
- Affinity: if using a shared pool, remove the `nodeSelector`/`nodeAffinity` rules meant for dedicated pools; use `podAntiAffinity` to spread pods
- Image: use `valkey/valkey:8.0` (avoid `:latest` in production)
- Secret volume: replace the CSI/Key Vault driver config with a standard Kubernetes secret reference
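For the last point, a plain Kubernetes secret reference in the StatefulSet pod template could look like the fragment below. Only the secret name and key come from the step above; the container name and mount path are illustrative assumptions.

```yaml
# Illustrative pod-template fragment; align container name and mount path
# with your actual StatefulSet manifest.
containers:
  - name: valkey
    volumeMounts:
      - name: valkey-password
        mountPath: /etc/valkey-secrets   # assumed path
        readOnly: true
volumes:
  - name: valkey-password
    secret:
      secretName: valkey-password        # the secret created earlier
      items:
        - key: valkey-password-file.conf
          path: valkey-password-file.conf
```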
Finalize
- Create headless services and a Pod Disruption Budget (PDB).
- Run Valkey cluster creation commands to join nodes.
- Verify pod roles and replication status.
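A minimal sketch of the headless service and PDB, assuming the pods carry an `app: valkey` label and the service name `valkey-cluster` used below in the TBMQ configuration (both names are assumptions; adapt them to your manifests):

```yaml
# Illustrative sketch; adapt names and label selectors to your StatefulSets.
apiVersion: v1
kind: Service
metadata:
  name: valkey-cluster
  namespace: thingsboard-mqtt-broker
spec:
  clusterIP: None          # headless: gives each pod a stable DNS name
  selector:
    app: valkey
  ports:
    - name: valkey
      port: 6379
---
apiVersion: policy/v1
kind: PodDisruptionBudget
metadata:
  name: valkey-pdb
  namespace: thingsboard-mqtt-broker
spec:
  maxUnavailable: 1        # at most one Valkey pod down during voluntary disruptions
  selector:
    matchLabels:
      app: valkey
```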
Set these values in your TBMQ configuration:
- `REDIS_NODES`: headless service DNS, e.g. `valkey-cluster:6379`
- `REDIS_PASSWORD`: the password generated above
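Putting it together, the relevant part of tbmq-cache-configmap.yml for this in-cluster Valkey setup could look like this sketch (the node address is the example service name from above, the password placeholder stands for the generated value, and the topology-refresh flag is the Lettuce option recommended earlier):

```yaml
# Illustrative excerpt of tbmq-cache-configmap.yml for the in-cluster Valkey setup.
REDIS_CONNECTION_TYPE: "cluster"
REDIS_NODES: "valkey-cluster:6379"
REDIS_PASSWORD: "YOUR_GENERATED_VALKEY_PASSWORD"
REDIS_LETTUCE_CLUSTER_TOPOLOGY_REFRESH_ENABLED: "true"
```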
Installation
Run the installation script (it provisions DB tables, indexes, etc.):
```bash
./k8s-install-tbmq.sh
```
After completion, you should see:
```text
INFO o.t.m.b.i.ThingsboardMqttBrokerInstallService - Installation finished successfully!
```
Provision Kafka
TBMQ requires a running Kafka cluster. Choose one of the following options:
Option 1. Deploy an Apache Kafka cluster
Runs as a StatefulSet with 3 pods in KRaft dual-role mode (each node acts as both controller and broker). Suitable for a lightweight, self-managed Kafka setup.
See the full deployment guide.
Quick steps:
```bash
kubectl apply -f kafka/tbmq-kafka.yml
```
In tbmq.yml and tbmq-ie.yml, uncomment the section marked:
```text
# Uncomment the following lines to connect to Apache Kafka
```
Option 2. Deploy a Kafka cluster with the Strimzi Operator
Uses the Strimzi Cluster Operator for easier upgrades, scaling, and operational management.
See the full deployment guide.
Install the Strimzi operator:
```bash
helm install tbmq-kafka -f kafka/operator/values-strimzi-kafka-operator.yaml oci://quay.io/strimzi-helm/strimzi-kafka-operator --version 0.47.0
```
Deploy the Kafka cluster:
```bash
kubectl apply -f kafka/operator/kafka-cluster.yaml
```
In tbmq.yml and tbmq-ie.yml, uncomment the section marked:
```text
# Uncomment the following lines to connect to Strimzi
```
Start TBMQ
Deploy TBMQ:
```bash
./k8s-deploy-tbmq.sh
```
After a few minutes, check pod status:
```bash
kubectl get pods
```
You should see tbmq-0 and tbmq-1 pods, each in the READY state.
Configure load balancers
Section titled “Configure load balancers”Configure HTTP(S) load balancer
You have two options:
- HTTP — no HTTPS support. Suitable for development only.
- HTTPS — SSL termination. Recommended for production. Automatically redirects HTTP to HTTPS.
HTTP load balancer
```bash
kubectl apply -f receipts/http-load-balancer.yml
```
Check provisioning status:
```bash
kubectl get ingress
```
Once ready:
```text
NAME                     CLASS    HOSTS   ADDRESS         PORTS   AGE
tbmq-http-loadbalancer   <none>   *       34.111.24.134   80      7m25s
```
HTTPS load balancer
Add a certificate to the Azure Application Gateway:
```bash
az network application-gateway ssl-cert create \
  --resource-group $(az aks show --name $TB_CLUSTER_NAME --resource-group $AKS_RESOURCE_GROUP --query nodeResourceGroup | tr -d '"') \
  --gateway-name $AKS_GATEWAY \
  --name TBMQHTTPSCert \
  --cert-file YOUR_CERT \
  --cert-password YOUR_CERT_PASS
```
Deploy:
```bash
kubectl apply -f receipts/https-load-balancer.yml
```
Configure MQTT load balancer
Create a TCP load balancer that forwards traffic on ports 1883 and 8883:
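The receipt applied below defines a Service of type LoadBalancer along these lines. The sketch is illustrative: the name, namespace, and selector labels are assumptions and may differ from the repository manifest.

```yaml
# Illustrative sketch of a TCP LoadBalancer for MQTT traffic;
# names and selector labels are assumptions, not the repository manifest.
apiVersion: v1
kind: Service
metadata:
  name: tbmq-mqtt-loadbalancer
  namespace: thingsboard-mqtt-broker
spec:
  type: LoadBalancer
  selector:
    app: tbmq
  ports:
    - name: mqtt
      port: 1883
      targetPort: 1883
    - name: mqtts
      port: 8883
      targetPort: 8883
```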
```bash
kubectl apply -f receipts/mqtt-load-balancer.yml
```
MQTT over SSL
Follow this guide to create a .pem certificate file.
Save it as server.pem in the working directory.
Create a ConfigMap from your PEM files:
```bash
kubectl create configmap tbmq-mqtts-config \
  --from-file=server.pem=YOUR_PEM_FILENAME \
  --from-file=mqttserver_key.pem=YOUR_PEM_KEY_FILENAME \
  -o yaml --dry-run=client | kubectl apply -f -
```
Where:
- `YOUR_PEM_FILENAME` — path to your server certificate file
- `YOUR_PEM_KEY_FILENAME` — path to your server certificate private key file
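For context, the commented-out MQTTS sections in tbmq.yml typically wire this ConfigMap into the broker pod as a volume. A hedged illustration follows; the mount path is an assumption, so follow the actual commented lines in the file rather than this sketch:

```yaml
# Illustrative fragment only; the real commented-out section in tbmq.yml may differ.
volumeMounts:
  - name: tbmq-mqtts-config
    mountPath: /ssl                  # assumed mount path
volumes:
  - name: tbmq-mqtts-config
    configMap:
      name: tbmq-mqtts-config        # the ConfigMap created above
```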
Uncomment all sections marked with “Uncomment the following lines to enable two-way MQTTS” in tbmq.yml:
```bash
kubectl apply -f tbmq.yml
```
Validate the setup
Open the TBMQ web interface using the DNS name of the load balancer:
```bash
kubectl get ingress
```
```text
NAME                     CLASS    HOSTS   ADDRESS         PORTS   AGE
tbmq-http-loadbalancer   <none>   *       34.111.24.134   80      3d1h
```
Use the ADDRESS of tbmq-http-loadbalancer to access the UI.
You should see the TBMQ login page. Use the default System Administrator credentials:
- Username: `sysadmin@thingsboard.org`
- Password: `sysadmin`

On first login, you are prompted to change the default password and re-login with the new credentials.
Validate MQTT access
The service tbmq-mqtt-loadbalancer is the LoadBalancer used for MQTT communication. Retrieve its EXTERNAL-IP with:
```bash
kubectl get services
```
```text
NAME                     TYPE           CLUSTER-IP       EXTERNAL-IP   PORT(S)                         AGE
tbmq-mqtt-loadbalancer   LoadBalancer   10.100.119.170   *******       1883:30308/TCP,8883:31609/TCP   6m58s
```
Use the EXTERNAL-IP field to connect to the cluster via MQTT.
Troubleshooting
View TBMQ pod logs:
```bash
kubectl logs -f tbmq-0
```
Check the state of all StatefulSets:
```bash
kubectl get statefulsets
```
See the kubectl Cheat Sheet for more details.
Upgrading
- Check the version-specific notes below for any preparation your target version requires.
- Back up your database (optional but recommended).
- Run the upgrade commands.
For full version history and supported upgrade paths, see the upgrade instructions page. If the documentation does not cover your specific upgrade path, contact us for guidance.
If there are no version-specific notes for your upgrade path, skip directly to Run upgrade.
Backup and restore (optional)
Backing up your PostgreSQL database before upgrading is highly recommended but optional. For guidance, follow the Azure PostgreSQL backup and restore instructions.
Upgrade to 2.3.0
This release migrates all third-party components from Bitnami images to official open-source alternatives. Review the third-party component updates for full details.
Then proceed with the upgrade.
Upgrade to 2.2.0
This release migrates MQTT authentication from YAML/env configuration into the database.
The upgrade script reads from database-setup.yml. Variables from tbmq.yml are not applied during the upgrade — only the values in database-setup.yml are used. Ensure this file reflects your active configuration.
Supported variables in database-setup.yml
- `SECURITY_MQTT_BASIC_ENABLED` (true|false)
- `SECURITY_MQTT_SSL_ENABLED` (true|false)
- `SECURITY_MQTT_SSL_SKIP_VALIDITY_CHECK_FOR_CLIENT_CERT` (true|false) — usually `false`
Once the file is verified, proceed with the upgrade.
Run upgrade
Pull the latest changes from the release branch:
```bash
git pull origin release-2.3.0
```
Note: Make sure any custom changes are not lost during the merge.
After pulling, run the upgrade script:
```bash
./k8s-upgrade-tbmq.sh
```
Cluster deletion
Delete TBMQ nodes:
```bash
./k8s-delete-tbmq.sh
```
Delete all TBMQ nodes, ConfigMaps, and load balancers:
```bash
./k8s-delete-all.sh
```
Delete the AKS cluster:
```bash
az aks delete --resource-group $AKS_RESOURCE_GROUP --name $TB_CLUSTER_NAME
```