Deploy TBMQ PE Cluster on Azure with Kubernetes

This guide will help you set up TBMQ PE on Azure Kubernetes Service (AKS).

Prerequisites

Install and configure tools

To deploy TBMQ on the AKS cluster, you will need to install kubectl, helm, and az tools.

After the installation is done, you need to log in to the Azure CLI using the following command:

az login

Clone TBMQ PE K8S repository

git clone -b release-2.2.0 https://github.com/thingsboard/tbmq-pe-k8s.git
cd tbmq-pe-k8s/azure

Define environment variables

Define environment variables that you will use in various commands later in this guide.

We assume you are using Linux. Execute the following command:

export AKS_RESOURCE_GROUP=TBMQResources
export AKS_LOCATION=eastus
export AKS_GATEWAY=tbmq-gateway
export TB_CLUSTER_NAME=tbmq-cluster
export TB_DATABASE_NAME=tbmq-db
echo "You variables ready to create resource group $AKS_RESOURCE_GROUP in location $AKS_LOCATION 
and cluster in it $TB_CLUSTER_NAME with database $TB_DATABASE_NAME"

where:

  • TBMQResources — a logical group in which Azure resources are deployed and managed. We will refer to it later in this guide using AKS_RESOURCE_GROUP;
  • eastus — the location where you want to create the resource group. We will refer to it later in this guide using AKS_LOCATION. You can list all available locations by executing az account list-locations (see the example after this list);
  • tbmq-gateway — the name of the Azure Application Gateway. We will refer to it later in this guide using AKS_GATEWAY;
  • tbmq-cluster — the cluster name. We will refer to it later in this guide using TB_CLUSTER_NAME;
  • tbmq-db — the name of your database server. You may input a different name. We will refer to it later in this guide using TB_DATABASE_NAME.
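
For example, to list the available locations in a convenient table form:

az account list-locations -o table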

Configure and create AKS cluster

Before creating the AKS cluster, we need to create an Azure Resource Group. We will use the Azure CLI for this:

az group create --name $AKS_RESOURCE_GROUP --location $AKS_LOCATION

To see more info about az group, please refer to the Azure CLI documentation.

After the Resource Group is created, we can create the AKS cluster using the following command:

az aks create --resource-group $AKS_RESOURCE_GROUP \
    --name $TB_CLUSTER_NAME \
    --generate-ssh-keys \
    --enable-addons ingress-appgw \
    --appgw-name $AKS_GATEWAY \
    --appgw-subnet-cidr "10.225.0.0/24" \
    --node-vm-size Standard_D4s_v6 \
    --node-count 3

az aks create has two required parameters, name and resource-group (we use the variables that we set earlier), and many optional parameters (default values are used if they are not set). A few of them are:

  • node-count — the number of nodes in the Kubernetes node pool (default value is 3). After creating a cluster, you can change the size of its node pool with az aks scale (see the example after this list);
  • enable-addons — enable the Kubernetes addons in a comma-separated list (use az aks addon list to get the list of available addons);
  • node-osdisk-type — the OS disk type to be used for machines in a given agent pool: Ephemeral or Managed. Defaults to Ephemeral when possible in conjunction with the VM size and OS disk size. May not be changed for this pool after creation;
  • node-vm-size (or -s) — the size of the Virtual Machines to create as Kubernetes nodes (default value is Standard_DS2_v2);
  • generate-ssh-keys — generate SSH public and private key files if missing. The keys will be stored in the ~/.ssh directory.
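
For example, to resize the node pool after the cluster is created (the node count here is illustrative):

az aks scale --resource-group $AKS_RESOURCE_GROUP --name $TB_CLUSTER_NAME --node-count 5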

In the command above, we also enable the AKS add-on for the Application Gateway. We will use this gateway as a path-based load balancer for TBMQ.

The full list of az aks create options can be found here.

Alternatively, you may use this guide for a custom cluster setup.

Update the context of kubectl

When the cluster is created, we can connect kubectl to it using the following command:

az aks get-credentials --resource-group $AKS_RESOURCE_GROUP --name $TB_CLUSTER_NAME

For validation, you can execute the following command:

kubectl get nodes

You should see the list of the cluster nodes.

Provision PostgreSQL DB

You’ll need to set up PostgreSQL on Azure. You may follow this guide, but take into account the following requirements:

  • Keep your postgresql password in a safe place. We will refer to it later in this guide using YOUR_AZURE_POSTGRES_PASSWORD;
  • Make sure your Azure Database for PostgreSQL version is 17.x;
  • Make sure your Azure Database for PostgreSQL instance is accessible from the TBMQ cluster;
  • Make sure you use “thingsboard_mqtt_broker” as the initial database name.

Note: Enable the “High availability” option. It enables a number of useful settings by default.

Alternatively, you can create the Azure Database for PostgreSQL using the az tool (don’t forget to replace ‘POSTGRES_USER’ and ‘POSTGRES_PASS’ with your username and password):

az postgres flexible-server create --location $AKS_LOCATION --resource-group $AKS_RESOURCE_GROUP \
  --name $TB_DATABASE_NAME --admin-user POSTGRES_USER --admin-password POSTGRES_PASS \
  --public-access 0.0.0.0 --storage-size 32 \
  --version 17 -d thingsboard_mqtt_broker

az postgres flexible-server create has many parameters; a few of them are:

  • location — Location. Values from: az account list-locations;
  • resource-group (or -g) — Name of the resource group;
  • name — Name of the server. The name can contain only lowercase letters, numbers, and the hyphen (-) character. Minimum 3 characters and maximum 63 characters;
  • admin-user — Administrator username for the server. Once set, it cannot be changed;
  • admin-password — The password of the administrator. Minimum 8 characters and maximum 128 characters. Password must contain characters from three of the following categories: English uppercase letters, English lowercase letters, numbers, and non-alphanumeric characters;
  • public-access — Determines the public access. Enter single or range of IP addresses to be included in the allowed list of IPs. IP address ranges must be dash-separated and not contain any spaces. Specifying 0.0.0.0 allows public access from any resources deployed within Azure to access your server. Setting it to “None” sets the server in public access mode but does not create a firewall rule;
  • storage-size — The storage capacity of the server. Minimum is 32 GiB and maximum is 16 TiB;
  • version — Server major version;
  • high-availability — Enable or disable the high-availability feature. High availability can only be set during flexible server creation (accepted values: Disabled, Enabled; default value: Disabled);
  • database-name (or -d) — The name of the database to be created when provisioning the database server.

You can see the full parameters list here.

Example response:

{
  "connectionString": "postgresql://postgres:postgres@$tbmq-db.postgres.database.azure.com/postgres?sslmode=require",
  "databaseName": "thingsboard_mqtt_broker",
  "firewallName": "AllowAllAzureServicesAndResourcesWithinAzureIps_2021-11-17_15-45-6",
  "host": "tbmq-db.postgres.database.azure.com",
  "id": "/subscriptions/daff3288-1d5d-47c7-abf0-bfb7b738a18c/resourceGroups/myResourceGroup/providers/Microsoft.DBforPostgreSQL/flexibleServers/thingsboard_mqtt_broker",
  "location": "East US",
  "password": "postgres",
  "resourceGroup": "TBMQResources",
  "skuname": "Standard_D2s_v3",
  "username": "postgres",
  "version": "17"
}

Note the value of the host from the command output (tbmq-db.postgres.database.azure.com in our case). Also, note the username and password (postgres) from the command.
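
Optionally, you can verify that the database is reachable from inside the cluster before proceeding. A minimal check using a throwaway pod (the pod name and image tag are examples; you will be prompted for the password):

kubectl run psql-check --rm -it --image=postgres:17 --restart=Never -- \
  psql "host=YOUR_AZURE_POSTGRES_ENDPOINT_URL user=YOUR_AZURE_POSTGRES_USER dbname=thingsboard_mqtt_broker sslmode=require"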

Edit the database settings file and replace YOUR_AZURE_POSTGRES_ENDPOINT_URL with the host value, YOUR_AZURE_POSTGRES_USER and YOUR_AZURE_POSTGRES_PASSWORD with the correct values:

nano tbmq-db-configmap.yml
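
If you prefer a non-interactive edit, the placeholders can also be replaced with sed; a sketch with example values (substitute your own host, user, and password):

sed -i \
  -e 's/YOUR_AZURE_POSTGRES_ENDPOINT_URL/tbmq-db.postgres.database.azure.com/' \
  -e 's/YOUR_AZURE_POSTGRES_USER/postgres/' \
  -e 's/YOUR_AZURE_POSTGRES_PASSWORD/my-secret-password/' \
  tbmq-db-configmap.yml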

Create Namespace

Let’s create a dedicated namespace for our TBMQ cluster deployment to ensure better resource isolation and management.

kubectl apply -f tbmq-namespace.yml
kubectl config set-context $(kubectl config current-context) --namespace=thingsboard-mqtt-broker

Azure Cache for Valkey

TBMQ PE relies on Valkey to store messages for DEVICE persistent clients. The cache also improves performance by reducing the number of direct database reads, especially when authentication is enabled and multiple clients connect at once. Without caching, every new connection triggers a database query to validate MQTT client credentials, which can cause unnecessary load under high connection rates.


Note: Starting from TBMQ PE v2.2.0, Valkey 8.0 is officially supported. Azure currently does not provide a managed Valkey service. However, Valkey is fully compatible with Redis 7.2.x, which is supported on Azure Cache for Redis Enterprise and Enterprise Flash SKUs. The Basic, Standard, and Premium SKUs only support up to Redis 6.x, and are therefore not recommended for TBMQ deployments. To ensure compatibility with TBMQ PE v2.2.0 and later, deploy your own Valkey cluster or use an Enterprise-tier SKU.

Depending on your environment, you can either use an Enterprise-tier Azure Cache instance or deploy your own Valkey cluster (see the hints below).

Once your Azure Cache is ready, update the cache configuration in tbmq-cache-configmap.yml with the correct endpoint values:

  • For standalone Redis: Uncomment and set the following values. Make sure the REDIS_HOST value does not include the port (:6379).

    REDIS_CONNECTION_TYPE: "standalone"
    REDIS_HOST: "YOUR_VALKEY_ENDPOINT_URL_WITHOUT_PORT"
    #REDIS_PASSWORD: "YOUR_REDIS_PASSWORD"
    
  • For Valkey cluster: Provide a comma-separated list of “host:port” node endpoints to bootstrap from.

    REDIS_CONNECTION_TYPE: "cluster"
    REDIS_NODES: "COMMA_SEPARATED_LIST_OF_NODES"
    #REDIS_PASSWORD: "YOUR_REDIS_PASSWORD"
    # Recommended in Kubernetes for handling dynamic IPs and failover:
    #REDIS_LETTUCE_CLUSTER_TOPOLOGY_REFRESH_ENABLED: "true"
    #REDIS_JEDIS_CLUSTER_TOPOLOGY_REFRESH_ENABLED: "true"
    

Hints for Valkey Cluster Creation

The official Azure documentation for creating a Valkey cluster assumes a completely new environment. Since you have already set up your resources earlier in this guide, you should adapt the instructions as follows:

  • Skip Infrastructure Creation: You have already created the Resource Group and AKS cluster. You may skip the steps regarding az group create and az aks create.
  • Optional Services: You can choose to skip creating an Azure Key Vault (AKV) instance and Azure Container Registry (ACR) to simplify the setup.
  • Node Pools: Creating a dedicated node pool for Valkey is optional. While dedicated pools offer better resource isolation, you can use your existing node pool for this deployment.
  • Namespace: We recommend deploying the Valkey cluster into the same namespace as TBMQ (e.g., thingsboard-mqtt-broker) rather than creating a separate valkey namespace. This keeps all components unified.

Creating the Secret

If you choose not to use Azure Key Vault, you must create a generic Kubernetes secret manually. This secret must be formatted exactly as the Valkey container expects (with specific keys and line breaks).

Example: Manual Secret Creation
# 1. Generate a random password (or set your own)
VALKEY_PASSWORD=$(openssl rand -base64 32)
echo "Generated Password: $VALKEY_PASSWORD"

# 2. Create the secret directly in Kubernetes
# We format it exactly how the container expects: 'requirepass' on line 1, 'primaryauth' on line 2
kubectl create secret generic valkey-password \
  --namespace thingsboard-mqtt-broker \
  --from-literal=valkey-password-file.conf=$'requirepass '"$VALKEY_PASSWORD"$'\nprimaryauth '"$VALKEY_PASSWORD"
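
To confirm the secret holds the expected two-line configuration, you can decode it (note the escaped dot in the jsonpath key):

kubectl get secret valkey-password -n thingsboard-mqtt-broker \
  -o jsonpath='{.data.valkey-password-file\.conf}' | base64 -d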

Deploying StatefulSets (Primaries and Replicas)

Proceed with creating the ConfigMap, Primary cluster pods, and Replica cluster pods. You will need to modify the Azure documentation examples to fit your environment:

  • Namespace: Ensure all resources point to your defined namespace (e.g., thingsboard-mqtt-broker).
  • Affinity: Update the affinity section. If you are using a shared node pool, remove the specific nodeSelector or nodeAffinity requirements. Instead, use podAntiAffinity to spread pods across nodes where possible.
  • Image: If skipping ACR, use the public Docker image: image: "valkey/valkey:8.0". Note: Avoid using the :latest tag for production stability; stick to a specific version.
  • Secret Volume: Update the volume configuration to use the standard Kubernetes secret created in the previous step, replacing the CSI/Key Vault driver configuration.
Example: Modified StatefulSet for Primary pods
kubectl apply -f - <<EOF
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: valkey-masters
  namespace: thingsboard-mqtt-broker
spec:
  serviceName: "valkey-masters"
  replicas: 3
  selector:
    matchLabels:
      app: valkey
  template:
    metadata:
      labels:
        app: valkey
        appCluster: valkey-masters
    spec:
      terminationGracePeriodSeconds: 20
      affinity:
        # Removed nodeAffinity (dedicated pool requirement)
        # Soft Anti-Affinity to prefer spreading pods but allow scheduling on available nodes
        podAntiAffinity:
          preferredDuringSchedulingIgnoredDuringExecution:
          - weight: 100
            podAffinityTerm:
              labelSelector:
                matchExpressions:
                - key: app
                  operator: In
                  values:
                  - valkey
              topologyKey: kubernetes.io/hostname
      containers:
      - name: role-master-checker
        image: "valkey/valkey:8.0"
        command:
          - "/bin/bash"
          - "-c"
        args:
          [
            "while true; do role=\$(valkey-cli --pass \$(cat /etc/valkey-password/valkey-password-file.conf | awk '{print \$2; exit}') role | awk '{print \$1; exit}');     if [ \"\$role\" = \"slave\" ]; then valkey-cli --pass \$(cat /etc/valkey-password/valkey-password-file.conf | awk '{print \$2; exit}') cluster failover; fi; sleep 30; done"
          ]
        volumeMounts:
        - name: valkey-password
          mountPath: /etc/valkey-password
          readOnly: true
      - name: valkey
        image: "valkey/valkey:8.0"
        env:
        - name: VALKEY_PASSWORD_FILE
          value: "/etc/valkey-password/valkey-password-file.conf"
        - name: MY_POD_IP
          valueFrom:
            fieldRef:
              fieldPath: status.podIP
        command:
          - "valkey-server"
        args:
          - "/conf/valkey.conf"
          - "--cluster-announce-ip"
          - "\$(MY_POD_IP)"
        resources:
          requests:
            cpu: "100m"
            memory: "100Mi"
        ports:
            - name: valkey
              containerPort: 6379
              protocol: "TCP"
            - name: cluster
              containerPort: 16379
              protocol: "TCP"
        volumeMounts:
        - name: conf
          mountPath: /conf
          readOnly: false
        - name: data
          mountPath: /data
          readOnly: false
        - name: valkey-password
          mountPath: /etc/valkey-password
          readOnly: true
      volumes:
      - name: valkey-password
        # Replaced CSI/KeyVault with standard Kubernetes Secret
        secret:
          secretName: valkey-password
      - name: conf
        configMap:
          name: valkey-cluster
          defaultMode: 0755
  volumeClaimTemplates:
  - metadata:
      name: data
    spec:
      accessModes: [ "ReadWriteOnce" ]
      storageClassName: managed-csi
      resources:
        requests:
          storage: 20Gi
EOF
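
After applying the manifest, you can wait until all primary pods are up:

kubectl rollout status statefulset/valkey-masters -n thingsboard-mqtt-broker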

Finalizing the Setup

  1. Services & PDB: Create the headless services and the Pod Disruption Budget (PDB) as outlined in the documentation.
  2. Initialization: Run the Valkey cluster creation commands to join the nodes (see the sketch after this list).
  3. Verification: Verify the roles of the pods and the replication status to ensure the cluster is healthy.
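
A sketch of the initialization step (2), assuming three primary and three replica pods; the addresses are placeholders you should replace with the actual pod IPs from kubectl get pods -o wide:

kubectl exec -it valkey-masters-0 -n thingsboard-mqtt-broker -- \
  valkey-cli --pass "$VALKEY_PASSWORD" --cluster create \
  PRIMARY_0_IP:6379 PRIMARY_1_IP:6379 PRIMARY_2_IP:6379 \
  REPLICA_0_IP:6379 REPLICA_1_IP:6379 REPLICA_2_IP:6379 \
  --cluster-replicas 1 --cluster-yes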

TBMQ Configuration

Once the cluster is verified, update your TBMQ configuration values:

  • REDIS_NODES: Set this to the headless service DNS, e.g., valkey-cluster:6379.
  • REDIS_PASSWORD: Use the password you generated during secret creation (or the value of $VALKEY_PASSWORD).

Installation

Execute the following command to run the initial setup of the database. This command will launch a short-lived TBMQ pod to provision the necessary DB tables, indexes, etc.

./k8s-install-tbmq.sh

After this command finishes, you should see the following line in the console:

INFO  o.t.m.b.i.ThingsboardMqttBrokerInstallService - Installation finished successfully!

Otherwise, please check if you set the PostgreSQL URL and PostgreSQL password in the tbmq-db-configmap.yml correctly.

Get the license key

Before proceeding, make sure you’ve selected your subscription plan or chosen to purchase a perpetual license. If you haven’t done this yet, please visit the Pricing page to compare available options and obtain your license key.

Note: Throughout this guide, we’ll refer to your license key as YOUR_LICENSE_KEY_HERE.

Configure the license key

Create a k8s secret with your license key:

export TBMQ_LICENSE_KEY=YOUR_LICENSE_KEY_HERE 
kubectl create -n thingsboard-mqtt-broker secret generic tbmq-license --from-literal=license-key=$TBMQ_LICENSE_KEY

Don’t forget to replace YOUR_LICENSE_KEY_HERE with the value of your license key.
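
Optionally, verify that the secret exists and contains the license-key entry:

kubectl describe secret tbmq-license -n thingsboard-mqtt-broker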

Provision Kafka

TBMQ requires a running Kafka cluster. You can set up Kafka in two ways:

  • Deploy a self-managed Apache Kafka cluster
  • Deploy a managed Kafka cluster with the Strimzi Operator

Choose the option that best fits your environment and operational needs.

Option 1. Deploy an Apache Kafka Cluster

  • Runs as a StatefulSet with 3 pods in KRaft dual-role mode (each node acts as both controller and broker).
  • Suitable if you want a lightweight, self-managed Kafka setup.

  • See the full deployment guide here.

Quick steps:

kubectl apply -f kafka/tbmq-kafka.yml

Update TBMQ configuration files (tbmq.yml and tbmq-ie.yml) and uncomment the section marked:

# Uncomment the following lines to connect to Apache Kafka

Option 2. Deploy a Kafka Cluster with the Strimzi Operator

Quick steps:

Install the Strimzi operator:

helm install tbmq-kafka -f kafka/operator/values-strimzi-kafka-operator.yaml oci://quay.io/strimzi-helm/strimzi-kafka-operator --version 0.47.0

Deploy the Kafka cluster:

kubectl apply -f kafka/operator/kafka-cluster.yaml

Update TBMQ configuration files (tbmq.yml and tbmq-ie.yml) and uncomment the section marked:

# Uncomment the following lines to connect to Strimzi

Starting

Execute the following command to deploy the broker:

./k8s-deploy-tbmq.sh

After a few minutes, you may execute the following command to check the state of all pods:

kubectl get pods

If everything went fine, you should be able to see tbmq-0 and tbmq-1 pods. Every pod should be in the READY state.

Configure Load Balancers

Configure HTTP(S) Load Balancer

Configure HTTP(S) Load Balancer to access the web interface of your TBMQ PE instance. There are two possible configuration options:

  • http — Load Balancer without HTTPS support. Recommended for development. Its only advantages are simple configuration and minimal cost. It may be a good option for a development server but is definitely not suitable for production.
  • https — Load Balancer with HTTPS support. Recommended for production. It acts as an SSL termination point. You may easily configure it to issue and maintain a valid SSL certificate. It automatically redirects all non-secure (HTTP) traffic to the secure (HTTPS) port.

See links/instructions below on how to configure each of the suggested options.

HTTP Load Balancer

Execute the following command to deploy a plain HTTP load balancer:

kubectl apply -f receipts/http-load-balancer.yml

The process of load balancer provisioning may take some time. You may periodically check the status of the load balancer using the following command:

kubectl get ingress

Once provisioned, you should see output similar to this:

NAME                     CLASS    HOSTS   ADDRESS         PORTS   AGE
tbmq-http-loadbalancer   <none>   *       34.111.24.134   80      7m25s

HTTPS Load Balancer

To use SSL certificates, you can add your certificate directly to the Azure Application Gateway using the following command:

az network application-gateway ssl-cert create \
   --resource-group $(az aks show --name $TB_CLUSTER_NAME --resource-group $AKS_RESOURCE_GROUP --query nodeResourceGroup | tr -d '"') \
   --gateway-name $AKS_GATEWAY \
   --name TBMQHTTPSCert \
   --cert-file YOUR_CERT \
   --cert-password YOUR_CERT_PASS
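
Note that Application Gateway expects the certificate in PKCS#12 (.pfx) format. If you only have PEM files, a typical conversion looks like this (file names are examples):

openssl pkcs12 -export -out tbmq-cert.pfx \
  -inkey server.key -in server.crt \
  -password pass:YOUR_CERT_PASS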

Execute the following command to deploy a plain HTTPS load balancer:

kubectl apply -f receipts/https-load-balancer.yml

Configure MQTT Load Balancer

Configure the MQTT load balancer to be able to connect devices via the MQTT protocol.

Create the TCP load balancer using the following command:

kubectl apply -f receipts/mqtt-load-balancer.yml

The load balancer will forward all TCP traffic for ports 1883 and 8883.

MQTT over SSL

Follow this guide to create a .pem file with the SSL certificate. Store the file as server.pem in the working directory.

You’ll need to create a config map with your PEM files. You can do it with the following command:

kubectl create configmap tbmq-mqtts-config \
 --from-file=server.pem=YOUR_PEM_FILENAME \
 --from-file=mqttserver_key.pem=YOUR_PEM_KEY_FILENAME \
 -o yaml --dry-run=client | kubectl apply -f -

  • where YOUR_PEM_FILENAME is the name of your server certificate file;
  • where YOUR_PEM_KEY_FILENAME is the name of your server certificate private key file.

Then, uncomment all sections in the ‘tbmq.yml’ file that are marked with “Uncomment the following lines to enable two-way MQTTS”.

Execute the following command to apply the changes:

kubectl apply -f tbmq.yml

Validate the setup

Now you can open the TBMQ web interface in your browser using the DNS name of the load balancer.

You can get the DNS name of the load balancer using the following command:

kubectl get ingress

You should see output similar to this:

NAME                     CLASS    HOSTS   ADDRESS         PORTS   AGE
tbmq-http-loadbalancer   <none>   *       34.111.24.134   80      3d1h

Use the ADDRESS field of the tbmq-http-loadbalancer to connect to the cluster.

You should see the TBMQ login page. Use the following default credentials for the System Administrator:

Username:

sysadmin@thingsboard.org

Password:

sysadmin

On the first login, you will be asked to change the default password to the preferred one and then re-login using the new credentials.

Validate MQTT access

To connect to the cluster via MQTT, you will need to get the corresponding service IP. You can do this with the command:

kubectl get services

You should see output similar to this:

NAME                     TYPE           CLUSTER-IP       EXTERNAL-IP              PORT(S)                         AGE
tbmq-mqtt-loadbalancer   LoadBalancer   10.100.119.170   *******                  1883:30308/TCP,8883:31609/TCP   6m58s

Use the EXTERNAL-IP field of the load balancer to connect to the cluster via the MQTT protocol.
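
To quickly check connectivity, you can publish a test message with an MQTT client such as mosquitto_pub (this assumes the mosquitto clients package is installed; add the -u/-P options if your broker requires MQTT client credentials):

mosquitto_pub -h YOUR_EXTERNAL_IP -p 1883 -t 'tbmq/test' -m 'hello' -d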

Troubleshooting

In case of any issues, you can examine service logs for errors. For example, to see TBMQ logs, execute the following command:

kubectl logs -f tbmq-0

Use the following command to see the state of all statefulsets:

kubectl get statefulsets

See kubectl Cheat Sheet command reference for more details.

Upgrading

Review the release notes and upgrade instructions for detailed information on the latest changes.

If the documentation does not cover the specific upgrade instructions for your case, please contact us so we can provide further guidance.

Backup and restore (Optional)

While backing up your PostgreSQL database is highly recommended, it is optional before proceeding with the upgrade. For further guidance, follow these instructions.

Upgrade from TBMQ CE to TBMQ PE (v2.2.0)

To upgrade your existing TBMQ Community Edition (CE) to TBMQ Professional Edition (PE), ensure you are running the latest TBMQ CE 2.2.0 version before starting the process. Merge your current configuration with the latest TBMQ PE K8S scripts. Do not forget to configure the license key.

Run the following commands, including the upgrade script to migrate PostgreSQL database data from CE to PE:

./k8s-delete-tbmq.sh
./k8s-upgrade-tbmq.sh --fromVersion=ce
./k8s-deploy-tbmq.sh

Cluster deletion

Execute the following command to delete TBMQ nodes:

./k8s-delete-tbmq.sh

Execute the following command to delete all TBMQ nodes and configmaps, load balancers, etc.:

./k8s-delete-all.sh

Execute the following command to delete the AKS cluster:

az aks delete --resource-group $AKS_RESOURCE_GROUP --name $TB_CLUSTER_NAME

Next steps