
Cluster setup using Helm Chart on AKS

This guide will help you set up a TBMQ Cluster using the official Helm chart on Azure Kubernetes Service (AKS).

Prerequisites

To deploy TBMQ Cluster using Helm on an AKS cluster, you need to have the following tools installed on your local machine:

  • Azure CLI (az);
  • kubectl;
  • helm.
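
A quick way to confirm that the tools are available is to print their versions (a minimal sanity check; the exact version output will differ on your machine):

az --version
kubectl version --client
helm version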

Configure your Kubernetes environment

Configure AZ tools

After the installation is done, you need to log in to the Azure CLI using the following command:

az login

Define environment variables

Define environment variables that you will use in various commands later in this guide.

We assume you are using Linux. Execute the following command:

export AKS_RESOURCE_GROUP=TBMQResources
export AKS_LOCATION=eastus
export AKS_GATEWAY=tbmq-gateway
export TB_CLUSTER_NAME=tbmq-cluster
export TB_DATABASE_NAME=tbmq-db
echo "You variables ready to create resource group $AKS_RESOURCE_GROUP in location $AKS_LOCATION 
and cluster in it $TB_CLUSTER_NAME with database $TB_DATABASE_NAME"

where:

  • TBMQResources - a logical group in which Azure resources are deployed and managed. We will refer to it later in this guide using $AKS_RESOURCE_GROUP;
  • eastus - the location where you want to create the resource group. We will refer to it later in this guide using $AKS_LOCATION. You can see the full list of locations by executing az account list-locations (see the example below);
  • tbmq-gateway - the name of the Azure Application Gateway. We will refer to it later in this guide using $AKS_GATEWAY;
  • tbmq-cluster - the cluster name. We will refer to it later in this guide using $TB_CLUSTER_NAME;
  • tbmq-db - the database name. We will refer to it later in this guide using $TB_DATABASE_NAME.
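
For example, to list the available locations in a readable table, you can run the following (the full list is long, so you may want to pipe it through a pager):

az account list-locations -o table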

Configure and create AKS cluster

Before creating the AKS cluster, we need to create an Azure Resource Group. We will use the Azure CLI for this:

az group create --name $AKS_RESOURCE_GROUP --location $AKS_LOCATION

To see more info about az group, please follow the next link.

After the Resource Group is created, we can create the AKS cluster using the following command:

az aks create --resource-group $AKS_RESOURCE_GROUP \
    --name $TB_CLUSTER_NAME \
    --generate-ssh-keys \
    --enable-addons ingress-appgw \
    --appgw-name $AKS_GATEWAY \
    --appgw-subnet-cidr "10.2.0.0/16" \
    --node-vm-size Standard_DS3_v2 \
    --node-count 3

az aks create has two required parameters - name and resource-group (we use the variables that we set earlier) - and a number of optional parameters (default values will be used if they are not set). A few of them are:

  • node-count - the number of nodes in the Kubernetes node pool. After creating a cluster, you can change the size of its node pool with az aks scale (see the example after this list); the default value is 3;
  • enable-addons - enable the Kubernetes add-ons in a comma-separated list (use az aks addon list to get the list of available add-ons);
  • node-osdisk-type - the OS disk type to be used for machines in a given agent pool: Ephemeral or Managed. Defaults to 'Ephemeral' when possible in conjunction with the VM size and OS disk size. May not be changed for this pool after creation;
  • node-vm-size (or -s) - the size of the Virtual Machines to create as Kubernetes nodes (the default value is Standard_DS2_v2);
  • generate-ssh-keys - generate SSH public and private key files if missing. The keys will be stored in the ~/.ssh directory.
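
For example, to scale the cluster to five nodes after creation, you could run something like the following sketch (if your cluster has more than one node pool, add --nodepool-name; you can list the pools with az aks nodepool list):

az aks scale --resource-group $AKS_RESOURCE_GROUP --name $TB_CLUSTER_NAME --node-count 5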

In the command above, we enable the AKS add-on for Application Gateway. We will use this gateway as a path-based load balancer for TBMQ.

The full list of az aks create options can be found here.

Alternatively, you may use this guide for custom cluster setup.

Update the context of kubectl

When the cluster is created, we can connect kubectl to it using the following command:

az aks get-credentials --resource-group $AKS_RESOURCE_GROUP --name $TB_CLUSTER_NAME

For validation, you can execute the following command:

kubectl get nodes

You should see the list of the cluster's nodes.
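
The output should look roughly like this (node names, versions, and ages are illustrative only):

NAME                                STATUS   ROLES   AGE   VERSION
aks-nodepool1-12345678-vmss000000   Ready    agent   5m    v1.27.7
aks-nodepool1-12345678-vmss000001   Ready    agent   5m    v1.27.7
aks-nodepool1-12345678-vmss000002   Ready    agent   5m    v1.27.7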

Add the TBMQ Cluster Helm repository

Before installing the chart, add the TBMQ Helm repository to your local Helm client:

helm repo add tbmq-helm-chart https://helm.thingsboard.io/tbmq
helm repo update

Modify default chart values

To customize your TBMQ deployment, first download the default values.yaml file from the chart:

helm show values tbmq-helm-chart/tbmq-cluster > values.yaml

Do not modify installation.installDbSchema directly in the values.yaml. This parameter is only required during the first installation to initialize the TBMQ database schema. Instead, we will pass it explicitly using the --set option in the helm install command.

External PostgreSQL

By default, the chart installs Bitnami PostgreSQL as a sub-chart:

# This section will bring bitnami/postgresql (https://artifacthub.io/packages/helm/bitnami/postgresql) into this chart.
#  If you want to add some extra configuration parameters, you can put them under the `postgresql` key, and they will be passed to bitnami/postgresql chart
postgresql:
  # @param enabled If enabled is set to true, externalPostgresql configuration will be ignored
  enabled: true

This sub-chart provisions a single-node instance with configurable storage, backups, and monitoring options. For users with an existing PostgreSQL instance, TBMQ can be configured to connect externally. To do this, disable the built-in PostgreSQL by setting postgresql.enabled: false and specify the connection details in the externalPostgresql section.

# If you're deploying PostgreSQL externally, configure this section
externalPostgresql:
  # @param host - External PostgreSQL server host
  host: ""
  # @param port - External PostgreSQL server port
  ##
  port: 5432
  # @param username - PostgreSQL username
  ##
  username: "postgres"
  # @param password - PostgreSQL user password
  ##
  password: "postgres"
  # @param database - PostgreSQL database name for TBMQ
  ##
  database: "thingsboard_mqtt_broker"

If you’re deploying on Azure AKS and plan to use Azure Database for PostgreSQL, follow this guide to provision and configure your PostgreSQL instance.
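
As a sketch, assuming a hypothetical Azure Database for PostgreSQL Flexible Server named after $TB_DATABASE_NAME, the overrides in your values.yaml might look like this (the host name, username, and password below are placeholders that you must replace with your own values):

postgresql:
  # Disable the built-in Bitnami PostgreSQL sub-chart
  enabled: false

externalPostgresql:
  # Placeholder Flexible Server host name
  host: "tbmq-db.postgres.database.azure.com"
  port: 5432
  username: "postgres"
  password: "<your-db-password>"
  database: "thingsboard_mqtt_broker"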

Load Balancer configuration

By default, the Helm chart deploys a standard NGINX Ingress Controller for HTTP and MQTT traffic when installing TBMQ on Kubernetes.

loadbalancer:
  type: "nginx"

Since you are deploying TBMQ Cluster on Azure AKS, you need to change this value to:

loadbalancer:
  type: "azure"

This will automatically configure:

  • Plain HTTP traffic to be exposed via Azure Application Gateway.
  • Plain MQTT traffic to be exposed via Azure Load Balancer.

HTTPS access

To enable TLS for HTTP traffic, you must set loadbalancer.http.ssl.enabled to true and update loadbalancer.http.ssl.certificateRef with the name of the SSL certificate already configured in your Azure Application Gateway.

See the example below:

loadbalancer:
  type: "azure"
  http:
    enabled: true
    ssl:
      enabled: true
      certificateRef: "<your-appgw-ssl-certificate-name>"

MQTTS access

Azure Load Balancer does not support TLS termination for MQTT traffic. If you want to secure MQTT communication, you must configure Two-Way TLS (Mutual TLS or mTLS) directly on the application level (TBMQ side). Please refer to the TBMQ Helm chart documentation for details on configuring Two-Way TLS.
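
Once Two-Way TLS is configured on the TBMQ side, you can sanity-check a secure connection from a client machine, for example with mosquitto_pub (the certificate file names below are placeholders for your own CA certificate, client certificate, and key; the host is the MQTT load balancer address from the Validate MQTT Access step):

mosquitto_pub -h <mqtt-lb-address> -p 8883 \
  --cafile ca.crt --cert client.crt --key client.key \
  -t sensors/test -m 'hello'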

Create namespace

It’s a good practice to create a dedicated namespace for your TBMQ cluster deployment:

kubectl create namespace tbmq
kubectl config set-context --current --namespace=tbmq

This sets tbmq as the default namespace for your current context, so you don't need to pass --namespace to every command.

Install the TBMQ Helm chart

Now you’re ready to install TBMQ using the Helm chart. Make sure you’re in the same directory as your customized values.yaml file.

helm install my-tbmq-cluster tbmq-helm-chart/tbmq-cluster \
  -f values.yaml \
  --set installation.installDbSchema=true

my-tbmq-cluster is the Helm release name. You can change it to any name of your choice, which will be used to reference this deployment in future Helm commands.

Once the deployment process is completed, you should see output similar to the following:

NAME: my-tbmq-cluster
LAST DEPLOYED: Wed Mar 26 17:42:49 2025
NAMESPACE: tbmq
STATUS: deployed
REVISION: 1
TEST SUITE: None
NOTES:
TBMQ Cluster my-tbmq-cluster will be deployed in few minutes.
Info:
Namespace: tbmq
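
You can watch the pods start up and wait until all of them reach the Running state, for example:

kubectl get pods -w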

Validate HTTP Access

Now you can open the TBMQ web interface in your browser using the DNS name of the load balancer.

You can get the DNS name of the load balancer using the following command:

kubectl get ingress

You should see output similar to this:

NAME                          CLASS    HOSTS   ADDRESS         PORTS   AGE
my-tbmq-cluster-http-lb       <none>   *       34.111.24.134   80      3d1h

Use the ADDRESS field of the my-tbmq-cluster-http-lb ingress to connect to the cluster.

You should see the TBMQ login page. Use the following default credentials for the System Administrator:

Username: sysadmin@thingsboard.org

Password: sysadmin

On the first login, you will be asked to change the default password to a preferred one and then log in again using the new credentials.
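
If you prefer to check HTTP access from the command line first, a simple request against the load balancer address should typically return an HTTP 200 response or a redirect (replace <ADDRESS> with the value from kubectl get ingress):

curl -I http://<ADDRESS>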

Validate MQTT Access

To connect to the cluster via MQTT, you will need to get the corresponding service IP. You can do this with the following command:

kubectl get services

You should see output similar to this:

NAME                          TYPE           CLUSTER-IP       EXTERNAL-IP              PORT(S)                         AGE
my-tbmq-cluster-mqtt-lb       LoadBalancer   10.100.119.170   *******                  1883:30308/TCP,8883:31609/TCP   6m58s

Use the EXTERNAL-IP field of the load balancer to connect to the cluster via the MQTT protocol.
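
To quickly verify plain MQTT connectivity, you can publish a test message with mosquitto_pub (replace <EXTERNAL-IP> with the value from the command above; depending on your TBMQ authentication settings, you may also need to pass client credentials with -u and -P):

mosquitto_pub -h <EXTERNAL-IP> -p 1883 -t sensors/test -m 'hello' -d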

Troubleshooting

In case of any issues, you can examine service logs for errors. For example, to see TBMQ logs, execute the following command:

kubectl logs -f my-tbmq-cluster-tbmq-node-0

Use the following command to see the state of all statefulsets:

kubectl get statefulsets

See kubectl Cheat Sheet command reference for more details.
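
If a pod is not starting, kubectl describe and the namespace events usually point at the cause (the pod name below is the example used earlier in this guide):

kubectl describe pod my-tbmq-cluster-tbmq-node-0
kubectl get events --sort-by=.metadata.creationTimestamp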

Upgrading

Helm support was introduced with the TBMQ 2.1.0 release. Upgrade options were not included in the initial version of the Helm chart and will be provided alongside a future TBMQ release. This section will be updated once a new version of TBMQ and its Helm chart become available.

Uninstalling TBMQ Helm chart

To uninstall the TBMQ Helm chart, run the following command:

helm delete my-tbmq-cluster

This command removes all TBMQ components associated with the release from the namespace set in your current Kubernetes context.

The helm delete command removes only the logical resources of the TBMQ cluster. To fully clean up all persistent data, you may also need to manually delete the associated Persistent Volume Claims (PVCs) after uninstallation:

kubectl delete pvc -l app.kubernetes.io/instance=my-tbmq-cluster

Delete Kubernetes Cluster

Execute the following command to delete the AKS cluster:

az aks delete --resource-group $AKS_RESOURCE_GROUP --name $TB_CLUSTER_NAME

Next steps