- Prerequisites
- Configure your Kubernetes environment
- Add the TBMQ Cluster Helm repository
- Modify default chart values
- Create namespace
- Install the TBMQ Helm chart
- Validate HTTP Access
- Validate MQTT Access
- Troubleshooting
- Upgrading
- Uninstalling TBMQ Helm chart
- Delete Kubernetes Cluster
- Next steps
This guide will help you set up a TBMQ cluster on AWS using the official Helm chart and Amazon Elastic Kubernetes Service (EKS).
Prerequisites
To deploy TBMQ Cluster using Helm on an EKS cluster, you need to have the following tools installed on your local machine (each of them is used by the commands in this guide):
- kubectl
- helm
- AWS CLI
- eksctl
Afterward, you need to configure the Access Key, Secret Key, and default region. To get the Access and Secret keys, please follow this guide. The default region should be the ID of the region where you'd like to deploy the cluster.
Configure your Kubernetes environment
Configure AWS tools
```bash
aws configure
```
Configuration overview
To deploy the EKS cluster, we recommend using a pre-defined EKS cluster configuration file. Please download it using the following command:
```bash
curl -o cluster.yml https://raw.githubusercontent.com/thingsboard/tbmq/release-2.1.0/k8s/helm/aws/cluster.yml
```
Here are the fields you can change depending on your needs:
- `region` - the AWS region where you want your cluster to be located (the default value is `us-east-1`);
- `availabilityZones` - the exact IDs of the region's availability zones (the default value is `[us-east-1a, us-east-1b, us-east-1c]`);
- `instanceType` - the type of the instances for node groups; change it per workload (e.g., TBMQ, Redis, Kafka);
- `desiredCapacity` - the number of nodes per node group; the defaults are suggested for testing;
- `volumeType` - the type of EBS volume for EC2 nodes; defaults to `gp3`.
Refer to Amazon EC2 Instance types to choose the right instance types for your production workloads.
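For orientation, here is a minimal sketch of how these fields fit together in an eksctl `ClusterConfig`. The node group shown is illustrative only; the downloaded `cluster.yml` is authoritative:

```yaml
apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig

metadata:
  name: tbmq          # cluster name, referenced later by the Load Balancer Controller
  region: us-east-1   # AWS region for the cluster

availabilityZones: [us-east-1a, us-east-1b, us-east-1c]

managedNodeGroups:
  # Illustrative node group; the real file defines one per workload (TBMQ, Kafka, Redis, ...)
  - name: tbmq-node
    instanceType: c7a.large
    desiredCapacity: 2
    labels: { role: tbmq }
    volumeType: gp3
```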
PostgreSQL consideration (Optional Node Group)
The TBMQ Helm chart supports external PostgreSQL, so you might not need this node group:
```yaml
- name: tbmq-postgresql
  instanceType: c7a.large
  desiredCapacity: 1
  maxSize: 1
  minSize: 0
  labels: { role: postgresql }
  volumeType: gp3
  volumeSize: 20
```
You can safely remove this section if:
- You’re using Amazon RDS or an existing PostgreSQL service.
- You want to keep your database outside the EKS cluster.
IAM setup for OIDC and AWS Load Balancer Controller
By including the following block in your `cluster.yml`, you automatically enable IAM roles for service accounts and provision the AWS Load Balancer Controller service account:
```yaml
iam:
  withOIDC: true
  serviceAccounts:
    - metadata:
        name: aws-load-balancer-controller
        namespace: kube-system
      wellKnownPolicies:
        awsLoadBalancerController: true
```
This configuration:
- Enables OIDC integration (required for IAM roles for service accounts).
- Automatically creates a service account with the appropriate IAM policies for the AWS Load Balancer Controller.
This is the easiest and most integrated way to set things up when using `eksctl`.
Alternatively, if you prefer not to manage IAM inside `cluster.yml`, or your organization requires manual IAM policy creation, you can set it up manually after cluster creation.
Follow these official guides from AWS:
- Create an IAM OIDC provider for your cluster
- Route internet traffic with AWS Load Balancer Controller
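For reference, the manual route typically starts with creating the controller's IAM policy from the upstream policy document. The URL and policy name below follow the AWS guide linked above as of writing; verify them against the current guide before use:

```bash
# Download the IAM policy document published by the aws-load-balancer-controller project
curl -O https://raw.githubusercontent.com/kubernetes-sigs/aws-load-balancer-controller/main/docs/install/iam_policy.json

# Create the policy referenced later in this guide as AWSLoadBalancerControllerIAMPolicy
aws iam create-policy \
  --policy-name AWSLoadBalancerControllerIAMPolicy \
  --policy-document file://iam_policy.json
```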
Add-ons explained
The `addons` section in `cluster.yml` automatically installs and configures essential components that extend the base functionality of your EKS cluster (a sketch of this section follows the list below).
- aws-ebs-csi-driver - enables dynamic provisioning of Amazon EBS volumes using the EBS CSI driver. TBMQ components like Redis and Kafka require persistent storage. This driver allows Kubernetes to provision `gp3` volumes on-demand when a PersistentVolumeClaim is created.
- aws-efs-csi-driver - allows your workloads to use Amazon EFS (Elastic File System) as a persistent volume via the EFS CSI driver. TBMQ doesn't require EFS, but this is useful if you want shared access to the same volume from multiple pods (e.g., for shared logs, config files, or stateful workloads with horizontal scaling).
- vpc-cni - installs the Amazon VPC CNI plugin, which enables Kubernetes pods to have native VPC networking. Provides each pod with its own IP address from the VPC subnet. Essential for efficient pod-to-pod and pod-to-external communication.
- coredns - provides internal DNS resolution for Kubernetes services via CoreDNS.
- kube-proxy - manages network rules on each node to handle service routing, via kube-proxy.
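In `cluster.yml` this section looks roughly like the sketch below. It is illustrative; the downloaded file is authoritative, and the `wellKnownPolicies` entries assume you let eksctl manage the add-on IAM roles:

```yaml
addons:
  - name: vpc-cni
  - name: coredns
  - name: kube-proxy
  - name: aws-ebs-csi-driver
    wellKnownPolicies:
      ebsCSIController: true   # IAM role for provisioning EBS volumes
  - name: aws-efs-csi-driver
    wellKnownPolicies:
      efsCSIController: true   # IAM role for mounting EFS file systems
```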
Create EKS cluster
```bash
eksctl create cluster -f cluster.yml
```
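Cluster creation typically takes around 15-20 minutes. Once it completes, a quick way to confirm that the node groups joined the cluster with the expected role labels is:

```bash
# List nodes with their 'role' label to confirm each node group is up
kubectl get nodes -L role
```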
Create GP3 storage class and make it default
When provisioning persistent storage in Amazon EKS, the `gp3` volume type is the modern, recommended default. It offers superior performance, cost-efficiency, and flexibility compared to `gp2`.
Please download the storage class configuration file:
```bash
curl -o gp3-def-sc.yml https://raw.githubusercontent.com/thingsboard/tbmq/release-2.1.0/k8s/helm/aws/gp3-def-sc.yml
```
Apply the configuration:
```bash
kubectl apply -f gp3-def-sc.yml
```
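For reference, a default-`gp3` StorageClass manifest typically looks like the sketch below, consistent with the `kubectl get sc` output shown later; the downloaded file is authoritative:

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: gp3
  annotations:
    storageclass.kubernetes.io/is-default-class: "true"  # marks gp3 as the default class
provisioner: ebs.csi.aws.com               # served by the aws-ebs-csi-driver add-on
parameters:
  type: gp3
volumeBindingMode: WaitForFirstConsumer    # delay binding until a pod is scheduled
allowVolumeExpansion: true
```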
If a `gp2` StorageClass exists, it may conflict with `gp3`. You can either make the `gp2` storage class non-default:
```bash
kubectl patch storageclass gp2 -p '{"metadata": {"annotations":{"storageclass.kubernetes.io/is-default-class":"false"}}}'
```
Or delete the `gp2` StorageClass (if unused):
```bash
kubectl delete storageclass gp2
```
Check that the `gp3` storage class is available and marked as default:
```bash
kubectl get sc
```
You should see similar output:
```text
NAME            PROVISIONER       RECLAIMPOLICY   VOLUMEBINDINGMODE      ALLOWVOLUMEEXPANSION   AGE
gp3 (default)   ebs.csi.aws.com   Delete          WaitForFirstConsumer   true                   30s
```
Attach Policy
If you've created your EKS cluster using the provided `cluster.yml`, then the following are already configured automatically:
- OIDC provider is enabled (`withOIDC: true`).
- Service account `aws-load-balancer-controller` is created in the `kube-system` namespace.
- The account is annotated for IAM access and linked with the well-known AWS-managed policy.
However, you must manually attach the `AWSLoadBalancerControllerIAMPolicy` (or your custom policy) to the IAM role created by `eksctl`.
- Find the role created by `eksctl`:
```bash
aws iam list-roles \
  --query "Roles[?contains(RoleName, 'eksctl-tbmq')].RoleName" \
  --output text
```
Look for something like:
```text
eksctl-tbmq-addon-iamserviceaccount-kube-syst-Role1-J9l4M87BqmNu
```
- Attach the policy:
Replace both `YOUR_AWS_ACCOUNT_ID` and `ROLE_NAME` with your actual AWS account ID and the IAM role name found in the previous step:
```bash
aws iam attach-role-policy \
  --policy-arn arn:aws:iam::YOUR_AWS_ACCOUNT_ID:policy/AWSLoadBalancerControllerIAMPolicy \
  --role-name ROLE_NAME
```
You can verify the attachment with:
```bash
aws iam list-attached-role-policies --role-name ROLE_NAME
```
You should see similar output:
```text
ATTACHEDPOLICIES   arn:aws:iam::YOUR_AWS_ACCOUNT_ID:policy/AWSLoadBalancerControllerIAMPolicy   AWSLoadBalancerControllerIAMPolicy
```
Create AWS Load Balancer Controller
To support Network Load Balancer (NLB) and Application Load Balancer (ALB) provisioning via Kubernetes annotations, you’ll need to deploy the AWS Load Balancer Controller into your EKS cluster.
```bash
helm repo add eks https://aws.github.io/eks-charts
helm repo update
```
After that, install the controller into the `kube-system` namespace and associate it with your cluster:
```bash
helm install aws-load-balancer-controller eks/aws-load-balancer-controller \
  --namespace kube-system \
  --set clusterName=tbmq \
  --set serviceAccount.create=false \
  --set serviceAccount.name=aws-load-balancer-controller
```
Verify that the controller is installed:
```bash
kubectl get deployment -n kube-system aws-load-balancer-controller
```
An example output is as follows:
```text
NAME                           READY   UP-TO-DATE   AVAILABLE   AGE
aws-load-balancer-controller   2/2     2            2           84s
```
Add the TBMQ Cluster Helm repository
Before installing the chart, add the TBMQ Helm repository to your local Helm client:
```bash
helm repo add tbmq-helm-chart https://helm.thingsboard.io/tbmq
helm repo update
```
Modify default chart values
To customize your TBMQ deployment, first download the default `values.yaml` file from the chart:
```bash
helm show values tbmq-helm-chart/tbmq-cluster > values.yaml
```
Update nodeSelector for Pod Scheduling
To ensure high availability and proper scheduling in your EKS-based TBMQ cluster, you must assign TBMQ components to specific node groups using the `nodeSelector` field in your Helm `values.yaml`.
Your `cluster.yml` already defines dedicated node groups with role-based labels. For example, for the `tbmq-node` managed node group you have:
```yaml
labels: { role: tbmq }
```
You must map each component to the appropriate node group using these labels.
Here’s how to explicitly assign each component:
- TBMQ Broker:
```yaml
tbmq:
  nodeSelector:
    role: tbmq
```
- TBMQ Integration Executor:
```yaml
tbmq-ie:
  nodeSelector:
    role: tbmq-ie
```
- Kafka Controller Nodes:
```yaml
kafka:
  controller:
    nodeSelector:
      role: kafka
```
- Redis Cluster Nodes:
```yaml
redis-cluster:
  redis:
    nodeSelector:
      role: redis
```
- PostgreSQL (if not using external DB):
```yaml
postgresql:
  primary:
    nodeSelector:
      role: postgresql
  backup:
    cronjob:
      nodeSelector:
        role: postgresql
```
External PostgreSQL
By default, the chart installs Bitnami PostgreSQL as a sub-chart:
```yaml
# This section will bring bitnami/postgresql (https://artifacthub.io/packages/helm/bitnami/postgresql) into this chart.
# If you want to add some extra configuration parameters, you can put them under the `postgresql` key, and they will be passed to the bitnami/postgresql chart
postgresql:
  # @param enabled If enabled is set to true, externalPostgresql configuration will be ignored
  enabled: true
```
This provisions a single-node instance with configurable storage, backups, and monitoring options.
For users with an existing PostgreSQL instance, TBMQ can be configured to connect externally.
To do this, disable the built-in PostgreSQL by setting `postgresql.enabled: false` and specify connection details in the `externalPostgresql` section.
```yaml
# If you're deploying PostgreSQL externally, configure this section
externalPostgresql:
  # @param host - External PostgreSQL server host
  host: ""
  # @param port - External PostgreSQL server port
  ##
  port: 5432
  # @param username - PostgreSQL user
  ##
  username: "postgres"
  # @param password - PostgreSQL user password
  ##
  password: "postgres"
  # @param database - PostgreSQL database name for TBMQ
  ##
  database: "thingsboard_mqtt_broker"
```
If you’re deploying on Amazon EKS and plan to use AWS RDS for PostgreSQL, follow this guide to provision and configure your RDS instance.
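For example, when pointing TBMQ at an RDS instance, the relevant values might look like the sketch below; the endpoint is a hypothetical placeholder, so substitute your real RDS endpoint and credentials:

```yaml
postgresql:
  enabled: false                   # disable the built-in sub-chart

externalPostgresql:
  host: "tbmq-db.abc123xyz.us-east-1.rds.amazonaws.com"  # hypothetical RDS endpoint
  port: 5432
  username: "postgres"
  password: "<your-db-password>"
  database: "thingsboard_mqtt_broker"
```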
Load Balancer configuration
By default, the Helm chart deploys a standard NGINX Ingress Controller for HTTP and MQTT traffic when installing TBMQ on Kubernetes.
```yaml
loadbalancer:
  type: "nginx"
```
Since you are deploying the TBMQ cluster on AWS EKS, you need to change this value to:
```yaml
loadbalancer:
  type: "aws"
```
This will automatically configure:
- Plain HTTP traffic to be exposed via AWS Application Load Balancer (ALB).
- Plain MQTT traffic to be exposed via AWS Network Load Balancer (NLB).
HTTPS access
Use AWS Certificate Manager to create or import an SSL certificate. Note your certificate ARN.
Next, you must set `loadbalancer.http.ssl.enabled` to `true` and update `loadbalancer.http.ssl.certificateRef` with the ACM certificate ARN configured before. See the example below:
```yaml
loadbalancer:
  type: "aws"
  http:
    enabled: true
    ssl:
      enabled: true
      certificateRef: "<your-acm-certificate-arn-for-alb>"
```
MQTTS access
TLS termination (One-way TLS)
The simplest way to configure MQTTS is to make your MQTT load balancer (AWS NLB) act as a TLS termination point. This sets up a one-way TLS connection, where the traffic between your devices and the load balancer is encrypted, while the traffic between the load balancer and TBMQ is not. There should be no security issues, since the ALB/NLB is running in your VPC. The only major disadvantage of this option is that you can't use "X.509 certificate" MQTT client credentials, since information about the client certificate is not transferred from the load balancer to TBMQ.
Use AWS Certificate Manager to create or import an SSL certificate. Note your certificate ARN.
Next, you must set `loadbalancer.mqtt.tlsTermination.enabled` to `true` and update `loadbalancer.mqtt.tlsTermination.certificateRef` with the ACM certificate ARN configured before. See the example below:
```yaml
loadbalancer:
  type: "aws"
  mqtt:
    enabled: true
    tlsTermination:
      enabled: true
      certificateRef: "<your-acm-certificate-arn-for-nlb>"
```
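Once the chart is deployed with this configuration, you can verify that the NLB presents your ACM certificate by opening a TLS connection to the MQTTS port (8883, as shown in the service output later in this guide); the hostname below is a placeholder for your MQTT load balancer DNS name:

```bash
# Inspect the TLS handshake and certificate chain served by the NLB
openssl s_client -connect <your-mqtt-lb-dns-name>:8883 </dev/null
```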
Mutual TLS (Two-Way TLS or mTLS)
The more complex way to enable MQTTS is to obtain a valid (signed) TLS certificate and configure it in TBMQ. The main advantage of this option is that you may use it in combination with "X.509 certificate" MQTT client credentials.
Please refer to the TBMQ Helm chart documentation for details on configuring Two-Way TLS.
Create namespace
It’s a good practice to create a dedicated namespace for your TBMQ cluster deployment:
```bash
kubectl create namespace tbmq
```
```bash
kubectl config set-context --current --namespace=tbmq
```
This sets `tbmq` as the default namespace for your current context, so you don't need to pass `--namespace` to every command.
Install the TBMQ Helm chart
Now you're ready to install TBMQ using the Helm chart. Make sure you're in the same directory as your customized `values.yaml` file.
```bash
helm install my-tbmq-cluster tbmq-helm-chart/tbmq-cluster \
  -f values.yaml \
  --set installation.installDbSchema=true
```
Once the deployment process is completed, you should see output similar to the following:
```text
NAME: my-tbmq-cluster
LAST DEPLOYED: Wed Mar 26 17:42:49 2025
NAMESPACE: tbmq
STATUS: deployed
REVISION: 1
TEST SUITE: None
NOTES:
TBMQ Cluster my-tbmq-cluster will be deployed in few minutes.
Info:
Namespace: tbmq
```
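While the cluster starts up, you can watch the pods until all of them report `Running`:

```bash
# Watch pod status in the current (tbmq) namespace; press Ctrl+C to stop
kubectl get pods -w
```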
Validate HTTP Access
Now you can open the TBMQ web interface in your browser using the DNS name of the load balancer. You can get the DNS name of the load balancer using the following command:
```bash
kubectl get ingress
```
You should see output similar to this:
```text
NAME                      CLASS    HOSTS   ADDRESS                                                             PORTS   AGE
my-tbmq-cluster-http-lb   <none>   *       k8s-tbmq-mytbmq-000aba1305-222186756.eu-west-1.elb.amazonaws.com   80      3d1h
```
Use the `ADDRESS` field of the `my-tbmq-cluster-http-lb` ingress to connect to the cluster.
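As a quick sanity check before opening the browser, you can probe the ALB over HTTP; replace the hostname with your `ADDRESS` value:

```bash
# Print only the HTTP status code returned by the load balancer
curl -sS -o /dev/null -w "%{http_code}\n" http://<your-alb-address>
```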
You should see the TBMQ login page. Use the following default credentials for the System Administrator:
- Username: sysadmin@thingsboard.org
- Password: sysadmin
On the first login, you will be asked to change the default password to a preferred one and then re-login using the new credentials.
Validate MQTT Access
To connect to the cluster via MQTT, you will need the address of the corresponding service. You can get it with the command:
```bash
kubectl get services
```
You should see output similar to this:
```text
NAME                      TYPE           CLUSTER-IP       EXTERNAL-IP                                                                PORT(S)                         AGE
my-tbmq-cluster-mqtt-lb   LoadBalancer   10.100.119.170   k8s-tbmq-mytbmq-b9f99d1ab6-1049a98ba4e28403.elb.eu-west-1.amazonaws.com   1883:30308/TCP,8883:31609/TCP   6m58s
```
Use the `EXTERNAL-IP` field of the load balancer to connect to the cluster via the MQTT protocol.
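For a quick end-to-end check, you can publish a test message with the `mosquitto_pub` CLI from the Mosquitto clients package. The topic and credentials below are placeholders, assuming you have already created matching MQTT client credentials in TBMQ:

```bash
# Publish a test message over plain MQTT (port 1883); -d prints the handshake
mosquitto_pub -d \
  -h <your-mqtt-lb-external-ip> \
  -p 1883 \
  -u <mqtt-username> -P <mqtt-password> \
  -t sensors/test \
  -m 'Hello, TBMQ!'
```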
Troubleshooting
In case of any issues, you can examine service logs for errors. For example, to see TBMQ logs, execute the following command:
```bash
kubectl logs -f my-tbmq-cluster-tbmq-node-0
```
Use the following command to see the state of all StatefulSets:
```bash
kubectl get statefulsets
```
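If a pod stays in `Pending` or `CrashLoopBackOff`, its events usually explain why (for example, no node matching the `nodeSelector`, or an unbound PersistentVolumeClaim); the pod name below is an example:

```bash
# Show pod status, then inspect the events of a problematic pod
kubectl get pods
kubectl describe pod my-tbmq-cluster-tbmq-node-0
```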
See kubectl Cheat Sheet command reference for more details.
Upgrading
Helm support was introduced with the TBMQ 2.1.0 release. Upgrade options were not included in the initial version of the Helm chart and will be provided alongside a future TBMQ release. This section will be updated once a new version of TBMQ and its Helm chart become available.
Uninstalling TBMQ Helm chart
To uninstall the TBMQ Helm chart, run the following command:
```bash
helm delete my-tbmq-cluster
```
This command removes all TBMQ components associated with the release from the namespace set in your current Kubernetes context.
The `helm delete` command removes only the logical resources of the TBMQ cluster. To fully clean up all persistent data, you may also need to manually delete the associated Persistent Volume Claims (PVCs) after uninstallation:
```bash
kubectl delete pvc -l app.kubernetes.io/instance=my-tbmq-cluster
```
Delete Kubernetes Cluster
Execute the following command to delete the EKS cluster:
```bash
eksctl delete cluster -r us-east-1 -n tbmq -w
```
Next steps
- Getting started guide - This guide provides a quick overview of TBMQ.
- Security guide - Learn how to enable authentication and authorization of MQTT clients.
- Configuration guide - Learn about TBMQ configuration files and parameters.
- MQTT client type guide - Learn about TBMQ client types.
- Integration with ThingsBoard - Learn how to integrate TBMQ with ThingsBoard.