Helm · AWS EKS

This guide covers deploying a TBMQ cluster using the official Helm chart on AWS using Elastic Kubernetes Service (EKS).

To deploy a TBMQ cluster using Helm on EKS, you need the following tools installed on your local machine:

  • AWS CLI — used to configure credentials and manage IAM
  • eksctl — used to create and delete the EKS cluster
  • kubectl — used to interact with the cluster
  • Helm — used to install the TBMQ chart

Afterward, configure your Access Key, Secret Key, and default region. To get Access and Secret keys, follow this guide. The default region should be the ID of the region where you’d like to deploy the cluster.

Terminal window
aws configure
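
To confirm the credentials are picked up, you can ask the AWS CLI which identity they resolve to (a standard STS call, shown here as an optional sanity check):

Terminal window
aws sts get-caller-identity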

To deploy the EKS cluster, use the pre-defined EKS cluster configuration file. Download it using the following command:

Terminal window
curl -o cluster.yml https://raw.githubusercontent.com/thingsboard/tbmq/release-2.3.0/k8s/helm/aws/cluster.yml

Here are the fields you can change depending on your needs:

  • region — AWS region where you want your cluster to be located (default: us-east-1)
  • availabilityZones — exact IDs of the region’s availability zones (default: [ us-east-1a,us-east-1b,us-east-1c ])
  • instanceType — type of EC2 instances for node groups; change per workload (e.g., TBMQ, Redis, Kafka)
  • desiredCapacity — number of nodes per node group; defaults are suggested for testing
  • volumeType — type of EBS volume for EC2 nodes; defaults to gp3

Refer to Amazon EC2 Instance types to choose the right instance types for your production workloads.

Starting with chart version 2.0.0, the TBMQ Helm chart no longer bundles Kafka, Redis, or PostgreSQL. The cluster.yml still provisions dedicated node groups for each one so you can self-host them on EKS with role-based scheduling. Keep the matching node group only for the dependencies you plan to run inside the cluster:

  • tbmq-kafka: Keep when self-hosting Kafka, like with the Strimzi operator. Remove when using Amazon MSK or an existing managed Kafka service.
  • tbmq-redis: Keep when self-hosting a Redis-compatible cache, like Valkey via its operator or Helm chart. Remove when using Amazon ElastiCache or an existing managed service.
  • tbmq-postgresql: Keep when self-hosting PostgreSQL, like with the CrunchyData PGO operator. Remove when using Amazon RDS or an existing managed PostgreSQL service.

Each block has the same shape; the tbmq-postgresql group is shown here as an example:

- name: tbmq-postgresql
  instanceType: c7a.large
  desiredCapacity: 1
  maxSize: 1
  minSize: 0
  labels: { role: postgresql }
  volumeType: gp3
  volumeSize: 20

Removing a node group at this stage simply means deleting its block from cluster.yml before you run eksctl create cluster -f cluster.yml.

IAM setup for OIDC and AWS Load Balancer Controller

By including the following block in your cluster.yml, you automatically enable IAM roles for service accounts and provision the AWS Load Balancer Controller service account:

iam:
  withOIDC: true
  serviceAccounts:
    - metadata:
        name: aws-load-balancer-controller
        namespace: kube-system
      wellKnownPolicies:
        awsLoadBalancerController: true

This configuration:

  • Enables OIDC integration (required for IAM roles for service accounts).
  • Automatically creates a service account with the appropriate IAM policies for the AWS Load Balancer Controller.

This is the most streamlined approach when using eksctl.

Alternatively, if you prefer not to manage IAM inside cluster.yml, or your organization requires manual IAM policy creation, you can set everything up manually after cluster creation by following the official AWS guides for IAM roles for service accounts and for installing the AWS Load Balancer Controller.
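
As a sketch of the manual path, the AWS Load Balancer Controller installation guide has you download the controller's IAM policy document and create the policy yourself — the controller version in the URL below is an assumption, so substitute the release you actually deploy:

Terminal window
curl -o iam_policy.json https://raw.githubusercontent.com/kubernetes-sigs/aws-load-balancer-controller/v2.7.2/docs/install/iam_policy.json
aws iam create-policy \
  --policy-name AWSLoadBalancerControllerIAMPolicy \
  --policy-document file://iam_policy.json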

The addons section in cluster.yml automatically installs and configures essential components that extend the base functionality of your EKS cluster:

  • aws-ebs-csi-driver — enables dynamic provisioning of Amazon EBS volumes using the EBS CSI driver. TBMQ components like Redis and Kafka require persistent storage. This driver allows Kubernetes to provision gp3 volumes on-demand when a PersistentVolumeClaim is created.
  • aws-efs-csi-driver — allows workloads to use Amazon EFS as a persistent volume via the EFS CSI driver. TBMQ doesn’t require EFS, but it’s useful for shared access to the same volume from multiple pods.
  • vpc-cni — installs the Amazon VPC CNI plugin, enabling Kubernetes pods to have native VPC networking with their own IP address.
  • coredns — provides internal DNS resolution for Kubernetes services via CoreDNS.
  • kube-proxy — manages network rules on each node to handle service routing via kube-proxy.
Once cluster.yml matches your desired setup, create the cluster:

Terminal window
eksctl create cluster -f cluster.yml
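
Cluster creation takes several minutes. Once it finishes, you can verify that the nodes registered with the expected role labels — this assumes each node group in your cluster.yml carries a labels: { role: ... } block, as in the tbmq-postgresql example above:

Terminal window
kubectl get nodes -L role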

Create gp3 storage class and make it default

The gp3 EBS volume type is the recommended default for Amazon EKS, offering better performance, cost efficiency, and flexibility compared to gp2.

Download the storage class configuration file:

Terminal window
curl -o gp3-def-sc.yml https://raw.githubusercontent.com/thingsboard/tbmq/release-2.3.0/k8s/helm/aws/gp3-def-sc.yml
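
For reference, the downloaded file defines a StorageClass along these lines — a sketch consistent with the expected kubectl get sc output shown below, though the parameters in the upstream file may differ:

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: gp3
  annotations:
    storageclass.kubernetes.io/is-default-class: "true"
provisioner: ebs.csi.aws.com
parameters:
  type: gp3
reclaimPolicy: Delete
volumeBindingMode: WaitForFirstConsumer
allowVolumeExpansion: true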

Apply the configuration:

Terminal window
kubectl apply -f gp3-def-sc.yml

If a gp2 StorageClass exists and still carries the default annotation, it conflicts with the new gp3 default. Either make gp2 non-default:

Terminal window
kubectl patch storageclass gp2 -p '{"metadata": {"annotations":{"storageclass.kubernetes.io/is-default-class":"false"}}}'

Or delete the gp2 StorageClass (if unused):

Terminal window
kubectl delete storageclass gp2

Verify that the gp3 storage class is available and marked as default:

Terminal window
kubectl get sc

Expected output:

NAME            PROVISIONER       RECLAIMPOLICY   VOLUMEBINDINGMODE      ALLOWVOLUMEEXPANSION   AGE
gp3 (default)   ebs.csi.aws.com   Delete          WaitForFirstConsumer   true                   30s

If you created your EKS cluster using the provided cluster.yml, the following are already configured automatically:

  • OIDC provider is enabled (withOIDC: true)
  • Service account aws-load-balancer-controller is created in the kube-system namespace
  • The account is annotated for IAM access and linked with the well-known AWS-managed policy

However, you must manually attach the AWSLoadBalancerControllerIAMPolicy (or your custom policy) to the IAM role created by eksctl.

Find the role created by eksctl for the aws-load-balancer-controller service account. The query filters on the iamserviceaccount-kube-syst substring, because eksctl create cluster also provisions other roles whose names start with eksctl-tbmq- (cluster service role, node instance roles, addon roles for EBS/EFS/VPC CNI):

Terminal window
aws iam list-roles \
  --query "Roles[?contains(RoleName, 'eksctl-tbmq-addon-iamserviceaccount-kube-syst')].RoleName" \
  --output text

The output looks something like:

eksctl-tbmq-addon-iamserviceaccount-kube-syst-Role1-J9l4M87BqmNu
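
If you prefer to script the next steps, you can capture the role name in a shell variable — this assumes the query returns exactly one role:

Terminal window
ROLE_NAME=$(aws iam list-roles \
  --query "Roles[?contains(RoleName, 'eksctl-tbmq-addon-iamserviceaccount-kube-syst')].RoleName" \
  --output text)
echo "$ROLE_NAME"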

Attach the policy — replace YOUR_AWS_ACCOUNT_ID and ROLE_NAME with your actual values:

Terminal window
aws iam attach-role-policy \
  --policy-arn arn:aws:iam::YOUR_AWS_ACCOUNT_ID:policy/AWSLoadBalancerControllerIAMPolicy \
  --role-name ROLE_NAME

Verify the attachment:

Terminal window
aws iam list-attached-role-policies --role-name ROLE_NAME

Expected output:

ATTACHEDPOLICIES arn:aws:iam::YOUR_AWS_ACCOUNT_ID:policy/AWSLoadBalancerControllerIAMPolicy AWSLoadBalancerControllerIAMPolicy

To support NLB and ALB provisioning via Kubernetes annotations, deploy the AWS Load Balancer Controller:

Terminal window
helm repo add eks https://aws.github.io/eks-charts
helm repo update

Install the controller into the kube-system namespace:

Terminal window
helm install aws-load-balancer-controller eks/aws-load-balancer-controller \
  --namespace kube-system \
  --set clusterName=tbmq \
  --set serviceAccount.create=false \
  --set serviceAccount.name=aws-load-balancer-controller

Verify that the controller is installed:

Terminal window
kubectl get deployment -n kube-system aws-load-balancer-controller

Expected output:

NAME                           READY   UP-TO-DATE   AVAILABLE   AGE
aws-load-balancer-controller   2/2     2            2           84s

Before installing the chart, add the TBMQ Helm repository to your local Helm client:

Terminal window
helm repo add tbmq-helm-chart https://helm.thingsboard.io/tbmq
helm repo update

Create a dedicated namespace for your TBMQ cluster deployment:

Terminal window
kubectl create namespace tbmq
Terminal window
kubectl config set-context --current --namespace=tbmq

This sets tbmq as the default namespace for your current context, so you don’t need to pass --namespace to every command.
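
To double-check which namespace the current context now points at:

Terminal window
kubectl config view --minify --output 'jsonpath={..namespace}'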

To customize your TBMQ deployment, download the default values.yaml from the chart:

Terminal window
helm show values tbmq-helm-chart/tbmq-cluster > values.yaml

The chart manages two component StatefulSets: the broker (tbmq) and the Integration Executor (tbmq-ie). Pin both to the matching node groups your cluster.yml provisions:

tbmq:
  nodeSelector:
    role: tbmq
tbmq-ie:
  nodeSelector:
    role: tbmq-ie

Scheduling for your PostgreSQL, Kafka, and Redis-compatible deployments is configured through their own tooling. The chart no longer manages those workloads.
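
For example, if you self-host PostgreSQL on the tbmq-postgresql node group, the pinning happens in that tool's own values or custom resource — a hypothetical snippet for a chart that exposes a nodeSelector field:

# Hypothetical values for a self-hosted PostgreSQL chart; the exact field
# name and location depend on the chart or operator you choose.
nodeSelector:
  role: postgresql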

Starting with chart version 2.0.0, the TBMQ Helm chart no longer bundles PostgreSQL, Kafka, or a Redis-compatible cache. This step assumes you have already deployed all three so they’re reachable from the tbmq namespace. On AWS, the common choices are managed services: Amazon RDS for PostgreSQL, Amazon MSK for Kafka, and Amazon ElastiCache for Redis. You can also self-host any of them on the dedicated tbmq-postgresql, tbmq-kafka, and tbmq-redis node groups defined in cluster.yml. For an RDS-specific provisioning walkthrough, see the AWS cluster setup guide.

Point the chart at your instances by setting the connection values in values.yaml. At a minimum, set:

postgresql:
  host: "my-tbmq-db.xxxxxx.us-east-1.rds.amazonaws.com"
  port: 5432
  database: "thingsboard_mqtt_broker"
  username: "postgres"
  existingSecret: "my-pg-secret"
  existingSecretPasswordKey: "password"
kafka:
  bootstrapServers: "b-1.my-msk.xxxxxx.kafka.us-east-1.amazonaws.com:9092,b-2.my-msk.xxxxxx.kafka.us-east-1.amazonaws.com:9092"
redis:
  connectionType: "cluster"
  nodes: "my-cache.xxxxxx.0001.use1.cache.amazonaws.com:6379,my-cache.xxxxxx.0002.use1.cache.amazonaws.com:6379"
  existingSecret: "my-redis-secret"
  existingSecretPasswordKey: "redis-password"

Replace the hostnames, credentials, and redis.connectionType (standalone or cluster) with values that match your actual deployment. For the full set of supported keys, see the Infrastructure Configuration section of the chart documentation.
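
The existingSecret references above assume those Secrets already exist in the tbmq namespace. A minimal sketch of creating them, with key names matching the existingSecretPasswordKey values used above (the secret names and passwords are placeholders):

Terminal window
kubectl create secret generic my-pg-secret -n tbmq \
  --from-literal=password='YOUR_DB_PASSWORD'
kubectl create secret generic my-redis-secret -n tbmq \
  --from-literal=redis-password='YOUR_REDIS_PASSWORD'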

Configure license and broker images

The chart defaults point at the open-source broker and Integration Executor images. Switch both to the PE variants and provide a license.

In your values.yaml, update the tbmq.image and tbmq-ie.image blocks:

tbmq:
  image:
    repository: thingsboard/tbmq-pe-node
    tag: 2.3.0PE
tbmq-ie:
  image:
    repository: thingsboard/tbmq-pe-integration-executor
    tag: 2.3.0PE

Pre-create a Kubernetes Secret holding the license value in the namespace you plan to install into, and reference it from values.yaml:

Terminal window
kubectl create secret generic my-tbmq-license -n tbmq \
  --from-literal=license-key='YOUR_LICENSE_VALUE'

Then reference it in values.yaml:

license:
  existingSecret: my-tbmq-license

By default, the Helm chart deploys a standard NGINX Ingress Controller for HTTP and MQTT traffic. Since you are deploying on AWS EKS, change the load balancer type:

loadbalancer:
  type: "aws"

This automatically configures:

  • Plain HTTP traffic exposed via AWS Application Load Balancer (ALB)
  • Plain MQTT traffic exposed via AWS Network Load Balancer (NLB)

Use AWS Certificate Manager to create or import an SSL certificate. Note your certificate ARN.

Set loadbalancer.http.ssl.enabled to true and update loadbalancer.http.ssl.certificateRef with the ACM certificate ARN:

loadbalancer:
  type: "aws"
  http:
    enabled: true
    ssl:
      enabled: true
      certificateRef: "<your-acm-certificate-arn-for-alb>"

The most common way to configure MQTTS is to use the AWS NLB as a TLS termination point. This sets up one-way TLS — traffic between devices and the load balancer is encrypted, while traffic between the load balancer and TBMQ runs unencrypted within your VPC.

Use AWS Certificate Manager to create or import an SSL certificate. Note your certificate ARN.

Set loadbalancer.mqtt.tlsTermination.enabled to true and update loadbalancer.mqtt.tlsTermination.certificateRef:

loadbalancer:
  type: "aws"
  mqtt:
    enabled: true
    tlsTermination:
      enabled: true
      certificateRef: "<your-acm-certificate-arn-for-nlb>"

For full mTLS, obtain a valid signed TLS certificate and configure it in TBMQ. This option supports X.509 certificate MQTT client credentials.

Refer to the TBMQ Helm chart documentation for details on configuring mTLS.

Make sure you’re in the same directory as your customized values.yaml file, then install TBMQ:

Terminal window
helm install my-tbmq-cluster tbmq-helm-chart/tbmq-cluster \
  -f values.yaml \
  --set installation.installDbSchema=true

Once the deployment completes, you should see output similar to:

NAME: my-tbmq-cluster
LAST DEPLOYED: Wed Mar 26 17:42:49 2025
NAMESPACE: tbmq
STATUS: deployed
REVISION: 1
TEST SUITE: None
NOTES:
TBMQ Cluster my-tbmq-cluster will be deployed in few minutes.
Info:
Namespace: tbmq

Get the DNS name of the load balancers:

Terminal window
kubectl get ingress

Expected output:

NAME                      CLASS    HOSTS   ADDRESS                                                             PORTS   AGE
my-tbmq-cluster-http-lb   <none>   *       k8s-tbmq-mytbmq-000aba1305-222186756.eu-west-1.elb.amazonaws.com   80      3d1h

Use the ADDRESS field of my-tbmq-cluster-http-lb to open the TBMQ web interface in your browser.

You should see the TBMQ login page. Use the default System Administrator credentials:

Username: sysadmin@thingsboard.org

Password: sysadmin

On first login, you are prompted to change the default password and re-login with the new credentials.

The service my-tbmq-cluster-mqtt-lb is the LoadBalancer used for MQTT communication. Retrieve its EXTERNAL-IP with:

Terminal window
kubectl get services

Expected output:

NAME                      TYPE           CLUSTER-IP       EXTERNAL-IP                                                                PORT(S)                         AGE
my-tbmq-cluster-mqtt-lb   LoadBalancer   10.100.119.170   k8s-tbmq-mytbmq-b9f99d1ab6-1049a98ba4e28403.elb.eu-west-1.amazonaws.com   1883:30308/TCP,8883:31609/TCP   6m58s

Use the EXTERNAL-IP field to connect to the cluster via MQTT.
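
For a quick connectivity check, you can publish a test message with any MQTT client — for example mosquitto_pub, assuming it is installed locally; the topic and payload here are arbitrary, and you would add -u/-P flags if you have configured MQTT basic credentials:

Terminal window
mosquitto_pub -d -h <EXTERNAL-IP> -p 1883 \
  -t 'sensors/demo' -m 'hello from TBMQ'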

To examine service logs for errors, view TBMQ logs with:

Terminal window
kubectl logs -f my-tbmq-cluster-tbmq-node-0

Check the state of all StatefulSets:

Terminal window
kubectl get statefulsets

See the kubectl Cheat Sheet for more details.

Chart version 2.0.0 is a breaking change from the 1.x line: the Bitnami PostgreSQL, Kafka, and Redis subcharts that earlier versions bundled have been removed. Upgrades from chart 1.x require either keeping the existing in-cluster Bitnami stack (annotated with helm.sh/resource-policy: keep) or provisioning a fresh third-party stack (PostgreSQL 17, Apache Kafka 4.0.0, Valkey 8.0+). Same-edition upgrades and CE → PE cross-edition migration on the same TBMQ version are also supported via the chart’s pre-upgrade hook.

For the full procedure, refer to the Upgrading section of the TBMQ Helm Chart documentation on Artifact Hub. It covers the chart 1.x → 2.0.0 migration paths, the upgrade.upgradeDbSchema and upgrade.fromVersion flags, and the CE → PE migration steps.
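
As a hedged sketch of a same-edition upgrade using the flags mentioned above — the exact values depend on your starting point, so follow the Artifact Hub procedure for your migration path:

Terminal window
helm upgrade my-tbmq-cluster tbmq-helm-chart/tbmq-cluster \
  -f values.yaml \
  --set upgrade.upgradeDbSchema=true \
  --set upgrade.fromVersion=<your-current-tbmq-version>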

To uninstall the TBMQ Helm chart:

Terminal window
helm delete my-tbmq-cluster

This removes all TBMQ components associated with the release from the current namespace.

The helm delete command removes only the logical resources of the TBMQ cluster. Per-pod PVCs created from the broker’s volumeClaimTemplate are intentionally not deleted by helm uninstall and carry no app= label. Drop them explicitly by name pattern:

Terminal window
kubectl get pvc -n tbmq -o name | grep tbmq-node-data | xargs -I {} kubectl delete -n tbmq {}

Amazon RDS, MSK, and ElastiCache resources are owned by AWS and are not affected by helm delete. Drop them through their own console or IaC tooling if you no longer need them.

To delete the EKS cluster:

Terminal window
eksctl delete cluster -r us-east-1 -n tbmq -w