- Prerequisites
- Clone TBMQ PE K8S repository
- Configure and create EKS cluster
- Create an AWS load-balancer controller
- Amazon PostgreSQL DB Configuration
- Amazon MSK Configuration
- Amazon ElastiCache (Valkey) Configuration
- Configure links to the Kafka/Postgres/Valkey
- Installation
- Get the license key
- Configure the license key
- Configure gp3 as the Default Storage Class in Your EKS Cluster
- Starting
- Configure Load Balancers
- Validate the setup
- Upgrading
- Cluster deletion
- Next steps
This guide will help you set up TBMQ PE on AWS EKS.
Prerequisites
Install and configure tools
To deploy TBMQ on an EKS cluster, you'll need to install the kubectl, eksctl, and awscli tools.
Afterward, you need to configure the Access Key, Secret Key, and default region. To get the Access and Secret keys, please follow this guide. The default region should be the ID of the region where you'd like to deploy the cluster.
aws configure
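To quickly confirm that the credentials are configured correctly, you can run an optional sanity check (not part of the original setup steps):
# Prints the account and IAM identity the CLI is using; fails if credentials are misconfigured
aws sts get-caller-identity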
Clone TBMQ PE K8S repository
git clone -b release-2.2.0 https://github.com/thingsboard/tbmq-pe-k8s.git
cd tbmq-pe-k8s/aws
Configure and create EKS cluster
In the cluster.yml file you will find a sample cluster configuration.
You can adjust the following fields according to your requirements:
- region – the AWS region where the cluster will be created. Default: us-east-1.
- availabilityZones – the availability zones within the chosen region. Default: [us-east-1a, us-east-1b, us-east-1c].
- managedNodeGroups – defines the node groups used by the cluster. By default, there are two groups: one for TBMQ core services and another for TBMQ Integration Executors. If preferred, you may co-locate both workloads in the same node group.
- instanceType – the EC2 instance type for TBMQ and TBMQ IE nodes. Default: m7a.large.
Note: If you don't change the instanceType and desiredCapacity fields, EKS will deploy 4 nodes of type m7a.large.
Command to create the AWS cluster:
eksctl create cluster -f cluster.yml
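Cluster creation typically takes 15-25 minutes. Once eksctl finishes (it updates your kubeconfig automatically), you can optionally verify that the worker nodes joined the cluster; with the default configuration you should see 4 nodes:
# Lists the EKS worker nodes; all should report STATUS Ready
kubectl get nodes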
Create an AWS load-balancer controller
Once the cluster is ready, you'll need to create the AWS Load Balancer Controller. You can do it by following this guide. The cluster provisioning scripts will create several load balancers:
- tbmq-http-loadbalancer - AWS ALB that is responsible for the web UI and REST API;
- tbmq-mqtt-loadbalancer - AWS NLB that is responsible for the MQTT communication.
Provisioning the AWS Load Balancer Controller is an essential step: without it, those load balancers will not work properly.
Amazon PostgreSQL DB Configuration
You’ll need to provision a PostgreSQL database on Amazon RDS. One recommended way is to follow the official AWS RDS setup guide.
Recommendations:
- PostgreSQL version: Use version 17.x.
- Template: Use Production for real workloads. It enables important settings by default to improve resilience and reliability; reserve Dev/Test only for non-critical testing.
- Availability: Enable Multi-AZ deployment to ensure automatic failover and minimize downtime.
- Credentials: Change the default username and set (or auto-generate) a secure password. Be sure to store the password safely for future use.
- Instance configuration: Use a small general-purpose Graviton instance (e.g., db.m7g.large) — TBMQ’s PostgreSQL load is modest; right-size first, optimize later.
- Scaling: Scale vertically (instance class/size) if sustained CPU >80% or active connections near limits; change type during a maintenance window.
- Storage: Choose gp3 or io1 volumes for production; avoid magnetic storage.
- Connectivity: Ensure your RDS database is accessible from your EKS cluster. A straightforward approach is to create the database in the same VPC and subnets as your TBMQ cluster, and assign the eksctl-tbmq-cluster-ClusterSharedNodeSecurityGroup-* security group to the RDS instance (a quick connectivity check is sketched after this list).
- Parameter group: Create a custom parameter group for your instance. This makes it easier to adjust database parameters later without affecting other databases.
- Monitoring: Enable enhanced monitoring and set up CloudWatch alarms for key metrics.
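To verify connectivity from the EKS cluster, you can run a throwaway psql pod against the RDS endpoint. This is an optional sketch; YOUR_RDS_ENDPOINT, YOUR_DB_USERNAME, and YOUR_DB_PASSWORD are placeholders for your actual values:
# Starts a temporary pod inside the cluster and runs a single query against RDS
kubectl run pg-check --rm -it --restart=Never --image=postgres:17 \
  --env="PGPASSWORD=YOUR_DB_PASSWORD" -- \
  psql -h YOUR_RDS_ENDPOINT -U YOUR_DB_USERNAME -d postgres -c 'SELECT version();'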
Amazon MSK Configuration
You’ll need to provision an Amazon MSK cluster. To do this, open the AWS Console, navigate to MSK, click Create cluster, and select Custom create mode.
Recommendations:
- Cluster type: Select Provisioned for full control over broker capacity and configuration.
- Kafka version: Use Apache Kafka 4.0.x — this version has been fully validated with TBMQ.
- Metadata mode: Choose KRaft (controller quorum) for simplified operations and improved resiliency compared to ZooKeeper.
- Instance type: Start with m7g.large brokers (or equivalent) for a good balance of performance and cost; scale up later if required.
- Cluster configuration: Create a custom configuration to simplify future parameter changes without needing to recreate the cluster.
- Networking: Deploy the MSK cluster in the same VPC as your TBMQ cluster, using private subnets to minimize exposure. Attach the eksctl-tbmq-cluster-ClusterSharedNodeSecurityGroup-* security group to allow connectivity from EKS nodes.
- Security: Allow Unauthenticated access and Plaintext communication. Adjust later if you need stricter security policies.
- Monitoring: Use the default monitoring options or enable enhanced topic-level monitoring for detailed Kafka metrics.
Carefully review the full cluster configuration, then proceed with cluster creation.
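Provisioning can take 20 minutes or more. While you wait, you can optionally track the state from the CLI (adjust the region if yours differs):
# Shows the name and state of MSK clusters in the region; wait for ACTIVE
aws kafka list-clusters --region us-east-1 \
  --query 'ClusterInfoList[].{Name:ClusterName,State:State}' --output table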
Amazon ElastiCache (Valkey) Configuration
TBMQ relies on Valkey to store messages for DEVICE persistent clients. The cache also improves performance by reducing the number of direct database reads, especially when authentication is enabled and multiple clients connect at once. Without caching, every new connection triggers a database query to validate MQTT client credentials, which can cause unnecessary load under high connection rates.
To set up Valkey, open the AWS Console → ElastiCache → Valkey caches → Create cache.
Recommendations:
- Engine: Select Valkey (recommended) as the engine type.
- Deployment option: Choose Design your own cache → Cluster cache to customize node type, shard count, and replicas.
- Cluster mode:
- Set to Enabled if you configure TBMQ with REDIS_CONNECTION_TYPE=cluster (in this guide, we follow this approach).
- Set to Disabled if you configure TBMQ with REDIS_CONNECTION_TYPE=standalone.
- Engine version: Use 8.x, which is fully supported and compatible with Redis OSS v7.
- Node type: Start with cache.r7g.large (13 GB memory, good network performance). A smaller type with at least 1 GB RAM can be used for dev/test environments.
- Shards: For production, configure 3 shards with 1 replica per shard to balance durability and scalability.
- Parameter groups: Use the default Valkey 8.x group or create a custom parameter group for easier tuning later.
- Networking:
- Deploy into the same VPC as your TBMQ cluster.
- Use private subnets to avoid exposure to the internet.
- Assign the eksctl-tbmq-cluster-ClusterSharedNodeSecurityGroup-* security group to allow secure communication between EKS nodes and Valkey.
- Security: Disable encryption at rest and in transit if you plan to use plaintext/unauthenticated connections. Enable them if stricter security is required.
- Backups: Enable automatic backups to protect persistent cache data. Choose a retention period that matches your recovery needs (e.g., 1–7 days). This ensures you can restore the cache in case of accidental data loss or cluster issues.
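Once the cache is available, you can optionally verify reachability from inside the EKS cluster with a throwaway redis-cli pod. This sketch assumes the plaintext/unauthenticated setup described above; YOUR_VALKEY_CLUSTER_ENDPOINT_URL is a placeholder:
# PONG confirms that EKS nodes can reach the Valkey cluster endpoint on port 6379
kubectl run valkey-check --rm -it --restart=Never --image=redis:7 -- \
  redis-cli -c -h YOUR_VALKEY_CLUSTER_ENDPOINT_URL -p 6379 ping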
Configure links to the Kafka/Postgres/Valkey
Amazon RDS (PostgreSQL)
When the RDS PostgreSQL instance switches to the Available state, open the AWS Console and copy its Endpoint.
Update the SPRING_DATASOURCE_URL field in tbmq-db-configmap.yml by replacing the placeholder RDS_URL_HERE with the copied endpoint.
Also, set the following environment variables with your RDS credentials:
- SPRING_DATASOURCE_USERNAME → your PostgreSQL username
- SPRING_DATASOURCE_PASSWORD → your PostgreSQL password
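If you prefer to script the edit, a one-liner like the following can substitute the placeholder (a sketch using GNU sed; the endpoint value shown is purely illustrative):
# Replaces the RDS_URL_HERE placeholder with your actual endpoint
sed -i 's|RDS_URL_HERE|tbmq-db.abc123xyz456.us-east-1.rds.amazonaws.com|' tbmq-db-configmap.yml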
Amazon MSK (Kafka)
When the MSK cluster becomes Active, retrieve the list of bootstrap brokers with:
aws kafka get-bootstrap-brokers --region us-east-1 --cluster-arn $CLUSTER_ARN
Here, $CLUSTER_ARN is the Amazon Resource Name of your MSK cluster.
Copy the value from BootstrapBrokerString and set it as the TB_KAFKA_SERVERS environment variable in tbmq.yml and tbmq-ie.yml.
Alternatively, click View client information in the MSK Console and copy the plaintext bootstrap servers from the UI.
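If you don't have the ARN at hand, you can look it up by cluster name. This is a sketch; YOUR_MSK_CLUSTER_NAME is a placeholder for the name you chose during creation:
# Resolves the cluster ARN by name and stores it for the get-bootstrap-brokers call above
CLUSTER_ARN=$(aws kafka list-clusters --region us-east-1 \
  --cluster-name-filter YOUR_MSK_CLUSTER_NAME \
  --query 'ClusterInfoList[0].ClusterArn' --output text)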
Amazon ElastiCache (Valkey)
When the Valkey cluster reaches the Available state, open Cluster details and copy the connection endpoints:
- For standalone mode: use the Primary endpoint (without the :6379 port suffix) → YOUR_VALKEY_ENDPOINT_URL_WITHOUT_PORT.
- For cluster mode: use the Cluster configuration endpoint → YOUR_VALKEY_CLUSTER_ENDPOINT_URL.
Next, edit tbmq-cache-configmap.yml:
- If running standalone:
  REDIS_CONNECTION_TYPE: "standalone"
  REDIS_HOST: "YOUR_VALKEY_ENDPOINT_URL_WITHOUT_PORT"
  #REDIS_PASSWORD: "YOUR_REDIS_PASSWORD"
- If running cluster:
  REDIS_CONNECTION_TYPE: "cluster"
  REDIS_NODES: "YOUR_VALKEY_CLUSTER_ENDPOINT_URL"
  #REDIS_PASSWORD: "YOUR_REDIS_PASSWORD"
  # Recommended for Kubernetes clusters to handle dynamic IP changes and failover:
  #REDIS_LETTUCE_CLUSTER_TOPOLOGY_REFRESH_ENABLED: "true"
  #REDIS_JEDIS_CLUSTER_TOPOLOGY_REFRESH_ENABLED: "true"
Installation
Execute the following command to run the installation:
./k8s-install-tbmq.sh
After this command finishes, you should see the following line in the console:
INFO o.t.m.b.i.ThingsboardMqttBrokerInstallService - Installation finished successfully!
Get the license key
Before proceeding, make sure you’ve selected your subscription plan or chosen to purchase a perpetual license. If you haven’t done this yet, please visit the Pricing page to compare available options and obtain your license key.
Note: Throughout this guide, we’ll refer to your license key as YOUR_LICENSE_KEY_HERE.
Configure the license key
Create a k8s secret with your license key:
export TBMQ_LICENSE_KEY=YOUR_LICENSE_KEY_HERE
kubectl create -n thingsboard-mqtt-broker secret generic tbmq-license --from-literal=license-key=$TBMQ_LICENSE_KEY
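You can optionally confirm that the secret was created in the right namespace:
# Should list the tbmq-license secret with one data entry
kubectl get secret tbmq-license -n thingsboard-mqtt-broker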
Configure gp3 as the Default Storage Class in Your EKS Cluster
To ensure that all newly created PersistentVolumeClaims (PVCs) in your EKS cluster use gp3-backed Amazon EBS volumes, you must create the gp3 StorageClass and set it as the default.
This section walks you through applying the gp3 StorageClass manifest, disabling or removing the existing gp2 class if present, and verifying that gp3 is now the cluster’s default.
Before proceeding, follow the official AWS EBS CSI Driver instructions to install the driver on your EKS cluster. Once the add-on is successfully installed, you can configure gp3 as the default StorageClass.
The gp3 EBS volume type is the recommended default for Amazon EKS, offering better performance, cost efficiency, and flexibility compared to gp2.
Please download the storage class configuration file:
curl -o gp3-def-sc.yml https://raw.githubusercontent.com/thingsboard/tbmq/release-2.2.0/k8s/helm/aws/gp3-def-sc.yml
Apply the configuration:
kubectl apply -f gp3-def-sc.yml
If a gp2 StorageClass exists, it may conflict with gp3. You can either make the gp2 StorageClass non-default:
kubectl patch storageclass gp2 -p '{"metadata": {"annotations":{"storageclass.kubernetes.io/is-default-class":"false"}}}'
Or delete the gp2 StorageClass (if unused):
kubectl delete storageclass gp2
Check that the gp3 StorageClass is available and marked as default:
kubectl get sc
You should see output similar to this:
NAME PROVISIONER RECLAIMPOLICY VOLUMEBINDINGMODE ALLOWVOLUMEEXPANSION AGE
gp3 (default) ebs.csi.aws.com Delete WaitForFirstConsumer true 30s
Starting
Execute the following command to deploy the broker:
./k8s-deploy-tbmq.sh
After a few minutes, you can execute the following command to check the state of all pods:
kubectl get pods
If everything went fine, you should be able to see tbmq-0 and tbmq-1 pods. Every pod should be in the READY state.
Configure Load Balancers
Configure HTTP(S) Load Balancer
Configure an HTTP(S) Load Balancer to access the web interface of your TBMQ PE instance. There are two possible configuration options:
- http — Load Balancer without HTTPS support. Recommended for development. The only advantage is simple configuration and minimal cost. It may be a good option for a development server, but it is definitely not suitable for production.
- https — Load Balancer with HTTPS support. Recommended for production. Acts as an SSL termination point. You may easily configure it to issue and maintain a valid SSL certificate. Automatically redirects all non-secure (HTTP) traffic to the secure (HTTPS) port.
See links/instructions below on how to configure each of the suggested options.
HTTP Load Balancer
Execute the following command to deploy the plain HTTP load balancer:
kubectl apply -f receipts/http-load-balancer.yml
The process of load balancer provisioning may take some time. You may periodically check the status of the load balancer using the following command:
kubectl get ingress
Once provisioned, you should see output similar to this:
NAME CLASS HOSTS ADDRESS PORTS AGE
tbmq-http-loadbalancer <none> * k8s-thingsbo-tbmq-000aba1305-222186756.eu-west-1.elb.amazonaws.com 80 3d1h
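As an optional smoke test, you can probe the provisioned address with curl. Substitute the ADDRESS value from your own output; the hostname below is just the example above:
# A 200 (or a redirect) indicates the ALB is forwarding traffic to the TBMQ web UI
curl -sS -o /dev/null -w '%{http_code}\n' http://k8s-thingsbo-tbmq-000aba1305-222186756.eu-west-1.elb.amazonaws.com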
HTTPS Load Balancer
Use AWS Certificate Manager to create or import an SSL certificate. Note your certificate ARN.
Edit the load balancer configuration and replace YOUR_HTTPS_CERTIFICATE_ARN with your certificate ARN:
nano receipts/https-load-balancer.yml
Execute the following command to deploy the HTTPS load balancer:
kubectl apply -f receipts/https-load-balancer.yml
Configure MQTT Load Balancer
Configure the MQTT load balancer to allow devices to connect over the MQTT protocol.
Create the TCP load balancer using the following command:
kubectl apply -f receipts/mqtt-load-balancer.yml
The load balancer will forward all TCP traffic for ports 1883 and 8883.
One-way TLS
The simplest way to configure MQTTS is to make your MQTT load balancer (AWS NLB) act as a TLS termination point. This sets up a one-way TLS connection, where the traffic between your devices and the load balancer is encrypted, while the traffic between the load balancer and TBMQ is not. There should be no security issues, since the ALB/NLB is running in your VPC. The only major disadvantage of this option is that you can’t use “X.509 certificate” MQTT client credentials, since information about the client certificate is not transferred from the load balancer to TBMQ.
To enable the one-way TLS:
Use AWS Certificate Manager to create or import an SSL certificate. Note your certificate ARN.
Edit the load balancer configuration and replace YOUR_MQTTS_CERTIFICATE_ARN with your certificate ARN:
nano receipts/mqtts-load-balancer.yml
Execute the following command to deploy the MQTTS load balancer:
kubectl apply -f receipts/mqtts-load-balancer.yml
Two-way TLS
The more complex way to enable MQTTS is to obtain a valid (signed) TLS certificate and configure it in TBMQ. The main advantage of this option is that you may use it in combination with “X.509 certificate” MQTT client credentials.
To enable the two-way TLS:
Follow this guide to create a .pem file with the SSL certificate. Store the file as server.pem in the working directory.
You’ll need to create a config map with your PEM files. You can do it with the following command:
kubectl create configmap tbmq-mqtts-config \
--from-file=server.pem=YOUR_PEM_FILENAME \
--from-file=mqttserver_key.pem=YOUR_PEM_KEY_FILENAME \
-o yaml --dry-run=client | kubectl apply -f -
- where YOUR_PEM_FILENAME is the name of your server certificate file.
- where YOUR_PEM_KEY_FILENAME is the name of your server certificate private key file.
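To double-check that both files landed in the config map, you can optionally inspect it:
# The output should list server.pem and mqttserver_key.pem under the Data section
kubectl describe configmap tbmq-mqtts-config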
Then, uncomment all sections in the ‘tbmq.yml’ file that are marked with “Uncomment the following lines to enable two-way MQTTS”.
Execute command to apply changes:
kubectl apply -f tbmq.yml
Finally, deploy the “transparent” load balancer:
kubectl apply -f receipts/mqtt-load-balancer.yml
Validate the setup
Now you can open the TBMQ web interface in your browser using the DNS name of the load balancer.
You can get the DNS name of the load balancers using the following command:
kubectl get ingress
You should see output similar to this:
NAME CLASS HOSTS ADDRESS PORTS AGE
tbmq-http-loadbalancer <none> * k8s-thingsbo-tbmq-000aba1305-222186756.eu-west-1.elb.amazonaws.com 80 3d1h
Use the ADDRESS field of the tbmq-http-loadbalancer to connect to the cluster.
You should see the TBMQ login page. Use the following default credentials for the System Administrator:
Username: sysadmin@thingsboard.org
Password: sysadmin
On first login, you will be asked to change the default password to a preferred one and then re-login using the new credentials.
Validate MQTT access
To connect to the cluster via MQTT, you will need to get the corresponding service IP. You can do this with the command:
kubectl get services
You should see output similar to this:
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
tbmq-mqtt-loadbalancer LoadBalancer 10.100.119.170 k8s-thingsbo-tbmq-b9f99d1ab6-1049a98ba4e28403.elb.eu-west-1.amazonaws.com 1883:30308/TCP,8883:31609/TCP 6m58s
Use the EXTERNAL-IP field of the load balancer to connect to the cluster via the MQTT protocol.
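For a quick end-to-end check, you can publish a test message with the Mosquitto client tools, assuming they are installed locally and your TBMQ authentication settings permit the client. The hostname, credentials, and topic below are placeholders:
# Publishes a single MQTT message through the NLB on port 1883
mosquitto_pub -h YOUR_MQTT_LB_EXTERNAL_IP -p 1883 \
  -u YOUR_MQTT_USERNAME -P YOUR_MQTT_PASSWORD \
  -t sensors/demo -m 'hello from TBMQ'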
Troubleshooting
In case of any issues, you can examine service logs for errors. For example, to see TBMQ logs, execute the following command:
kubectl logs -f tbmq-0
Use the following command to see the state of all statefulsets:
kubectl get statefulsets
See kubectl Cheat Sheet command reference for more details.
Upgrading
Review the release notes and upgrade instructions for detailed information on the latest changes.
If the documentation does not cover the specific upgrade instructions for your case, please contact us so we can provide further guidance.
Backup and restore (Optional)
Backing up your PostgreSQL database before the upgrade is highly recommended, though optional. For further guidance, follow these instructions.
Upgrade from TBMQ CE to TBMQ PE (v2.2.0)
To upgrade your existing TBMQ Community Edition (CE) to TBMQ Professional Edition (PE), ensure you are running the latest TBMQ CE 2.2.0 version before starting the process. Merge your current configuration with the latest TBMQ PE K8S scripts. Do not forget to configure the license key.
Run the following commands, including the upgrade script to migrate PostgreSQL database data from CE to PE:
./k8s-delete-tbmq.sh
./k8s-upgrade-tbmq.sh --fromVersion=ce
./k8s-deploy-tbmq.sh
Cluster deletion
Execute the following command to delete TBMQ nodes:
./k8s-delete-tbmq.sh
Execute the following command to delete all TBMQ nodes and configmaps:
./k8s-delete-all.sh
Execute the following command to delete the EKS cluster (you should change the name of the cluster and the region if those differ):
eksctl delete cluster -r us-east-1 -n tbmq -w
Next steps
- Getting started guide - This guide provides a quick overview of TBMQ.
- Security guide - Learn how to enable authentication and authorization for MQTT clients.
- Configuration guide - Learn about TBMQ configuration files and parameters.
- MQTT client type guide - Learn about TBMQ client types.
- Integration with ThingsBoard - Learn how to integrate TBMQ with ThingsBoard.