
AWS EKS Microservices Setup

This guide walks you through deploying ThingsBoard CE in microservices mode on AWS EKS. We use Amazon RDS for managed PostgreSQL, Amazon MSK for managed Kafka, and Amazon ElastiCache for managed Redis.

Install kubectl, eksctl, and AWS CLI.

Configure your AWS credentials. To obtain an Access Key and Secret Key, follow this guide. Set the default region to the ID of the region where you want to deploy the cluster.

Terminal window
aws configure
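As an alternative to the interactive prompt, the same settings can be placed in the standard AWS CLI configuration files (the values below are placeholders):

```ini
# ~/.aws/credentials
[default]
aws_access_key_id = YOUR_ACCESS_KEY_ID
aws_secret_access_key = YOUR_SECRET_ACCESS_KEY

# ~/.aws/config
[default]
region = us-east-1
output = json
```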

Step 1. Clone ThingsBoard CE K8S scripts repository

Terminal window
git clone -b release-4.3 https://github.com/thingsboard/thingsboard-ce-k8s.git
cd thingsboard-ce-k8s/aws/microservices

Step 2. Configure and create the EKS cluster

In the cluster.yml file you can find the suggested cluster configuration. Key fields you can change:

| Field | Default | Description |
| --- | --- | --- |
| region | us-east-1 | AWS region for the cluster |
| availabilityZones | [us-east-1a, us-east-1b, us-east-1c] | Region availability zones |
| instanceType | m5.xlarge | EC2 instance type for nodes |
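For reference, a minimal eksctl configuration using these fields looks roughly like the sketch below. This is not the full cluster.yml shipped in the repository; the node group name and node count are assumptions (the cluster name "thingsboard" matches the eksctl delete command used later in this guide):

```yaml
apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig
metadata:
  name: thingsboard          # cluster name used elsewhere in this guide
  region: us-east-1
availabilityZones: [us-east-1a, us-east-1b, us-east-1c]
managedNodeGroups:
  - name: tb-node-group      # assumed name
    instanceType: m5.xlarge
    desiredCapacity: 3       # assumed count, one node per zone
```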

Create the cluster:

Terminal window
eksctl create cluster -f cluster.yml

Step 3. Create AWS load-balancer controller


Once the cluster is ready, create the AWS load-balancer controller by following this guide.

The cluster provisioning scripts create several load balancers:

| Load Balancer | Type | Purpose |
| --- | --- | --- |
| tb-http-loadbalancer | ALB | Web UI, REST API, HTTP transport |
| tb-mqtt-loadbalancer | NLB | MQTT transport |
| tb-coap-loadbalancer | NLB | CoAP transport |
| tb-edge-loadbalancer | NLB | Edge instances connectivity |

Step 4. Databases configuration

Set up PostgreSQL on Amazon RDS. Follow this guide, but take the following requirements into account:

  • Keep your PostgreSQL password in a safe place. We will refer to it later as YOUR_RDS_PASSWORD.
  • Make sure your PostgreSQL version is the latest 16.x release.
  • Deploy the RDS instance in the same VPC as the EKS cluster and use the eksctl-thingsboard-cluster-ClusterSharedNodeSecurityGroup-* security group.
  • Use “thingsboard” as the initial database name.

Once the database switches to the Available state, navigate to Connectivity and Security and copy the endpoint value (YOUR_RDS_ENDPOINT_URL).
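The endpoint then goes into the database configmap. Assuming the standard Spring datasource environment variables used by ThingsBoard, the relevant fragment of tb-node-db-configmap.yml looks like this (the username is an assumption; use whatever master username you chose when creating the RDS instance):

```yaml
# Fragment of tb-node-db-configmap.yml
SPRING_DATASOURCE_URL: jdbc:postgresql://YOUR_RDS_ENDPOINT_URL:5432/thingsboard
SPRING_DATASOURCE_USERNAME: postgres        # assumed master username
SPRING_DATASOURCE_PASSWORD: YOUR_RDS_PASSWORD
```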

Using Cassandra is optional. We recommend it if you plan to insert more than 5K data points per second or want to optimize storage space.

Create 3 separate node groups with 1 node per availability zone:

Terminal window
eksctl create nodegroup --config-file=<path> --include='cassandra-*'

Deploy Cassandra:

Terminal window
kubectl apply -f tb-namespace.yml
kubectl config set-context $(kubectl config current-context) --namespace=thingsboard
kubectl apply -f receipts/cassandra.yml

Update DB settings (replace YOUR_AWS_REGION):

Terminal window
echo "  DATABASE_TS_TYPE: cassandra" >> tb-node-db-configmap.yml
echo "  CASSANDRA_URL: cassandra:9042" >> tb-node-db-configmap.yml
echo "  CASSANDRA_LOCAL_DATACENTER: YOUR_AWS_REGION" >> tb-node-db-configmap.yml
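After these appends, the data section of tb-node-db-configmap.yml should end with the following keys (note the two-space indentation required for configmap entries):

```yaml
data:
  # ...existing keys...
  DATABASE_TS_TYPE: cassandra
  CASSANDRA_URL: cassandra:9042
  CASSANDRA_LOCAL_DATACENTER: YOUR_AWS_REGION
```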

Create the ThingsBoard keyspace:

Terminal window
kubectl exec -it cassandra-0 -- bash -c "cqlsh -e \
\"CREATE KEYSPACE IF NOT EXISTS thingsboard \
WITH replication = { \
'class' : 'NetworkTopologyStrategy', \
'us-east' : '3' \
};\""

Note that the datacenter name in the replication map (“us-east” above) must match the actual Cassandra datacenter name, i.e. the same value you used for CASSANDRA_LOCAL_DATACENTER.

Step 5. Amazon MSK configuration (optional)


ThingsBoard uses Kafka as an external queue for exchanging data between microservices. By default, the deployment uses local Kafka, but ThingsBoard is also compatible with Amazon MSK.

Steps to create a basic Kafka MSK cluster:

  • Open the AWS console, go to MSK, and click Create Cluster.
  • Select the Custom creation method.
  • Specify a name and select the Provisioned cluster type.
  • Select Apache Kafka version 3.8.x (Express brokers) or 4.0.x (Standard brokers).
  • Choose kafka.m7g.large or a similar instance type.
  • Deploy the MSK instance in the same VPC as the ThingsBoard cluster.
  • Use the default security settings with Plaintext mode enabled.

Once the MSK cluster switches to the Active state, navigate to Details and click View client information. Copy the bootstrap server information in plaintext.

Edit tb-kafka.yml, find the StatefulSet tb-kafka, and set spec.replicas to 0. Edit tb-kafka-configmap.yml and replace TB_KAFKA_SERVERS with your MSK endpoint.
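The two edits can also be scripted. The sketch below runs against trimmed stand-in files, so it is safe to try anywhere; apply the same sed expressions to the real tb-kafka.yml and tb-kafka-configmap.yml. The bootstrap address is a made-up example, and the stand-in file contents are simplified assumptions about the real manifests:

```shell
# Create a trimmed stand-in for tb-kafka.yml, just for demonstration.
cat > /tmp/tb-kafka-demo.yml <<'EOF'
kind: StatefulSet
metadata:
  name: tb-kafka
spec:
  replicas: 3
EOF

# Scale the bundled Kafka StatefulSet down to zero replicas.
sed -i 's/^\(  replicas: \).*/\10/' /tmp/tb-kafka-demo.yml

# Create a trimmed stand-in for tb-kafka-configmap.yml.
cat > /tmp/tb-kafka-configmap-demo.yml <<'EOF'
data:
  TB_KAFKA_SERVERS: tb-kafka:9092
EOF

# Point ThingsBoard at the MSK bootstrap servers (example address).
MSK_BOOTSTRAP="b-1.example.kafka.us-east-1.amazonaws.com:9092"
sed -i "s|^\(  TB_KAFKA_SERVERS: \).*|\1${MSK_BOOTSTRAP}|" /tmp/tb-kafka-configmap-demo.yml

# Show the results of both edits.
grep 'replicas:' /tmp/tb-kafka-demo.yml
grep 'TB_KAFKA_SERVERS:' /tmp/tb-kafka-configmap-demo.yml
```

The same pattern (replicas to 0, endpoint swapped into the configmap) is repeated for ElastiCache in the next step.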

Step 6. Amazon ElastiCache configuration (optional)


ThingsBoard uses cache to improve performance. By default, the deployment uses a local Valkey cache, but ThingsBoard is also compatible with Amazon ElastiCache.

  • Navigate to ElastiCache → Valkey caches and click Create.
  • Specify Valkey engine version 8.x and a node type with at least 1 GB of RAM.
  • Deploy in the same VPC as the ThingsBoard cluster.
  • Disable the “Enable automatic backups” option.

Once the Valkey cluster switches to Available, copy the Endpoint field without the “:6379” port suffix.

Edit tb-valkey.yml and set spec.replicas to 0. Edit tb-cache-configmap.yml and replace REDIS_HOST with your Valkey endpoint.
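After the edit, the relevant fragment of tb-cache-configmap.yml should look like this (YOUR_VALKEY_ENDPOINT is the endpoint copied above):

```yaml
# Fragment of tb-cache-configmap.yml
data:
  REDIS_HOST: YOUR_VALKEY_ENDPOINT   # ElastiCache endpoint, without the :6379 suffix
```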

Step 7. CPU and memory resources allocation


The scripts have preconfigured values of resources for each service. You can change them in .yml files under the resources section.

| Service | CPU | Memory |
| --- | --- | --- |
| TB Node | 1.0 | 2Gi |
| TB HTTP Transport | 0.5 | 0.5Gi |
| TB MQTT Transport | 0.5 | 0.5Gi |
| TB CoAP Transport | 0.5 | 0.5Gi |
| TB Web UI | 0.1 | 100Mi |
| JS Executor | 0.1 | 100Mi |
| Zookeeper | 0.1 | 0.5Gi |
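In the .yml files, each of these pairs maps onto a standard Kubernetes resources block. A sketch for the TB Node row is shown below; whether the shipped files set requests equal to limits is an assumption, so check the actual manifests before changing them:

```yaml
# Sketch of a resources section for the TB Node container
resources:
  requests:
    cpu: "1.0"
    memory: 2Gi
  limits:
    cpu: "1.0"
    memory: 2Gi
```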
Step 8. Installation

Run the installation script:

Terminal window
./k8s-install-tb.sh --loadDemo

Where --loadDemo is an optional argument to load additional demo data.

After this command finishes you should see:

Installation finished successfully!

Step 9. Deployment

Deploy ThingsBoard services:

Terminal window
./k8s-deploy-resources.sh

After a few minutes, run kubectl get pods. You should see:

  • 5x tb-pe-js-executor
  • 2x tb-pe-web-ui
  • 1x tb-pe-node
  • 1x tb-pe-web-report
  • 3x zookeeper

Every pod should be in the READY state.

Deploy the transport microservices you need:

Terminal window
# HTTP Transport (optional)
kubectl apply -f transports/tb-http-transport.yml
# MQTT Transport (optional)
kubectl apply -f transports/tb-mqtt-transport.yml
# CoAP Transport (optional)
kubectl apply -f transports/tb-coap-transport.yml
# LwM2M Transport (optional)
kubectl apply -f transports/tb-lwm2m-transport.yml
# SNMP Transport (optional)
kubectl apply -f transports/tb-snmp-transport.yml
Step 10. Configure load balancers

10.1 Configure HTTP load balancer (optional)

Terminal window
kubectl apply -f receipts/http-load-balancer.yml

Check the status:

Terminal window
kubectl get ingress

Use the address to access the HTTP web UI (port 80) and connect devices via HTTP API.

Default credentials:

  • System Administrator: sysadmin@thingsboard.org / sysadmin
  • Tenant Administrator: tenant@thingsboard.org / tenant (available when demo data is loaded)
  • Customer User: customer@thingsboard.org / customer (available when demo data is loaded)

Use AWS Certificate Manager to create or import an SSL certificate. Replace YOUR_HTTPS_CERTIFICATE_ARN in receipts/https-load-balancer.yml, then deploy:

Terminal window
kubectl apply -f receipts/https-load-balancer.yml

10.2 Configure MQTT load balancer (optional)

Terminal window
kubectl apply -f receipts/mqtt-load-balancer.yml

The load balancer forwards all TCP traffic for ports 1883 and 8883.

For one-way TLS, use AWS Certificate Manager and edit receipts/mqtts-load-balancer.yml. For two-way TLS, follow the MQTT over SSL guide.

10.3 Configure UDP load balancer (optional)

Terminal window
kubectl apply -f receipts/udp-load-balancer.yml

The load balancer forwards UDP traffic for ports 5683–5688 (CoAP and LwM2M protocols).

10.4 Configure Edge load balancer (optional)

Terminal window
kubectl apply -f receipts/edge-load-balancer.yml

The load balancer forwards all TCP traffic on port 7070.

Troubleshooting

Check the ingress status:

Terminal window
kubectl get ingress

List the deployed services:

Terminal window
kubectl get service

Inspect the logs of a particular pod, for example the core ThingsBoard node:

Terminal window
kubectl logs -f tb-node-0

See the kubectl Cheat Sheet for more details.

Upgrading

Merge your local changes with the latest release branch. If a database upgrade is needed, run:

Terminal window
./k8s-upgrade-tb.sh --fromVersion=[FROM_VERSION]

See Upgrade Instructions for valid fromVersion values. Upgrade versions one by one.

Once completed, re-deploy resources:

Terminal window
./k8s-deploy-resources.sh
Cluster deletion

Delete the ThingsBoard resources:

Terminal window
./k8s-delete-resources.sh

Delete all remaining ThingsBoard resources:

Terminal window
./k8s-delete-all.sh

Delete the EKS cluster itself:

Terminal window
eksctl delete cluster -r us-east-1 -n thingsboard -w