
AKS Monolith Setup

This guide walks you through deploying ThingsBoard CE in monolith mode on Azure Kubernetes Service (AKS). We use Azure Database for PostgreSQL as the managed database.

Before you begin, install the kubectl and az CLI tools.

Log in to Azure:

az login

Step 1. Clone ThingsBoard CE K8S scripts repository

git clone -b release-4.3 https://github.com/thingsboard/thingsboard-ce-k8s.git
cd thingsboard-ce-k8s/azure/monolith

Define the environment variables used throughout this guide:

export AKS_RESOURCE_GROUP=ThingsBoardResources
export AKS_LOCATION=eastus
export AKS_GATEWAY=tb-gateway
export TB_CLUSTER_NAME=tb-cluster
export TB_DATABASE_NAME=tb-db
export TB_REDIS_NAME=tb-redis
echo "Resource group: $AKS_RESOURCE_GROUP, location: $AKS_LOCATION, cluster: $TB_CLUSTER_NAME, database: $TB_DATABASE_NAME"
Variable             Default               Description
AKS_RESOURCE_GROUP   ThingsBoardResources  Azure Resource Group name
AKS_LOCATION         eastus                Azure region. Run az account list-locations for options
AKS_GATEWAY          tb-gateway            Azure Application Gateway name
TB_CLUSTER_NAME      tb-cluster            AKS cluster name
TB_DATABASE_NAME     tb-db                 PostgreSQL server name
TB_REDIS_NAME        tb-redis              Valkey/Redis cache name
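
If you return to this guide in a fresh shell, the exports above are easy to forget. The optional guard below is a sketch (assuming the same variable names) that aborts before any az call when something is unset; the export values are repeated only to keep the snippet self-contained.

```shell
# Values repeated for self-containment; in practice run only the loop
# after the exports above.
export AKS_RESOURCE_GROUP=ThingsBoardResources
export AKS_LOCATION=eastus
export AKS_GATEWAY=tb-gateway
export TB_CLUSTER_NAME=tb-cluster
export TB_DATABASE_NAME=tb-db
export TB_REDIS_NAME=tb-redis

# Abort early if any required variable is empty.
for v in AKS_RESOURCE_GROUP AKS_LOCATION AKS_GATEWAY TB_CLUSTER_NAME TB_DATABASE_NAME TB_REDIS_NAME; do
  val=$(eval "echo \${$v}")
  if [ -z "$val" ]; then
    echo "ERROR: $v is not set" >&2
    exit 1
  fi
done
echo "environment OK"
```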

Create the Azure Resource Group:

az group create --name $AKS_RESOURCE_GROUP --location $AKS_LOCATION

See az group for more info.

Create the AKS cluster with 1 node:

az aks create --resource-group $AKS_RESOURCE_GROUP \
--name $TB_CLUSTER_NAME \
--generate-ssh-keys \
--enable-addons ingress-appgw \
--appgw-name $AKS_GATEWAY \
--appgw-subnet-cidr "10.2.0.0/16" \
--node-vm-size Standard_DS3_v2 \
--node-count 1

Key parameters:

  • node-count — number of nodes per pool (default: 3)
  • node-vm-size — VM size (default: Standard_DS2_v2)
  • enable-addons — enables Application Gateway as a path-based load balancer
  • generate-ssh-keys — generates SSH keys if missing (stored in ~/.ssh)

See az aks create for the full parameter list. Alternatively, follow the portal-based cluster setup guide.

az aks get-credentials --resource-group $AKS_RESOURCE_GROUP --name $TB_CLUSTER_NAME

Verify the connection:

kubectl get nodes

You need to set up PostgreSQL on Azure. ThingsBoard uses it as the main database.

You may follow the Azure portal guide, keeping these requirements in mind:

  • PostgreSQL version 16.x
  • The instance must be accessible from the AKS cluster
  • Use thingsboard as the initial database name
  • High availability enabled is recommended for production

Alternatively, create using the CLI (replace POSTGRES_USER and POSTGRES_PASS with your credentials):

az postgres flexible-server create --location $AKS_LOCATION --resource-group $AKS_RESOURCE_GROUP \
--name $TB_DATABASE_NAME --admin-user POSTGRES_USER --admin-password POSTGRES_PASS \
--public-access 0.0.0.0 --storage-size 32 \
--version 16 -d thingsboard

Note the host value from the command output (e.g. tb-db.postgres.database.azure.com). Also note the username and password.
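
The ConfigMap in the next step needs this host. For Azure Database for PostgreSQL flexible servers, the FQDN normally follows the pattern <server-name>.postgres.database.azure.com, so it can be derived from TB_DATABASE_NAME. This is a sketch based on that naming assumption; always verify against the actual output of the create command.

```shell
# Derive the expected PostgreSQL FQDN from the server name.
# The suffix is an assumption based on the usual flexible-server naming;
# cross-check with the host printed by `az postgres flexible-server create`.
TB_DATABASE_NAME=tb-db
POSTGRES_HOST="${TB_DATABASE_NAME}.postgres.database.azure.com"
echo "$POSTGRES_HOST"
```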

Edit tb-node-db-configmap.yml and replace YOUR_AZURE_POSTGRES_ENDPOINT_URL, YOUR_AZURE_POSTGRES_USER, and YOUR_AZURE_POSTGRES_PASSWORD with the correct values:

nano tb-node-db-configmap.yml
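
If you prefer a non-interactive edit, the placeholders can be substituted with sed. The sketch below runs against a stand-in file (cm-demo.yml) so it is safe to try anywhere; point the same sed invocation at tb-node-db-configmap.yml in the repository. The key names in the stand-in are illustrative; only the placeholder strings matter.

```shell
# Stand-in file carrying the three placeholders from the real ConfigMap.
cat > cm-demo.yml <<'EOF'
  url: jdbc:postgresql://YOUR_AZURE_POSTGRES_ENDPOINT_URL:5432/thingsboard
  username: YOUR_AZURE_POSTGRES_USER
  password: YOUR_AZURE_POSTGRES_PASSWORD
EOF

# Substitute the placeholders with the values noted in the previous step
# (example values shown here).
sed -i \
  -e "s|YOUR_AZURE_POSTGRES_ENDPOINT_URL|tb-db.postgres.database.azure.com|" \
  -e "s|YOUR_AZURE_POSTGRES_USER|postgres|" \
  -e "s|YOUR_AZURE_POSTGRES_PASSWORD|secret|" \
  cm-demo.yml

cat cm-demo.yml
```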

Using Cassandra is optional. We recommend it if you plan to insert more than 5K data points per second or want to optimize storage space.

Create 3 separate node pools with 1 node per zone. Nodes with at least 4 vCPUs and 16 GB of RAM are recommended.

az aks nodepool add --resource-group $AKS_RESOURCE_GROUP --cluster-name $TB_CLUSTER_NAME --name tbcassandra1 --node-count 1 --zones 1 --labels role=cassandra
az aks nodepool add --resource-group $AKS_RESOURCE_GROUP --cluster-name $TB_CLUSTER_NAME --name tbcassandra2 --node-count 1 --zones 2 --labels role=cassandra
az aks nodepool add --resource-group $AKS_RESOURCE_GROUP --cluster-name $TB_CLUSTER_NAME --name tbcassandra3 --node-count 1 --zones 3 --labels role=cassandra

Create the thingsboard namespace, switch to it, and deploy Cassandra:

kubectl apply -f tb-namespace.yml
kubectl config set-context $(kubectl config current-context) --namespace=thingsboard
kubectl apply -f receipts/cassandra.yml

Point ThingsBoard at Cassandra for time-series data by appending to the database ConfigMap:

echo "  DATABASE_TS_TYPE: cassandra" >> tb-node-db-configmap.yml
echo "  CASSANDRA_URL: cassandra:9042" >> tb-node-db-configmap.yml
echo "  CASSANDRA_LOCAL_DATACENTER: dc1" >> tb-node-db-configmap.yml

Create the thingsboard keyspace with a replication factor of 3:

kubectl exec -it cassandra-0 -- bash -c "cqlsh -e \
\"CREATE KEYSPACE IF NOT EXISTS thingsboard \
WITH replication = { \
'class' : 'NetworkTopologyStrategy', \
'dc1' : '3' \
};\""

Run the initial database setup:

./k8s-install-tb.sh --loadDemo

Where --loadDemo is an optional argument to load additional demo data.

After this command finishes you should see:

Installation finished successfully!

Deploy the ThingsBoard resources:

./k8s-deploy-resources.sh

After a few minutes, run kubectl get pods. If everything went well, you should see the tb-node-0 pod in the READY state.

You have two options:

  • HTTP — recommended for development. Simple configuration and minimum costs.
  • HTTPS — recommended for production. Requires an SSL certificate uploaded to Application Gateway.

Deploy the HTTP load balancer:

kubectl apply -f receipts/http-load-balancer.yml

Check the status:

kubectl get ingress

Once provisioned, you should see output similar to:

NAME                   CLASS    HOSTS   ADDRESS         PORTS   AGE
tb-http-loadbalancer   <none>   *       34.111.24.134   80      7m25s

Use the ADDRESS value to open the ThingsBoard web UI. Default credentials (the tenant and customer accounts exist only if demo data was loaded):

  • System Administrator: sysadmin@thingsboard.org / sysadmin
  • Tenant Administrator: tenant@thingsboard.org / tenant
  • Customer: customer@thingsboard.org / customer

For HTTPS, upload your SSL certificate to Application Gateway:

az network application-gateway ssl-cert create \
--resource-group $(az aks show --name $TB_CLUSTER_NAME --resource-group $AKS_RESOURCE_GROUP --query nodeResourceGroup | tr -d '"') \
--gateway-name $AKS_GATEWAY \
--name ThingsBoardHTTPCert \
--cert-file YOUR_CERT \
--cert-password YOUR_CERT_PASS
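
The inner az aks show is piped through tr -d '"' because --query output is JSON-encoded by default, so strings arrive wrapped in double quotes. The quoting behavior can be illustrated without touching Azure (the resource-group value below is made up):

```shell
# What a default-output --query string looks like (hypothetical value):
raw='"MC_ThingsBoardResources_tb-cluster_eastus"'

# Stripping the quotes the way the command above does:
clean=$(printf '%s' "$raw" | tr -d '"')
echo "$clean"   # MC_ThingsBoardResources_tb-cluster_eastus
```

Passing -o tsv to the az command achieves the same result without the tr pipe.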

Deploy the HTTPS load balancer:

kubectl apply -f receipts/https-load-balancer.yml

Check the status:

kubectl get ingress

8.2 Configure MQTT load balancer (optional)

kubectl apply -f receipts/mqtt-load-balancer.yml

The load balancer forwards all TCP traffic for ports 1883 and 8883.

For MQTT over SSL, follow the MQTT over SSL guide to configure the required environment variables in tb-node.yml.

8.3 Configure UDP load balancer (optional)

kubectl apply -f receipts/udp-load-balancer.yml

The load balancer forwards all UDP traffic for ports:

Port   Protocol
5683   CoAP non-secure
5684   CoAP secure (DTLS)
5685   LwM2M non-secure
5686   LwM2M secure (DTLS)
5687   LwM2M bootstrap non-secure
5688   LwM2M bootstrap secure (DTLS)

For CoAP over DTLS, follow the CoAP over DTLS guide. For LwM2M over DTLS, follow the LwM2M over DTLS guide.

8.4 Configure Edge load balancer (optional)

kubectl apply -f receipts/edge-load-balancer.yml

The load balancer forwards all TCP traffic on port 7070.


Check the provisioned ingress and service resources:

kubectl get ingress
kubectl get service

Two load balancers are available:

  • tb-mqtt-loadbalancer — for TCP (MQTT) protocol
  • tb-udp-loadbalancer — for UDP (CoAP/LwM2M) protocol

To troubleshoot, inspect the ThingsBoard node logs:

kubectl logs -f tb-node-0

See the kubectl Cheat Sheet for more details.


To upgrade ThingsBoard, remove the current resources, run the upgrade script, and redeploy:

./k8s-delete-resources.sh
./k8s-upgrade-tb.sh --fromVersion=[FROM_VERSION]
./k8s-deploy-resources.sh

Where FROM_VERSION is the starting version. See Upgrade Instructions for valid values. Upgrade versions one by one.

Delete ThingsBoard pods and load balancers:

./k8s-delete-resources.sh

Delete all data including database:

./k8s-delete-all.sh

Delete the AKS cluster:

az aks delete --resource-group $AKS_RESOURCE_GROUP --name $TB_CLUSTER_NAME