
GKE Microservices Setup

This guide walks you through deploying ThingsBoard CE in microservices mode on Google Kubernetes Engine. We use Google Cloud SQL for managed PostgreSQL.

Install the kubectl and gcloud command-line tools. See the official GKE “Before you begin” documentation for more info.

Create a new GCP project (recommended) or choose an existing one:

Terminal window
gcloud init

Then enable the required APIs:
Terminal window
gcloud services enable container.googleapis.com sql-component.googleapis.com sqladmin.googleapis.com

Step 1. Clone ThingsBoard CE K8S scripts repository

Terminal window
git clone -b release-4.3 https://github.com/thingsboard/thingsboard-ce-k8s.git
cd thingsboard-ce-k8s/gcp/microservices

Define the environment variables used throughout this guide (adjust the region, zones, and names to suit your setup):
Terminal window
export GCP_PROJECT=$(gcloud config get-value project)
export GCP_REGION=us-central1
export GCP_ZONE=us-central1
export GCP_ZONE1=us-central1-a
export GCP_ZONE2=us-central1-b
export GCP_ZONE3=us-central1-c
export GCP_NETWORK=default
export TB_CLUSTER_NAME=tb-ce-msa
export TB_DATABASE_NAME=tb-db
echo "Project: $GCP_PROJECT, region: $GCP_REGION, zones: $GCP_ZONE1,$GCP_ZONE2,$GCP_ZONE3, network: $GCP_NETWORK, cluster: $TB_CLUSTER_NAME, database: $TB_DATABASE_NAME"
| Variable | Default | Description |
|---|---|---|
| GCP_PROJECT | (auto-detected) | Your GCP project ID |
| GCP_REGION | us-central1 | Compute region |
| GCP_ZONE1/2/3 | us-central1-a/b/c | Availability zones for the regional cluster |
| GCP_NETWORK | default | GCP network name |
| TB_CLUSTER_NAME | tb-ce-msa | GKE cluster name |
| TB_DATABASE_NAME | tb-db | Cloud SQL instance name |
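
As a quick sanity check (a small helper, not part of the official scripts), you can verify that every variable the later commands rely on is non-empty before proceeding:

```shell
# Sanity check: confirm each variable used by later gcloud/kubectl commands
# is set and non-empty (uses bash indirect expansion).
missing=""
for v in GCP_PROJECT GCP_REGION GCP_ZONE1 GCP_ZONE2 GCP_ZONE3 GCP_NETWORK TB_CLUSTER_NAME TB_DATABASE_NAME; do
  [ -n "${!v:-}" ] || missing="$missing $v"
done
if [ -z "$missing" ]; then
  echo "All variables set"
else
  echo "Missing:$missing"
fi
```

Failing fast here avoids half-created cloud resources later in the guide.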

Create a regional cluster distributed across 3 zones. The example provisions one e2-standard-4 node per zone (3 nodes total). You can modify the machine type and node count to suit your workload. See GCP machine types for options.

Terminal window
gcloud container clusters create $TB_CLUSTER_NAME \
--release-channel stable \
--region $GCP_REGION \
--network=$GCP_NETWORK \
--node-locations $GCP_ZONE1,$GCP_ZONE2,$GCP_ZONE3 \
--enable-ip-alias \
--num-nodes=1 \
--node-labels=role=main \
--machine-type=e2-standard-4

Alternatively, see the regional cluster setup guide.

Configure kubectl credentials for the new cluster:

Terminal window
gcloud container clusters get-credentials $TB_CLUSTER_NAME --region $GCP_REGION

5.1 Google Cloud SQL (PostgreSQL) instance


Enable service networking:

Terminal window
gcloud services enable servicenetworking.googleapis.com --project=$GCP_PROJECT
gcloud compute addresses create google-managed-services-$GCP_NETWORK \
--global \
--purpose=VPC_PEERING \
--prefix-length=16 \
--network=projects/$GCP_PROJECT/global/networks/$GCP_NETWORK
gcloud services vpc-peerings connect \
--service=servicenetworking.googleapis.com \
--ranges=google-managed-services-$GCP_NETWORK \
--network=$GCP_NETWORK \
--project=$GCP_PROJECT

Create a regional PostgreSQL 16 instance with a private IP only:
Terminal window
gcloud beta sql instances create $TB_DATABASE_NAME \
--database-version=POSTGRES_16 \
--region=$GCP_REGION --availability-type=regional \
--no-assign-ip --network=projects/$GCP_PROJECT/global/networks/$GCP_NETWORK \
--cpu=2 --memory=7680MB

Note the IP address (YOUR_DB_IP_ADDRESS) from the command output.

Set the password for the default postgres user (replace the example value with a strong one):

Terminal window
gcloud sql users set-password postgres \
--instance=$TB_DATABASE_NAME \
--password=secret
Terminal window
gcloud sql databases create thingsboard --instance=$TB_DATABASE_NAME
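
tb-node connects to this database over JDBC. As a sketch, the datasource URL you will later place into tb-node-db-configmap.yml (assuming the common SPRING_DATASOURCE_URL key) is built from the instance's private IP; the IP below is a hypothetical example:

```shell
# Hypothetical example IP; substitute YOUR_DB_IP_ADDRESS noted earlier.
DB_IP="10.0.0.5"
# Cloud SQL for PostgreSQL listens on the default port 5432; the database
# name matches the one created above.
JDBC_URL="jdbc:postgresql://${DB_IP}:5432/thingsboard"
echo "$JDBC_URL"
# → jdbc:postgresql://10.0.0.5:5432/thingsboard
```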

Using Cassandra is optional. We recommend it if you plan to insert more than 5K data points per second.

Create three separate node pools, one Cassandra node per zone:

Terminal window
gcloud container node-pools create cassandra1 --cluster=$TB_CLUSTER_NAME --region=$GCP_REGION --node-locations=$GCP_ZONE1 \
--node-labels=role=cassandra --num-nodes=1 --min-nodes=1 --max-nodes=1 --machine-type=e2-standard-4
gcloud container node-pools create cassandra2 --cluster=$TB_CLUSTER_NAME --region=$GCP_REGION --node-locations=$GCP_ZONE2 \
--node-labels=role=cassandra --num-nodes=1 --min-nodes=1 --max-nodes=1 --machine-type=e2-standard-4
gcloud container node-pools create cassandra3 --cluster=$TB_CLUSTER_NAME --region=$GCP_REGION --node-locations=$GCP_ZONE3 \
--node-labels=role=cassandra --num-nodes=1 --min-nodes=1 --max-nodes=1 --machine-type=e2-standard-4

Deploy Cassandra:

Terminal window
kubectl apply -f tb-namespace.yml
kubectl config set-context $(kubectl config current-context) --namespace=thingsboard
kubectl apply -f receipts/cassandra.yml

Update DB settings:

Terminal window
echo " DATABASE_TS_TYPE: cassandra" >> tb-node-db-configmap.yml
echo " CASSANDRA_URL: cassandra:9042" >> tb-node-db-configmap.yml
echo " CASSANDRA_LOCAL_DATACENTER: $GCP_REGION" >> tb-node-db-configmap.yml
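
After these appends, the data section of tb-node-db-configmap.yml should contain entries along these lines (the leading space in the echo commands supplies the YAML indentation; indentation must match the file's existing entries):

```yaml
# Fragment of tb-node-db-configmap.yml after the appends above:
DATABASE_TS_TYPE: cassandra
CASSANDRA_URL: cassandra:9042
CASSANDRA_LOCAL_DATACENTER: us-central1   # value of $GCP_REGION
```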

Create keyspace:

Terminal window
kubectl exec -it cassandra-0 -- bash -c "cqlsh -e \
\"CREATE KEYSPACE IF NOT EXISTS thingsboard \
WITH replication = { \
'class' : 'NetworkTopologyStrategy', \
'us-central1' : '3' \
};\""
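
The datacenter name in the replication map ('us-central1' above) must match CASSANDRA_LOCAL_DATACENTER, which on GCP follows the region. A small sketch (not from the official scripts) that derives the statement from $GCP_REGION instead of hardcoding it:

```shell
# Build the keyspace DDL from the region so the datacenter name always
# matches CASSANDRA_LOCAL_DATACENTER. RF=3 mirrors the 3 Cassandra nodes.
GCP_REGION="${GCP_REGION:-us-central1}"
CQL="CREATE KEYSPACE IF NOT EXISTS thingsboard WITH replication = { 'class' : 'NetworkTopologyStrategy', '${GCP_REGION}' : '3' };"
echo "$CQL"
# Run it with, e.g.:
# kubectl exec -it cassandra-0 -- cqlsh -e "$CQL"
```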

Edit tb-node-db-configmap.yml and replace YOUR_DB_IP_ADDRESS and YOUR_DB_PASSWORD:

Terminal window
nano tb-node-db-configmap.yml
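
If you prefer a scripted substitution over hand-editing, here is a sketch using sed. It is demonstrated on a throwaway copy; the key names in the sample file are an assumed shape of the configmap, and the IP/password values are placeholders:

```shell
DB_IP="10.0.0.5"            # hypothetical; use the IP noted earlier
DB_PASSWORD="secret"        # use the password you set for the postgres user
# Throwaway sample standing in for tb-node-db-configmap.yml:
cat > /tmp/tb-node-db-configmap.yml <<'EOF'
  SPRING_DATASOURCE_URL: jdbc:postgresql://YOUR_DB_IP_ADDRESS:5432/thingsboard
  SPRING_DATASOURCE_PASSWORD: YOUR_DB_PASSWORD
EOF
# The same two substitutions, applied here to the sample; point sed at the
# real file in the repo when running for real:
sed -i \
  -e "s/YOUR_DB_IP_ADDRESS/${DB_IP}/" \
  -e "s/YOUR_DB_PASSWORD/${DB_PASSWORD}/" \
  /tmp/tb-node-db-configmap.yml
cat /tmp/tb-node-db-configmap.yml
```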

Run the installation:

Terminal window
./k8s-install-tb.sh --loadDemo

After this command finishes you should see:

Installation finished successfully!

Deploy third-party components (Zookeeper, Kafka, Redis) and the main ThingsBoard microservices:

Terminal window
./k8s-deploy-resources.sh

After a few minutes, run kubectl get pods. You should see the tb-node-0 pod in the READY state.

Deploy the transport microservices you need. Omit protocols you don’t use to save resources:

Terminal window
# HTTP Transport (optional)
kubectl apply -f transports/tb-http-transport.yml
# MQTT Transport (optional)
kubectl apply -f transports/tb-mqtt-transport.yml
# CoAP Transport (optional)
kubectl apply -f transports/tb-coap-transport.yml
# LwM2M Transport (optional)
kubectl apply -f transports/tb-lwm2m-transport.yml
# SNMP Transport (optional)
kubectl apply -f transports/tb-snmp-transport.yml
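
The per-protocol files follow one naming pattern, so the applies can be scripted. A sketch (the selection variable is hypothetical, not part of the official scripts):

```shell
# Choose from: http mqtt coap lwm2m snmp
TRANSPORTS="http mqtt"
FILES=""
for t in $TRANSPORTS; do
  FILES="$FILES transports/tb-${t}-transport.yml"
done
echo "Applying:$FILES"
# Then apply each file, e.g.:
# for f in $FILES; do kubectl apply -f "$f"; done
```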

For exposing the ThingsBoard web UI, you have three load balancer options:

  • HTTP — recommended for development.
  • HTTPS — recommended for production. Uses Google-managed SSL certificate.
  • Transparent — forwards traffic to ThingsBoard. Requires your own SSL certificate.

To deploy the plain HTTP load balancer:
Terminal window
kubectl apply -f receipts/http-load-balancer.yml

Check the status:

Terminal window
kubectl get ingress

Default credentials (when installed with --loadDemo):

  • System Administrator: sysadmin@thingsboard.org / sysadmin
  • Tenant Administrator: tenant@thingsboard.org / tenant
  • Customer User: customer@thingsboard.org / customer

For the HTTPS option, first reserve a static IP:

Terminal window
gcloud compute addresses create thingsboard-http-lb-address --global

Edit receipts/https-load-balancer.yml, replace PUT_YOUR_DOMAIN_HERE, then deploy:

Terminal window
kubectl apply -f receipts/https-load-balancer.yml

Wait for the certificate to provision (up to 60 minutes):

Terminal window
kubectl describe managedcertificate managed-cert

8.2 Configure MQTT load balancer (optional)

Terminal window
kubectl apply -f receipts/mqtt-load-balancer.yml

For MQTT over SSL, follow the MQTT over SSL guide to configure transports/tb-mqtt-transport.yml.

8.3 Configure CoAP load balancer (optional)

Terminal window
kubectl apply -f receipts/coap-load-balancer.yml

The load balancer forwards UDP traffic for ports 5683 (CoAP non-secure) and 5684 (CoAP secure DTLS).

For CoAP over DTLS, follow the CoAP over DTLS guide to configure transports/tb-coap-transport.yml.

8.4 Configure LwM2M load balancer (optional)

Terminal window
kubectl apply -f receipts/lwm2m-load-balancer.yml

The load balancer forwards UDP traffic for ports 5685–5688.

For LwM2M over DTLS, follow the LwM2M over DTLS guide to configure transports/tb-lwm2m-transport.yml.

8.5 Configure Edge load balancer (optional)

Terminal window
kubectl apply -f receipts/edge-load-balancer.yml

The load balancer forwards all TCP traffic on port 7070.

To validate the deployment, check the provisioned ingress and services:

Terminal window
kubectl get ingress
kubectl get service

To troubleshoot, inspect the tb-node logs:
Terminal window
kubectl logs -f tb-node-0

See the kubectl Cheat Sheet for more details.

To upgrade to a newer ThingsBoard version, stop the running resources, run the upgrade script with your current version, then redeploy:

Terminal window
./k8s-delete-resources.sh
./k8s-upgrade-tb.sh --fromVersion=[FROM_VERSION]
./k8s-deploy-resources.sh

Where FROM_VERSION is the starting version. See Upgrade Instructions for valid values. Upgrade versions one by one.

To remove everything deployed by this guide:

Terminal window
./k8s-delete-resources.sh
./k8s-delete-all.sh