
GKE Monolith Setup

This guide walks you through deploying ThingsBoard PE in monolith mode on Google Kubernetes Engine. We use Google Cloud SQL for managed PostgreSQL.

Install kubectl and gcloud. See the GKE “Before you begin” documentation for more info.

Create a new GCP project (recommended) or choose an existing one:

```shell
gcloud init
```

Enable the required APIs:

```shell
gcloud services enable container.googleapis.com sql-component.googleapis.com sqladmin.googleapis.com
```

Verify that you can pull the images from Docker Hub:

```shell
docker pull thingsboard/tb-pe-node:4.3.1.1PE
docker pull thingsboard/tb-pe-web-report:4.3.1.1PE
```

Step 1. Clone ThingsBoard PE K8S scripts repository

```shell
git clone -b release-4.3 https://github.com/thingsboard/thingsboard-pe-k8s.git --depth 1
cd thingsboard-pe-k8s/gcp/monolith
```
Export the environment variables used throughout this guide:

```shell
export GCP_PROJECT=$(gcloud config get-value project)
export GCP_REGION=us-central1
export GCP_ZONE=us-central1-a
export GCP_NETWORK=default
export TB_CLUSTER_NAME=tb-pe
export TB_DATABASE_NAME=tb-db
echo "Project: $GCP_PROJECT, region: $GCP_REGION, zone: $GCP_ZONE, network: $GCP_NETWORK, cluster: $TB_CLUSTER_NAME, database: $TB_DATABASE_NAME"
```
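Before running the gcloud commands below, it can help to confirm that every variable is actually exported; a minimal sketch (the `check_vars` helper is hypothetical, not part of the ThingsBoard scripts):

```shell
# check_vars NAME... - print each variable's value, or "(unset)" if the
# variable was never exported, so typos are caught before gcloud runs.
check_vars() {
  for var in "$@"; do
    printf '%s=%s\n' "$var" "$(printenv "$var" || echo '(unset)')"
  done
}

check_vars GCP_PROJECT GCP_REGION GCP_ZONE GCP_NETWORK TB_CLUSTER_NAME TB_DATABASE_NAME
```

Any `(unset)` entry means the corresponding export above was skipped.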
| Variable | Default | Description |
| --- | --- | --- |
| GCP_PROJECT | (auto-detected) | Your GCP project ID |
| GCP_REGION | us-central1 | Compute region |
| GCP_ZONE | us-central1-a | Compute zone (must be within the selected region) |
| GCP_NETWORK | default | GCP network name |
| TB_CLUSTER_NAME | tb-pe | GKE cluster name |
| TB_DATABASE_NAME | tb-db | Cloud SQL instance name |

Create a zonal cluster with 1 node of e2-standard-4 machine type:

```shell
gcloud container clusters create $TB_CLUSTER_NAME \
--release-channel stable \
--zone $GCP_ZONE \
--node-locations $GCP_ZONE \
--network=$GCP_NETWORK \
--enable-ip-alias \
--num-nodes=1 \
--node-labels=role=main \
--machine-type=e2-standard-4
```

Alternatively, see the custom cluster setup guide.

Fetch the cluster credentials for kubectl:

```shell
gcloud container clusters get-credentials $TB_CLUSTER_NAME --zone $GCP_ZONE
```

5.1 Google Cloud SQL (PostgreSQL) instance


Enable service networking:

```shell
gcloud services enable servicenetworking.googleapis.com --project=$GCP_PROJECT
gcloud compute addresses create google-managed-services-$GCP_NETWORK \
--global \
--purpose=VPC_PEERING \
--prefix-length=16 \
--network=projects/$GCP_PROJECT/global/networks/$GCP_NETWORK
gcloud services vpc-peerings connect \
--service=servicenetworking.googleapis.com \
--ranges=google-managed-services-$GCP_NETWORK \
--network=$GCP_NETWORK \
--project=$GCP_PROJECT
```

Create a PostgreSQL 16 instance. Recommendations:

  • Use the same region and VPC network as your GKE cluster
  • Use private IP address and disable public IP
  • Use highly available instance for production, single zone for development
  • At least 2 vCPUs and 7.5 GB RAM
```shell
gcloud beta sql instances create $TB_DATABASE_NAME \
--database-version=POSTGRES_16 \
--region=$GCP_REGION --availability-type=regional \
--no-assign-ip --network=projects/$GCP_PROJECT/global/networks/$GCP_NETWORK \
--cpu=2 --memory=7680MB
```

Note the IP address (YOUR_DB_IP_ADDRESS) from the command output.

Set the password for the postgres user (replace secret with a strong password of your own; you will need it later as YOUR_DB_PASSWORD):

```shell
gcloud sql users set-password postgres \
--instance=$TB_DATABASE_NAME \
--password=secret
```

Create the thingsboard database:

```shell
gcloud sql databases create thingsboard --instance=$TB_DATABASE_NAME
```

Using Cassandra is optional. We recommend it if you plan to insert more than 5K data points per second or want to optimize storage space.

Create 3 separate node pools with 1 node per zone. At least 4 vCPUs and 16 GB of RAM is recommended.

```shell
# Zones for the three Cassandra node pools; adjust to match your region.
export GCP_ZONE1=us-central1-a
export GCP_ZONE2=us-central1-b
export GCP_ZONE3=us-central1-c
gcloud container node-pools create cassandra1 --cluster=$TB_CLUSTER_NAME --zone=$GCP_ZONE --node-locations=$GCP_ZONE1 \
--node-labels=role=cassandra --num-nodes=1 --machine-type=e2-standard-4
gcloud container node-pools create cassandra2 --cluster=$TB_CLUSTER_NAME --zone=$GCP_ZONE --node-locations=$GCP_ZONE2 \
--node-labels=role=cassandra --num-nodes=1 --machine-type=e2-standard-4
gcloud container node-pools create cassandra3 --cluster=$TB_CLUSTER_NAME --zone=$GCP_ZONE --node-locations=$GCP_ZONE3 \
--node-labels=role=cassandra --num-nodes=1 --machine-type=e2-standard-4
```
Create the thingsboard namespace, make it the default for kubectl, and deploy Cassandra:

```shell
kubectl apply -f tb-namespace.yml
kubectl config set-context $(kubectl config current-context) --namespace=thingsboard
kubectl apply -f receipts/cassandra.yml
```
Append the Cassandra settings to the tb-node DB configmap (note the two leading spaces so the new lines align with the other keys in the data: block):

```shell
echo "  DATABASE_TS_TYPE: cassandra" >> tb-node-db-configmap.yml
echo "  CASSANDRA_URL: cassandra:9042" >> tb-node-db-configmap.yml
echo "  CASSANDRA_LOCAL_DATACENTER: $GCP_REGION" >> tb-node-db-configmap.yml
```
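After these three commands, the data section of tb-node-db-configmap.yml should end with lines like the following (a sketch using the default region; the indentation must match the rest of the data: block):

```yaml
data:
  # ...existing tb-node settings...
  DATABASE_TS_TYPE: cassandra
  CASSANDRA_URL: cassandra:9042
  CASSANDRA_LOCAL_DATACENTER: us-central1
```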
Create the thingsboard keyspace. The datacenter name in the replication map must match your region (us-central1 by default):

```shell
kubectl exec -it cassandra-0 -- bash -c "cqlsh -e \
\"CREATE KEYSPACE IF NOT EXISTS thingsboard \
WITH replication = { \
'class' : 'NetworkTopologyStrategy', \
'us-central1' : '3' \
};\""
```
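If you changed GCP_REGION, the hardcoded datacenter name must change with it. A small sketch of generating the statement from the variable instead of hardcoding it (the CQL text itself matches the command above):

```shell
# Build the keyspace statement from $GCP_REGION so the datacenter name
# always matches CASSANDRA_LOCAL_DATACENTER (defaults to us-central1).
GCP_REGION=${GCP_REGION:-us-central1}
CQL="CREATE KEYSPACE IF NOT EXISTS thingsboard WITH replication = { 'class' : 'NetworkTopologyStrategy', '$GCP_REGION' : '3' };"
echo "$CQL"
```

Pass the generated statement to cqlsh exactly as in the kubectl exec command above.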

We assume you have already chosen your subscription plan or decided to purchase a perpetual license. If not, navigate to the pricing page. See How to get pay-as-you-go subscription or How to get perpetual license for details.

Create a docker secret with your license key:

```shell
export TB_LICENSE_KEY=PUT_YOUR_LICENSE_KEY_HERE
kubectl create -n thingsboard secret generic tb-license --from-literal=license-key=$TB_LICENSE_KEY
```

Edit tb-node-db-configmap.yml and replace YOUR_DB_IP_ADDRESS and YOUR_DB_PASSWORD:

```shell
nano tb-node-db-configmap.yml
```
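The fields to update look roughly like this (a sketch; use the key names already present in your copy of the file — ThingsBoard reads the database connection from Spring datasource environment variables):

```yaml
data:
  SPRING_DATASOURCE_URL: jdbc:postgresql://YOUR_DB_IP_ADDRESS:5432/thingsboard
  SPRING_DATASOURCE_USERNAME: postgres
  SPRING_DATASOURCE_PASSWORD: YOUR_DB_PASSWORD
```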

Run the installation:

```shell
./k8s-install-tb.sh --loadDemo
```

Where --loadDemo is an optional argument to load additional demo data.

After this command finishes you should see:

```
Installation finished successfully!
```

Deploy the ThingsBoard resources:

```shell
./k8s-deploy-resources.sh
```

After a few minutes, call kubectl get pods. If everything went fine, you should see tb-node-0 pod in the READY state.

You have 3 options:

  • HTTP — recommended for development.
  • HTTPS — recommended for production. Uses Google-managed SSL certificate.
  • Transparent — forwards traffic to ThingsBoard HTTP/HTTPS ports. Requires your own SSL certificate.
Deploy the HTTP load balancer:

```shell
kubectl apply -f receipts/http-load-balancer.yml
```

Check the status:

```shell
kubectl get ingress
```

Use the address to access the web UI and connect devices via HTTP API.

Default credentials (available when demo data is loaded):

  • System administrator: sysadmin@thingsboard.org / sysadmin
  • Tenant administrator: tenant@thingsboard.org / tenant
  • Customer user: customer@thingsboard.org / customer

See the official documentation for Google-managed SSL certificates. Reserve a static IP:

```shell
gcloud compute addresses create thingsboard-http-lb-address --global
```

Edit receipts/https-load-balancer.yml and replace PUT_YOUR_DOMAIN_HERE, then deploy:

```shell
kubectl apply -f receipts/https-load-balancer.yml
```
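For reference, the certificate resource inside that manifest is a GKE ManagedCertificate along the lines of the sketch below (the resource name managed-cert matches the describe command used later; verify against the actual file):

```yaml
apiVersion: networking.gke.io/v1
kind: ManagedCertificate
metadata:
  name: managed-cert
  namespace: thingsboard
spec:
  domains:
    - PUT_YOUR_DOMAIN_HERE
```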

Assign the domain name to the load balancer IP and wait for the certificate to provision (up to 60 minutes):

```shell
kubectl describe managedcertificate managed-cert
```

Follow the HTTPS (TLS) configuration guide to configure SSL in tb-node.yml. Then deploy:

```shell
kubectl apply -f receipts/transparent-http-load-balancer.yml
```

9.2 Configure MQTT load balancer (optional)

```shell
kubectl apply -f receipts/mqtt-load-balancer.yml
```

The load balancer forwards all TCP traffic for ports 1883 and 8883.
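Conceptually, receipts/mqtt-load-balancer.yml is a Kubernetes Service of type LoadBalancer exposing both MQTT ports; a simplified sketch (the name and selector here are illustrative, the real manifest in the repository is authoritative):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: tb-mqtt-loadbalancer
  namespace: thingsboard
spec:
  type: LoadBalancer
  selector:
    app: tb-node
  ports:
    - name: mqtt
      port: 1883
      targetPort: 1883
    - name: mqtts
      port: 8883
      targetPort: 8883
```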

For MQTT over SSL, follow the MQTT over SSL guide to configure the required environment variables in tb-node.yml.

9.3 Configure UDP load balancer (optional)

```shell
kubectl apply -f receipts/udp-load-balancer.yml
```

The load balancer forwards UDP traffic for ports 5683–5688 (CoAP and LwM2M protocols).

For CoAP over DTLS, follow the CoAP over DTLS guide. For LwM2M over DTLS, follow the LwM2M over DTLS guide.

9.4 Configure Edge load balancer (optional)

```shell
kubectl apply -f receipts/edge-load-balancer.yml
```

The load balancer forwards all TCP traffic on port 7070.

Verify that you can pull the Trendz images from Docker Hub:

```shell
docker pull thingsboard/trendz:1.15.1
docker pull thingsboard/trendz-python-executor:1.15.1
```

10.2 Create a Trendz database in the existing Cloud SQL instance


Edit trendz/trendz-secret.yml and replace YOUR_DB_IP_ADDRESS and YOUR_DB_PASSWORD, then apply:

```shell
kubectl apply -f ./trendz/trendz-secret.yml
kubectl apply -f ./trendz/trendz-create-db.yml
```

Check logs:

```shell
kubectl logs job/trendz-create-db -n thingsboard
```

Deploy Trendz:

```shell
./k8s-deploy-trendz.sh
```

After this command finishes you should see:

```
Trendz installed successfully!
```

To validate the deployment, check the ingresses and services, and tail the tb-node logs if needed:

```shell
kubectl get ingress
```

```shell
kubectl get service
```

```shell
kubectl logs -f tb-node-0
```

See the kubectl Cheat Sheet for more details.

```shell
./k8s-delete-resources.sh
./k8s-upgrade-tb.sh --fromVersion=[FROM_VERSION]
./k8s-deploy-resources.sh
```

Where FROM_VERSION is the starting version. See Upgrade Instructions for valid values. Upgrade versions one by one.

Upgrading to new Trendz version (optional)

```shell
git pull origin master
./k8s-upgrade-trendz.sh
```

Delete ThingsBoard pods and load balancers:

```shell
./k8s-delete-resources.sh
```

Delete all data including database:

```shell
./k8s-delete-all.sh
```