# GKE Monolith Setup
This guide walks you through deploying ThingsBoard CE in monolith mode on Google Kubernetes Engine. We use Google Cloud SQL for managed PostgreSQL.
## Prerequisites

### Install and configure tools

Install kubectl and gcloud. See the before you begin guide for more info.
Create a new GCP project (recommended) or choose an existing one:
```bash
gcloud init
```

### Enable GCP services

```bash
gcloud services enable container.googleapis.com sql-component.googleapis.com sqladmin.googleapis.com
```

## Step 1. Clone ThingsBoard CE K8S scripts repository

```bash
git clone -b release-4.3 https://github.com/thingsboard/thingsboard-ce-k8s.git
cd thingsboard-ce-k8s/gcp/monolith
```

## Step 2. Define environment variables

```bash
export GCP_PROJECT=$(gcloud config get-value project)
export GCP_REGION=us-central1
export GCP_ZONE=us-central1-a
export GCP_NETWORK=default
export TB_CLUSTER_NAME=tb-ce
export TB_DATABASE_NAME=tb-db
echo "Project: $GCP_PROJECT, region: $GCP_REGION, zone: $GCP_ZONE, network: $GCP_NETWORK, cluster: $TB_CLUSTER_NAME, database: $TB_DATABASE_NAME"
```

| Variable | Default | Description |
|---|---|---|
| GCP_PROJECT | (auto-detected) | Your GCP project ID |
| GCP_REGION | us-central1 | Compute region |
| GCP_ZONE | us-central1-a | Compute zone (must match the region) |
| GCP_NETWORK | default | GCP network name |
| TB_CLUSTER_NAME | tb-ce | GKE cluster name |
| TB_DATABASE_NAME | tb-db | Cloud SQL instance name |
## Step 3. Configure and create GKE cluster

Create a zonal cluster with 1 node of the e2-standard-4 machine type:

```bash
gcloud container clusters create $TB_CLUSTER_NAME \
  --release-channel stable \
  --zone $GCP_ZONE \
  --node-locations $GCP_ZONE \
  --network=$GCP_NETWORK \
  --enable-ip-alias \
  --num-nodes=1 \
  --node-labels=role=main \
  --machine-type=e2-standard-4
```

Alternatively, see the custom cluster setup guide.
## Step 4. Update the context of kubectl

```bash
gcloud container clusters get-credentials $TB_CLUSTER_NAME --zone $GCP_ZONE
```

## Step 5. Provision databases

### 5.1 Google Cloud SQL (PostgreSQL) instance

#### Prerequisites

Enable service networking:

```bash
gcloud services enable servicenetworking.googleapis.com --project=$GCP_PROJECT
```

Allocate a peering range and connect it to the service networking API:

```bash
gcloud compute addresses create google-managed-services-$GCP_NETWORK \
  --global \
  --purpose=VPC_PEERING \
  --prefix-length=16 \
  --network=projects/$GCP_PROJECT/global/networks/$GCP_NETWORK

gcloud services vpc-peerings connect \
  --service=servicenetworking.googleapis.com \
  --ranges=google-managed-services-$GCP_NETWORK \
  --network=$GCP_NETWORK \
  --project=$GCP_PROJECT
```
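Optionally, you can confirm the peering was established; a quick read-only check with gcloud:

```bash
# List active service peerings on the network; the servicenetworking peering should appear.
gcloud services vpc-peerings list --network=$GCP_NETWORK
```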
#### Create database server instance

Create a PostgreSQL 16 instance. Recommendations:
- Use the same region and VPC network as your GKE cluster
- Use private IP address and disable public IP
- Use highly available instance for production, single zone for development
- At least 2 vCPUs and 7.5 GB RAM
```bash
gcloud beta sql instances create $TB_DATABASE_NAME \
  --database-version=POSTGRES_16 \
  --region=$GCP_REGION --availability-type=regional \
  --no-assign-ip --network=projects/$GCP_PROJECT/global/networks/$GCP_NETWORK \
  --cpu=2 --memory=7680MB
```

Note the IP address (YOUR_DB_IP_ADDRESS) from the command output.
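If you did not note the address, you can print it later; a minimal sketch using gcloud's --format flag (assumes the private address is the first one listed for the instance):

```bash
# Print the IP address of the Cloud SQL instance.
gcloud sql instances describe $TB_DATABASE_NAME \
  --format='value(ipAddresses[0].ipAddress)'
```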
#### Set database password

```bash
gcloud sql users set-password postgres \
  --instance=$TB_DATABASE_NAME \
  --password=secret
```

Replace secret with a strong password of your own.

#### Create database

```bash
gcloud sql databases create thingsboard --instance=$TB_DATABASE_NAME
```
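To double-check that the thingsboard database exists on the instance:

```bash
# List databases on the Cloud SQL instance; "thingsboard" should be present.
gcloud sql databases list --instance=$TB_DATABASE_NAME
```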
### 5.2 Cassandra (optional)

Using Cassandra is optional. We recommend it if you plan to insert more than 5K data points per second or want to optimize storage space.

#### Provision additional node groups

Create 3 separate node pools with 1 node per zone. At least 4 vCPUs and 16 GB of RAM per node are recommended.
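The node-pool commands below reference GCP_ZONE1, GCP_ZONE2, and GCP_ZONE3, which are not part of the Step 2 exports; as a sketch, assuming you spread the pools across three zones of us-central1:

```bash
# Illustrative zone choices; pick any three zones in your region.
export GCP_ZONE1=us-central1-a
export GCP_ZONE2=us-central1-b
export GCP_ZONE3=us-central1-c
```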
```bash
gcloud container node-pools create cassandra1 --cluster=$TB_CLUSTER_NAME --zone=$GCP_ZONE --node-locations=$GCP_ZONE1 \
  --node-labels=role=cassandra --num-nodes=1 --min-nodes=1 --max-nodes=1 --machine-type=e2-standard-4

gcloud container node-pools create cassandra2 --cluster=$TB_CLUSTER_NAME --zone=$GCP_ZONE --node-locations=$GCP_ZONE2 \
  --node-labels=role=cassandra --num-nodes=1 --min-nodes=1 --max-nodes=1 --machine-type=e2-standard-4

gcloud container node-pools create cassandra3 --cluster=$TB_CLUSTER_NAME --zone=$GCP_ZONE --node-locations=$GCP_ZONE3 \
  --node-labels=role=cassandra --num-nodes=1 --min-nodes=1 --max-nodes=1 --machine-type=e2-standard-4
```

#### Deploy Cassandra stateful set

```bash
kubectl apply -f tb-namespace.yml
kubectl config set-context $(kubectl config current-context) --namespace=thingsboard
kubectl apply -f receipts/cassandra.yml
```

#### Update DB settings

```bash
echo "  DATABASE_TS_TYPE: cassandra" >> tb-node-db-configmap.yml
echo "  CASSANDRA_URL: cassandra:9042" >> tb-node-db-configmap.yml
echo "  CASSANDRA_LOCAL_DATACENTER: $GCP_REGION" >> tb-node-db-configmap.yml
```

#### Create keyspace
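Before creating the keyspace, it may be worth confirming that all three Cassandra nodes have joined the ring (the pod name follows the stateful set's default naming):

```bash
# Each of the three nodes should report status "UN" (Up/Normal).
kubectl exec -it cassandra-0 -- nodetool status
```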
```bash
kubectl exec -it cassandra-0 -- bash -c "cqlsh -e \
  \"CREATE KEYSPACE IF NOT EXISTS thingsboard \
  WITH replication = { \
    'class' : 'NetworkTopologyStrategy', \
    'us-central1' : '3' \
  };\""
```

The datacenter name in the replication map ('us-central1') should match the value used for CASSANDRA_LOCAL_DATACENTER above; adjust it if you chose a different region.

## Step 6. Installation

Edit tb-node-db-configmap.yml and replace YOUR_DB_IP_ADDRESS and YOUR_DB_PASSWORD with the Cloud SQL private IP and password from Step 5:

```bash
nano tb-node-db-configmap.yml
```
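If you prefer a scripted edit over nano, a sed sketch works too; the IP below is purely illustrative, so substitute the private address and password recorded in Step 5:

```bash
# Replace the placeholders in place (GNU sed; the values here are examples).
sed -i 's/YOUR_DB_IP_ADDRESS/10.8.0.3/' tb-node-db-configmap.yml
sed -i 's/YOUR_DB_PASSWORD/secret/' tb-node-db-configmap.yml
```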
Run the installation:

```bash
./k8s-install-tb.sh --loadDemo
```

Where --loadDemo is an optional argument to load additional demo data.

After this command finishes you should see:

```
Installation finished successfully!
```

## Step 7. Starting

```bash
./k8s-deploy-resources.sh
```

After a few minutes, call kubectl get pods. If everything went fine, you should see the tb-node-0 pod in the READY state.
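If you prefer to block until the pod is ready instead of polling, a minimal sketch:

```bash
# Wait up to 10 minutes for tb-node-0 to report Ready.
kubectl wait --for=condition=Ready pod/tb-node-0 --timeout=600s
```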
## Step 8. Configure load balancers

### 8.1 Configure HTTP(S) load balancer

You have 3 options:
- HTTP — recommended for development. Simple configuration and minimum costs.
- HTTPS — recommended for production. Uses Google-managed SSL certificate with automatic HTTP to HTTPS redirect.
- Transparent — forwards traffic to ThingsBoard HTTP/HTTPS ports. Requires you to provision your own SSL certificate.
#### HTTP load balancer

```bash
kubectl apply -f receipts/http-load-balancer.yml
```

Check the status:

```bash
kubectl get ingress
```

Use the address to access the web UI (port 80) and connect devices via the HTTP API.
Default credentials:
- System Administrator: sysadmin@thingsboard.org / sysadmin
- Tenant Administrator: tenant@thingsboard.org / tenant (if demo data loaded)
- Customer User: customer@thingsboard.org / customer (if demo data loaded)
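To confirm that devices can reach the HTTP API, you can push a single telemetry value; this sketch assumes LB_ADDRESS holds the ingress address and ACCESS_TOKEN holds the access token of a device you created (or one of the demo devices):

```bash
# Publish one datapoint via the ThingsBoard HTTP device API; a 200 response means it was accepted.
curl -X POST "http://$LB_ADDRESS/api/v1/$ACCESS_TOKEN/telemetry" \
  -H "Content-Type: application/json" \
  -d '{"temperature": 25}'
```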
#### HTTPS load balancer

The process of configuring the load balancer with Google-managed SSL certificates is described in the official documentation. Make sure you read the prerequisites.

Reserve a static IP address:

```bash
gcloud compute addresses create thingsboard-http-lb-address --global
```

Edit receipts/https-load-balancer.yml and replace PUT_YOUR_DOMAIN_HERE with your domain name:

```bash
nano receipts/https-load-balancer.yml
```

Deploy:

```bash
kubectl apply -f receipts/https-load-balancer.yml
```

Check the status:

```bash
kubectl get ingress
```

Assign the domain name to the load balancer IP address. Then wait for the Google-managed certificate to finish provisioning (up to 60 minutes):

```bash
kubectl describe managedcertificate managed-cert
```
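When creating the DNS record, the A record should point at the static IP you reserved above; one way to print it:

```bash
# Show the reserved global address created earlier in this step.
gcloud compute addresses describe thingsboard-http-lb-address \
  --global --format='value(address)'
```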
#### Transparent load balancer

Follow the HTTPS (TLS) configuration guide to configure SSL in tb-node.yml. Then deploy:

```bash
kubectl apply -f receipts/transparent-http-load-balancer.yml
```

### 8.2 Configure MQTT load balancer (optional)

```bash
kubectl apply -f receipts/mqtt-load-balancer.yml
```

The load balancer forwards all TCP traffic for ports 1883 and 8883.
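Once the MQTT load balancer has an external IP, a quick connectivity check with the Mosquitto client looks roughly like this; MQTT_LB_IP and ACCESS_TOKEN are placeholders for the load balancer IP and a device access token:

```bash
# Publish one telemetry message; ThingsBoard uses the device access token as the MQTT username.
mosquitto_pub -h "$MQTT_LB_IP" -p 1883 \
  -u "$ACCESS_TOKEN" \
  -t "v1/devices/me/telemetry" \
  -m '{"temperature": 25}'
```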
For MQTT over SSL, follow the MQTT over SSL guide to configure the required environment variables in tb-node.yml.
### 8.3 Configure UDP load balancer (optional)

```bash
kubectl apply -f receipts/udp-load-balancer.yml
```

The load balancer forwards all UDP traffic for ports:
| Port | Protocol |
|---|---|
| 5683 | CoAP non-secure |
| 5684 | CoAP secure DTLS |
| 5685 | LwM2M non-secure |
| 5686 | LwM2M secure DTLS |
| 5687 | LwM2M bootstrap non-secure |
| 5688 | LwM2M bootstrap secure DTLS |
For CoAP over DTLS, follow the CoAP over DTLS guide. For LwM2M over DTLS, follow the LwM2M over DTLS guide.
### 8.4 Configure Edge load balancer (optional)

```bash
kubectl apply -f receipts/edge-load-balancer.yml
```

The load balancer forwards all TCP traffic on port 7070. Use the external IP as CLOUD_RPC_HOST in Edge connection parameters.
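As a sketch for looking up that external IP non-interactively (the service name is an assumption; confirm it with kubectl get service):

```bash
# Print the external IP of the Edge load balancer service (name may differ in your repo version).
kubectl get service tb-edge-loadbalancer \
  -o jsonpath='{.status.loadBalancer.ingress[0].ip}'
```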
## Step 9. Validate the setup

### Validate Web UI access

```bash
kubectl get ingress
```

### Validate MQTT/CoAP access

```bash
kubectl get service
```

Two load balancers are available:

- tb-mqtt-loadbalancer (MQTT protocol)
- tb-udp-loadbalancer (CoAP/LwM2M protocols)
## Troubleshooting

In case of issues, inspect the logs of the ThingsBoard node:

```bash
kubectl logs -f tb-node-0
```

See the kubectl Cheat Sheet for more details.
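Beyond the pod logs, a couple of standard kubectl checks often help to spot scheduling or image-pull problems:

```bash
# Describe the pod to see events, restarts, and probe failures.
kubectl describe pod tb-node-0

# Recent events in the namespace, oldest first.
kubectl get events --sort-by=.metadata.creationTimestamp
```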
## Upgrading

If a database upgrade is needed:

```bash
./k8s-delete-resources.sh
./k8s-upgrade-tb.sh --fromVersion=[FROM_VERSION]
./k8s-deploy-resources.sh
```

Where FROM_VERSION is the starting version. See Upgrade Instructions for valid values. Upgrade versions one by one.
## Cluster deletion

Delete ThingsBoard pods and load balancers:

```bash
./k8s-delete-resources.sh
```

Delete all data including database:

```bash
./k8s-delete-all.sh
```
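If you also want to remove the GKE cluster and the Cloud SQL instance created in this guide, a sketch of the cleanup (both operations are irreversible and prompt for confirmation):

```bash
# Delete the GKE cluster and the Cloud SQL instance provisioned in Steps 3 and 5.
gcloud container clusters delete $TB_CLUSTER_NAME --zone $GCP_ZONE
gcloud sql instances delete $TB_DATABASE_NAME
```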