# GKE Microservices Setup
This guide walks you through deploying ThingsBoard CE in microservices mode on Google Kubernetes Engine. We use Google Cloud SQL for managed PostgreSQL.
## Prerequisites

### Install and configure tools

Install `kubectl` and `gcloud`. See the GKE "before you begin" documentation for more info.
Create a new GCP project (recommended) or choose an existing one:
```shell
gcloud init
```

### Enable GCP services

```shell
gcloud services enable container.googleapis.com sql-component.googleapis.com sqladmin.googleapis.com
```

## Step 1. Clone ThingsBoard CE K8S scripts repository

```shell
git clone -b release-4.3 https://github.com/thingsboard/thingsboard-ce-k8s.git
cd thingsboard-ce-k8s/gcp/microservices
```

## Step 2. Define environment variables
```shell
export GCP_PROJECT=$(gcloud config get-value project)
export GCP_REGION=us-central1
export GCP_ZONE=us-central1
export GCP_ZONE1=us-central1-a
export GCP_ZONE2=us-central1-b
export GCP_ZONE3=us-central1-c
export GCP_NETWORK=default
export TB_CLUSTER_NAME=tb-ce-msa
export TB_DATABASE_NAME=tb-db
echo "Project: $GCP_PROJECT, region: $GCP_REGION, zones: $GCP_ZONE1,$GCP_ZONE2,$GCP_ZONE3, network: $GCP_NETWORK, cluster: $TB_CLUSTER_NAME, database: $TB_DATABASE_NAME"
```

| Variable | Default | Description |
|---|---|---|
| GCP_PROJECT | (auto-detected) | Your GCP project ID |
| GCP_REGION | us-central1 | Compute region |
| GCP_ZONE1/2/3 | us-central1-a/b/c | Availability zones for the regional cluster |
| GCP_NETWORK | default | GCP network name |
| TB_CLUSTER_NAME | tb-ce-msa | GKE cluster name |
| TB_DATABASE_NAME | tb-db | Cloud SQL instance name |
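The variables above are plain shell state, so a quick guard can catch typos before any resources are created. This is a sketch, not part of the ThingsBoard scripts; `check_vars` is a hypothetical helper:

```shell
#!/usr/bin/env bash
# Hypothetical helper: report any listed variable that is unset or empty.
check_vars() {
  local status=0
  for var in "$@"; do
    if [ -z "${!var}" ]; then      # ${!var}: bash indirect expansion
      echo "ERROR: $var is not set" >&2
      status=1
    fi
  done
  return $status
}

if check_vars GCP_PROJECT GCP_REGION GCP_ZONE1 GCP_ZONE2 GCP_ZONE3 \
              GCP_NETWORK TB_CLUSTER_NAME TB_DATABASE_NAME; then
  echo "All required variables are set"
else
  echo "Fix the missing variables before continuing" >&2
fi
```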
## Step 3. Configure and create GKE cluster

Create a regional cluster distributed across 3 zones. The example provisions one e2-standard-4 node per zone (3 nodes total). You can modify the machine type and node count to suit your workload. See GCP machine types for options.

```shell
gcloud container clusters create $TB_CLUSTER_NAME \
    --release-channel stable \
    --region $GCP_REGION \
    --network=$GCP_NETWORK \
    --node-locations $GCP_ZONE1,$GCP_ZONE2,$GCP_ZONE3 \
    --enable-ip-alias \
    --num-nodes=1 \
    --node-labels=role=main \
    --machine-type=e2-standard-4
```

Alternatively, see the regional cluster setup guide.
## Step 4. Update the context of kubectl

```shell
gcloud container clusters get-credentials $TB_CLUSTER_NAME --region $GCP_REGION
```

## Step 5. Provision databases

### 5.1 Google Cloud SQL (PostgreSQL) instance

#### Prerequisites

Enable service networking:

```shell
gcloud services enable servicenetworking.googleapis.com --project=$GCP_PROJECT
```

Allocate an IP range for VPC peering:

```shell
gcloud compute addresses create google-managed-services-$GCP_NETWORK \
    --global \
    --purpose=VPC_PEERING \
    --prefix-length=16 \
    --network=projects/$GCP_PROJECT/global/networks/$GCP_NETWORK
```

Connect the peering:

```shell
gcloud services vpc-peerings connect \
    --service=servicenetworking.googleapis.com \
    --ranges=google-managed-services-$GCP_NETWORK \
    --network=$GCP_NETWORK \
    --project=$GCP_PROJECT
```

#### Create database server instance
```shell
gcloud beta sql instances create $TB_DATABASE_NAME \
    --database-version=POSTGRES_16 \
    --region=$GCP_REGION --availability-type=regional \
    --no-assign-ip --network=projects/$GCP_PROJECT/global/networks/$GCP_NETWORK \
    --cpu=2 --memory=7680MB
```

Note the IP address (YOUR_DB_IP_ADDRESS) from the command output.
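If the address scrolled by, the instance's private IP can be read back at any time; this sketch assumes the instance name in `$TB_DATABASE_NAME` and uses a gcloud `--format` projection:

```shell
# Print the first IP address of the Cloud SQL instance. Because the instance
# was created with --no-assign-ip, this is its private (VPC) address.
gcloud sql instances describe $TB_DATABASE_NAME \
    --format="value(ipAddresses[0].ipAddress)"
```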
#### Set database password

```shell
gcloud sql users set-password postgres \
    --instance=$TB_DATABASE_NAME \
    --password=secret
```

Use a strong password instead of `secret` for production deployments.

#### Create database

```shell
gcloud sql databases create thingsboard --instance=$TB_DATABASE_NAME
```

### 5.2 Cassandra (optional)
Using Cassandra is optional. We recommend it if you plan to insert more than 5K data points per second.
Create 3 separate node pools:
```shell
gcloud container node-pools create cassandra1 --cluster=$TB_CLUSTER_NAME --region=$GCP_REGION --node-locations=$GCP_ZONE1 \
    --node-labels=role=cassandra --num-nodes=1 --min-nodes=1 --max-nodes=1 --machine-type=e2-standard-4
gcloud container node-pools create cassandra2 --cluster=$TB_CLUSTER_NAME --region=$GCP_REGION --node-locations=$GCP_ZONE2 \
    --node-labels=role=cassandra --num-nodes=1 --min-nodes=1 --max-nodes=1 --machine-type=e2-standard-4
gcloud container node-pools create cassandra3 --cluster=$TB_CLUSTER_NAME --region=$GCP_REGION --node-locations=$GCP_ZONE3 \
    --node-labels=role=cassandra --num-nodes=1 --min-nodes=1 --max-nodes=1 --machine-type=e2-standard-4
```

Deploy Cassandra:
```shell
kubectl apply -f tb-namespace.yml
kubectl config set-context $(kubectl config current-context) --namespace=thingsboard
kubectl apply -f receipts/cassandra.yml
```

Update DB settings:

```shell
echo "  DATABASE_TS_TYPE: cassandra" >> tb-node-db-configmap.yml
echo "  CASSANDRA_URL: cassandra:9042" >> tb-node-db-configmap.yml
echo "  CASSANDRA_LOCAL_DATACENTER: $GCP_REGION" >> tb-node-db-configmap.yml
```

Create keyspace:

```shell
kubectl exec -it cassandra-0 -- bash -c "cqlsh -e \
\"CREATE KEYSPACE IF NOT EXISTS thingsboard \
WITH replication = { \
'class' : 'NetworkTopologyStrategy', \
'us-central1' : '3' \
};\""
```

## Step 6. Installation
Edit tb-node-db-configmap.yml and replace YOUR_DB_IP_ADDRESS and YOUR_DB_PASSWORD:

```shell
nano tb-node-db-configmap.yml
```

Run the installation:

```shell
./k8s-install-tb.sh --loadDemo
```

After this command finishes you should see:

```text
Installation finished successfully!
```

## Step 7. Starting
Deploy third-party components (Zookeeper, Kafka, Redis) and the main ThingsBoard microservices:

```shell
./k8s-deploy-resources.sh
```

After a few minutes, call `kubectl get pods`. You should see the tb-node-0 pod in the READY state.
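Instead of polling `kubectl get pods` by hand, kubectl can block until the pod is ready; the pod name and namespace below match this guide's deployment, and the timeout is an arbitrary choice:

```shell
# Block until tb-node-0 reports the Ready condition, or fail after 10 minutes
kubectl wait --for=condition=Ready pod/tb-node-0 \
    --namespace=thingsboard --timeout=600s
```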
### Deploy transport microservices

Deploy the transport microservices you need. Omit protocols you don't use to save resources:

```shell
# HTTP Transport (optional)
kubectl apply -f transports/tb-http-transport.yml

# MQTT Transport (optional)
kubectl apply -f transports/tb-mqtt-transport.yml

# CoAP Transport (optional)
kubectl apply -f transports/tb-coap-transport.yml

# LwM2M Transport (optional)
kubectl apply -f transports/tb-lwm2m-transport.yml

# SNMP Transport (optional)
kubectl apply -f transports/tb-snmp-transport.yml
```

## Step 8. Configure load balancers
### 8.1 Configure HTTP(S) load balancer

You have 3 options:
- HTTP — recommended for development.
- HTTPS — recommended for production. Uses Google-managed SSL certificate.
- Transparent — forwards traffic to ThingsBoard. Requires your own SSL certificate.
#### HTTP load balancer

```shell
kubectl apply -f receipts/http-load-balancer.yml
```

Check the status:

```shell
kubectl get ingress
```

Default credentials:
- System Administrator: [email protected] / sysadmin
- Tenant Administrator: [email protected] / tenant (if demo data loaded)
- Customer User: [email protected] / customer (if demo data loaded)
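To sanity-check the load balancer from the command line, you can log in through the ThingsBoard REST API; YOUR_LB_IP is a placeholder for the address shown by `kubectl get ingress`:

```shell
# POST the demo tenant credentials. A JSON response containing a JWT
# "token" field confirms the UI and REST API are reachable.
curl -s -X POST http://YOUR_LB_IP/api/auth/login \
    -H "Content-Type: application/json" \
    -d '{"username":"[email protected]","password":"tenant"}'
```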
#### HTTPS load balancer

Reserve a static IP:

```shell
gcloud compute addresses create thingsboard-http-lb-address --global
```

Edit receipts/https-load-balancer.yml, replace PUT_YOUR_DOMAIN_HERE, then deploy:

```shell
kubectl apply -f receipts/https-load-balancer.yml
```

Wait for the certificate to provision (up to 60 minutes):

```shell
kubectl describe managedcertificate managed-cert
```

### 8.2 Configure MQTT load balancer (optional)
```shell
kubectl apply -f receipts/mqtt-load-balancer.yml
```

For MQTT over SSL, follow the MQTT over SSL guide to configure transport/tb-mqtt-transport.yml.
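Once the MQTT load balancer has an external IP, a test telemetry message can be published with the `mosquitto_pub` client. YOUR_LB_IP and ACCESS_TOKEN (a device access token copied from the ThingsBoard UI) are placeholders:

```shell
# Publish one telemetry datapoint over plain MQTT (port 1883).
# ThingsBoard uses the device access token as the MQTT username.
mosquitto_pub -h YOUR_LB_IP -p 1883 -u "$ACCESS_TOKEN" \
    -t "v1/devices/me/telemetry" -m '{"temperature":25}'
```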
### 8.3 Configure CoAP load balancer (optional)

```shell
kubectl apply -f receipts/coap-load-balancer.yml
```

The load balancer forwards UDP traffic for ports 5683 (CoAP non-secure) and 5684 (CoAP secure DTLS).
For CoAP over DTLS, follow the CoAP over DTLS guide to configure transport/tb-coap-transport.yml.
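A quick CoAP smoke test with libcoap's `coap-client` might look like the sketch below; YOUR_LB_IP and ACCESS_TOKEN are placeholders, and the flags assume a reasonably recent libcoap build:

```shell
# Post one telemetry datapoint to the non-secure CoAP port (5683)
coap-client -m post coap://YOUR_LB_IP:5683/api/v1/$ACCESS_TOKEN/telemetry \
    -t json -e '{"temperature":25}'
```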
### 8.4 Configure LwM2M load balancer (optional)

```shell
kubectl apply -f receipts/lwm2m-load-balancer.yml
```

The load balancer forwards UDP traffic for ports 5685–5688.
For LwM2M over DTLS, follow the LwM2M over DTLS guide to configure transport/tb-lwm2m-transport.yml.
### 8.5 Configure Edge load balancer (optional)

```shell
kubectl apply -f receipts/edge-load-balancer.yml
```

The load balancer forwards all TCP traffic on port 7070.
## Step 9. Validate the setup

```shell
kubectl get ingress
kubectl get service
```

## Troubleshooting

```shell
kubectl logs -f tb-node-0
```

See the kubectl Cheat Sheet for more details.
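Beyond tailing logs, two standard kubectl commands help when a pod will not start; these are generic Kubernetes commands, not ThingsBoard-specific:

```shell
# Show container states, restart counts, and recent events for the pod
kubectl describe pod tb-node-0 --namespace=thingsboard

# List recent cluster events, most recent last
kubectl get events --namespace=thingsboard \
    --sort-by=.metadata.creationTimestamp
```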
## Upgrading

```shell
./k8s-delete-resources.sh
./k8s-upgrade-tb.sh --fromVersion=[FROM_VERSION]
./k8s-deploy-resources.sh
```

Where FROM_VERSION is the version you are upgrading from. See the Upgrade Instructions for valid values. Upgrade versions one by one.

## Cluster deletion

```shell
./k8s-delete-resources.sh
./k8s-delete-all.sh
```