# GKE Monolith Setup
This guide walks you through deploying ThingsBoard PE in monolith mode on Google Kubernetes Engine. We use Google Cloud SQL for managed PostgreSQL.
## Prerequisites

### Install and configure tools

Install kubectl and gcloud. See before you begin for more info.

Create a new GCP project (recommended) or choose an existing one:

```shell
gcloud init
```

### Enable GCP services

```shell
gcloud services enable container.googleapis.com sql-component.googleapis.com sqladmin.googleapis.com
```

### Pull ThingsBoard PE images

Verify that you can pull the images from Docker Hub:

```shell
docker pull thingsboard/tb-pe-node:4.3.1.1PE
docker pull thingsboard/tb-pe-web-report:4.3.1.1PE
```

## Step 1. Clone ThingsBoard PE K8S scripts repository

```shell
git clone -b release-4.3 https://github.com/thingsboard/thingsboard-pe-k8s.git --depth 1
cd thingsboard-pe-k8s/gcp/monolith
```

## Step 2. Define environment variables
```shell
export GCP_PROJECT=$(gcloud config get-value project)
export GCP_REGION=us-central1
export GCP_ZONE=us-central1-a
export GCP_NETWORK=default
export TB_CLUSTER_NAME=tb-pe
export TB_DATABASE_NAME=tb-db
echo "Project: $GCP_PROJECT, region: $GCP_REGION, zone: $GCP_ZONE, network: $GCP_NETWORK, cluster: $TB_CLUSTER_NAME, database: $TB_DATABASE_NAME"
```

| Variable | Default | Description |
|---|---|---|
| GCP_PROJECT | (auto-detected) | Your GCP project ID |
| GCP_REGION | us-central1 | Compute region |
| GCP_ZONE | us-central1-a | Compute zone (must match the region) |
| GCP_NETWORK | default | GCP network name |
| TB_CLUSTER_NAME | tb-pe | GKE cluster name |
| TB_DATABASE_NAME | tb-db | Cloud SQL instance name |
## Step 3. Configure and create GKE cluster

Create a zonal cluster with one node of the e2-standard-4 machine type:

```shell
gcloud container clusters create $TB_CLUSTER_NAME \
  --release-channel stable \
  --zone $GCP_ZONE \
  --node-locations $GCP_ZONE \
  --network=$GCP_NETWORK \
  --enable-ip-alias \
  --num-nodes=1 \
  --node-labels=role=main \
  --machine-type=e2-standard-4
```

Alternatively, see the custom cluster setup guide.
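Before moving on, you can confirm the cluster finished provisioning. This is a sketch that uses a gcloud `--format` projection to print just the status field:

```shell
# Should print RUNNING once the cluster is fully provisioned.
gcloud container clusters describe $TB_CLUSTER_NAME \
  --zone $GCP_ZONE --format='value(status)'
```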
## Step 4. Update the context of kubectl

```shell
gcloud container clusters get-credentials $TB_CLUSTER_NAME --zone $GCP_ZONE
```

## Step 5. Provision databases
### 5.1 Google Cloud SQL (PostgreSQL) instance

#### Prerequisites

Enable service networking, allocate an address range, and create the VPC peering:

```shell
gcloud services enable servicenetworking.googleapis.com --project=$GCP_PROJECT

gcloud compute addresses create google-managed-services-$GCP_NETWORK \
  --global \
  --purpose=VPC_PEERING \
  --prefix-length=16 \
  --network=projects/$GCP_PROJECT/global/networks/$GCP_NETWORK

gcloud services vpc-peerings connect \
  --service=servicenetworking.googleapis.com \
  --ranges=google-managed-services-$GCP_NETWORK \
  --network=$GCP_NETWORK \
  --project=$GCP_PROJECT
```

#### Create database server instance
Create a PostgreSQL 16 instance. Recommendations:

- Use the same region and VPC network as your GKE cluster
- Use a private IP address and disable public IP
- Use a highly available instance for production and a single-zone instance for development
- Use at least 2 vCPUs and 7.5 GB of RAM

```shell
gcloud beta sql instances create $TB_DATABASE_NAME \
  --database-version=POSTGRES_16 \
  --region=$GCP_REGION --availability-type=regional \
  --no-assign-ip --network=projects/$GCP_PROJECT/global/networks/$GCP_NETWORK \
  --cpu=2 --memory=7680MB
```

Note the IP address (YOUR_DB_IP_ADDRESS) from the command output.
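If you did not capture the address from the creation output, you can query it later. A sketch, assuming the instance exposes a single (private) address:

```shell
# Print the IP address of the Cloud SQL instance.
gcloud sql instances describe $TB_DATABASE_NAME \
  --format='value(ipAddresses[0].ipAddress)'
```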
#### Set database password

Set the password for the postgres user (replace `secret` with a strong password of your own):

```shell
gcloud sql users set-password postgres \
  --instance=$TB_DATABASE_NAME \
  --password=secret
```

#### Create database

```shell
gcloud sql databases create thingsboard --instance=$TB_DATABASE_NAME
```

### 5.2 Cassandra (optional)
Using Cassandra is optional. We recommend it if you plan to insert more than 5K data points per second or want to optimize storage space.

#### Provision additional node groups

Create three separate node pools with one node per zone. At least 4 vCPUs and 16 GB of RAM per node is recommended. The commands below assume three zone variables are set, one per zone in your region (for example, `export GCP_ZONE1=us-central1-a GCP_ZONE2=us-central1-b GCP_ZONE3=us-central1-c`):

```shell
gcloud container node-pools create cassandra1 --cluster=$TB_CLUSTER_NAME --zone=$GCP_ZONE --node-locations=$GCP_ZONE1 \
  --node-labels=role=cassandra --num-nodes=1 --min-nodes=1 --max-nodes=1 --machine-type=e2-standard-4
gcloud container node-pools create cassandra2 --cluster=$TB_CLUSTER_NAME --zone=$GCP_ZONE --node-locations=$GCP_ZONE2 \
  --node-labels=role=cassandra --num-nodes=1 --min-nodes=1 --max-nodes=1 --machine-type=e2-standard-4
gcloud container node-pools create cassandra3 --cluster=$TB_CLUSTER_NAME --zone=$GCP_ZONE --node-locations=$GCP_ZONE3 \
  --node-labels=role=cassandra --num-nodes=1 --min-nodes=1 --max-nodes=1 --machine-type=e2-standard-4
```

#### Deploy Cassandra stateful set
```shell
kubectl apply -f tb-namespace.yml
kubectl config set-context $(kubectl config current-context) --namespace=thingsboard
kubectl apply -f receipts/cassandra.yml
```

#### Update DB settings

```shell
echo " DATABASE_TS_TYPE: cassandra" >> tb-node-db-configmap.yml
echo " CASSANDRA_URL: cassandra:9042" >> tb-node-db-configmap.yml
echo " CASSANDRA_LOCAL_DATACENTER: $GCP_REGION" >> tb-node-db-configmap.yml
```

#### Create keyspace

```shell
kubectl exec -it cassandra-0 -- bash -c "cqlsh -e \
  \"CREATE KEYSPACE IF NOT EXISTS thingsboard \
  WITH replication = { \
  'class' : 'NetworkTopologyStrategy', \
  'us-central1' : '3' \
  };\""
```

## Step 6. Obtain and configure license key
We assume you have already chosen your subscription plan or decided to purchase a perpetual license. If not, navigate to the pricing page. See How to get pay-as-you-go subscription or How to get perpetual license for details.

Create a Kubernetes secret with your license key:

```shell
export TB_LICENSE_KEY=PUT_YOUR_LICENSE_KEY_HERE
kubectl create -n thingsboard secret generic tb-license --from-literal=license-key=$TB_LICENSE_KEY
```

## Step 7. Installation
Edit tb-node-db-configmap.yml and replace YOUR_DB_IP_ADDRESS and YOUR_DB_PASSWORD:

```shell
nano tb-node-db-configmap.yml
```

Run the installation:

```shell
./k8s-install-tb.sh --loadDemo
```

Where --loadDemo is an optional argument that loads additional demo data.

After this command finishes you should see:

```text
Installation finished successfully!
```

## Step 8. Starting

```shell
./k8s-deploy-resources.sh
```

After a few minutes, run kubectl get pods. If everything went fine, you should see the tb-node-0 pod in the READY state.
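Instead of polling kubectl get pods, you can block until the pod reports ready. A sketch, assuming the default pod name tb-node-0 in the thingsboard namespace:

```shell
# Wait up to 10 minutes for the ThingsBoard node pod to become ready.
kubectl wait --for=condition=Ready pod/tb-node-0 \
  -n thingsboard --timeout=600s
```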
## Step 9. Configure load balancers

### 9.1 Configure HTTP(S) load balancer

You have three options:

- HTTP — recommended for development.
- HTTPS — recommended for production. Uses a Google-managed SSL certificate.
- Transparent — forwards traffic to the ThingsBoard HTTP/HTTPS ports. Requires your own SSL certificate.
#### HTTP load balancer

```shell
kubectl apply -f receipts/http-load-balancer.yml
```

Check the status:

```shell
kubectl get ingress
```

Use the address to access the web UI and connect devices via the HTTP API.

Default credentials:

- System Administrator: [email protected] / sysadmin
- Tenant Administrator: [email protected] / tenant (if demo data loaded)
- Customer User: [email protected] / customer (if demo data loaded)
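Reachability can also be checked from the command line by probing the login page. A sketch; `$INGRESS_IP` is a placeholder for the address reported by kubectl get ingress:

```shell
# Prints the HTTP status code of the login page; expect 200 once the UI is up.
curl -s -o /dev/null -w '%{http_code}\n' http://$INGRESS_IP/login
```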
#### HTTPS load balancer

See the official documentation for Google-managed SSL certificates. Reserve a static IP:

```shell
gcloud compute addresses create thingsboard-http-lb-address --global
```

Edit receipts/https-load-balancer.yml and replace PUT_YOUR_DOMAIN_HERE, then deploy:

```shell
kubectl apply -f receipts/https-load-balancer.yml
```

Assign the domain name to the load balancer IP and wait for the certificate to provision (up to 60 minutes):

```shell
kubectl describe managedcertificate managed-cert
```

#### Transparent load balancer
Follow the HTTPS (TLS) configuration guide to configure SSL in tb-node.yml. Then deploy:

```shell
kubectl apply -f receipts/transparent-http-load-balancer.yml
```

### 9.2 Configure MQTT load balancer (optional)
```shell
kubectl apply -f receipts/mqtt-load-balancer.yml
```

The load balancer forwards all TCP traffic on ports 1883 and 8883.

For MQTT over SSL, follow the MQTT over SSL guide to configure the required environment variables in tb-node.yml.
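To smoke-test plain MQTT, you can publish one telemetry message with mosquitto_pub, using a device access token as the MQTT username per the ThingsBoard device API. A sketch; `$LB_IP` and `$DEVICE_TOKEN` are placeholders for your load balancer address and a device's access token:

```shell
# Publish a single telemetry data point over MQTT (port 1883).
mosquitto_pub -h $LB_IP -p 1883 \
  -u "$DEVICE_TOKEN" \
  -t 'v1/devices/me/telemetry' \
  -m '{"temperature": 25}'
```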
### 9.3 Configure UDP load balancer (optional)

```shell
kubectl apply -f receipts/udp-load-balancer.yml
```

The load balancer forwards UDP traffic on ports 5683–5688 (CoAP and LwM2M protocols).

For CoAP over DTLS, follow the CoAP over DTLS guide. For LwM2M over DTLS, follow the LwM2M over DTLS guide.
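Plain CoAP can be smoke-tested in a similar way with coap-client from libcoap, posting to the ThingsBoard device telemetry endpoint. A sketch; `$LB_IP` and `$DEVICE_TOKEN` are placeholders:

```shell
# Post a single telemetry data point over CoAP (port 5683).
coap-client -m POST -t json \
  -e '{"temperature": 25}' \
  coap://$LB_IP:5683/api/v1/$DEVICE_TOKEN/telemetry
```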
### 9.4 Configure Edge load balancer (optional)

```shell
kubectl apply -f receipts/edge-load-balancer.yml
```

The load balancer forwards all TCP traffic on port 7070.
## Step 10. Configure Trendz (optional)

### 10.1 Pull Trendz images

```shell
docker pull thingsboard/trendz:1.15.1
docker pull thingsboard/trendz-python-executor:1.15.1
```

### 10.2 Create a Trendz database in the existing Cloud SQL instance

Edit trendz/trendz-secret.yml and replace YOUR_DB_IP_ADDRESS and YOUR_DB_PASSWORD, then apply:

```shell
kubectl apply -f ./trendz/trendz-secret.yml
kubectl apply -f ./trendz/trendz-create-db.yml
```

Check logs:

```shell
kubectl logs job/trendz-create-db -n thingsboard
```

### 10.3 Trendz starting
```shell
./k8s-deploy-trendz.sh
```

After this command finishes you should see:

```text
Trendz installed successfully!
```

## Step 11. Validate the setup
### Validate Web UI access

```shell
kubectl get ingress
```

### Validate MQTT/CoAP access

```shell
kubectl get service
```

## Troubleshooting

```shell
kubectl logs -f tb-node-0
```

See the kubectl Cheat Sheet for more details.
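If tb-node-0 is stuck in Pending or CrashLoopBackOff, pod and namespace events usually point at the cause. A sketch:

```shell
# Show scheduling and container events for the ThingsBoard pod.
kubectl describe pod tb-node-0 -n thingsboard

# List recent events in the namespace, oldest first.
kubectl get events -n thingsboard --sort-by=.metadata.creationTimestamp
```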
## Upgrading

### Upgrading to new ThingsBoard version

```shell
./k8s-delete-resources.sh
./k8s-upgrade-tb.sh --fromVersion=[FROM_VERSION]
./k8s-deploy-resources.sh
```

Where FROM_VERSION is the version you are upgrading from. See the Upgrade Instructions for valid values. Upgrade one version at a time.
### Upgrading to new Trendz version (optional)

```shell
git pull origin master
./k8s-upgrade-trendz.sh
```

## Cluster deletion
Delete ThingsBoard pods and load balancers:

```shell
./k8s-delete-resources.sh
```

Delete all data, including the database:

```shell
./k8s-delete-all.sh
```