Cluster setup using OpenShift

This guide will help you set up ThingsBoard in cluster mode using OpenShift.


ThingsBoard microservices run on a Kubernetes cluster. To deploy an OpenShift cluster locally you'll need Docker CE to run the OpenShift containers, as well as OpenShift Origin itself. Please follow these instructions to install the required software.

Log in to OpenShift cluster

To access the OpenShift cluster you'll have to log in first. By default, you may log in as the developer user:

oc login -u developer -p developer

Create project

On first start-up you should create the thingsboard project. To create it, execute the following command:

oc new-project thingsboard
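To confirm that the new project is active before deploying anything, you can print the current project. This is a small sketch using the standard oc project command; it is guarded so it is harmless where no oc session is available:

```shell
# Print the currently active OpenShift project; the deployment steps below
# assume it is "thingsboard". Guarded so the snippet does nothing harmful
# without an active oc session.
if command -v oc >/dev/null 2>&1 && oc whoami >/dev/null 2>&1; then
  oc project -q
else
  echo "no active oc session"
fi
```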

Step 1. Review the architecture page

Starting with ThingsBoard v2.2, it is possible to install a ThingsBoard cluster using the new microservices architecture and docker containers. See the microservices architecture page for more details.

Step 2. Clone ThingsBoard CE Kubernetes scripts repository

git clone -b release-3.6.4 --depth 1 https://github.com/thingsboard/thingsboard-ce-k8s.git
cd thingsboard-ce-k8s/openshift

Step 3. Configure ThingsBoard database

Before performing the initial installation you can configure the type of database to be used with ThingsBoard. To set the database type, change the value of the DATABASE variable in the .env file to one of the following:

  • postgres - use PostgreSQL database;
  • hybrid - use PostgreSQL for the entities database and Cassandra for the time-series database;
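The DATABASE value can also be switched non-interactively with a one-line sed edit. This is a sketch that assumes the .env file stores the value as a plain DATABASE=<value> line; a throwaway copy is used here for illustration, whereas in the repository you would run the sed command against the real .env file:

```shell
# Sketch: switch the configured database type to "hybrid" without opening
# an editor. Assumes the .env file stores a plain "DATABASE=<value>" line.
# A throwaway copy is edited here so the example is self-contained.
printf 'DATABASE=postgres\n' > /tmp/tb-demo.env
sed -i 's/^DATABASE=.*/DATABASE=hybrid/' /tmp/tb-demo.env
grep '^DATABASE=' /tmp/tb-demo.env   # prints DATABASE=hybrid
```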

NOTE: Depending on the database type, the corresponding Kubernetes resources will be deployed (see postgres.yml and cassandra.yml for details).


If you selected hybrid as DATABASE you can also configure the number of Cassandra nodes (the StatefulSet.spec.replicas property in the cassandra.yml config file) and the CASSANDRA_REPLICATION_FACTOR in the .env file. If you want to change CASSANDRA_REPLICATION_FACTOR, please read the Cassandra documentation first.

It is recommended to have 3 Cassandra nodes with CASSANDRA_REPLICATION_FACTOR equal to 2.
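If the StatefulSet is already deployed, the node count can also be adjusted with oc scale. This is a guarded sketch; the StatefulSet name cassandra is an assumption based on cassandra.yml, so verify it with oc get statefulsets first:

```shell
# Sketch: scale Cassandra to the recommended 3 nodes on an already-deployed
# cluster. The StatefulSet name "cassandra" is an assumption taken from
# cassandra.yml. Guarded so the snippet is harmless without an oc session.
if command -v oc >/dev/null 2>&1 && oc whoami >/dev/null 2>&1; then
  oc scale statefulset cassandra --replicas=3 -n thingsboard
else
  echo "no active oc session"
fi
```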

Step 5. Running

Execute the following command to run installation:

./k8s-install-tb.sh --loadDemo


  • --loadDemo - optional argument. Whether to load additional demo data.

Execute the following command to deploy third-party resources:

./k8s-deploy-thirdparty.sh

Type 'yes' when prompted if you are running ThingsBoard with DEPLOYMENT_TYPE set to high-availability for the first time, or don't have a configured Redis cluster.

Execute the following command to deploy ThingsBoard resources:

./k8s-deploy-resources.sh
To see how to reach your ThingsBoard application on the cluster, log in as the developer user (the default password is developer as well), open the thingsboard project, then go to the Application -> Routes menu; there you'll see all your configured routes, including the root route for the web UI.

When you open it, you should see ThingsBoard login page.
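The same route list is also available from the CLI. A guarded sketch using the standard oc get routes command:

```shell
# Sketch: list the HTTP routes exposed in the thingsboard project from the
# CLI instead of the web console. Guarded so the snippet is harmless without
# an active oc session.
if command -v oc >/dev/null 2>&1 && oc whoami >/dev/null 2>&1; then
  oc get routes -n thingsboard
else
  echo "no active oc session"
fi
```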

Use the following default credentials:

  • System Administrator: sysadmin@thingsboard.org / sysadmin

If you installed the database with demo data (using the --loadDemo flag) you can also use the following credentials:

  • Tenant Administrator: tenant@thingsboard.org / tenant
  • Customer User: customer@thingsboard.org / customer

In case of any issues you can examine the service logs for errors. For example, to see the ThingsBoard node logs, execute the following commands:

1) Get the list of the running tb-node pods:

oc get pods -l app=tb-node

2) Fetch logs of the tb-node pod:

oc logs -f [tb-node-pod-name]


  • tb-node-pod-name - the name of a tb-node pod obtained from the list of running tb-node pods.

You can also use oc get pods to see the state of all the pods, oc get services to see the state of all the services, or oc get deployments to see the state of all the deployments. See the oc Cheat Sheet command reference for details.
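Steps 1 and 2 above can be combined into one command by extracting the first tb-node pod name with a jsonpath query. A convenience sketch; the app=tb-node label is the same one used in the commands above:

```shell
# Sketch: print the logs of the first tb-node pod without copying its name
# by hand. Guarded so the snippet is harmless without an active oc session.
if command -v oc >/dev/null 2>&1 && oc whoami >/dev/null 2>&1; then
  POD=$(oc get pods -l app=tb-node -o jsonpath='{.items[0].metadata.name}')
  oc logs "$POD"   # add -f to follow the log stream
else
  echo "no active oc session"
fi
```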

Cluster deletion

Execute the following command to delete all ThingsBoard microservices:

./k8s-delete-resources.sh

Execute the following command to delete all third-party microservices:

./k8s-delete-thirdparty.sh

Execute the following command to delete all resources (including the database):

./k8s-delete-all.sh

Upgrading

In case a database upgrade is needed, execute the following commands:

./k8s-upgrade-tb.sh --fromVersion=[FROM_VERSION]


  • FROM_VERSION - the version from which the upgrade should be started. See the Upgrade Instructions for valid fromVersion values.

Next steps