Installing ThingsBoard PE using Docker (Linux or Mac OS)

This guide will help you to install and start ThingsBoard Professional Edition (PE) using Docker and Docker Compose on Linux or Mac OS. This guide covers standalone ThingsBoard PE installation. If you are looking for a cluster installation instruction, please visit cluster setup page.

Prerequisites

  • Install Docker CE
  • Install Docker Compose

Step 1. Check out the ThingsBoard PE images

Check out the ThingsBoard PE image from Docker Hub. You will need to open each of the verified images and click "Proceed to checkout" to accept the ThingsBoard PE license agreement.

For each image, populate basic information about yourself and click "Get Content".

Step 2. Pull ThingsBoard PE Image

Make sure you have logged in to Docker Hub using the command line.
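
If you have not logged in yet, you can do so with the standard Docker CLI command below; it will prompt for your Docker Hub username and password:

docker login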

docker pull store/thingsboard/tb-pe:3.2.2PE

Step 3. Obtain the license key

We assume you have already chosen your subscription plan or decided to purchase a perpetual license. If not, please navigate to the pricing page to select the best license option for your case and get your license. See How-to get pay-as-you-go subscription or How-to get perpetual license for more details.

Note: We will reference the license key you have obtained during this step as PUT_YOUR_LICENSE_SECRET_HERE later in this guide.

Step 4. Choose ThingsBoard queue service

ThingsBoard can use various messaging systems/brokers for storing messages and for communication between ThingsBoard services. How to choose the right queue implementation?

  • In Memory queue implementation is built in and used by default. It is useful for development (PoC) environments and is not suitable for production deployments or any sort of cluster deployments.

  • Kafka is recommended for production deployments. This queue is used in most ThingsBoard production environments today. It is useful for both on-prem and private cloud deployments. It is also useful if you want to stay independent from your cloud provider. However, some providers also offer managed services for Kafka. See AWS MSK for example.

  • RabbitMQ is recommended if you don’t have much load and you already have experience with this messaging system.

  • AWS SQS is a fully managed message queuing service from AWS. Useful if you plan to deploy ThingsBoard on AWS.

  • Google Pub/Sub is a fully managed message queuing service from Google. Useful if you plan to deploy ThingsBoard on Google Cloud.

  • Azure Service Bus is a fully managed message queuing service from Azure. Useful if you plan to deploy ThingsBoard on Azure.

  • Confluent Cloud is a fully managed streaming platform based on Kafka. Useful for cloud-agnostic deployments.

See the corresponding architecture and rule engine pages for more details.

In Memory Queue Configuration

ThingsBoard includes an In Memory queue service and uses it by default without extra settings.

Create a docker-compose file for the ThingsBoard queue service:

sudo nano docker-compose.yml

Add the following lines to the yml file. Don't forget to replace "PUT_YOUR_LICENSE_SECRET_HERE" with the license secret obtained in Step 3:

version: '2.2'
services:
  mytbpe:
    restart: always
    image: "store/thingsboard/tb-pe:3.2.2PE"
    ports:
      - "8080:8080"
      - "1883:1883"
      - "5683:5683/udp"
    environment:
      TB_QUEUE_TYPE: in-memory
      SPRING_DATASOURCE_URL: jdbc:postgresql://postgres:5432/thingsboard
      TB_LICENSE_SECRET: PUT_YOUR_LICENSE_SECRET_HERE
      TB_LICENSE_INSTANCE_DATA_FILE: /data/license.data
    volumes:
      - ~/.mytbpe-data:/data
      - ~/.mytbpe-logs:/var/log/thingsboard
  postgres:
    restart: always
    image: "postgres:12"
    ports:
    - "5432"
    environment:
      POSTGRES_DB: thingsboard
      POSTGRES_PASSWORD: postgres
    volumes:
      - ~/.mytbpe-data/db:/var/lib/postgresql/data
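
Before starting the containers you can optionally validate the file. docker-compose config is a standard Docker Compose command that parses docker-compose.yml and prints the resolved configuration, so indentation or syntax mistakes surface before anything is started:

docker-compose config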

Kafka Configuration

Apache Kafka is an open-source stream-processing software platform.

Create a docker-compose file for the ThingsBoard queue service:

sudo nano docker-compose.yml

Add the following lines to the yml file. Don't forget to replace "PUT_YOUR_LICENSE_SECRET_HERE" with the license secret obtained in Step 3:

version: '2.2'
services:
  zookeeper:
    restart: always
    image: "zookeeper:3.5"
    ports:
      - "2181:2181"
    environment:
      ZOO_MY_ID: 1
      ZOO_SERVERS: server.1=zookeeper:2888:3888;zookeeper:2181
  kafka:
    restart: always
    image: wurstmeister/kafka
    depends_on:
      - zookeeper
    ports:
      - "9092:9092"
    environment:
      KAFKA_ZOOKEEPER_CONNECT: zookeeper:2181
      KAFKA_LISTENERS: INSIDE://:9093,OUTSIDE://:9092
      KAFKA_ADVERTISED_LISTENERS: INSIDE://:9093,OUTSIDE://kafka:9092
      KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: INSIDE:PLAINTEXT,OUTSIDE:PLAINTEXT
      KAFKA_INTER_BROKER_LISTENER_NAME: INSIDE
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
  mytbpe:
    restart: always
    image: "store/thingsboard/tb-pe:3.2.2PE"
    depends_on:
      - kafka
    ports:
      - "8080:8080"
      - "1883:1883"
      - "5683:5683/udp"
    environment:
      TB_QUEUE_TYPE: kafka
      SPRING_DATASOURCE_URL: jdbc:postgresql://postgres:5432/thingsboard
      TB_KAFKA_SERVERS: kafka:9092
      TB_LICENSE_SECRET: PUT_YOUR_LICENSE_SECRET_HERE
      TB_LICENSE_INSTANCE_DATA_FILE: /data/license.data
    volumes:
      - ~/.mytbpe-data:/data
      - ~/.mytbpe-logs:/var/log/thingsboard
  postgres:
    restart: always
    image: "postgres:12"
    ports:
    - "5432"
    environment:
      POSTGRES_DB: thingsboard
      POSTGRES_PASSWORD: postgres
    volumes:
      - ~/.mytbpe-data/db:/var/lib/postgresql/data
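
Once the stack is running, you can optionally check that the Kafka broker is reachable from inside its container. This is a sketch that assumes the wurstmeister/kafka image keeps the standard Kafka CLI tools on its PATH and that the broker supports the --bootstrap-server option; adjust the tool location if your image differs:

docker-compose exec kafka kafka-topics.sh --bootstrap-server kafka:9092 --list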

AWS SQS Configuration

To access AWS SQS service, you first need to create an AWS account.

To work with the AWS SQS service, you will need to create the following credentials using this instruction:

  • Access key ID
  • Secret access key

Create a docker-compose file for the ThingsBoard queue service:

sudo nano docker-compose.yml

Add the following lines to the yml file. Don't forget to replace "YOUR_KEY" and "YOUR_SECRET" with your real AWS SQS IAM user credentials, "YOUR_REGION" with your real AWS SQS account region, and "PUT_YOUR_LICENSE_SECRET_HERE" with the license secret obtained in Step 3:

version: '2.2'
services:
  mytbpe:
    restart: always
    image: "store/thingsboard/tb-pe:3.2.2PE"
    ports:
      - "8080:8080"
      - "1883:1883"
      - "5683:5683/udp"
    environment:
      TB_QUEUE_TYPE: aws-sqs
      SPRING_DATASOURCE_URL: jdbc:postgresql://postgres:5432/thingsboard
      TB_QUEUE_AWS_SQS_ACCESS_KEY_ID: YOUR_KEY
      TB_QUEUE_AWS_SQS_SECRET_ACCESS_KEY: YOUR_SECRET
      TB_QUEUE_AWS_SQS_REGION: YOUR_REGION
      TB_LICENSE_SECRET: PUT_YOUR_LICENSE_SECRET_HERE
      TB_LICENSE_INSTANCE_DATA_FILE: /data/license.data

      # These params affect the number of requests per second from each partitions per each queue.
      # Number of requests to particular Message Queue is calculated based on the formula:
      # ((Number of Rule Engine and Core Queues) * (Number of partitions per Queue) + (Number of transport queues)
      #  + (Number of microservices) + (Number of JS executors)) * 1000 / POLL_INTERVAL_MS
      # For example, number of requests based on default parameters is:
      # Rule Engine queues:
      # Main 10 partitions + HighPriority 10 partitions + SequentialByOriginator 10 partitions = 30
      # Core queue 10 partitions
      # Transport request Queue + response Queue = 2
      # Rule Engine Transport notifications Queue + Core Transport notifications Queue = 2
      # Total = 44
      # Number of requests per second = 44 * 1000 / 25 = 1760 requests
      # 
      # Based on the use case, you can compromise latency and decrease number of partitions/requests to the queue, if the message load is low.
      # Sample parameters to fit into 10 requests per second on a "monolith" deployment: 
      TB_QUEUE_CORE_POLL_INTERVAL_MS: 1000
      TB_QUEUE_CORE_PARTITIONS: 2
      TB_QUEUE_RULE_ENGINE_POLL_INTERVAL_MS: 1000
      TB_QUEUE_RE_MAIN_POLL_INTERVAL_MS: 1000
      TB_QUEUE_RE_MAIN_PARTITIONS: 2
      TB_QUEUE_RE_HP_POLL_INTERVAL_MS: 1000
      TB_QUEUE_RE_HP_PARTITIONS: 1
      TB_QUEUE_RE_SQ_POLL_INTERVAL_MS: 1000
      TB_QUEUE_RE_SQ_PARTITIONS: 1
      TB_QUEUE_TRANSPORT_REQUEST_POLL_INTERVAL_MS: 1000
      TB_QUEUE_TRANSPORT_RESPONSE_POLL_INTERVAL_MS: 1000
      TB_QUEUE_TRANSPORT_NOTIFICATIONS_POLL_INTERVAL_MS: 1000
    volumes:
      - ~/.mytbpe-data:/data
      - ~/.mytbpe-logs:/var/log/thingsboard
  postgres:
    restart: always
    image: "postgres:12"
    ports:
    - "5432"
    environment:
      POSTGRES_DB: thingsboard
      POSTGRES_PASSWORD: postgres
    volumes:
      - ~/.mytbpe-data/db:/var/lib/postgresql/data
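
If you prefer not to hard-code the AWS credentials in docker-compose.yml, you can keep them in an .env file placed next to it and rely on standard Docker Compose variable substitution. A minimal sketch; the SQS_* variable names below are arbitrary placeholders, not ThingsBoard settings:

SQS_ACCESS_KEY_ID=YOUR_KEY
SQS_SECRET_ACCESS_KEY=YOUR_SECRET

With this .env file in place, the corresponding environment entries of the mytbpe service become TB_QUEUE_AWS_SQS_ACCESS_KEY_ID: ${SQS_ACCESS_KEY_ID} and TB_QUEUE_AWS_SQS_SECRET_ACCESS_KEY: ${SQS_SECRET_ACCESS_KEY}.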

Google Pub/Sub Configuration

To access the Pub/Sub service, you first need to create a Google Cloud account.

To work with the Pub/Sub service, you will need to create a project using this instruction.

Create service account credentials with the "Editor" or "Admin" role using this instruction, and save the JSON file with your service account credentials (step 9) here.

Create a docker-compose file for the ThingsBoard queue service:

sudo nano docker-compose.yml

Add the following lines to the yml file. Don't forget to replace "YOUR_PROJECT_ID" and "YOUR_SERVICE_ACCOUNT" with your real Pub/Sub project id and service account (the whole content of the JSON file), and "PUT_YOUR_LICENSE_SECRET_HERE" with the license secret obtained in Step 3:

version: '2.2'
services:
  mytbpe:
    restart: always
    image: "store/thingsboard/tb-pe:3.2.2PE"
    ports:
      - "8080:8080"
      - "1883:1883"
      - "5683:5683/udp"
    environment:
      TB_QUEUE_TYPE: pubsub
      SPRING_DATASOURCE_URL: jdbc:postgresql://postgres:5432/thingsboard
      TB_QUEUE_PUBSUB_PROJECT_ID: YOUR_PROJECT_ID
      TB_QUEUE_PUBSUB_SERVICE_ACCOUNT: YOUR_SERVICE_ACCOUNT
      TB_LICENSE_SECRET: PUT_YOUR_LICENSE_SECRET_HERE
      TB_LICENSE_INSTANCE_DATA_FILE: /data/license.data

      # These params affect the number of requests per second from each partitions per each queue.
      # Number of requests to particular Message Queue is calculated based on the formula:
      # ((Number of Rule Engine and Core Queues) * (Number of partitions per Queue) + (Number of transport queues)
      #  + (Number of microservices) + (Number of JS executors)) * 1000 / POLL_INTERVAL_MS
      # For example, number of requests based on default parameters is:
      # Rule Engine queues:
      # Main 10 partitions + HighPriority 10 partitions + SequentialByOriginator 10 partitions = 30
      # Core queue 10 partitions
      # Transport request Queue + response Queue = 2
      # Rule Engine Transport notifications Queue + Core Transport notifications Queue = 2
      # Total = 44
      # Number of requests per second = 44 * 1000 / 25 = 1760 requests
      # 
      # Based on the use case, you can compromise latency and decrease number of partitions/requests to the queue, if the message load is low.
      # Sample parameters to fit into 10 requests per second on a "monolith" deployment: 
      TB_QUEUE_CORE_POLL_INTERVAL_MS: 1000
      TB_QUEUE_CORE_PARTITIONS: 2
      TB_QUEUE_RULE_ENGINE_POLL_INTERVAL_MS: 1000
      TB_QUEUE_RE_MAIN_POLL_INTERVAL_MS: 1000
      TB_QUEUE_RE_MAIN_PARTITIONS: 2
      TB_QUEUE_RE_HP_POLL_INTERVAL_MS: 1000
      TB_QUEUE_RE_HP_PARTITIONS: 1
      TB_QUEUE_RE_SQ_POLL_INTERVAL_MS: 1000
      TB_QUEUE_RE_SQ_PARTITIONS: 1
      TB_QUEUE_TRANSPORT_REQUEST_POLL_INTERVAL_MS: 1000
      TB_QUEUE_TRANSPORT_RESPONSE_POLL_INTERVAL_MS: 1000
      TB_QUEUE_TRANSPORT_NOTIFICATIONS_POLL_INTERVAL_MS: 1000
    volumes:
      - ~/.mytbpe-data:/data
      - ~/.mytbpe-logs:/var/log/thingsboard
  postgres:
    restart: always
    image: "postgres:12"
    ports:
    - "5432"
    environment:
      POSTGRES_DB: thingsboard
      POSTGRES_PASSWORD: postgres
    volumes:
      - ~/.mytbpe-data/db:/var/lib/postgresql/data
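
Because the service account value is the whole content of the JSON file, it can be convenient to pass it as a YAML literal block so the quotes inside the JSON do not have to be escaped. A minimal sketch with truncated placeholder fields (not a real key):

      TB_QUEUE_PUBSUB_SERVICE_ACCOUNT: |
        {
          "type": "service_account",
          "project_id": "YOUR_PROJECT_ID",
          "private_key_id": "...",
          "private_key": "...",
          "client_email": "..."
        }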

Azure Service Bus Configuration

To access Azure Service Bus, you first need to create an Azure account.

To work with the Service Bus, you will need to create a Service Bus namespace using this instruction.

Create a Shared Access Signature using this instruction.

Create a docker-compose file for the ThingsBoard queue service:

sudo nano docker-compose.yml

Add the following lines to the yml file. Don't forget to replace "YOUR_NAMESPACE_NAME" with your real Service Bus namespace name, "YOUR_SAS_KEY_NAME" and "YOUR_SAS_KEY" with your real Service Bus credentials ("YOUR_SAS_KEY_NAME" is the SAS policy name, "YOUR_SAS_KEY" is the SAS policy primary key), and "PUT_YOUR_LICENSE_SECRET_HERE" with the license secret obtained in Step 3:

version: '2.2'
services:
  mytbpe:
    restart: always
    image: "store/thingsboard/tb-pe:3.2.2PE"
    ports:
      - "8080:8080"
      - "1883:1883"
      - "5683:5683/udp"
    environment:
      TB_QUEUE_TYPE: service-bus
      SPRING_DATASOURCE_URL: jdbc:postgresql://postgres:5432/thingsboard
      TB_QUEUE_SERVICE_BUS_NAMESPACE_NAME: YOUR_NAMESPACE_NAME
      TB_QUEUE_SERVICE_BUS_SAS_KEY_NAME: YOUR_SAS_KEY_NAME
      TB_QUEUE_SERVICE_BUS_SAS_KEY: YOUR_SAS_KEY
      TB_LICENSE_SECRET: PUT_YOUR_LICENSE_SECRET_HERE
      TB_LICENSE_INSTANCE_DATA_FILE: /data/license.data

      # These params affect the number of requests per second from each partitions per each queue.
      # Number of requests to particular Message Queue is calculated based on the formula:
      # ((Number of Rule Engine and Core Queues) * (Number of partitions per Queue) + (Number of transport queues)
      #  + (Number of microservices) + (Number of JS executors)) * 1000 / POLL_INTERVAL_MS
      # For example, number of requests based on default parameters is:
      # Rule Engine queues:
      # Main 10 partitions + HighPriority 10 partitions + SequentialByOriginator 10 partitions = 30
      # Core queue 10 partitions
      # Transport request Queue + response Queue = 2
      # Rule Engine Transport notifications Queue + Core Transport notifications Queue = 2
      # Total = 44
      # Number of requests per second = 44 * 1000 / 25 = 1760 requests
      # 
      # Based on the use case, you can compromise latency and decrease number of partitions/requests to the queue, if the message load is low.
      # Sample parameters to fit into 10 requests per second on a "monolith" deployment: 
      TB_QUEUE_CORE_POLL_INTERVAL_MS: 1000
      TB_QUEUE_CORE_PARTITIONS: 2
      TB_QUEUE_RULE_ENGINE_POLL_INTERVAL_MS: 1000
      TB_QUEUE_RE_MAIN_POLL_INTERVAL_MS: 1000
      TB_QUEUE_RE_MAIN_PARTITIONS: 2
      TB_QUEUE_RE_HP_POLL_INTERVAL_MS: 1000
      TB_QUEUE_RE_HP_PARTITIONS: 1
      TB_QUEUE_RE_SQ_POLL_INTERVAL_MS: 1000
      TB_QUEUE_RE_SQ_PARTITIONS: 1
      TB_QUEUE_TRANSPORT_REQUEST_POLL_INTERVAL_MS: 1000
      TB_QUEUE_TRANSPORT_RESPONSE_POLL_INTERVAL_MS: 1000
      TB_QUEUE_TRANSPORT_NOTIFICATIONS_POLL_INTERVAL_MS: 1000
    volumes:
      - ~/.mytbpe-data:/data
      - ~/.mytbpe-logs:/var/log/thingsboard
  postgres:
    restart: always
    image: "postgres:12"
    ports:
    - "5432"
    environment:
      POSTGRES_DB: thingsboard
      POSTGRES_PASSWORD: postgres
    volumes:
      - ~/.mytbpe-data/db:/var/lib/postgresql/data
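
If you prefer the command line to the Azure portal, the SAS policy name and keys can also be retrieved with the Azure CLI. A sketch, assuming the default RootManageSharedAccessKey policy; replace the resource group and namespace placeholders with your own values:

az servicebus namespace authorization-rule keys list \
  --resource-group YOUR_RESOURCE_GROUP \
  --namespace-name YOUR_NAMESPACE_NAME \
  --name RootManageSharedAccessKey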

RabbitMQ Configuration

To install RabbitMQ, use this instruction.

Create a docker-compose file for the ThingsBoard queue service:

sudo nano docker-compose.yml

Add the following lines to the yml file. Don't forget to replace "YOUR_USERNAME" and "YOUR_PASSWORD" with your real user credentials, "localhost" and "5672" with your real RabbitMQ host and port, and "PUT_YOUR_LICENSE_SECRET_HERE" with the license secret obtained in Step 3:

version: '2.2'
services:
  mytbpe:
    restart: always
    image: "store/thingsboard/tb-pe:3.2.2PE"
    ports:
      - "8080:8080"
      - "1883:1883"
      - "5683:5683/udp"
    environment:
      TB_QUEUE_TYPE: rabbitmq
      SPRING_DATASOURCE_URL: jdbc:postgresql://postgres:5432/thingsboard
      TB_QUEUE_RABBIT_MQ_USERNAME: YOUR_USERNAME
      TB_QUEUE_RABBIT_MQ_PASSWORD: YOUR_PASSWORD
      TB_QUEUE_RABBIT_MQ_HOST: localhost
      TB_QUEUE_RABBIT_MQ_PORT: 5672
      TB_LICENSE_SECRET: PUT_YOUR_LICENSE_SECRET_HERE
      TB_LICENSE_INSTANCE_DATA_FILE: /data/license.data
    volumes:
      - ~/.mytbpe-data:/data
      - ~/.mytbpe-logs:/var/log/thingsboard
  postgres:
    restart: always
    image: "postgres:12"
    ports:
    - "5432"
    environment:
      POSTGRES_DB: thingsboard
      POSTGRES_PASSWORD: postgres
    volumes:
      - ~/.mytbpe-data/db:/var/lib/postgresql/data
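
If you do not have an external RabbitMQ instance, you can instead run it as an additional service in the same compose file and point ThingsBoard at it by service name. A minimal sketch using the official rabbitmq image; the credentials are placeholders you should change, and TB_QUEUE_RABBIT_MQ_HOST in the mytbpe service should then be set to rabbitmq instead of localhost:

  rabbitmq:
    restart: always
    image: "rabbitmq:3-management"
    ports:
      - "5672:5672"
      - "15672:15672"
    environment:
      RABBITMQ_DEFAULT_USER: YOUR_USERNAME
      RABBITMQ_DEFAULT_PASS: YOUR_PASSWORD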

Confluent Cloud Configuration

To access Confluent Cloud you should first create an account, then create a Kafka cluster and get your API Key.

Create a docker-compose file for the ThingsBoard queue service:

sudo nano docker-compose.yml

Add the following lines to the yml file. Don't forget to replace "CLUSTER_API_KEY" and "CLUSTER_API_SECRET" with your real Confluent Cloud API credentials, "confluent.cloud:9092" with your real Confluent Cloud bootstrap servers, and "PUT_YOUR_LICENSE_SECRET_HERE" with the license secret obtained in Step 3:

version: '2.2'
services:
  mytbpe:
    restart: always
    image: "store/thingsboard/tb-pe:3.2.2PE"
    ports:
      - "8080:8080"
      - "1883:1883"
      - "5683:5683/udp"
    environment:
      TB_QUEUE_TYPE: kafka
      SPRING_DATASOURCE_URL: jdbc:postgresql://postgres:5432/thingsboard
      TB_LICENSE_SECRET: PUT_YOUR_LICENSE_SECRET_HERE
      TB_LICENSE_INSTANCE_DATA_FILE: /data/license.data

      TB_KAFKA_SERVERS: confluent.cloud:9092
      TB_QUEUE_KAFKA_REPLICATION_FACTOR: 3

      TB_QUEUE_KAFKA_USE_CONFLUENT_CLOUD: "true"
      TB_QUEUE_KAFKA_CONFLUENT_SSL_ALGORITHM: https
      TB_QUEUE_KAFKA_CONFLUENT_SASL_MECHANISM: PLAIN
      TB_QUEUE_KAFKA_CONFLUENT_SASL_JAAS_CONFIG: 'org.apache.kafka.common.security.plain.PlainLoginModule required username="CLUSTER_API_KEY" password="CLUSTER_API_SECRET";'
      TB_QUEUE_KAFKA_CONFLUENT_SECURITY_PROTOCOL: SASL_SSL
      TB_QUEUE_KAFKA_CONFLUENT_USERNAME: CLUSTER_API_KEY
      TB_QUEUE_KAFKA_CONFLUENT_PASSWORD: CLUSTER_API_SECRET

      TB_QUEUE_KAFKA_RE_TOPIC_PROPERTIES: "retention.ms:604800000;segment.bytes:52428800;retention.bytes:1048576000"
      TB_QUEUE_KAFKA_CORE_TOPIC_PROPERTIES: "retention.ms:604800000;segment.bytes:52428800;retention.bytes:1048576000"
      TB_QUEUE_KAFKA_TA_TOPIC_PROPERTIES: "retention.ms:604800000;segment.bytes:52428800;retention.bytes:1048576000"
      TB_QUEUE_KAFKA_NOTIFICATIONS_TOPIC_PROPERTIES: "retention.ms:604800000;segment.bytes:52428800;retention.bytes:1048576000"
      TB_QUEUE_KAFKA_JE_TOPIC_PROPERTIES: "retention.ms:604800000;segment.bytes:52428800;retention.bytes:104857600"

      # These params affect the number of requests per second from each partitions per each queue.
      # Number of requests to particular Message Queue is calculated based on the formula:
      # ((Number of Rule Engine and Core Queues) * (Number of partitions per Queue) + (Number of transport queues)
      #  + (Number of microservices) + (Number of JS executors)) * 1000 / POLL_INTERVAL_MS
      # For example, number of requests based on default parameters is:
      # Rule Engine queues:
      # Main 10 partitions + HighPriority 10 partitions + SequentialByOriginator 10 partitions = 30
      # Core queue 10 partitions
      # Transport request Queue + response Queue = 2
      # Rule Engine Transport notifications Queue + Core Transport notifications Queue = 2
      # Total = 44
      # Number of requests per second = 44 * 1000 / 25 = 1760 requests
      # 
      # Based on the use case, you can compromise latency and decrease number of partitions/requests to the queue, if the message load is low.
      # Sample parameters to fit into 10 requests per second on a "monolith" deployment: 
      TB_QUEUE_CORE_POLL_INTERVAL_MS: 1000
      TB_QUEUE_CORE_PARTITIONS: 2
      TB_QUEUE_RULE_ENGINE_POLL_INTERVAL_MS: 1000
      TB_QUEUE_RE_MAIN_POLL_INTERVAL_MS: 1000
      TB_QUEUE_RE_MAIN_PARTITIONS: 2
      TB_QUEUE_RE_HP_POLL_INTERVAL_MS: 1000
      TB_QUEUE_RE_HP_PARTITIONS: 1
      TB_QUEUE_RE_SQ_POLL_INTERVAL_MS: 1000
      TB_QUEUE_RE_SQ_PARTITIONS: 1
      TB_QUEUE_TRANSPORT_REQUEST_POLL_INTERVAL_MS: 1000
      TB_QUEUE_TRANSPORT_RESPONSE_POLL_INTERVAL_MS: 1000
      TB_QUEUE_TRANSPORT_NOTIFICATIONS_POLL_INTERVAL_MS: 1000
    volumes:
      - ~/.mytbpe-data:/data
      - ~/.mytbpe-logs:/var/log/thingsboard
  postgres:
    restart: always
    image: "postgres:12"
    ports:
    - "5432"
    environment:
      POSTGRES_DB: thingsboard
      POSTGRES_PASSWORD: postgres
    volumes:
      - ~/.mytbpe-data/db:/var/lib/postgresql/data

Where:

  • PUT_YOUR_LICENSE_SECRET_HERE - placeholder for your license secret obtained in Step 3;
  • 8080:8080 - connect local port 8080 to exposed internal HTTP port 8080;
  • 1883:1883 - connect local port 1883 to exposed internal MQTT port 1883;
  • 5683:5683 - connect local port 5683 to exposed internal CoAP port 5683;
  • ~/.mytbpe-data:/data - mounts the host’s dir ~/.mytbpe-data to ThingsBoard data directory;
  • ~/.mytbpe-data/db:/var/lib/postgresql/data - mounts the host’s dir ~/.mytbpe-data/db to Postgres data directory;
  • ~/.mytbpe-logs:/var/log/thingsboard - mounts the host’s dir ~/.mytbpe-logs to ThingsBoard logs directory;
  • mytbpe - friendly local name of the ThingsBoard service (container);
  • restart: always - automatically start ThingsBoard in case of system reboot and restart it in case of failure;
  • store/thingsboard/tb-pe:3.2.2PE - docker image.

Step 5. Running

Before starting the Docker containers, run the following commands to create directories for storing data and logs, and then change their owner to the docker container user. To change the owner, the chown command is used, which requires sudo permissions (the command will request a password for sudo access):

mkdir -p ~/.mytbpe-data && sudo chown -R 799:799 ~/.mytbpe-data
mkdir -p ~/.mytbpe-logs && sudo chown -R 799:799 ~/.mytbpe-logs

NOTE: replace the directories ~/.mytbpe-data and ~/.mytbpe-logs with the directories you are planning to use in docker-compose.yml.

Execute the following commands to start the containers defined in the docker compose file:

NOTE: to run docker compose commands you have to be in the directory that contains the docker-compose.yml file.

docker-compose up -d
docker-compose logs -f mytbpe

After executing this command you can open http://{your-host-ip}:8080 in your browser (for example, http://localhost:8080). You should see the ThingsBoard login page. Use the following default credentials:

  • System Administrator: sysadmin@thingsboard.org / sysadmin
  • Tenant Administrator: tenant@thingsboard.org / tenant
  • Customer User: customer@thingsboard.org / customer

You can always change the password for each account on the account profile page.
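
You can also check that the REST API is up directly from the command line. The sketch below calls the standard ThingsBoard login endpoint with the default tenant credentials; adjust the host and credentials to your setup. A successful response contains a JWT token:

curl -s -X POST http://localhost:8080/api/auth/login \
  -H "Content-Type: application/json" \
  -d '{"username":"tenant@thingsboard.org","password":"tenant"}'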

Detaching, stop and start commands

You can detach from the logs output by pressing Ctrl+C; the containers will keep running in the background.

In case of any issues you can examine the service logs for errors. For example, to see the ThingsBoard node logs, execute the following command:
docker-compose logs -f mytbpe

To stop the containers:

docker-compose stop

To start the containers:

docker-compose start
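
If you want to stop and remove the containers (and the default network they share) rather than just stop them, use:

docker-compose down

Your data is preserved in this case because it lives in the host directories mounted as volumes (~/.mytbpe-data, ~/.mytbpe-data/db and ~/.mytbpe-logs).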

Troubleshooting

DNS issues

Note: if you observe errors related to DNS issues, for example:
127.0.1.1:53: cannot unmarshal DNS message

You may configure your system to use the Google public DNS servers. See the corresponding Linux and Mac OS instructions.
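
Alternatively, you can pin DNS servers for the ThingsBoard container only, instead of changing the host configuration. Docker Compose supports a per-service dns option; a minimal sketch for the mytbpe service:

  mytbpe:
    dns:
      - 8.8.8.8
      - 8.8.4.4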

Upgrading from old PostgreSQL 9.6 to PostgreSQL 11 and ThingsBoard 3.0.1PE

In this example we’ll show steps to upgrade ThingsBoard from 2.4.3PE to 3.0.1PE.

Make a backup of your data:

sudo cp -r ~/.mytbpe-data ./.mytbpe-data-2-4-3-backup

Stop currently running docker container:

docker stop [container_id]

Upgrade the old PostgreSQL data to the new version:

docker run --rm -v ~/.mytbpe-data/db/:/var/lib/postgresql/9.6/data -v ~/.mytbpe-data-temp/db/:/var/lib/postgresql/11/data --env LANG=C.UTF-8 tianon/postgres-upgrade:9.6-to-11
sudo rm -rf ~/.mytbpe-data/db
sudo mv ~/.mytbpe-data-temp/db ~/.mytbpe-data/db

Start the new version of ThingsBoard with the following command:

docker run -it -v ~/.mytbpe-data:/data --rm store/thingsboard/tb-pe:3.0.1PE bash

Then please follow these steps:

apt update
apt install sudo
chown -R postgres. /data/db/
chmod 750 -R /data/db/
sudo -i -u postgres
cd /usr/lib/postgresql/11/
./bin/pg_ctl -D /data/db start
logout
/usr/share/thingsboard/bin/install/upgrade.sh --fromVersion=2.4.1
exit

After this, create your docker-compose.yml and insert the following (we use the in-memory queue as an example):

version: '2.2'
services:
  mytbpe:
    restart: always
    image: "store/thingsboard/tb-pe:3.0.1PE"
    ports:
      - "8080:9090"
      - "1883:1883"
      - "5683:5683/udp"
    environment:
      TB_QUEUE_TYPE: in-memory
      TB_LICENSE_SECRET: YOUR_SECRET_KEY
      TB_LICENSE_INSTANCE_DATA_FILE: /data/license.data
      INTEGRATIONS_RPC_PORT: 50052
      PGDATA: /data/db
    volumes:
      - ~/.mytbpe-data:/data
      - ~/.mytbpe-logs:/var/log/thingsboard

Start ThingsBoard:

docker-compose up

Upgrading from 3.0.1PE to 3.1.0PE

In releases after 3.0.1PE, the PostgreSQL service was separated from the ThingsBoard image, so upgrading to 3.1.0PE is not trivial.

First of all you need to create a dump of your database:

docker-compose exec mytbpe sh -c "pg_dump -U postgres thingsboard > /data/thingsboard_dump"

Note: you have to use your valid username for connecting to PostgreSQL.

Then you need to stop the services, back up your data, remove the old database directory, and set permissions:

sudo cp -r ~/.mytbpe-data ./.mytbpe-data-backup
docker-compose down
sudo rm -rf ~/.mytbpe-data/db
sudo chown -R 799:799 ~/.mytbpe-data
sudo chown -R 799:799 ~/.mytbpe-logs

After this you need to update docker-compose.yml as in Step 4, but with 3.1.0PE instead of 3.2.2PE.

Start PostgreSQL:

docker-compose up postgres

Restore backup:

sudo cp ~/.mytbpe-data/thingsboard_dump ~/.mytbpe-data/db/thingsboard_dump
docker-compose exec postgres sh -c "psql -U postgres thingsboard < /var/lib/postgresql/data/thingsboard_dump"

Upgrade ThingsBoard:

docker-compose run mytbpe upgrade-tb.sh

Start ThingsBoard:

docker-compose up mytbpe

Upgrade starting from 3.1.0PE

Please refer to the troubleshooting section in case you are upgrading from 3.0.0 or 3.0.1.

Below is an example of how to upgrade from 3.1.0 to 3.1.1:

  1. Stop the mytbpe container:

     docker-compose stop mytbpe

  2. Create a dump of your database:

     docker-compose exec postgres sh -c "pg_dump -U postgres thingsboard > /var/lib/postgresql/data/thingsboard_dump"
    
  3. After this you need to update docker-compose.yml as in Step 4, but with 3.1.1PE instead of 3.2.2PE.

  4. Set the .upgradeversion file to your current ThingsBoard version:

     sudo sh -c "echo '3.1.0' > ~/.mytbpe-data/.upgradeversion"
    
  5. Then execute the following command:

     docker-compose run mytbpe upgrade-tb.sh
    
  6. Start ThingsBoard:

     docker-compose up -d
    

To upgrade ThingsBoard to the latest version, these steps should be repeated for each intermediate version.

Please note that upgrades are not cumulative. Refer to the upgrade instruction to find the right order of upgrades (for example, if you are upgrading from 3.1.0 to 3.2.1, you need to do that in the following order: 3.1.0 -> 3.1.1 -> 3.2.0 -> 3.2.1, i.e. current version -> next release version -> and so on).

In case you need to restore from the backup:

  1. Stop the mytbpe container:

     docker-compose stop mytbpe
    
  2. Update docker-compose.yml to the initial version.

  3. Execute:

     docker-compose exec postgres sh -c "psql -U postgres -c 'drop database thingsboard'"
     docker-compose exec postgres sh -c "psql -U postgres -c 'create database thingsboard'"
     docker-compose exec postgres sh -c "psql -U postgres thingsboard < /var/lib/postgresql/data/thingsboard_dump"
    
  4. Start ThingsBoard:

     docker-compose up -d
    

If you need to copy the backup to a local directory (for example, to restore it on another server):

docker cp tb-docker_postgres_1:/var/lib/postgresql/data/thingsboard_dump .

Note: replace tb-docker_postgres_1 with the actual name of your postgres container.
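
If you are not sure what the container is called on your machine, you can list the containers of the current compose project and use the name shown for the postgres service:

docker-compose ps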

Next steps