Installing ThingsBoard CE using Docker (Linux, macOS)
This guide covers a single-node ThingsBoard Community Edition (CE) installation using Docker Compose on Linux or macOS. By the end, you will have a fully functional ThingsBoard instance running on your machine. For cluster setup, see Cluster Setup with Docker Compose.
Prerequisites
Ensure your server meets the minimum requirements:
| Use case | CPU | RAM | Recommended services |
|---|---|---|---|
| Development / PoC | 1 core | 4 GB | ThingsBoard, PostgreSQL |
| Production (small) | 2 cores | 8 GB | ThingsBoard, PostgreSQL, Kafka |
| Production (recommended) | 4+ cores | 16+ GB | ThingsBoard, PostgreSQL, Kafka, Cassandra |
Install Docker Engine and the Docker Compose plugin on your server; see the official Docker documentation for platform-specific instructions.
Step 1. Create docker compose file
Create a dedicated directory for your ThingsBoard installation and navigate to it. All subsequent commands in this guide should be run from this directory.
```bash
mkdir -p ~/thingsboard && cd ~/thingsboard
```

ThingsBoard uses a message queue to route messages between its internal services. Select the option that matches your infrastructure:
- In Memory (default) — built-in queue, no extra setup required. Suitable for development and PoC. Not recommended for production or multi-node deployments.
- Kafka — high-throughput, durable queue. Run it yourself or use a managed service such as AWS MSK.
- Confluent Cloud — fully managed Kafka service. Use this if you want Kafka without managing the infrastructure yourself.
Create the docker-compose.yml file:
```bash
nano docker-compose.yml
```

Paste one of the configurations below, save, and exit.
```yaml
services:
  postgres:
    restart: always
    image: "postgres:18"
    ports:
      - "5432"
    environment:
      POSTGRES_DB: thingsboard
      POSTGRES_PASSWORD: postgres
    volumes:
      - postgres-data:/var/lib/postgresql
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U postgres -d thingsboard"]
      interval: 10s
      timeout: 5s
      retries: 5
      start_period: 10s
  thingsboard-ce:
    restart: always
    image: "thingsboard/tb-node:4.3.1.1"
    ports:
      - "8080:8080"
      - "7070:7070"
      - "1883:1883"
      - "8883:8883"
      - "5683-5688:5683-5688/udp"
    logging:
      driver: "json-file"
      options:
        max-size: "100m"
        max-file: "10"
    environment:
      TB_SERVICE_ID: tb-ce-node
      SPRING_DATASOURCE_URL: jdbc:postgresql://postgres:5432/thingsboard
      SPRING_DATASOURCE_PASSWORD: postgres
    depends_on:
      postgres:
        condition: service_healthy

volumes:
  postgres-data:
    name: tb-postgres-data
    driver: local
```

Services started:
- postgres — PostgreSQL database
- thingsboard-ce — ThingsBoard application node
This example runs Kafka locally as a Docker container. If you already have a Kafka broker or use a managed service (e.g. AWS MSK), remove the kafka service and update TB_KAFKA_SERVERS to point to your broker.
```yaml
services:
  postgres:
    restart: always
    image: "postgres:18"
    ports:
      - "5432"
    environment:
      POSTGRES_DB: thingsboard
      POSTGRES_PASSWORD: postgres
    volumes:
      - postgres-data:/var/lib/postgresql
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U postgres -d thingsboard"]
      interval: 10s
      timeout: 5s
      retries: 5
      start_period: 10s
  kafka:
    restart: always
    image: bitnamilegacy/kafka:4.0
    ports:
      - 9092:9092
      - 9093
    environment:
      ALLOW_PLAINTEXT_LISTENER: "yes"
      KAFKA_CFG_LISTENERS: "PLAINTEXT://:9092,CONTROLLER://:9093"
      KAFKA_CFG_ADVERTISED_LISTENERS: "PLAINTEXT://:9092"
      KAFKA_CFG_LISTENER_SECURITY_PROTOCOL_MAP: "CONTROLLER:PLAINTEXT,PLAINTEXT:PLAINTEXT"
      KAFKA_CFG_INTER_BROKER_LISTENER_NAME: "PLAINTEXT"
      KAFKA_CFG_AUTO_CREATE_TOPICS_ENABLE: "false"
      KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR: "1"
      KAFKA_TRANSACTION_STATE_LOG_MIN_ISR: "1"
      KAFKA_TRANSACTION_STATE_LOG_REPLICATION_FACTOR: "1"
      KAFKA_CFG_PROCESS_ROLES: "controller,broker"
      KAFKA_CFG_NODE_ID: "0"
      KAFKA_CFG_CONTROLLER_LISTENER_NAMES: "CONTROLLER"
      KAFKA_CFG_CONTROLLER_QUORUM_VOTERS: "0@kafka:9093"
      KAFKA_CFG_LOG_RETENTION_MS: "300000"
      KAFKA_CFG_SEGMENT_BYTES: "26214400"
    volumes:
      - kafka-data:/bitnami
  thingsboard-ce:
    restart: always
    image: "thingsboard/tb-node:4.3.1.1"
    ports:
      - "8080:8080"
      - "7070:7070"
      - "1883:1883"
      - "8883:8883"
      - "5683-5688:5683-5688/udp"
    logging:
      driver: "json-file"
      options:
        max-size: "100m"
        max-file: "10"
    environment:
      TB_SERVICE_ID: tb-ce-node
      SPRING_DATASOURCE_URL: jdbc:postgresql://postgres:5432/thingsboard
      SPRING_DATASOURCE_PASSWORD: postgres
      TB_QUEUE_TYPE: kafka
      TB_KAFKA_SERVERS: kafka:9092
    depends_on:
      postgres:
        condition: service_healthy
      kafka:
        condition: service_started

volumes:
  postgres-data:
    name: tb-postgres-data
    driver: local
  kafka-data:
    name: tb-ce-kafka-data
    driver: local
```

Services started:
- postgres — PostgreSQL database
- kafka — Kafka broker (local, single-node)
- thingsboard-ce — ThingsBoard application node
First create a Confluent Cloud account and a Kafka cluster, then obtain an API key and secret for the cluster. Replace CLUSTER_API_KEY, CLUSTER_API_SECRET, and localhost:9092 with your Confluent Cloud values:
```yaml
services:
  postgres:
    restart: always
    image: "postgres:18"
    ports:
      - "5432"
    environment:
      POSTGRES_DB: thingsboard
      POSTGRES_PASSWORD: postgres
    volumes:
      - postgres-data:/var/lib/postgresql
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U postgres -d thingsboard"]
      interval: 10s
      timeout: 5s
      retries: 5
      start_period: 10s
  thingsboard-ce:
    restart: always
    image: "thingsboard/tb-node:4.3.1.1"
    ports:
      - "8080:8080"
      - "7070:7070"
      - "1883:1883"
      - "8883:8883"
      - "5683-5688:5683-5688/udp"
    logging:
      driver: "json-file"
      options:
        max-size: "100m"
        max-file: "10"
    environment:
      TB_SERVICE_ID: tb-ce-node
      SPRING_DATASOURCE_URL: jdbc:postgresql://postgres:5432/thingsboard
      SPRING_DATASOURCE_PASSWORD: postgres
      TB_QUEUE_TYPE: kafka
      TB_KAFKA_SERVERS: localhost:9092
      TB_QUEUE_KAFKA_REPLICATION_FACTOR: 3
      TB_QUEUE_KAFKA_USE_CONFLUENT_CLOUD: true
      TB_QUEUE_KAFKA_CONFLUENT_SASL_JAAS_CONFIG: 'org.apache.kafka.common.security.plain.PlainLoginModule required username="CLUSTER_API_KEY" password="CLUSTER_API_SECRET";'
      TB_QUEUE_CORE_POLL_INTERVAL_MS: 1000
      TB_QUEUE_CORE_PARTITIONS: 2
      TB_QUEUE_RULE_ENGINE_POLL_INTERVAL_MS: 1000
      TB_QUEUE_TRANSPORT_REQUEST_POLL_INTERVAL_MS: 1000
      TB_QUEUE_TRANSPORT_RESPONSE_POLL_INTERVAL_MS: 1000
      TB_QUEUE_TRANSPORT_NOTIFICATIONS_POLL_INTERVAL_MS: 1000
      TB_QUEUE_VC_INTERVAL_MS: 1000
      TB_QUEUE_VC_PARTITIONS: 1
    depends_on:
      postgres:
        condition: service_healthy

volumes:
  postgres-data:
    name: tb-postgres-data
    driver: local
```

Services started:
- postgres — PostgreSQL database
- thingsboard-ce — ThingsBoard application node
You can update the default Rule Engine queue configuration using the UI. See Rule Engine Queues for details.
Docker compose parameters
Ports (host:container)
| Port mapping | Description |
|---|---|
| 8080:8080 | Web UI and REST API. The left value is the host port — change it if 8080 is already in use. |
| 1883:1883 | MQTT — plaintext IoT device connections |
| 8883:8883 | MQTT over TLS — encrypted IoT device connections |
| 5683:5683/udp | CoAP — plaintext IoT protocol |
| 5684:5684/udp | CoAP over DTLS — encrypted CoAP |
| 5685:5685/udp | LwM2M CoAP — plaintext Lightweight M2M |
| 5686:5686/udp | LwM2M CoAP over DTLS — encrypted LwM2M |
| 5687:5687/udp | LwM2M — plaintext Lightweight M2M (Bootstrap) |
| 5688:5688/udp | LwM2M over DTLS — encrypted Lightweight M2M (Bootstrap) |
| 7070:7070 | Edge RPC (gRPC) — connections from ThingsBoard Edge nodes |
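Once the platform is running, you can verify the plaintext MQTT port with a single telemetry publish. The sketch below follows ThingsBoard's device MQTT convention (topic v1/devices/me/telemetry, device access token as the MQTT username); the token value is a placeholder, and the publish line is commented out because it needs a live broker and the mosquitto-clients package:

```shell
# Placeholder — copy the real access token from the device's credentials
# in the ThingsBoard UI before running.
ACCESS_TOKEN="YOUR_DEVICE_ACCESS_TOKEN"
TELEMETRY='{"temperature":25}'

# Publish one telemetry message over plaintext MQTT (port 1883 above).
# Uncomment to run against a live instance:
# mosquitto_pub -h localhost -p 1883 -t v1/devices/me/telemetry \
#               -u "$ACCESS_TOKEN" -m "$TELEMETRY"
echo "$TELEMETRY"
```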
Environment variables
| Variable | Description |
|---|---|
| POSTGRES_PASSWORD | Password for the PostgreSQL postgres user. Must match SPRING_DATASOURCE_PASSWORD. Change the default value in production. |
| SPRING_DATASOURCE_URL | PostgreSQL JDBC connection URL. Specifies the host and database name. Default: jdbc:postgresql://postgres:5432/thingsboard. |
| SPRING_DATASOURCE_USERNAME | PostgreSQL username ThingsBoard connects as. Default: postgres. |
| SPRING_DATASOURCE_PASSWORD | PostgreSQL password ThingsBoard uses to connect. Must match POSTGRES_PASSWORD. |
| TB_QUEUE_TYPE | Message queue type. Options: in-memory (default, single-node only), kafka, rabbitmq. |
| TB_KAFKA_SERVERS | Kafka bootstrap servers. Required when TB_QUEUE_TYPE=kafka. Default: localhost:9092. |
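Rather than hardcoding the database password in two places, you can use standard Docker Compose variable substitution. The fragment below is a sketch of that generic pattern, not ThingsBoard-specific configuration: Compose reads an .env file next to docker-compose.yml and substitutes ${VAR} references.

```yaml
# .env (same directory as docker-compose.yml):
#   POSTGRES_PASSWORD=change-me

# docker-compose.yml fragment using variable substitution:
services:
  postgres:
    environment:
      POSTGRES_PASSWORD: ${POSTGRES_PASSWORD}
  thingsboard-ce:
    environment:
      SPRING_DATASOURCE_PASSWORD: ${POSTGRES_PASSWORD}
```

This keeps the secret out of version control (add .env to .gitignore) and guarantees the two values stay in sync.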
Volumes
| Volume | Description |
|---|---|
| tb-postgres-data | Persists PostgreSQL data across container restarts and upgrades. |
| tb-ce-kafka-data | Persists Kafka data. Only present when TB_QUEUE_TYPE=kafka. |
For the full list of configuration parameters, see the Configuration Reference.
Step 2. Initialize database schema and system assets
Before starting ThingsBoard, initialize the database schema and load built-in assets. Choose the option that matches your goal:
- With demo data — also loads a sample tenant account, pre-built dashboards, and demo devices. Useful for exploring the platform before deploying to production.
- Clean install — initializes the database with system data only (rule chains, widget bundles, system dashboards).
With demo data:

```bash
docker compose run --rm -e INSTALL_TB=true -e LOAD_DEMO=true thingsboard-ce
```

Clean install:

```bash
docker compose run --rm -e INSTALL_TB=true thingsboard-ce
```

The container exits automatically once initialization is complete.
Step 3. Start the platform
Start all containers:
```bash
docker compose up -d
```

Monitor the startup. The line confirming the platform is ready will be highlighted:
```bash
docker compose logs -f thingsboard-ce | grep --line-buffered --color=always -E 'Started ThingsboardServerApplication|$'
```

Press Ctrl+C to detach from the log stream — containers will continue running in the background.
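If you script the deployment, polling the HTTP port is more robust than watching logs. The helper below is a minimal bash sketch (the wait_for_tb name and the localhost:8080 defaults are assumptions matching the compose file in this guide); it relies on bash's /dev/tcp pseudo-device, so no extra tools are needed:

```shell
#!/usr/bin/env bash
# wait_for_tb: return 0 once a TCP connection to host:port succeeds,
# or 1 after the given number of attempts, spaced one second apart.
wait_for_tb() {
  local host="${1:-localhost}" port="${2:-8080}" tries="${3:-60}"
  local i
  for ((i = 0; i < tries; i++)); do
    # Opening /dev/tcp/host/port in a subshell attempts a TCP connect;
    # the subshell exiting closes the connection again.
    if (exec 3<>"/dev/tcp/${host}/${port}") 2>/dev/null; then
      return 0
    fi
    sleep 1
  done
  return 1
}

# Usage: wait_for_tb localhost 8080 && echo "ThingsBoard HTTP port is up"
```

Note that the port accepting connections only means the container is listening; the application may still need a few more seconds to finish initializing.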
Open http://localhost:8080 in your browser. You should see the ThingsBoard login page. Use the following default credentials:
| Role | Login | Password | With demo data | Clean install |
|---|---|---|---|---|
| System Administrator | sysadmin@thingsboard.org | sysadmin | ✅ | ✅ |
| Tenant Administrator | tenant@thingsboard.org | tenant | ✅ | ❌ |
| Customer User | customer@thingsboard.org | customer | ✅ | ❌ |
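As a quick check that the REST API is serving requests, you can also log in programmatically. The /api/auth/login endpoint is ThingsBoard's standard login endpoint and returns a JWT in the "token" field of the response; the curl line is commented out because it needs a live instance:

```shell
# JSON body for the login request, using the default demo-data
# tenant administrator credentials.
LOGIN_PAYLOAD='{"username":"tenant@thingsboard.org","password":"tenant"}'

# Uncomment to run against a live instance; the returned JWT
# authenticates subsequent REST API calls:
# curl -s -X POST http://localhost:8080/api/auth/login \
#      -H 'Content-Type: application/json' -d "$LOGIN_PAYLOAD"
echo "$LOGIN_PAYLOAD"
```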
See Getting Started for your next steps after login.
Inspect logs and control containers
Stream the ThingsBoard container logs:
```bash
docker compose logs -f thingsboard-ce
```

Stop all containers:
```bash
docker compose down
```

Start all containers:
```bash
docker compose up -d
```

Upgrading
When a new ThingsBoard release becomes available, update your installation to benefit from the latest features and security patches.
See the Upgrade Instructions for detailed steps.
Troubleshooting
DNS issues
If you observe errors related to DNS resolution, for example:
```
127.0.1.1:53: cannot unmarshal DNS message
```

Configure your system to use Google public DNS servers. See Linux and macOS instructions.
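If the failures occur inside the containers rather than on the host, one common approach on Linux is to set the DNS servers in Docker's daemon configuration (the path /etc/docker/daemon.json is Docker's default configuration file location). This is a sketch using Google's public resolvers:

```json
{
  "dns": ["8.8.8.8", "8.8.4.4"]
}
```

Restart the Docker daemon afterwards (for example, sudo systemctl restart docker on systemd-based distributions) and recreate the containers with docker compose up -d.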