This guide will help you install and start ThingsBoard Professional Edition (PE) using Docker and Docker Compose on Linux or macOS.
This guide covers a standalone ThingsBoard PE installation.
If you are looking for cluster installation instructions, please visit the cluster setup page.
Prerequisites
Install Docker and Docker Compose (refer to the official Docker installation guide for your operating system).
Step 1. Obtain the license key
We assume you have already chosen your subscription plan or decided to purchase a perpetual license.
If not, please navigate to the pricing page to select the best license option for your case and get your license.
See How-to get pay-as-you-go subscription or How-to get perpetual license for more details.
Note: We will reference the license key you have obtained during this step as PUT_YOUR_LICENSE_SECRET_HERE later in this guide.
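Tip (optional, not part of the official steps): instead of hardcoding the secret in docker-compose.yml, you can keep it in a .env file placed next to the compose file and let Docker Compose variable interpolation substitute it, e.g. by writing TB_LICENSE_SECRET: "${TB_LICENSE_SECRET}" in the environment section. A minimal sketch of such a .env file:
| # hypothetical .env file next to docker-compose.yml
TB_LICENSE_SECRET=PUT_YOUR_LICENSE_SECRET_HERE
|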
Step 2. Choose ThingsBoard queue service
The ThingsBoard platform currently supports two types of message brokers for storing messages and for communication between ThingsBoard services: in-memory and Kafka-based brokers.
-
The In Memory queue implementation is built-in and used by default.
It is useful for development (PoC) environments, but it is not suitable for production deployments or any sort of cluster deployments.
-
Kafka is recommended for production deployments and is currently used in most ThingsBoard production environments.
It is suitable for both on-prem and private cloud deployments, and it is also useful if you want to stay independent from your cloud provider.
However, some cloud providers also offer managed Kafka services; see AWS MSK, for example.
-
Confluent Cloud is a fully managed streaming platform based on Kafka. It is useful for cloud-agnostic deployments.
See the corresponding architecture page and rule engine page for more details.
ThingsBoard includes the In Memory queue service and uses it by default without extra settings.
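If you prefer to state the queue choice explicitly, you can add the queue type variable to the environment section of the thingsboard-pe service shown below; a small sketch (assuming in-memory is the value ThingsBoard uses when nothing is set, so this is purely optional):
| environment:
  TB_QUEUE_TYPE: in-memory
|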
Create a docker compose file for the ThingsBoard queue service:
| nano docker-compose.yml
|
Add the following lines to the yml file. Don’t forget to replace “PUT_YOUR_LICENSE_SECRET_HERE” with the license secret obtained in the first step:
| services:
postgres:
restart: always
image: "postgres:16"
ports:
- "5432"
environment:
POSTGRES_DB: thingsboard
POSTGRES_PASSWORD: postgres
volumes:
- postgres-data:/var/lib/postgresql/data
thingsboard-pe:
restart: always
image: "thingsboard/tb-pe-node:4.2.1.1PE"
ports:
- "8080:8080"
- "1883:1883"
- "8883:8883"
- "9090:9090"
- "7070:7070"
- "5683-5688:5683-5688/udp"
logging:
driver: "json-file"
options:
max-size: "100m"
max-file: "10"
environment:
TB_SERVICE_ID: tb-pe-node
TB_LICENSE_SECRET: PUT_YOUR_LICENSE_SECRET_HERE
TB_LICENSE_INSTANCE_DATA_FILE: /data/license.data
REPORTS_SERVER_ENDPOINT_URL: http://tb-web-report:8383
SPRING_DATASOURCE_URL: jdbc:postgresql://postgres:5432/thingsboard
DEFAULT_TRENDZ_URL: http://trendz:8888
DEFAULT_TB_URL: http://thingsboard-pe:8080
volumes:
- license-data:/data
depends_on:
- postgres
tb-web-report:
restart: always
image: "thingsboard/tb-pe-web-report:4.2.1.1PE"
ports:
- "8383"
depends_on:
- thingsboard-pe
environment:
HTTP_BIND_ADDRESS: 0.0.0.0
HTTP_BIND_PORT: 8383
LOGGER_LEVEL: info
LOG_FOLDER: logs
LOGGER_FILENAME: tb-web-report-%DATE%.log
DOCKER_MODE: true
DEFAULT_PAGE_NAVIGATION_TIMEOUT: 120000
DASHBOARD_IDLE_WAIT_TIME: 3000
USE_NEW_PAGE_FOR_REPORT: true
trendz:
profiles: ['trendz']
restart: always
image: "thingsboard/trendz:1.14.0"
ports:
- "8888:8888"
environment:
TB_API_URL: http://thingsboard-pe:8080
SPRING_DATASOURCE_URL: jdbc:postgresql://trendz-postgres:5432/trendz
SPRING_DATASOURCE_USERNAME: postgres
SPRING_DATASOURCE_PASSWORD: postgres
SCRIPT_ENGINE_DOCKER_PROVIDER_URL: trendz-python-executor:8181
SCRIPT_ENGINE_TIMEOUT: 30000
volumes:
- trendz-conf:/trendz-config-files
- trendz-data:/data
depends_on:
- trendz-postgres
trendz-python-executor:
profiles: ['trendz']
restart: always
image: "thingsboard/trendz-python-executor:1.14.0"
ports:
- "8181:8181"
environment:
EXECUTOR_MANAGER: 1
EXECUTOR_SCRIPT_ENGINE: 6
THROTTLING_QUEUE_CAPACITY: 10
THROTTLING_THREAD_POOL_SIZE: 6
NETWORK_BUFFER_SIZE: 5242880
volumes:
- trendz-python-executor-conf:/python-executor-config-files
- trendz-python-executor-data:/data
trendz-postgres:
profiles: ['trendz']
restart: always
image: "postgres:16"
ports:
- "5433:5432"
environment:
POSTGRES_DB: trendz
POSTGRES_PASSWORD: postgres
volumes:
- trendz-postgres-data:/var/lib/postgresql/data
volumes:
postgres-data:
name: tb-postgres-data
driver: local
license-data:
name: tb-pe-license-data
driver: local
trendz-conf:
name: trendz-conf
driver: local
trendz-data:
name: trendz-data
driver: local
trendz-python-executor-conf:
name: trendz-python-executor-conf
driver: local
trendz-python-executor-data:
name: trendz-python-executor-data
driver: local
trendz-postgres-data:
name: trendz-postgres-data
driver: local
|
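Optionally, before starting anything, you can ask Docker Compose to parse and validate the file; it prints the resolved configuration and fails with an error message if the YAML is malformed:
| docker compose config
|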
Apache Kafka is an open-source stream-processing software platform.
Create a docker compose file for the ThingsBoard queue service:
| nano docker-compose.yml
|
Add the following lines to the yml file. Don’t forget to replace “PUT_YOUR_LICENSE_SECRET_HERE” with the license secret obtained in the first step:
| services:
postgres:
restart: always
image: "postgres:16"
ports:
- "5432"
environment:
POSTGRES_DB: thingsboard
POSTGRES_PASSWORD: postgres
volumes:
- postgres-data:/var/lib/postgresql/data
kafka:
restart: always
image: bitnamilegacy/kafka:4.0
ports:
- 9092:9092 #to localhost:9092 from host machine
- 9093 #for Kraft
environment:
ALLOW_PLAINTEXT_LISTENER: "yes"
KAFKA_CFG_LISTENERS: "PLAINTEXT://:9092,CONTROLLER://:9093"
KAFKA_CFG_ADVERTISED_LISTENERS: "PLAINTEXT://:9092"
KAFKA_CFG_LISTENER_SECURITY_PROTOCOL_MAP: "CONTROLLER:PLAINTEXT,PLAINTEXT:PLAINTEXT"
KAFKA_CFG_INTER_BROKER_LISTENER_NAME: "PLAINTEXT"
KAFKA_CFG_AUTO_CREATE_TOPICS_ENABLE: "false"
KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR: "1"
KAFKA_TRANSACTION_STATE_LOG_MIN_ISR: "1"
KAFKA_TRANSACTION_STATE_LOG_REPLICATION_FACTOR: "1"
KAFKA_CFG_PROCESS_ROLES: "controller,broker" #KRaft
KAFKA_CFG_NODE_ID: "0" #KRaft
KAFKA_CFG_CONTROLLER_LISTENER_NAMES: "CONTROLLER" #KRaft
KAFKA_CFG_CONTROLLER_QUORUM_VOTERS: "0@kafka:9093" #KRaft
KAFKA_CFG_LOG_RETENTION_MS: "300000"
KAFKA_CFG_SEGMENT_BYTES: "26214400"
volumes:
- kafka-data:/bitnami
thingsboard-pe:
restart: always
image: "thingsboard/tb-pe-node:4.2.1.1PE"
ports:
- "8080:8080"
- "1883:1883"
- "8883:8883"
- "9090:9090"
- "7070:7070"
- "5683-5688:5683-5688/udp"
logging:
driver: "json-file"
options:
max-size: "100m"
max-file: "10"
environment:
TB_SERVICE_ID: tb-pe-node
TB_LICENSE_SECRET: PUT_YOUR_LICENSE_SECRET_HERE
TB_LICENSE_INSTANCE_DATA_FILE: /data/license.data
REPORTS_SERVER_ENDPOINT_URL: http://tb-web-report:8383
SPRING_DATASOURCE_URL: jdbc:postgresql://postgres:5432/thingsboard
DEFAULT_TRENDZ_URL: http://trendz:8888
DEFAULT_TB_URL: http://thingsboard-pe:8080
TB_QUEUE_TYPE: kafka
TB_KAFKA_SERVERS: kafka:9092
volumes:
- license-data:/data
depends_on:
- postgres
tb-web-report:
restart: always
image: "thingsboard/tb-pe-web-report:4.2.1.1PE"
ports:
- "8383"
depends_on:
- thingsboard-pe
environment:
HTTP_BIND_ADDRESS: 0.0.0.0
HTTP_BIND_PORT: 8383
LOGGER_LEVEL: info
LOG_FOLDER: logs
LOGGER_FILENAME: tb-web-report-%DATE%.log
DOCKER_MODE: true
DEFAULT_PAGE_NAVIGATION_TIMEOUT: 120000
DASHBOARD_IDLE_WAIT_TIME: 3000
USE_NEW_PAGE_FOR_REPORT: true
trendz:
profiles: ['trendz']
restart: always
image: "thingsboard/trendz:1.14.0"
ports:
- "8888:8888"
environment:
TB_API_URL: http://thingsboard-pe:8080
SPRING_DATASOURCE_URL: jdbc:postgresql://trendz-postgres:5432/trendz
SPRING_DATASOURCE_USERNAME: postgres
SPRING_DATASOURCE_PASSWORD: postgres
SCRIPT_ENGINE_DOCKER_PROVIDER_URL: trendz-python-executor:8181
SCRIPT_ENGINE_TIMEOUT: 30000
volumes:
- trendz-conf:/trendz-config-files
- trendz-data:/data
depends_on:
- trendz-postgres
trendz-python-executor:
profiles: ['trendz']
restart: always
image: "thingsboard/trendz-python-executor:1.14.0"
ports:
- "8181:8181"
environment:
EXECUTOR_MANAGER: 1
EXECUTOR_SCRIPT_ENGINE: 6
THROTTLING_QUEUE_CAPACITY: 10
THROTTLING_THREAD_POOL_SIZE: 6
NETWORK_BUFFER_SIZE: 5242880
volumes:
- trendz-python-executor-conf:/python-executor-config-files
- trendz-python-executor-data:/data
trendz-postgres:
profiles: ['trendz']
restart: always
image: "postgres:16"
ports:
- "5433:5432"
environment:
POSTGRES_DB: trendz
POSTGRES_PASSWORD: postgres
volumes:
- trendz-postgres-data:/var/lib/postgresql/data
volumes:
postgres-data:
name: tb-postgres-data
driver: local
license-data:
name: tb-pe-license-data
driver: local
kafka-data:
name: tb-pe-kafka-data
driver: local
trendz-conf:
name: trendz-conf
driver: local
trendz-data:
name: trendz-data
driver: local
trendz-python-executor-conf:
name: trendz-python-executor-conf
driver: local
trendz-python-executor-data:
name: trendz-python-executor-data
driver: local
trendz-postgres-data:
name: trendz-postgres-data
driver: local
|
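Once you have started the stack (Step 3 below), you can optionally check that the Kafka broker is up and that ThingsBoard has created its topics; a sketch assuming the Bitnami image keeps the Kafka CLI tools on the PATH:
| docker compose exec kafka kafka-topics.sh --bootstrap-server localhost:9092 --list
|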
Confluent Cloud Configuration
To access Confluent Cloud, you should first create an account, then create a Kafka cluster and get your API key.
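If you prefer the command line over the Confluent Cloud web console, the Confluent CLI can perform the same steps; a rough sketch, assuming the confluent CLI is installed and a cluster already exists (replace <your-cluster-id> with your real cluster id):
| confluent login
confluent kafka cluster list
confluent api-key create --resource <your-cluster-id>
|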
Create a docker compose file for the ThingsBoard queue service:
| nano docker-compose.yml
|
Add the following lines to the yml file. Don’t forget to replace “PUT_YOUR_LICENSE_SECRET_HERE” with your license secret, and “CLUSTER_API_KEY”, “CLUSTER_API_SECRET” and “localhost:9092” with your real Confluent Cloud API key, API secret and bootstrap servers:
| services:
postgres:
restart: always
image: "postgres:16"
ports:
- "5432"
environment:
POSTGRES_DB: thingsboard
POSTGRES_PASSWORD: postgres
volumes:
- postgres-data:/var/lib/postgresql/data
thingsboard-pe:
restart: always
image: "thingsboard/tb-pe-node:4.2.1.1PE"
ports:
- "8080:8080"
- "1883:1883"
- "8883:8883"
- "9090:9090"
- "7070:7070"
- "5683-5688:5683-5688/udp"
logging:
driver: "json-file"
options:
max-size: "100m"
max-file: "10"
environment:
TB_SERVICE_ID: tb-pe-node
TB_LICENSE_SECRET: PUT_YOUR_LICENSE_SECRET_HERE
TB_LICENSE_INSTANCE_DATA_FILE: /data/license.data
REPORTS_SERVER_ENDPOINT_URL: http://tb-web-report:8383
SPRING_DATASOURCE_URL: jdbc:postgresql://postgres:5432/thingsboard
DEFAULT_TRENDZ_URL: http://trendz:8888
DEFAULT_TB_URL: http://thingsboard-pe:8080
TB_QUEUE_TYPE: kafka
TB_KAFKA_SERVERS: localhost:9092
TB_QUEUE_KAFKA_REPLICATION_FACTOR: 3
TB_QUEUE_KAFKA_USE_CONFLUENT_CLOUD: true
TB_QUEUE_KAFKA_CONFLUENT_SASL_JAAS_CONFIG: 'org.apache.kafka.common.security.plain.PlainLoginModule required username="CLUSTER_API_KEY" password="CLUSTER_API_SECRET";'
# These params affect the number of requests per second from each partitions per each queue.
# Number of requests to particular Message Queue is calculated based on the formula:
# ((Number of Rule Engine and Core Queues) * (Number of partitions per Queue) + (Number of transport queues)
# + (Number of microservices) + (Number of JS executors)) * 1000 / POLL_INTERVAL_MS
# For example, number of requests based on default parameters is:
# Rule Engine queues:
# Main 10 partitions + HighPriority 10 partitions + SequentialByOriginator 10 partitions = 30
# Core queue 10 partitions
# Transport request Queue + response Queue = 2
# Rule Engine Transport notifications Queue + Core Transport notifications Queue = 2
# Total = 44
# Number of requests per second = 44 * 1000 / 25 = 1760 requests
#
# Based on the use case, you can compromise latency and decrease number of partitions/requests to the queue, if the message load is low.
# By UI set the parameters - interval (1000) and partitions (1) for Rule Engine queues.
# Sample parameters to fit into 10 requests per second on a "monolith" deployment:
TB_QUEUE_CORE_POLL_INTERVAL_MS: 1000
TB_QUEUE_CORE_PARTITIONS: 2
TB_QUEUE_RULE_ENGINE_POLL_INTERVAL_MS: 1000
TB_QUEUE_TRANSPORT_REQUEST_POLL_INTERVAL_MS: 1000
TB_QUEUE_TRANSPORT_RESPONSE_POLL_INTERVAL_MS: 1000
TB_QUEUE_TRANSPORT_NOTIFICATIONS_POLL_INTERVAL_MS: 1000
TB_QUEUE_VC_INTERVAL_MS: 1000
TB_QUEUE_VC_PARTITIONS: 1
depends_on:
- postgres
volumes:
- license-data:/data
tb-web-report:
restart: always
image: "thingsboard/tb-pe-web-report:4.2.1.1PE"
ports:
- "8383"
depends_on:
- thingsboard-pe
environment:
HTTP_BIND_ADDRESS: 0.0.0.0
HTTP_BIND_PORT: 8383
LOGGER_LEVEL: info
LOG_FOLDER: logs
LOGGER_FILENAME: tb-web-report-%DATE%.log
DOCKER_MODE: true
DEFAULT_PAGE_NAVIGATION_TIMEOUT: 120000
DASHBOARD_IDLE_WAIT_TIME: 3000
USE_NEW_PAGE_FOR_REPORT: true
trendz:
profiles: ['trendz']
restart: always
image: "thingsboard/trendz:1.14.0"
ports:
- "8888:8888"
environment:
TB_API_URL: http://thingsboard-pe:8080
SPRING_DATASOURCE_URL: jdbc:postgresql://trendz-postgres:5432/trendz
SPRING_DATASOURCE_USERNAME: postgres
SPRING_DATASOURCE_PASSWORD: postgres
SCRIPT_ENGINE_DOCKER_PROVIDER_URL: trendz-python-executor:8181
SCRIPT_ENGINE_TIMEOUT: 30000
volumes:
- trendz-conf:/trendz-config-files
- trendz-data:/data
depends_on:
- trendz-postgres
trendz-python-executor:
profiles: ['trendz']
restart: always
image: "thingsboard/trendz-python-executor:1.14.0"
ports:
- "8181:8181"
environment:
EXECUTOR_MANAGER: 1
EXECUTOR_SCRIPT_ENGINE: 6
THROTTLING_QUEUE_CAPACITY: 10
THROTTLING_THREAD_POOL_SIZE: 6
NETWORK_BUFFER_SIZE: 5242880
volumes:
- trendz-python-executor-conf:/python-executor-config-files
- trendz-python-executor-data:/data
trendz-postgres:
profiles: ['trendz']
restart: always
image: "postgres:16"
ports:
- "5433:5432"
environment:
POSTGRES_DB: trendz
POSTGRES_PASSWORD: postgres
volumes:
- trendz-postgres-data:/var/lib/postgresql/data
volumes:
postgres-data:
name: tb-postgres-data
driver: local
license-data:
name: tb-pe-license-data
driver: local
trendz-conf:
name: trendz-conf
driver: local
trendz-data:
name: trendz-data
driver: local
trendz-python-executor-conf:
name: trendz-python-executor-conf
driver: local
trendz-python-executor-data:
name: trendz-python-executor-data
driver: local
trendz-postgres-data:
name: trendz-postgres-data
driver: local
|
You can update the default Rule Engine queues configuration using the UI. See the documentation for more about ThingsBoard Rule Engine queues.
Where:
PUT_YOUR_LICENSE_SECRET_HERE - placeholder for your license secret obtained on the first step
8080:8080 - connect local port 8080 to exposed internal HTTP port 8080
1883:1883 - connect local port 1883 to exposed internal MQTT port 1883
8883:8883 - connect local port 8883 to exposed internal MQTT over SSL port 8883
7070:7070 - connect local port 7070 to exposed internal Edge RPC port 7070
9090:9090 - connect local port 9090 to exposed internal Remote Integration port 9090
5683-5688:5683-5688/udp - connect local UDP ports 5683-5688 to exposed internal CoAP and LwM2M ports
tb-pe-license-data - name of the docker volume that stores the ThingsBoard’s license instance data file
tb-postgres-data - name of the docker volume that stores the PostgreSQL’s data
thingsboard-pe - friendly local name of the ThingsBoard PE container; it is also used as its hostname inside the Docker network
restart: always - automatically start ThingsBoard in case of system reboot and restart in case of failure.
thingsboard/tb-pe-node:4.2.1.1PE - docker image.
Also, this docker compose file contains Trendz Analytics add-on services:
profiles: ['trendz'] - the Trendz Analytics services are assigned to this Compose profile, so they start only when --profile trendz is specified
8888:8888 - connect local port 8888 to exposed internal HTTP port 8888
trendz-conf - name of the docker volume that stores the Trendz’s configuration files
trendz-data - name of the docker volume that stores the Trendz’s data
trendz-python-executor-conf - name of the docker volume that stores the Trendz python executor configuration files
trendz-python-executor-data - name of the docker volume that stores the Trendz python executor data
trendz-postgres-data - name of the docker volume that stores the Trendz PostgreSQL’s data
You can read more about Trendz Analytics here.
Step 3. Initialize database schema & system assets
Before you start ThingsBoard, initialize the database schema and load built-in assets by running:
| docker compose run --rm -e INSTALL_TB=true -e LOAD_DEMO=true thingsboard-pe
|
Environment variables:
INSTALL_TB=true - Installs the core database schema and system resources (widgets, images, rule chains, etc.).
LOAD_DEMO=true - Loads a sample tenant account, dashboards and devices for evaluation and testing.
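If you prefer a clean installation without the demo data, you can run the same command with only the install flag; a sketch, assuming LOAD_DEMO defaults to false when it is not set:
| docker compose run --rm -e INSTALL_TB=true thingsboard-pe
|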
You can start ThingsBoard with or without the Trendz Analytics add-on.
Bring up all containers (including Trendz containers) in detached mode, then follow the ThingsBoard logs:
| docker compose --profile trendz up -d && docker compose logs -f thingsboard-pe
|
Bring up all core containers in detached mode, then follow the ThingsBoard logs:
| docker compose up -d && docker compose logs -f thingsboard-pe
|
After executing this command, open http://{your-host-ip}:8080 in your browser (for example, http://localhost:8080). You should see the ThingsBoard login page.
Note that web reports are generated only if you access ThingsBoard via an external IP address or domain name.
Web reports will not be generated if you access ThingsBoard at http://localhost:8080.
Use the following default credentials:
- System Administrator: sysadmin@thingsboard.org / sysadmin
- Tenant Administrator: tenant@thingsboard.org / tenant
- Customer User: customer@thingsboard.org / customer
You can always change the password for each account on the account profile page.
You can safely detach from the log stream (e.g. Ctrl+C); containers will continue running.
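As an optional sanity check, you can also verify the REST API from the command line by logging in with the tenant credentials; a minimal sketch using curl (a successful login returns a JSON body with a JWT token):
| curl -s -X POST http://localhost:8080/api/auth/login -H "Content-Type: application/json" -d '{"username":"tenant@thingsboard.org","password":"tenant"}'
|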
Inspect logs & control container lifecycle
If something goes wrong, you can stream the ThingsBoard container logs in real time:
| docker compose logs -f thingsboard-pe
|
Stream the Trendz container logs in real time:
| docker compose logs -f trendz
|
Bring down every container defined in your Compose file:
| docker compose --profile trendz down
|
Launch all services in detached mode:
| docker compose --profile trendz up -d
|
If you started the stack without the Trendz profile, the same commands apply without the --profile flag. Stream the ThingsBoard container logs in real time:
| docker compose logs -f thingsboard-pe
|
Bring down every container defined in your Compose file:
| docker compose down
|
Launch all services in detached mode:
| docker compose up -d
|
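To remove the installation completely, including the named Docker volumes, add the -v flag. Note that this permanently deletes all stored data (PostgreSQL, license instance data, Trendz data), so use it only when you really want to start from scratch:
| docker compose --profile trendz down -v
|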
Upgrading
Note that you have to upgrade versions one by one (for example, 4.0.2 -> 4.1.0 -> 4.2.0, etc.).
Upgrading to new ThingsBoard version
When a new PE release is available, follow these steps to update your installation without losing data:
If you are upgrading using a previous version of the deployment files, make sure to follow the steps described in this instruction first.
-
Change the version of the thingsboard/tb-pe-node and thingsboard/tb-pe-web-report images in the docker-compose.yml file to the version immediately following your current one (e.g. 4.2.1.1PE)
-
Execute the following commands:
| docker pull thingsboard/tb-pe-node:4.2.1.1PE
docker compose stop thingsboard-pe
docker compose run --rm -e UPGRADE_TB=true thingsboard-pe
docker compose up -d
|
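Since the docker-compose.yml also references the thingsboard/tb-pe-web-report image, you may want to pull its matching tag as well before bringing the stack back up, for example:
| docker pull thingsboard/tb-pe-web-report:4.2.1.1PE
|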
Upgrading to new Trendz version (Optional)
Trendz Analytics has its own versioning scheme and should be upgraded separately from the main ThingsBoard platform services.
You can read how to upgrade Trendz Analytics here.
Troubleshooting
DNS issues
NOTE: If you observe errors related to DNS issues, for example:
| 127.0.1.1:53: cannot unmarshal DNS message
|
You may configure your system to use Google public DNS servers.
See the corresponding Linux and macOS instructions.
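On Linux, one common way to apply this to Docker containers (an assumption about your environment; note that this overwrites any existing /etc/docker/daemon.json) is to configure Google DNS in Docker's daemon.json and restart the Docker service:
| sudo tee /etc/docker/daemon.json <<'EOF'
{
  "dns": ["8.8.8.8", "8.8.4.4"]
}
EOF
sudo systemctl restart docker
|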
Next steps