
Actor system

ThingsBoard Edge uses an actor-based concurrency model to manage per-entity state. Every device, rule chain, and calculated field gets its own actor — a lightweight object with a private mailbox. Because each actor processes its mailbox one message at a time, per-entity state never needs explicit locks and cannot be corrupted by concurrent access.

Each actor follows this lifecycle:

  1. Created on demand: When a message arrives for an entity (for example, telemetry from device ABC), the system creates an actor if one does not already exist.
  2. Mailbox queuing: Incoming messages land in the actor’s mailbox, an in-memory queue.
  3. Sequential processing: The actor processes messages one at a time, in order. No two threads ever execute inside the same actor simultaneously.
  4. Ephemeral state: Actors may hold in-memory state (like latest attribute values or active RPC requests), but this state is rebuilt from PostgreSQL on restart. The database is always the source of truth.

Edge runs a single tenant, so the actor hierarchy is shallower than on the ThingsBoard server. One App actor sits at the root, with a single Tenant actor beneath it:

                    ┌───────────┐
                    │ App Actor │
                    └─────┬─────┘
                    ┌─────┴─────┐
                    │  Tenant   │
                    └─────┬─────┘
     ┌─────────────┬──────┴──────┬──────────────┐
     ▼             ▼             ▼              ▼
┌──────────┐ ┌──────────┐ ┌────────────┐ ┌────────────┐
│  Device  │ │   Rule   │ │ CF Manager │ │ Cloud Sync │
│  Actors  │ │  Chain   │ │   Actor    │ │   Actor    │
└──────────┘ │  Actors  │ └─────┬──────┘ └────────────┘
             └────┬─────┘       ▼
          ┌───────┼──────┐ ┌────────────┐
          ▼       ▼      ▼ │ CF Engine  │
        Rule    Rule  Rule │   Actors   │
        Node    Node  Node └────────────┘
        Actors  Actors Actors
| Actor type | One per | Responsibility |
|------------|---------|----------------|
| App | JVM | Root actor — creates and supervises the tenant actor |
| Tenant | Tenant | Creates device, rule chain, and calculated field actors for the Edge tenant |
| Device | Device | Handles device sessions, RPC, connectivity state, and activity tracking |
| Rule chain | Rule chain | Manages rule node actors and routes messages through the rule chain |
| Rule node | Rule node | Executes a single rule node’s logic — filter, transform, or action |
| CF Manager | Tenant | Coordinates calculated field evaluation across entities |
| CF Engine | Calculated field | Executes a single calculated field’s logic |

Each actor type runs on a dedicated thread pool. This prevents one type of work from starving another — a burst of incoming telemetry does not block rule chain processing.

| Dispatcher | Env variable | Default | Handles |
|------------|--------------|---------|---------|
| App | ACTORS_SYSTEM_APP_DISPATCHER_POOL_SIZE | 1 | App actor only — lightweight |
| Tenant | ACTORS_SYSTEM_TENANT_DISPATCHER_POOL_SIZE | 2 | Tenant actor message processing |
| Device | ACTORS_SYSTEM_DEVICE_DISPATCHER_POOL_SIZE | 4 | Device sessions, RPC, and state tracking |
| Rule engine | ACTORS_SYSTEM_RULE_DISPATCHER_POOL_SIZE | 8 | Rule chain and rule node execution |
| Cloud sync | ACTORS_SYSTEM_EDGE_DISPATCHER_POOL_SIZE | 4 | Cloud synchronization state management |
| CF Manager | ACTORS_SYSTEM_CFM_DISPATCHER_POOL_SIZE | 2 | Calculated field coordination |
| CF Engine | ACTORS_SYSTEM_CFE_DISPATCHER_POOL_SIZE | 8 | Calculated field execution |
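
A minimal sketch of overriding these dispatcher sizes, assuming the variables are exported in the Edge service’s environment (for example a systemd unit or Docker env file). The numbers below are illustrative, not recommendations:

```shell
# Illustrative dispatcher sizing for an Edge host with spare CPU cores.
# The defaults suit most deployments; raise pools only for a measured bottleneck.
export ACTORS_SYSTEM_DEVICE_DISPATCHER_POOL_SIZE=8   # many concurrently active devices
export ACTORS_SYSTEM_RULE_DISPATCHER_POOL_SIZE=16    # heavy rule chain processing
export ACTORS_SYSTEM_CFE_DISPATCHER_POOL_SIZE=8      # many calculated fields
```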

ACTORS_SYSTEM_THROUGHPUT (default 5) controls how many messages an actor processes in a single batch before yielding the thread to other actors:

  • Low value (1–5): Each actor processes fewer messages per turn, giving all actors more equal access to the thread pool. Better when many devices send telemetry at the same time.
  • High value (25–50): Actors process more messages per turn, reducing context-switch overhead. Better when a small number of devices send high-frequency telemetry.

The default of 5 works well for most Edge deployments.
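
A tuning sketch, assuming the variable is exported in the Edge service’s environment:

```shell
# Batch size per actor turn (default 5).
# Many devices sending sporadic telemetry: small batches keep scheduling fair.
export ACTORS_SYSTEM_THROUGHPUT=5
# Few devices streaming high-frequency telemetry: a larger batch (e.g. 25)
# reduces context-switch overhead:
# export ACTORS_SYSTEM_THROUGHPUT=25
```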

Rule engine action nodes that perform I/O — database writes, HTTP calls, email, SMS — run on separate thread pools to avoid blocking the actor dispatchers:

| Thread pool | Env variable | Default | Purpose |
|-------------|--------------|---------|---------|
| DB callbacks | ACTORS_RULE_DB_CALLBACK_THREAD_POOL_SIZE | 50 | Database write callbacks from rule engine nodes |
| Mail | ACTORS_RULE_MAIL_THREAD_POOL_SIZE | 40 | Email sending from “send email” rule nodes |
| Password reset mail | ACTORS_RULE_MAIL_PASSWORD_RESET_THREAD_POOL_SIZE | 10 | Password reset emails, isolated from the main mail pool |
| SMS | ACTORS_RULE_SMS_THREAD_POOL_SIZE | 50 | SMS sending from “send SMS” rule nodes |
| External REST | ACTORS_RULE_EXTERNAL_CALL_THREAD_POOL_SIZE | 50 | HTTP calls to external APIs from “rest API call” rule nodes |
| AI requests | ACTORS_RULE_AI_REQUESTS_THREAD_POOL_SIZE | 50 | AI/LLM API calls from AI rule nodes |
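
Because these pools are I/O-bound, they can be sized independently of the CPU-bound actor dispatchers. A sketch, with an illustrative value, assuming the variable is set in the service environment:

```shell
# Rule chains making many slow outbound HTTP calls: widen the external-call
# pool so blocked threads don't starve other REST requests (value illustrative).
export ACTORS_RULE_EXTERNAL_CALL_THREAD_POOL_SIZE=100
```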

ACTORS_MAX_CONCURRENT_SESSION_PER_DEVICE (default 1) limits how many simultaneous transport sessions a single device can hold. The default means that when a device opens a new MQTT connection, any existing session for that device closes automatically.

Increase this value only if devices on your Edge legitimately maintain multiple concurrent connections — for example, a gateway that manages several sub-connections simultaneously.
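
For example, assuming the variable is set in the Edge service’s environment (the raised value is illustrative):

```shell
# Default: a new MQTT connection replaces any existing session for the device.
export ACTORS_MAX_CONCURRENT_SESSION_PER_DEVICE=1
# A gateway that legitimately holds several concurrent connections might use:
# export ACTORS_MAX_CONCURRENT_SESSION_PER_DEVICE=4
```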

The actor system throttles error log output to prevent a misconfigured rule chain from flooding the log:

| Variable | Default | Description |
|----------|---------|-------------|
| ACTORS_RULE_CHAIN_ERROR_FREQUENCY | 3000 ms | Minimum interval between error log entries for the same rule chain |
| ACTORS_RULE_NODE_ERROR_FREQUENCY | 3000 ms | Minimum interval between error log entries for the same rule node |
| ACTORS_RULE_CHAIN_DEBUG_MODE_RATE_LIMITS_PER_TENANT_ENABLED | true | Enables per-tenant rate limiting in debug mode |
| ACTORS_RULE_CHAIN_DEBUG_MODE_RATE_LIMITS_PER_TENANT_CONFIGURATION | 50000:3600 | Maximum 50,000 debug events per tenant per hour |
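
A sketch of loosening the throttle on a quiet deployment, assuming the variables are set in the service environment (the 10000 ms interval is illustrative; the defaults are 3000 ms):

```shell
# At most one error log entry per 10 s per rule chain and per rule node.
export ACTORS_RULE_CHAIN_ERROR_FREQUENCY=10000
export ACTORS_RULE_NODE_ERROR_FREQUENCY=10000
```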
As a rule of thumb, scale the pool that matches your dominant workload:

| Workload | What to scale | Why |
|----------|---------------|-----|
| Many devices, frequent telemetry | ACTORS_SYSTEM_DEVICE_DISPATCHER_POOL_SIZE | More threads to handle concurrent device actor mailboxes |
| Complex rule chains, many rule nodes | ACTORS_SYSTEM_RULE_DISPATCHER_POOL_SIZE | Rule chain processing is CPU-bound |
| Many external API calls from rules | ACTORS_RULE_EXTERNAL_CALL_THREAD_POOL_SIZE | I/O-bound — threads block waiting for HTTP responses |
| Heavy database writes from rules | ACTORS_RULE_DB_CALLBACK_THREAD_POOL_SIZE | Write callbacks queue when PostgreSQL is under load |
| Many calculated fields | ACTORS_SYSTEM_CFE_DISPATCHER_POOL_SIZE | Calculated field evaluation is CPU-bound |
| Limited CPU cores (2–4) | Reduce all pool sizes proportionally | Prevents thread contention and reduces JVM heap usage |
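
For the constrained case, a scaled-down profile might look like the sketch below, assuming the variables are set in the Edge service’s environment. Every number here is illustrative; start from the defaults and adjust against observed CPU and queue metrics:

```shell
# Illustrative reduced profile for a 2-core Edge box: shrink both the actor
# dispatchers and the largest I/O pools to limit thread contention and heap use.
export ACTORS_SYSTEM_DEVICE_DISPATCHER_POOL_SIZE=2
export ACTORS_SYSTEM_RULE_DISPATCHER_POOL_SIZE=4
export ACTORS_SYSTEM_CFE_DISPATCHER_POOL_SIZE=4
export ACTORS_RULE_DB_CALLBACK_THREAD_POOL_SIZE=20
export ACTORS_RULE_EXTERNAL_CALL_THREAD_POOL_SIZE=20
```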