
Actor System

ThingsBoard uses an actor-based concurrency model to manage per-entity state. Every device, rule chain, tenant, and calculated field gets its own actor — a lightweight object with a private mailbox. The actor processes messages one at a time, eliminating locks and race conditions. This design lets the platform handle millions of entities on a single node without shared mutable state.

Each actor follows a simple lifecycle:

  1. Created on demand — when a message arrives for an entity (e.g., telemetry from device ABC), the system creates an actor if one doesn’t exist.
  2. Mailbox queuing — incoming messages land in the actor’s mailbox (an in-memory queue).
  3. Sequential processing — the actor processes messages one at a time, in order. No two threads ever execute inside the same actor simultaneously.
  4. Ephemeral state — actors may hold in-memory state (e.g., latest attribute values, active RPC requests), but this state is rebuilt from the database on restart. The database is always the source of truth.
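The lifecycle above can be sketched in a few lines. This is a conceptual Python sketch, not ThingsBoard's actual Java implementation; the `Actor` and `ActorSystem` names and message shape are illustrative:

```python
import queue
import threading

class Actor:
    """Minimal actor: private mailbox, messages processed one at a time."""
    def __init__(self, entity_id):
        self.entity_id = entity_id
        self.mailbox = queue.Queue()   # step 2: in-memory message queue
        self.state = {}                # step 4: ephemeral, rebuilt from DB on restart
        self._lock = threading.Lock()  # step 3: never two threads inside the actor

    def tell(self, msg):
        self.mailbox.put(msg)

    def process_one(self):
        msg = self.mailbox.get()
        with self._lock:               # sequential processing, in arrival order
            self.state[msg["key"]] = msg["value"]

class ActorSystem:
    """Creates actors on demand (step 1)."""
    def __init__(self):
        self.actors = {}

    def tell(self, entity_id, msg):
        # First message for an entity creates its actor
        actor = self.actors.setdefault(entity_id, Actor(entity_id))
        actor.tell(msg)
        return actor

system = ActorSystem()
a = system.tell("device-ABC", {"key": "temperature", "value": 21.5})
a.process_one()
print(a.state)  # {'temperature': 21.5}
```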

Actors are organized in a tree. Parent actors manage the lifecycle of their children:

| Actor Type | One per | Responsibility |
|---|---|---|
| App | JVM | Root actor — creates and supervises tenant actors |
| Tenant | Tenant | Creates device, rule chain, and calculated field actors for the tenant |
| Device | Device | Handles device sessions, RPC, connectivity state, activity tracking |
| Rule Chain | Rule Chain | Manages rule node actors, routes messages through the rule chain |
| Rule Node | Rule Node | Executes a single rule node’s logic (filter, transform, action) |
| CF Manager | Tenant | Coordinates calculated field evaluation across entities |
| CF Engine | Calculated Field | Executes a single calculated field’s logic |
| Edge | Edge instance | Manages edge synchronization state |

Each actor type runs on a dedicated thread pool (dispatcher). This prevents one type of work from starving another — a burst of device messages won’t block rule chain processing.

| Dispatcher | ENV Variable | Default | Handles |
|---|---|---|---|
| App | ACTORS_SYSTEM_APP_DISPATCHER_POOL_SIZE | 1 | App actor only — lightweight |
| Tenant | ACTORS_SYSTEM_TENANT_DISPATCHER_POOL_SIZE | 2 | Tenant actor message processing |
| Device | ACTORS_SYSTEM_DEVICE_DISPATCHER_POOL_SIZE | 4 | Device sessions, RPC, state tracking |
| Rule Engine | ACTORS_SYSTEM_RULE_DISPATCHER_POOL_SIZE | 8 | Rule chain and rule node execution |
| Edge | ACTORS_SYSTEM_EDGE_DISPATCHER_POOL_SIZE | 4 | Edge synchronization |
| CF Manager | ACTORS_SYSTEM_CFM_DISPATCHER_POOL_SIZE | 2 | Calculated field coordination |
| CF Engine | ACTORS_SYSTEM_CFE_DISPATCHER_POOL_SIZE | 8 | Calculated field execution |
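The isolation between dispatchers can be pictured as one fixed-size thread pool per actor type. This Python sketch mirrors the default pool sizes from the table; the dictionary keys are illustrative, not ThingsBoard identifiers:

```python
from concurrent.futures import ThreadPoolExecutor

# One fixed-size pool per actor type, sized to the documented defaults.
dispatchers = {
    "app":         ThreadPoolExecutor(max_workers=1),
    "tenant":      ThreadPoolExecutor(max_workers=2),
    "device":      ThreadPoolExecutor(max_workers=4),
    "rule-engine": ThreadPoolExecutor(max_workers=8),
    "edge":        ThreadPoolExecutor(max_workers=4),
    "cf-manager":  ThreadPoolExecutor(max_workers=2),
    "cf-engine":   ThreadPoolExecutor(max_workers=8),
}

# A burst of device messages can saturate only the "device" pool;
# rule engine work still runs on its own 8 threads.
future = dispatchers["rule-engine"].submit(lambda: "rule chain processed")
print(future.result())  # rule chain processed
```

Because each pool is bounded separately, backpressure in one actor type surfaces as a growing mailbox for that type rather than stalled threads elsewhere.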

ACTORS_SYSTEM_THROUGHPUT (default 5) controls how many messages an actor processes in a single batch before yielding the thread to other actors. This is the fairness knob:

  • Low value (1–5) — each actor processes fewer messages per turn, giving all actors equal access to the thread pool. Better latency fairness when many actors are active.
  • High value (25–50) — actors process more messages per turn, reducing context-switch overhead. Better throughput when a few actors receive heavy traffic.

For most deployments, the default of 5 provides a good balance. Increase it only if profiling shows excessive dispatcher scheduling overhead.
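The effect of the throughput knob can be seen in a small round-robin simulation. This is an illustrative sketch of batch-then-yield scheduling, not ThingsBoard's dispatcher code:

```python
from collections import deque

ACTORS_SYSTEM_THROUGHPUT = 5  # messages per actor per turn (the fairness knob)

def run_dispatcher(mailboxes, throughput=ACTORS_SYSTEM_THROUGHPUT):
    """Round-robin over actors; each turn drains at most `throughput` messages."""
    order = []                  # (actor_id, msg) in the order they were processed
    ready = deque(mailboxes)
    while ready:
        actor_id = ready.popleft()
        for _ in range(throughput):
            if not mailboxes[actor_id]:
                break
            order.append((actor_id, mailboxes[actor_id].popleft()))
        if mailboxes[actor_id]:  # still has work: yield the thread, requeue
            ready.append(actor_id)
    return order

# Busy actor A has 12 messages queued; actor B has 2.
mailboxes = {"A": deque(range(12)), "B": deque(range(2))}
order = run_dispatcher(mailboxes)

# With throughput=5, B's messages run after only 5 of A's,
# instead of waiting behind all 12.
print([actor for actor, _ in order][:8])
# ['A', 'A', 'A', 'A', 'A', 'B', 'B', 'A']
```

Raising `throughput` to 25 would let A drain in one turn, delaying B's two messages behind all twelve of A's: higher per-actor throughput, worse latency fairness.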

Rule engine action nodes that perform I/O (database writes, HTTP calls, emails, SMS) execute on separate thread pools to avoid blocking the actor dispatchers:

| Thread Pool | ENV Variable | Default | Purpose |
|---|---|---|---|
| DB Callbacks | ACTORS_RULE_DB_CALLBACK_THREAD_POOL_SIZE | 50 | Database write callbacks from rule engine nodes |
| Mail | ACTORS_RULE_MAIL_THREAD_POOL_SIZE | 40 | Email sending from “send email” rule nodes |
| Password Reset Mail | ACTORS_RULE_MAIL_PASSWORD_RESET_THREAD_POOL_SIZE | 10 | Password reset emails (separate pool to avoid blocking) |
| SMS | ACTORS_RULE_SMS_THREAD_POOL_SIZE | 50 | SMS sending from “send SMS” rule nodes |
| External REST | ACTORS_RULE_EXTERNAL_CALL_THREAD_POOL_SIZE | 50 | HTTP calls to external APIs from “rest api call” rule nodes |
| AI Requests | ACTORS_RULE_AI_REQUESTS_THREAD_POOL_SIZE | 50 | AI/LLM API calls from AI rule nodes |

ACTORS_MAX_CONCURRENT_SESSION_PER_DEVICE (default 1) limits how many simultaneous transport sessions a single device can have. The default of 1 means that when a device opens a new MQTT connection, any existing session for that device is closed.

Increase this value only if your devices legitimately maintain multiple concurrent connections (e.g., a gateway device with multiple sub-connections).
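The session cap behaves like an eviction policy: when a device exceeds its limit, the oldest session is closed. A conceptual Python sketch (the `DeviceSessions` class and session IDs are illustrative):

```python
ACTORS_MAX_CONCURRENT_SESSION_PER_DEVICE = 1  # default

class DeviceSessions:
    """Opening a session beyond the per-device limit evicts the oldest one."""
    def __init__(self, limit=ACTORS_MAX_CONCURRENT_SESSION_PER_DEVICE):
        self.limit = limit
        self.sessions = {}  # device_id -> list of active session ids (oldest first)
        self.closed = []    # session ids that were force-closed

    def open(self, device_id, session_id):
        active = self.sessions.setdefault(device_id, [])
        while len(active) >= self.limit:
            self.closed.append(active.pop(0))  # close the oldest session
        active.append(session_id)

s = DeviceSessions()
s.open("device-ABC", "mqtt-1")
s.open("device-ABC", "mqtt-2")  # second connection closes the first
print(s.sessions["device-ABC"], s.closed)  # ['mqtt-2'] ['mqtt-1']
```

With a limit of 2, both sessions would stay open and a third connection would evict `mqtt-1`.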

The actor system tracks error rates in rule chains and rule nodes:

  • ACTORS_RULE_CHAIN_ERROR_FREQUENCY (default 3000 ms) — minimum interval between error log entries for the same rule chain. Prevents log flooding when a misconfigured rule chain fails on every message.
  • ACTORS_RULE_NODE_ERROR_FREQUENCY (default 3000 ms) — same, per rule node.
  • ACTORS_RULE_CHAIN_DEBUG_MODE_RATE_LIMITS_PER_TENANT_ENABLED (default true) — enables per-tenant rate limiting in debug mode.
  • ACTORS_RULE_CHAIN_DEBUG_MODE_RATE_LIMITS_PER_TENANT_CONFIGURATION (default 50000:3600) — maximum 50,000 debug events per tenant per hour.
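The error-frequency settings amount to a per-entity log throttle: at most one error entry per rule chain or rule node per interval. A minimal sketch of that behavior (the `ErrorLogThrottle` class is illustrative; the clock is injected to keep the example deterministic):

```python
ACTORS_RULE_NODE_ERROR_FREQUENCY_MS = 3000  # default interval

class ErrorLogThrottle:
    """Allows at most one logged error per entity per interval."""
    def __init__(self, interval_ms=ACTORS_RULE_NODE_ERROR_FREQUENCY_MS):
        self.interval_ms = interval_ms
        self.last_logged = {}  # entity_id -> timestamp (ms) of last logged error

    def should_log(self, entity_id, now_ms):
        last = self.last_logged.get(entity_id)
        if last is None or now_ms - last >= self.interval_ms:
            self.last_logged[entity_id] = now_ms
            return True
        return False

throttle = ErrorLogThrottle()
print(throttle.should_log("rule-node-1", 0))     # True  (first error is logged)
print(throttle.should_log("rule-node-1", 1000))  # False (inside the 3000 ms window)
print(throttle.should_log("rule-node-1", 3500))  # True  (window elapsed)
```

A rule node failing on every message therefore produces one log line per interval instead of one per failure.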

When tuning, scale the pool that matches your workload:

| Workload | What to scale | Why |
|---|---|---|
| High device count, frequent telemetry | ACTORS_SYSTEM_DEVICE_DISPATCHER_POOL_SIZE | More threads to handle device actor mailboxes |
| Complex rule chains, many rule nodes | ACTORS_SYSTEM_RULE_DISPATCHER_POOL_SIZE | Rule chain processing is CPU-bound |
| Many external API calls from rules | ACTORS_RULE_EXTERNAL_CALL_THREAD_POOL_SIZE | I/O-bound; threads block waiting for HTTP responses |
| Heavy database writes from rules | ACTORS_RULE_DB_CALLBACK_THREAD_POOL_SIZE | Write callbacks queue when DB is slow |
| Many calculated fields | ACTORS_SYSTEM_CFE_DISPATCHER_POOL_SIZE | CF evaluation is CPU-bound |
| High email/SMS volume | ACTORS_RULE_MAIL_THREAD_POOL_SIZE, ACTORS_RULE_SMS_THREAD_POOL_SIZE | I/O-bound, limited by provider API rate limits |

For the complete list of actor system environment variables, see Configuration Reference.