save time series

Since TB Version 2.0

Stores the incoming message payload as time series data of the message originator.

Expected incoming message format

The node accepts messages of type POST_TELEMETRY_REQUEST and supports the following three payload formats (a parsing sketch follows the list):

  1. Key-value pairs: an object where each property name represents a time series key, and its corresponding value is the time series value.
    
     {
       "temperature": 42.2,
       "humidity": 70
     }
    
  2. Timestamped key-value pairs: an object that includes a ts property for the timestamp and a values property containing key-value pairs (defined in format 1).
    
     {
       "ts": 1737963587742,
       "values": {
         "temperature": 42.2,
         "humidity": 70
       }
     }
    
  3. Multiple timestamped key-value pairs: an array of timestamped key-value pair objects (defined in format 2).
    
     [
       {
         "ts": 1737963595638,
         "values": {
           "temperature": 42.2,
           "humidity": 70
         }
       },
       {
         "ts": 1737963601607,
         "values": {
           "pressure": 2.56,
           "velocity": 0.553
         }
       }
     ]
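
The sketch below (not the node's actual implementation) shows one way the three payload shapes can be told apart. It assumes the Jackson library; class and variable names are illustrative:

    import com.fasterxml.jackson.databind.JsonNode;
    import com.fasterxml.jackson.databind.ObjectMapper;

    // Illustrative only: distinguishes the three accepted payload shapes.
    public class PayloadShapeDemo {
        public static void main(String[] args) throws Exception {
            ObjectMapper mapper = new ObjectMapper();
            String[] payloads = {
                "{\"temperature\": 42.2, \"humidity\": 70}",                    // format 1
                "{\"ts\": 1737963587742, \"values\": {\"temperature\": 42.2}}", // format 2
                "[{\"ts\": 1737963595638, \"values\": {\"humidity\": 70}}]"     // format 3
            };
            for (String payload : payloads) {
                JsonNode root = mapper.readTree(payload);
                if (root.isArray()) {
                    System.out.println("format 3: multiple timestamped key-value pairs");
                } else if (root.has("ts") && root.has("values")) {
                    System.out.println("format 2: timestamped key-value pairs");
                } else {
                    System.out.println("format 1: key-value pairs");
                }
            }
        }
    }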
    

Configuration: Processing settings

The save time series node can perform four distinct actions, each governed by configurable processing strategies:

  • Time series: saves time series data to the ts_kv table in the database.
  • Latest values: updates time series data in the ts_kv_latest table in the database if the incoming data is more recent than the currently stored values.
  • WebSockets: notifies WebSocket subscriptions about updates to the time series data.
  • Calculated fields: notifies calculated fields about updates to the time series data.

For each of these actions, you can choose from the following processing strategies:

  • On every message: perform the action for every incoming message.
  • Deduplicate: perform the action only for the first message from a specific originator within a configurable time interval. The minimum deduplication interval is 1 second; the maximum is 1 day. To determine whether a message falls within a previously processed interval, the system calculates a deduplication interval number using the following formula (a worked example follows this list):
    
    long intervalNumber = ts / deduplicationIntervalMillis;
    

    Where:

    • ts is the timestamp used for deduplication (in milliseconds).
    • deduplicationIntervalMillis is the configured deduplication interval (converted automatically to milliseconds).
    • intervalNumber determines the logical time bucket the message belongs to.

    The timestamp ts is determined using the following priority:

    1. If the message metadata contains a ts property (in UNIX milliseconds), it is used.
    2. Otherwise, the time when the message was created is used.

    All timestamps are UNIX milliseconds (in UTC).

  • Skip: never perform the action.
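
To make the interval arithmetic concrete, here is a worked example with illustrative values. Note that intervals are aligned to the UNIX epoch rather than to the first message, so two messages only seconds apart can still fall into different intervals:

    // Worked example (illustrative values) of the deduplication interval number.
    public class DeduplicationIntervalExample {
        public static void main(String[] args) {
            long deduplicationIntervalMillis = 60_000L; // configured interval: 1 minute

            long ts1 = 1_737_963_587_742L; // first message
            long ts2 = ts1 + 10_000;       // 10 seconds later
            long ts3 = ts1 + 15_000;       // 15 seconds later

            System.out.println(ts1 / deduplicationIntervalMillis); // 28966059 -> processed
            System.out.println(ts2 / deduplicationIntervalMillis); // 28966059 -> same interval, deduplicated
            System.out.println(ts3 / deduplicationIntervalMillis); // 28966060 -> new interval, processed
        }
    }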

Note: Processing strategies are available since TB version 4.0. The “Skip latest persistence” toggle from earlier TB versions corresponds to the “Skip” strategy for Latest values.

Processing strategies can be set using either Basic or Advanced processing settings.


  • Basic processing settings - provide predefined strategies for all actions:
    • On every message: applies the On every message strategy to all actions. All actions are performed for all messages.
    • Deduplicate: applies the Deduplicate strategy (with a specified interval) to all actions.
    • WebSockets only: applies the Skip strategy to Time series and Latest values, and the On every message strategy to WebSockets. Effectively, nothing is stored in the database; data is available only in real time via WebSocket subscriptions.


  • Advanced processing settings - allow you to configure each action’s processing strategy independently.


When configuring the processing strategies in advanced mode, certain combinations can lead to unexpected behavior. Consider the following scenarios:

  • Skipping database storage

    Choosing to disable one or more persistence actions (for instance, skipping database storage for Time series or Latest values while keeping WS updates enabled) introduces the risk of having only partial data available:

    • If a message is processed only for real-time notifications (WebSockets) and not stored in the database, historical queries may not match data on the dashboard.
    • When processing strategies for Time series and Latest values are out of sync, telemetry data may be stored in one table (e.g., Time series) while the same data is absent from the other (e.g., Latest values).
  • Disabling WebSocket (WS) updates

    If WS updates are disabled, any changes to the time series data won’t be pushed to dashboards (or other WS subscriptions). This means that even if the database is updated, dashboards may not display the new data until the browser page is reloaded.

  • Skipping calculated field recalculation

    If telemetry data is saved to the database while bypassing calculated field recalculation, the aggregated value may not update to reflect the latest data. Conversely, if the calculated field is recalculated with new data but the corresponding telemetry value is not persisted in the database, the calculated field’s value might include data that isn’t stored.

  • Different deduplication intervals across actions

    When you configure different deduplication intervals for actions, the same incoming message might be processed differently for each action. For example, a message might be stored immediately in the Time series table (if set to On every message) while not being stored in the Latest values table because its deduplication interval hasn’t elapsed. Also, if the WebSocket updates are configured with a different interval, dashboards might show updates that do not match what is stored in the database.

  • Deduplication cache clearing

    The deduplication mechanism uses an in-memory cache to track processed messages by interval. This cache retains up to 100 intervals for a maximum of 2 days, but entries may be cleared at any time due to its use of soft references. As a result, deduplication is not guaranteed, even under light loads. For example, with a deduplication interval of one day and messages arriving once per hour, each message may still be processed if the cache is cleared between arrivals. Deduplication should be treated as a performance optimization, not a strict guarantee of single processing per interval.

  • Whole message deduplication

    Note that deduplication is applied to the entire incoming message rather than to individual time series keys. For example, if the first message contains key A and is processed, and a subsequent message (received within the deduplication interval) contains key B, the second message is skipped even though it includes a new key; see the sketch after this section. To safely leverage deduplication, ensure that your messages maintain a consistent structure so that all required keys are present in the same message, avoiding unintended data loss.

Due to the scenarios described above, the ability to configure each persistence action independently, including setting different deduplication intervals, should be treated as a performance optimization rather than a strict processing guarantee.
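
The hypothetical sketch below makes the whole-message behavior concrete: the skip decision is made once per message, so keys that appear only in a deduplicated message are lost. Names are illustrative:

    // Hypothetical illustration: deduplication decides per message, not per key.
    public class WholeMessageDedupDemo {
        static final long INTERVAL_MILLIS = 86_400_000L; // 1-day deduplication interval
        static Long lastProcessedInterval = null;

        static boolean shouldProcess(long ts) {
            long intervalNumber = ts / INTERVAL_MILLIS;
            if (lastProcessedInterval != null && lastProcessedInterval == intervalNumber) {
                return false; // same interval: the entire message is skipped
            }
            lastProcessedInterval = intervalNumber;
            return true;
        }

        public static void main(String[] args) {
            long ts = 1_737_963_587_742L;
            System.out.println(shouldProcess(ts));         // true:  {"A": 1} is processed
            System.out.println(shouldProcess(ts + 1_000)); // false: {"B": 2} is skipped; key B is lost
        }
    }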

Configuration: Advanced settings


  • Use server timestamp - if enabled, the rule node uses the current server time when the time series data has no explicit timestamp associated with it (payload format 1 is used). Available since TB Version 3.3.3

    The node determines the timestamp for each time series data point using the following priority (see the resolution sketch at the end of this section):

    1. If the time series data includes a ts property (payload formats 2 and 3), this timestamp is used.
    2. If the Use server timestamp option is enabled, the current server time is used.
    3. If the message metadata contains a ts property (expected in UNIX milliseconds), this value is used.
    4. If none of the above are provided, the timestamp when the message was created is used.

    Using server time is particularly important in sequential processing scenarios where messages may arrive with out-of-order timestamps from multiple sources. The DB layer is optimized to ignore updates to attributes and latest values when the new record has a timestamp older than the previously stored one. To make sure that all messages are processed correctly, enable this parameter for sequential message processing scenarios.

  • Default TTL (Time-to-Live) - determines how long the stored data remains in the database. The TTL is set based on the following priority:

    1. If the metadata contains a TTL property (expected as an integer representing seconds), this value is used.
    2. If the metadata does not specify a TTL, the node’s configured TTL value is applied.
    3. If the node’s configured TTL is set to 0, the Storage TTL defined in the tenant profile is used.

Note: a TTL value of 0 means that the data never expires.
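
The sketch below (not the actual rule-node source) restates the timestamp and TTL resolution order described above as code; method and parameter names are hypothetical:

    // Illustrative sketch of the timestamp and TTL resolution priorities.
    public class SettingsResolutionSketch {

        // Timestamp priority: payload "ts" > server time (if enabled) > metadata "ts" > message creation time.
        static long resolveTimestamp(Long payloadTs, boolean useServerTs, Long metadataTs, long messageCreatedTs) {
            if (payloadTs != null) {
                return payloadTs;                  // 1. explicit "ts" in the payload (formats 2 and 3)
            }
            if (useServerTs) {
                return System.currentTimeMillis(); // 2. current server time, if the option is enabled
            }
            if (metadataTs != null) {
                return metadataTs;                 // 3. "ts" property from message metadata (UNIX milliseconds)
            }
            return messageCreatedTs;               // 4. time when the message was created
        }

        // TTL priority: metadata "TTL" > node-configured TTL > tenant profile Storage TTL.
        static long resolveTtlSeconds(Long metadataTtl, long nodeTtl, long tenantProfileTtl) {
            if (metadataTtl != null) {
                return metadataTtl;                // 1. "TTL" property from message metadata (seconds)
            }
            if (nodeTtl != 0) {
                return nodeTtl;                    // 2. TTL configured on the node
            }
            return tenantProfileTtl;               // 3. Storage TTL defined in the tenant profile
        }

        public static void main(String[] args) {
            // Format 1 payload (no "ts"), server timestamp disabled, metadata ts present: metadata wins.
            System.out.println(resolveTimestamp(null, false, 1_737_963_587_742L, 1_737_963_590_000L));
            // No metadata TTL and a node TTL of 0: falls through to the tenant profile TTL (30 days here).
            System.out.println(resolveTtlSeconds(null, 0L, 2_592_000L));
        }
    }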

Output connections

  • Success:
    • If an incoming message was successfully processed.
  • Failure:
    • If an incoming message type is not POST_TELEMETRY_REQUEST.
    • If an incoming message payload is empty (for example, {} or [] or even [{}, {}, {}]).
    • If an unexpected error occurs during message processing.