
Configuring the Litmus Production Record Digital Twin Message Conversion Flow

To allow the flow to create the correct message payload so that the data can be successfully ingested by the Litmus Production Record Database using a Litmus Integration, the user has to make two mandatory configurations.

Configure the topics for the Digital Twin Instances to be used

The purpose of the flow is to ingest digital twin data published by Litmus Edge Digital Twin Instances and to modify the message payload to meet the requirements of the Litmus Production Record Database. The flow therefore has to subscribe to the topics of the Digital Twin Instances which are to be sent to the Litmus Production Record Database.

The user provides these topics through the Litmus palette node Datahub Subscribe.


To provide a topic, users edit the Datahub Subscribe node, replace the placeholder "Please enter topic" with the topic of a Digital Twin Instance to be used by the event configuration, and press the Done button.

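For orientation, the value entered here is the full Datahub topic on which the Digital Twin Instance publishes. A minimal sketch with a purely hypothetical topic name (use the actual topic of your instance):

```python
# Hypothetical example value for the Datahub Subscribe node's topic field.
# Replace this with the real topic of your Digital Twin Instance.
DT_INSTANCE_TOPIC = "myModel.myInstance"
```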

If more than one Digital Twin Instance is required, users have only one option for providing all the instances they require, as wildcards are not supported for Litmus Edge Digital Twins at this point in time.

Option 1: Add more Datahub Subscribe nodes

The flow comes with a single Datahub Subscribe node in this section, but users are able to add as many nodes as they need and wire them all into the output.

The example below shows how a user added more nodes by creating copies of the original node.

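Conceptually, each added Datahub Subscribe node is one explicit subscription, and all of them feed the same conversion logic. A minimal sketch of this pattern, assuming hypothetical topic names and stand-in functions (this is not the Litmus flow API):

```python
from typing import Callable

# Hypothetical Digital Twin Instance topics; since wildcards are not
# supported, every instance needs its own explicit subscription
# (one Datahub Subscribe node each).
TOPICS = [
    "mixer.instance1",
    "filler.instance1",
    "packer.instance1",
]

def convert_and_publish(payload: dict) -> None:
    # Stand-in for the conversion section that all subscribe nodes
    # are wired into.
    print("converting:", payload)

def subscribe(topic: str, handler: Callable[[dict], None]) -> None:
    # Stand-in for one Datahub Subscribe node.
    print(f"subscribed to {topic}")

for topic in TOPICS:
    subscribe(topic, convert_and_publish)
```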

Configure the API_Key and publishing topic

For the modification of the message payload, the flow requires at minimum an API_Key.

To create an API_Key, please refer to the Litmus documentation on how to create a REST API key.

By default, the flow will use "Litmus.DT_ProRec" as the WriteTopic, but users are able to modify this.

To modify the API_Key and/or WriteTopic, the user has to edit the sub flow.


This allows the user to set the desired values for these two settings and press the Done button.

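As a rough illustration of what the two settings control: the flow attaches the API_Key to every converted message and publishes the result on the WriteTopic. A minimal sketch; the field names and envelope shape are assumptions, not the exact payload format expected by Litmus Production Record:

```python
API_KEY = "<your REST API key>"    # created as described in the Litmus documentation
WRITE_TOPIC = "Litmus.DT_ProRec"   # flow default; can be changed in the sub flow

def build_message(converted_payload: dict) -> dict:
    # Hypothetical envelope: the API key lets the Litmus Production Record
    # Database authorize the ingest; the message is published on WRITE_TOPIC,
    # where the integration picks it up.
    return {"apiKey": API_KEY, "payload": converted_payload}
```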

To start the processing, deploy the flow.


Optional Settings

As Litmus Production Record stores Production Event data such as Downtimes, Energy Management, Product Traceability, and more against a unique combination of Identifiers, as explained in the chapter How are data for a Production Record Data Model stored, these Identifiers also have to be defined for data coming from a Digital Twin. As a Digital Twin may not have such Items defined, the flow will by default use the following Identifiers:

  • Digital Twin Instance Name
  • The current Year
  • The current Month
  • The current Day
  • The current Hour

This means that if a Digital Twin Model does not have any Items directly under its root which satisfy the need for Identifiers, the user does not have to make any changes; the flow will take care of it, and the Optional settings can be left as they are.
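A small sketch of how these defaults combine into one identifier set (illustrative only; the key names are assumptions, not the exact field names the flow emits):

```python
from datetime import datetime

def default_identifiers(instance_name: str, now: datetime | None = None) -> dict:
    # Defaults used when the Digital Twin Model has no suitable Items
    # directly under its root: instance name plus current year, month,
    # day, and hour.
    now = now or datetime.now()
    return {
        "instance": instance_name,
        "year": now.year,
        "month": now.month,
        "day": now.day,
        "hour": now.hour,
    }

# Example: default_identifiers("Mixer01") might yield
# {"instance": "Mixer01", "year": 2025, "month": 1, "day": 2, "hour": 14}
```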


If a Digital Twin Model has Items directly under the root which satisfy the needs for Identifiers, the user is able to provide these instead.

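For example, if the model root exposed Items such as a line and a shift identifier (the names below are hypothetical), those values could serve as the Identifiers instead of the date-based defaults:

```python
# Hypothetical Items found directly under the Digital Twin Model root,
# used in place of the date-based default Identifiers.
custom_identifiers = {
    "Line": "LineA",
    "Shift": "Shift1",
}
```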

Final Step

After all configurations have been made and deployed, it is recommended to set up a Litmus Integration using the DB - Microsoft SQL Server connector.

How to set up an integration is described in the Litmus Edge Documentation.

After the integration is established, the user needs to subscribe to the topic which has been set up as the WriteTopic in the sub flow.


How to add a subscription for an integration can be found in the Litmus Edge Documentation.

Add as Local Data Topic the value used as the WriteTopic in the sub flow.

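With the flow defaults left unchanged, the subscription would therefore use the default WriteTopic:

```python
# The integration's Local Data Topic matches the sub flow's WriteTopic
# ("Litmus.DT_ProRec" unless it was changed in the sub flow settings).
LOCAL_DATA_TOPIC = "Litmus.DT_ProRec"
```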

Now, every time the flow publishes a new message on this topic, the integration will send it to the Litmus Production Record Database.