
AWS/vSensor

Virtual sensor (soft sensor) to automatically determine different states of machines and plants via machine learning and to make the result available to other applications via an API.

awsvsensor_schema_ampel

The following example describes how a soft sensor in combination with the AWS cloud and machine learning algorithms can be used for condition monitoring tasks. It lists all necessary steps as well as the required hardware and software components for the realization.

Overview

A (vibration) sensor is placed at a machine and connected with a gateway (or an embedded system), thus forming a soft sensor. The gateway gathers the measurement values and writes them into CSV files. The CSV files are then uploaded to an AWS S3 database from where they are imported into AWS SageMaker. In SageMaker the data from the CSV files can be visualized and transformed to generate a TensorFlow model. To use this model in an inference engine a SageMaker Endpoint is created.

To make classifications with the model, the sensor data must be streamed to the AWS cloud. To do this, the gateway must be connected to the AWS IoT Core. In addition, a rule must be created that triggers an AWS Lambda function when new values arrive. This Lambda function in turn calls the SageMaker Endpoint, which analyzes the values and returns a classification. The result of the classification is then sent back to the gateway and displayed on a dashboard created with Node-RED.

Workflow and Engineering Process

The workflow for generating and deploying a machine learning model can be described with the engineering process shown in the following figure. Since the performance of a model can always be improved, the steps can be repeated sequentially. Once the model is deployed, classifications can be made by sending new values from the soft sensor to the SageMaker Endpoint.

ml_engineering_process

  1. The first step is to collect data. The data is used to train a machine learning model which will be used to monitor the condition of a target machine. There are different methods to train such a model. One method is supervised machine learning: the data of every machine state is gathered and labeled, so that after the training the machine learning algorithm can recognize the different machine states. Another method is unsupervised machine learning, where only the data of the machine during normal operation is gathered and used to train the model. In this case the model performs a one-class classification, recognizing whether the machine is working as desired or not.

  2. The second step is to upload the data (CSV files) to the AWS S3 database from where it is imported into SageMaker. Within SageMaker a Jupyter Notebook is used to visualize the data and to transform it to get the needed features for the machine learning algorithm.

  3. In the third step the model is generated by training an appropriate algorithm. There are many possible algorithms to choose from, like neural networks, random forest classifiers or k-nearest neighbors (a minimal training sketch follows this list).

  4. In step four the model is evaluated: it is checked how precise the recognition performance of the model is.

  5. If the performance of the model is not yet satisfactory, it can be improved by further feature engineering and/or parameter optimizations.

  6. Once the desired accuracy of the model is achieved, it is ready to be deployed. In our example we deploy the model within a SageMaker Endpoint, but it could also be deployed in an embedded system or gateway. Because the model can always be improved, one can run through the shown process again.
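
As a rough illustration of the supervised case described above, the following sketch trains a small TensorFlow classifier. It is only a sketch, not the code from this repository: the file name features.csv, the assumption that every row is one frame with the label (0, 1 or 2) in the last column, and the network layout are placeholders; the actual feature engineering and model are in the Jupyter Notebook on GitHub.

```python
import pandas as pd
import tensorflow as tf

# Hypothetical layout: one frame per row, last column = label (0, 1 or 2)
data = pd.read_csv("features.csv", header=None).values
X = data[:, :-1].astype("float32")
y = data[:, -1].astype("int32")

# Small dense network with a 3-class softmax output (one class per machine state)
model = tf.keras.Sequential([
    tf.keras.layers.Dense(64, activation="relu", input_shape=(X.shape[1],)),
    tf.keras.layers.Dense(32, activation="relu"),
    tf.keras.layers.Dense(3, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# Training (step 3) with a held-out validation split for the evaluation (step 4)
model.fit(X, y, epochs=20, batch_size=32, validation_split=0.2)
```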

Hardware Components

RMG/941

The Remote Maintenance Gateway RMG/941 from SSV Software Systems runs Embedded Linux and offers many features like VPN, Python 3, Jupyter Notebook and Node-RED. Even the TensorFlow Lite runtime can be installed.

rmg941

For more information visit the RMG/941 product description.

MLS/160A

The soft sensor MLS/160A is built around the Bosch BMI160, which combines an accelerometer and a gyroscope. It is able to stream measurement values with a maximum frequency of 1.6 kHz. The MLS/160A is connected to the RMG/941 via RS485 in half-duplex mode.

mls160a

Software Components

PyDSlog

This app is installed by default on the RMG/941. It reads the sensor values coming from the serial RS485 interface or via MQTT and stores them in CSV files, which can easily be uploaded to AWS S3 thanks to the built-in S3 integration. It is also possible to stream the values directly to the AWS IoT Core.

PyDSlog is designed to make data acquisition for machine learning and AI applications on the RMG/941 fast and easy.

AWS Account

For our example an AWS account is needed. The following AWS services will be used:

  • IoT Core: Needed to establish a connection between the soft sensor and the AWS cloud.

  • S3: Simple Storage Service. Here the CSV files with the sensor values as well as the machine learning model will be stored.

  • Lambda Function: Function that is triggered when new sensor values arrive in the AWS IoT Core. It calls the SageMaker Endpoint to make a classification. After that, it can trigger other AWS services like SNS to send a message to a mobile phone, or send a notification back to the soft sensor over the AWS IoT Core. The Python code for the AWS Lambda function can be found on GitHub.

  • SageMaker: Service where the CSV files from S3 are loaded to be visualized and transformed and where the model is finally generated. A SageMaker Endpoint is also created here. When called, the Endpoint loads the model from S3 and classifies the new incoming sensor values. The Python code for AWS SageMaker can be found on GitHub.

  • SNS: Required to send messages to a mobile phone, an email address or any other system or device (see the publish sketch below).
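
To give an impression of the SNS part, the following minimal snippet publishes a text message with boto3. The phone number and message text are placeholders; an SNS topic ARN could be used instead.

```python
import boto3

sns = boto3.client("sns")

# Placeholder phone number; a topic ARN could be used instead (TopicArn=...)
sns.publish(
    PhoneNumber="+491700000000",
    Message="vSensor: machine state changed to 2 (error)",
)
```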

Node-RED

Node-RED is a browser-based flow editor built on Node.js that makes it easy to wire together flows (see also nodered.org). In our example it is needed to visualize the classification result messages coming from the Lambda function over AWS IoT Core. The Node-RED dashboard code to visualize classification results can be found on GitHub.

Step 1: Sensor Attachment and Vibration Source

aufbau4

The figure above shows what is used for our example. We use a simple 24 V PC fan from Sunon to simulate the target machine. The MLS/160A was placed on top of the fan. Since the quality of the monitoring depends heavily on the sensor position, the best place for mounting the sensor has to be found. The sensor is then connected to the RMG/941, which is initially used as a data logger to collect values needed for model creation. Once the finished model is available in the cloud, the RMG/941 acts as a gateway and transmits the new sensor values to the AWS IoT Core service.

The next figure shows the values of the accelerometer and gyroscope and what every component or axis looks like. In this example we consider three different machine states. Every color represents a different state and every plot shows only one axis.

The labels for the different machine states are:

  • 0 for the machine in normal operation condition (green line)

  • 1 for the machine stopped (orange line)

  • 2 for an error (red line)

signals

Step 2: Data Acquisition for Model Generation

To start the data acquisition you have to log into the RMG/941 and open the SSV/WebUI. For this, the RMG/941 and a PC with a browser must be in the same local network. Enter the IP address of the RMG/941 with the port 7777 (e.g. 192.168.0.126:7777) in the address bar of the browser to open the SSV/WebUI’s login page. After the login click in the menu on Apps > PyDSlog.

Before recording values in a CSV file with PyDSlog some settings have to be made, like the sensor to use (data source), the sampling rate (frequency), the desired channels (i.e. axes), the signal length (frame size) and whether the data should be labeled or not.

pydslog_1

After the setup click on the button [Apply] to start the recording. If you checked the box to label the data, a field is available to enter a number (the label) which represents a certain machine state. The recording can be paused, restarted and of course also stopped.

When the recording is stopped the created CSV files are stored in a new folder with the current timestamp as name. The files will also be displayed in the section CSV files.
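
For a quick plausibility check of a recorded file on a PC you can load it with pandas. This is only a sketch under assumptions: the file name is a placeholder and it assumes the CSV has no header row and one column per selected channel/axis; check the actual PyDSlog file layout first.

```python
import pandas as pd
import matplotlib.pyplot as plt

# Assumed layout: no header row, one column per selected channel/axis
df = pd.read_csv("mls160a_recording.csv", header=None)

print(df.describe())     # value range per channel
df.plot(subplots=True)   # one subplot per channel
plt.show()
```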

pydslog_2

Step 3: Storing CSV Files in AWS S3

To upload and store the created CSV files in the S3 database click on the tab AWS S3. There you have to enter your AWS credentials as well as the S3 bucket name and region. The region should be the same as the SageMaker region (the SageMaker instance will be created in the following step). All this information is available in your AWS account. After entering the information click on [Apply].

pydslog_3

To start the upload click on the Amazon button (the button with the “a”) next to the desired folder or CSV files. The files are then uploaded as a compressed .tgz file with the timestamp as name. After the upload the .tgz file appears in the S3 bucket.
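
If you prefer to verify the upload from a script instead of the S3 console, a short boto3 snippet can list the uploaded archives (the bucket name is a placeholder; use the bucket configured in PyDSlog):

```python
import boto3

s3 = boto3.client("s3")

# Placeholder bucket name
response = s3.list_objects_v2(Bucket="my-vsensor-bucket")
for obj in response.get("Contents", []):
    print(obj["Key"], obj["Size"], obj["LastModified"])
```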

Step 4: Configuring AWS SageMaker

Once the CSV files are stored in S3, they can be imported into SageMaker. For that, it is necessary to create a SageMaker instance.

After the SageMaker instance is started, you can open Jupyter and create a new Jupyter Notebook. You have to choose a conda_tensorflow_p36 notebook. You can import and adapt the Jupyter Notebook from this example on GitHub. In the Jupyter Notebook, a SageMaker Endpoint will be created. This Endpoint is called from the Lambda function.

Since the Jupyter Notebook is only used to generate a machine learning model, you can stop the SageMaker instance as soon as the Endpoint is deployed.

notebook

The next steps are described in the Jupyter Notebook from this example on GitHub.
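
As a rough orientation only (the actual training and deployment code is in the notebook on GitHub), deploying an already trained TensorFlow model archive from S3 as a SageMaker Endpoint with the SageMaker Python SDK looks roughly like this; the S3 path, framework version, instance type and endpoint name are assumptions and must be adapted:

```python
import sagemaker
from sagemaker.tensorflow import TensorFlowModel

role = sagemaker.get_execution_role()

# Placeholder S3 path to the trained model archive
model = TensorFlowModel(
    model_data="s3://my-vsensor-bucket/model/model.tar.gz",
    role=role,
    framework_version="1.15",
)

# The endpoint name chosen here must match the one used in the Lambda function (step 5)
predictor = model.deploy(
    initial_instance_count=1,
    instance_type="ml.t2.medium",
    endpoint_name="vsensor-endpoint",
)
```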

Step 5: Creating AWS Lambda Function

To call the SageMaker Endpoint and make a classification, a Lambda function is needed. For that, go to the Lambda service and click on the button [Create function].

lambda_functions

A new window appears where you can enter a name for the Lambda function and select a runtime. For our example a Python 3 runtime is used. Click on [Create function]: the new Lambda function is created and an online editor opens where you can place your code.

With the Lambda function you can process the values which arrive over the AWS IoT Core and forward them to other services. Lambda functions are used as processing points between the different AWS services. The Lambda function we prepared for this example can be found here on GitHub; a simplified sketch of such a handler is shown below.
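
The following is a simplified, hypothetical sketch of such a handler, not the exact function from GitHub: it forwards the incoming sensor values to the SageMaker Endpoint and publishes the classification result back over AWS IoT Core. The endpoint name and the result topic are placeholders.

```python
import json
import boto3

sagemaker_rt = boto3.client("sagemaker-runtime")
iot_data = boto3.client("iot-data")

ENDPOINT_NAME = "vsensor-endpoint"      # placeholder, see step 4
RESULT_TOPIC = "rmg1/classification"    # placeholder, must match the dashboard (step 8)

def lambda_handler(event, context):
    # 'event' contains the sensor values forwarded by the IoT rule (step 7)
    response = sagemaker_rt.invoke_endpoint(
        EndpointName=ENDPOINT_NAME,
        ContentType="application/json",
        Body=json.dumps(event),
    )
    result = json.loads(response["Body"].read())

    # Send the classification result back to the gateway over AWS IoT Core
    iot_data.publish(topic=RESULT_TOPIC, qos=0, payload=json.dumps(result))
    return result
```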

lambda_function_code

IMPORTANT: If you want to use functions from NumPy and SciPy, you will have to add a layer with these libraries.

To interact with other AWS services it might be necessary to give the Lambda function further permissions. To do this, go to the AWS Identity and Access Management (IAM) console, where you can see which role the Lambda function is currently using and attach new policies to it, like AmazonS3FullAccess, AWSIoTFullAccess, AmazonSNSFullAccess and AmazonSageMakerFullAccess. These policies will allow you to call the services without restrictions, but please keep in mind that with FullAccess policies you are granting far more permissions than you actually need.

IMPORTANT: Don’t forget to change the name of your SageMaker Endpoint in the Lambda function.

iam_lambda

Step 6: Connecting to AWS IoT Core

To send new sensor values directly to AWS and make classifications, it is necessary to connect the RMG/941 with the AWS IoT Core. To do this, log in to the AWS IoT Core console and register a new device.

iot1

You can register a new thing (device) by clicking in the left menu on Manage > Things and then on [Create]. A new window appears, where you have to click on [Create a single thing].

iot2

In the next window you can assign a name to the thing (e.g. RMG/941) and click on [Next]. In the following window you can add a certificate to your newly registered thing by clicking on [Create certificate].

iot3

IMPORTANT: The certificates have to be downloaded and saved in the RMG/941. Without the certificates it is not possible to establish a connection to AWS.

iot4

You have to download each certificate and then click on [Activate] and finally on [Done].

IMPORTANT: It is not possible to download the certificates later, so you must download them here and now!

PyDSlog can then import the downloaded certificates directly.

To import the certificates open the PyDSlog app (in the RMG/941) and click on the tab AWS IoT. In the section Stream configuration the settings should be the same as for the CSV file creation. Also define a send interval: in most cases it is not necessary to stream to AWS permanently, which could become expensive, since the AWS services would be called very often and the processed data volume would also be very high.

In the section AWS IoT endpoint click on [Durchsuchen…]/[Browse...] and navigate to the folder where the certificates are saved. Choose all files and then click on [Import].

The next step is to create a policy in the AWS IoT Core console. In the left menu click on Secure > Policies and then on [Create]. A window like the following appears.

iot5

Here you can set a name for the policy. In the field Action enter iot:* to allow full access to the AWS IoT service and in the field Resource ARN just enter *. This will allow you to listen to all the MQTT topics of the things that were created before (the topic for the new values coming from the sensor will be defined later). Finally click on [Create].
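
The same policy can also be created programmatically. The sketch below builds the equivalent wide-open policy document with boto3 (the policy name is a placeholder); as noted above, iot:* on resource * is convenient for a demo but far broader than necessary.

```python
import json
import boto3

iot = boto3.client("iot")

policy_document = {
    "Version": "2012-10-17",
    "Statement": [
        {"Effect": "Allow", "Action": "iot:*", "Resource": "*"}
    ],
}

# Placeholder policy name
iot.create_policy(
    policyName="vsensor-demo-policy",
    policyDocument=json.dumps(policy_document),
)
```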

After the policy is created, you will have to attach the policy to the certificate. For that click in the menu on Certificates and select the certificate you created before. Then click on Actions > Attach policy and select your policy.

attach_policy_cert

Now return to the PyDSlog app in the RMG/941 and open the tab AWS IoT.

pydslog_5

In the section Data preprocessing you can enable transposing the values so they are treated like a signal. If the transpose checkbox is set, it is also possible to apply an FFT over the samples. The offset of the signals can also be removed, so the data volume to be sent becomes smaller.

In the section AWS IoT endpoint the MQTT topic where AWS will wait for the values must be set. You also need to enter a REST API Endpoint address in the field Endpoint. For this, switch back to the AWS IoT Core console, navigate to Manage > Things and select your device. Then click on Interact on the left side. Now the REST API Endpoint address is shown in the section HTTPS.

endpoint

Copy this address and enter it in the corresponding field of the PyDSlog configuration. In the last section you can enable the service by checking the checkbox and clicking on [Apply]. PyDSlog will read the configuration and start streaming to AWS. To stop the service just uncheck the checkbox and click on [Apply] again.

To verify that the values are arriving in the AWS cloud, go to the AWS IoT Core console and click in the left menu on Test. Then enter the MQTT topic where the data is sent to and you should see the sensor values arriving.
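
Alternatively you can subscribe to the topic from any machine that has the downloaded certificates, for example with the paho-mqtt Python package. This is only a sketch: the endpoint address, certificate file names and topic are placeholders.

```python
import paho.mqtt.client as mqtt

ENDPOINT = "xxxxxxxxxxxxxx-ats.iot.eu-central-1.amazonaws.com"  # placeholder REST API Endpoint
TOPIC = "rmg1/values"                                           # placeholder topic

def on_message(client, userdata, msg):
    print(msg.topic, msg.payload.decode())

client = mqtt.Client()
# Certificates downloaded when the thing was created (file names are placeholders)
client.tls_set(ca_certs="AmazonRootCA1.pem",
               certfile="certificate.pem.crt",
               keyfile="private.pem.key")
client.on_message = on_message
client.connect(ENDPOINT, 8883)
client.subscribe(TOPIC)
client.loop_forever()
```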

Step 7: Creating AWS IoT Rule

Since the Lambda function needs to be triggered when new sensor values arrive at the AWS IoT Core, a rule has to be created. For that open the AWS IoT Core console, click in the left menu on Act > Rules and then on [Create].

iot_rule1

A new window opens where the name for the rule has to be set. Further below, in the section Rule query statement, the MQTT topic from which the IoT messages are coming must be defined. The topic has to be the same as defined in the AWS IoT endpoint section of the PyDSlog app on the RMG/941.

iot_rule2

For example: If the topic in the PyDSlog app is rmg1/values, enter SELECT * FROM 'rmg1/values'. That means that everything that arrives on that topic will trigger this rule.

Now an action for this rule must be defined. For this click on [Add action] and a list of possible actions is displayed.

action_1

Select Send a message to a Lambda function and click on [Configure action]. Now choose the Lambda function that we have created in step 5.
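
For reference, the same rule can also be created with boto3; the rule name and the Lambda ARN below are placeholders. Note that when creating the rule via the API instead of the console, you additionally have to grant AWS IoT permission to invoke the Lambda function (lambda add_permission), which the console normally handles for you.

```python
import boto3

iot = boto3.client("iot")

# Placeholder rule name and Lambda ARN
iot.create_topic_rule(
    ruleName="vsensor_classify_rule",
    topicRulePayload={
        "sql": "SELECT * FROM 'rmg1/values'",
        "ruleDisabled": False,
        "actions": [
            {"lambda": {"functionArn":
                "arn:aws:lambda:eu-central-1:123456789012:function:vsensor-classify"}}
        ],
    },
)
```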

Step 8: Node-RED Dashboard

To visualize the classification results that represent the current state of our monitored PC fan (i.e. the machine), we will use Node-RED. If Node-RED is not already installed on the RMG/941, you can download and install Node-RED here.

To open Node-RED in the RMG/941 click in the menu on Apps > Node-RED. Enable Node-RED by checking the checkbox next to Enable service and click on [Apply]. A green arrow should appear on the right side to indicate that Node-RED is running. Now click on the link node-red on the right side.

node_red1

Node-RED opens in a new window and you can see all nodes available on the left side. Here you can download even more nodes.

The Node-RED flow can be found on GitHub.

node-red

For our example we use the MQTT node, the debug node as well as the template node where the dashboard is stored.

Once you have imported the example flow from GitHub, double-click on the MQTT node, enter the same values for server and port that were defined in step 6 and select Enable secure (SSL/TLS) connection.

The server is the REST API Endpoint address of the AWS IoT Core as described in step 6. The same endpoint is used in PyDSlog to stream the sensor values to AWS. For the port enter 8883.

IMPORTANT: You have to upload the same certificates as in step 6, when the new thing (device) was created in the AWS IoT Core.

Finally an MQTT topic for the classification results must be defined:

lambda_topic

IMPORTANT: The MQTT topic has to be the same as the boto3 IoT client topic defined in the Lambda function in step 5.

node-red2

After that, you should see the classification results arriving in Node-RED by clicking on the tab debug.

node-red3

Once everything is set up properly, open the dashboard by clicking on the tab dashboard and then on the small arrow icon just below the dashboard tab.

dashboard
