
WorkshopFlow


Introduction

This document gives a quick technical rundown of how we connected different systems to experiment with contextual listening for robots. It does not guide you through the design process but explains how the technical systems connect, covering all steps from collecting data to controlling the motion of a robot.

While we use many different technologies, we have tried to abstract away many elements while retaining the freedom to design the interactions you are interested in. The components used are TeachableMachines for collecting training data, Node-RED for describing acoustic contexts and robot motions, and the physical robot hardware (a servo, a motor, and a paper template). Each is described in the sections below.

   

TeachableMachines

With TeachableMachines, we collect data to train a machine learning model that is later used to distinguish the different sounds around our robot.

In the following example, I have a simple model that can detect when someone is talking.

Since we only trained two classes (background sound & talking), any other structured sound, e.g., music or a dog barking, will most likely be classified as talking. It is important to be aware of these possible misclassifications and either design around them in the behavior of the robot or add more classes to the machine learning model.

drawings/images/TeachableMachines.png TeachableMachines by Google
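Outside of the workshop nodes, the exported model can also be loaded directly with TensorFlow.js. The following is a minimal sketch that assumes a Teachable Machine audio export in the speech-commands format; the model URL is a placeholder, and the class labels depend on what you trained.

```typescript
import '@tensorflow/tfjs';
import * as speechCommands from '@tensorflow-models/speech-commands';

// Placeholder: paste the model URL that Teachable Machine shows after export.
const MODEL_URL = 'https://teachablemachine.withgoogle.com/models/XXXX/';

async function startListening(): Promise<void> {
  // Teachable Machine audio projects export in the speech-commands format.
  const recognizer = speechCommands.create(
    'BROWSER_FFT',                  // use the browser's FFT for preprocessing
    undefined,                      // no built-in vocabulary, we load our own
    MODEL_URL + 'model.json',
    MODEL_URL + 'metadata.json'
  );
  await recognizer.ensureModelLoaded();

  const labels = recognizer.wordLabels(); // e.g. ['Background Noise', 'Talking']

  recognizer.listen(async result => {
    // result.scores holds one probability per class, in label order.
    const scores = Array.from(result.scores as Float32Array);
    const best = scores.indexOf(Math.max(...scores));
    console.log(`Heard: ${labels[best]} (${scores[best].toFixed(2)})`);
  }, {
    probabilityThreshold: 0.75,     // ignore uncertain frames
    overlapFactor: 0.5              // how often a new prediction is made
  });
}

startListening();
```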

   

Context description in Node-RED

After training the model, we need to describe what the robot should do when it hears a particular audio context. Besides the detected audio class, it can also react to additional audio features, like the direction of arrival, volume, and the change in volume. A combination of these features can then trigger the behavior of the robot.

Here is an example in which a loud music context relates to a specific mechanical behavior of the robot:

drawings/images/flow.png
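Internally, a context like this boils down to checking a few numbers at once. The sketch below illustrates the idea; the field names (label, volume, direction, volumeChange) and value ranges are assumptions for illustration, not the actual message format of the workshop nodes.

```typescript
// Sketch of the kind of check a detection node performs.
interface AcousticContext {
  label: string;        // class predicted by the TeachableMachines model
  volume: number;       // loudness, e.g. 0 (silent) to 1 (very loud)
  direction: number;    // direction of arrival in degrees (0-360)
  volumeChange: number; // positive = getting louder, negative = getting quieter
}

// "Loud music" context: the model hears music AND the volume is high.
function matchesLoudMusic(ctx: AcousticContext): boolean {
  return ctx.label === 'Music' && ctx.volume > 0.7;
}

// Example reading that would trigger the robot behavior:
const reading: AcousticContext = { label: 'Music', volume: 0.85, direction: 120, volumeChange: 0.1 };
console.log(matchesLoudMusic(reading)); // true
```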

 

 

Double-clicking on the left node will show a menu to specify an acoustic context. All parameters, including the machine learning model from TeachableMachines, can be specified here. For example, we try to detect when music is playing and it is loud.

   

In this node, you upload the model generated from TeachableMachines, select the sound class you are interested in, and set the other parameters. Then, whenever these parameters match what the system hears, the node sends out a trigger that a Robot controller! node can use to start a motion.
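Conceptually, the detection node then acts like a small function that stays silent until the context matches and emits one trigger message when it does. A rough sketch, with an assumed payload shape (in Node-RED this logic lives inside the provided nodes):

```typescript
// Sketch of the trigger hand-off between the detection node and the
// Robot controller! node. The payload shape is an assumption.
type TriggerMsg = { payload: { trigger: true; context: string } };

function onDetection(matched: boolean, send: (msg: TriggerMsg) => void): void {
  if (!matched) {
    return; // the context was not heard: no message, so no motion starts
  }
  // One trigger message is enough to start the motion downstream.
  send({ payload: { trigger: true, context: 'loud music' } });
}

// Example usage:
onDetection(true, msg => console.log('trigger sent:', JSON.stringify(msg)));
```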


   

Robot motion design in Node-RED

The Robot controller! node lets you design the motion of a servo and a motor (more about those two in the next section).

 

Double-clicking on the node labeled Robot Controller! reveals the motion controls, here shown on the right.

   

Whenever a robot motion gets triggered by a detection node (see above), the robot performs the movement shown in the graph over a period of time. The graph can easily be manipulated by clicking and dragging the black/red dots around. The slider on the bottom adjusts the length of the motion. The dropdown menu lets us choose whether to address a Motor or a Servo.
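Behind the scenes, playing back such a motion amounts to interpolating between the dragged points over the chosen duration and sending the sampled value to the actuator at a fixed rate. A sketch under assumed data shapes (the Robot controller! node handles this for you):

```typescript
// Keyframes describe the drawn curve: t runs from 0 to 1 along the graph,
// value from -1 (bottom) to 1 (top). Both are assumptions for illustration.
interface Keyframe { t: number; value: number }

// Linear interpolation between the two surrounding keyframes.
function sampleCurve(curve: Keyframe[], t: number): number {
  if (t <= curve[0].t) return curve[0].value;
  for (let i = 1; i < curve.length; i++) {
    if (t <= curve[i].t) {
      const a = curve[i - 1], b = curve[i];
      const f = (t - a.t) / (b.t - a.t);
      return a.value + f * (b.value - a.value);
    }
  }
  return curve[curve.length - 1].value;
}

// Sample the curve every 20 ms for the chosen duration and hand the value
// to an output callback (which would drive the servo or motor).
function playMotion(curve: Keyframe[], durationMs: number, output: (v: number) => void): void {
  const stepMs = 20;
  const start = Date.now();
  const timer = setInterval(() => {
    const t = (Date.now() - start) / durationMs;
    if (t >= 1) {
      output(sampleCurve(curve, 1));
      clearInterval(timer);
      return;
    }
    output(sampleCurve(curve, t));
  }, stepMs);
}

// Example: ramp up, hold, and come back over 2 seconds.
playMotion(
  [{ t: 0, value: 0 }, { t: 0.3, value: 1 }, { t: 0.7, value: 1 }, { t: 1, value: 0 }],
  2000,
  v => console.log(v.toFixed(2))
);
```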


   

Servos

Finally, this brings us into the physical world. Each robot can use a servo and a motor. While very similar in appearance, their movement is very different. The motor can turn continuously in one direction, while the servo holds a specific angle (between 0° and 180°).

For the Robot motion design, the following holds:

  • For a motor, the graph shows the speed (up => forward & down => backward).
  • For a servo, the graph shows the angle (up => left & down => right); a small mapping sketch follows this list.
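To make the difference concrete, here is a small sketch of how one normalized graph value could be turned into a command for each actuator type; the value ranges and the left/right orientation are assumptions.

```typescript
// Map one graph value (-1 = bottom of the graph, +1 = top) to an actuator command.
type Actuator = 'motor' | 'servo';

function graphValueToCommand(actuator: Actuator, value: number): number {
  // Clamp the drawn value to the expected range first.
  const v = Math.max(-1, Math.min(1, value));
  if (actuator === 'motor') {
    // Motor: the graph is a speed, +1 = full speed forward, -1 = full speed backward.
    return v;
  }
  // Servo: the graph is an angle, mapped from [-1, 1] onto [0°, 180°].
  return (v + 1) * 90;
}

console.log(graphValueToCommand('motor', 0.5)); // 0.5 (half speed forward)
console.log(graphValueToCommand('servo', 0.5)); // 135 (degrees)
```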

The motors we will use are the following.

Servo                        Motor
drawings/images/servo.jpg    drawings/images/continues.gif

Picture credits Adafruit: https://www.adafruit.com/product/169 & https://www.adafruit.com/product/2442

   

Robot design

For the robot's physical design, we will use and extend the ideas from Rob Ives. He has created many paper animations that are articulated in some way. The templates can be downloaded, replicated, and connected to motors and servos.

An example from his website and shop www.robives.com:

Paper Comet