
Tutorial: Robot Stages at Olin


Robot design at Olin

If you'd like to make a robot at Olin but aren't quite sure what this "robot" thing should look like, hopefully this can help. There are a few levels of complexity, each with its own benefits and drawbacks, and each of which is supported in slightly different ways by the robo lab.

1: Low level onboard, most code offboard

At this level, there is low-level, hindbrain code on the robot, for example on an Arduino. This communicates (wirelessly for mobile robots) with your computer, giving you an interface to control the robot and read its sensors. Most of the drones fit in this category (the Pixhawk is the hindbrain, and it gives you a wireless interface so that midbrain and higher code can run on your computer). Edwin also fits here. When we aren't basing a robot on a commercial "hindbrain" element, we use XBee modules to communicate between Arduinos and computers. TODO: link to pages about XBees
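For a concrete feel of what level 1 looks like from the laptop side, here's a minimal sketch (in Python with pyserial) of midbrain code talking to an Arduino hindbrain over a serial link; an XBee simply shows up as another serial port. The port name, baud rate, and message format below are made up for this example, so swap in whatever your hindbrain sketch actually uses.

```python
# Minimal sketch of laptop-side (midbrain) code talking to an Arduino hindbrain
# over a serial link. The port name, baud rate, and ASCII message format are
# assumptions for illustration -- match them to your own hindbrain sketch.
import serial

PORT = "/dev/ttyUSB0"   # hypothetical XBee / USB-serial port
BAUD = 57600            # hypothetical baud rate; must match the Arduino side

with serial.Serial(PORT, BAUD, timeout=1.0) as link:
    # Send a motor command in a simple made-up protocol: "M <left> <right>\n"
    link.write(b"M 120 120\n")

    # Read back one line of sensor data, e.g. "S <range_cm> <heading_deg>\n"
    line = link.readline().decode(errors="ignore").strip()
    if line.startswith("S"):
        _, range_cm, heading_deg = line.split()
        print("range:", range_cm, "cm  heading:", heading_deg, "deg")
```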

2: Midbrain onboard

If you start running into issues with bandwidth or latency, or you just want to move toward a better-packaged autonomous robot, the next step is to put the midbrain level on the robot. Midbrains generally benefit from the better computing power, multi-threading, and standard interfaces that you get with a computer on a chip. The hindbrain-midbrain interface is therefore replaced with a wired connection (generally a serial connection in the robo lab), and the onboard computer communicates with your laptop over WiFi. Using ROS makes this stage of robot much easier to manage because it gives your laptop access to all of the communications within the midbrain, and gives forebrain code that you may be developing on your laptop seamless access to the rest of the robot. Depending on the requirements, you can also develop midbrain-level code on your laptop with this platform setup.
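As a rough sketch of what the ROS side of level 2 can look like, here's a small forebrain node you might run on your laptop while the midbrain runs onboard (with ROS_MASTER_URI pointed at the robot, the onboard topics are visible over WiFi). The topic names and message types here are assumptions, not a prescribed interface, so match them to whatever your midbrain actually publishes.

```python
# Minimal sketch of a laptop-side forebrain node. Topic names (/range, /cmd_vel)
# and message types are assumptions for illustration.
import rospy
from std_msgs.msg import Float32
from geometry_msgs.msg import Twist


def on_range(msg):
    """Stop if the (hypothetical) range sensor sees something close, else creep forward."""
    cmd = Twist()
    cmd.linear.x = 0.0 if msg.data < 0.3 else 0.2
    cmd_pub.publish(cmd)


rospy.init_node("laptop_forebrain")
cmd_pub = rospy.Publisher("/cmd_vel", Twist, queue_size=1)
rospy.Subscriber("/range", Float32, on_range)
rospy.spin()
```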

3: Fully onboard control

At this level, all of the computing is done on the robot, and the robot presents some simple interface to the user as a control panel for commanding it. Given high-level commands, the robot then runs autonomously to execute the mission, indicating its high-level status in a way that's relevant to the end user, and only asking for input that is relevant to the task. (There is a sliding scale between the previous step and this level of autonomy, but the end goal of robots should be a user-focused interface that provides a "dummy-proof" interaction.)
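To make that interaction pattern concrete, here's a purely illustrative sketch: the user gives one high-level mission command, the (pretend) onboard autonomy executes it, and the interface only reports task-relevant status. The mission names, the run_mission() helper, and the status strings are all hypothetical.

```python
# Illustrative sketch of the level-3 pattern: one high-level command in,
# task-relevant status out. Everything here is a stand-in, not a real API.
import time


def run_mission(name):
    """Stand-in for the robot's onboard autonomy executing a mission."""
    for status in ("planning", "navigating", "task complete"):
        yield status
        time.sleep(1)


def control_panel():
    mission = input("Mission [patrol/deliver/return_home]: ").strip()
    for status in run_mission(mission):
        # The user only sees high-level, task-relevant status updates.
        print(f"[{mission}] {status}")


if __name__ == "__main__":
    control_panel()
```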