Hardware and software tools we use

Avery edited this page Nov 7, 2021 · 2 revisions

Hardware

Our current system includes:

  • Polaris GEM e6
    • This is the car (Golf cart? Let's call it a car. It has a license plate.)
    • It's totally electric, with a battery bank in the rear "trunk". I believe it has a small secondary battery for when the car isn't switched on, but I'm not certain.
    • Specs
  • Nvidia Jetson AGX Xavier
    • This is the car's embedded, on-board computer.
    • It's meant for GPU-accelerated computing while also consuming little power and space. It's not nearly as powerful as a gaming PC, but the module itself is the size of a credit card, so what can you expect?
    • The AGX uses an external SSD as a boot drive, since the built-in memory is so small.
    • Specs
  • 2 Velodyne LIDAR pucks
    • I believe these are literally the "Puck" model (formerly "VLP-16"), but I'm not certain.
    • One on front-right, one on rear-left
  • GPS sensor
    • Model uncertain
  • 2 CANable Pro CAN-to-USB interface chips
  • 2 ZED stereo cameras
    • One above front windshield, one above rear windshield
    • Connected to Jetson via USB 3.0 hub
  • Powered USB hub
  • Touchscreen display
    • Connected via HDMI to Jetson
  • Wireless keyboard for debugging

Software tools

Think of our software as less like a stack and more like a layered Matryoshka doll.

Software layers

First layer: Operating System

The operating system manages the low-level tasks of the computer. Windows, Mac OS X, Android, and iOS are all operating systems. Our car uses Ubuntu 18.04, a popular Linux distribution. Using Linux gives us a lot of control over how the computer operates. Some of the software we use is not yet compatible with the latest long-term support release (Ubuntu 20.04), but we'll make that transition eventually.

Second layer: Container environment

Most of our code (ideally all of it) lives inside a Docker container. A Docker container is a sort of virtual environment that is designed to run exactly the same regardless of what system it runs on. It's not quite as full-featured as a virtual machine (VM), but it's a lot faster.

Our Docker environment is actually wrapped in "ADE," which stands for Awesome Development Environment (I did not name it). ADE is a tool developed for Autoware (we'll come to that) that simplifies Docker a good bit.

A running Docker engine can host multiple containers at the same time, and we take advantage of this: a special command, ade start, launches a container for Autoware.Auto alongside others for various tools we use. We include a custom Docker image named vde (Can you guess? Voltron Development Environment), which simply loads any dependencies/tools we want that don't already come from the other containers.

Anyway, most of Docker is beyond me! Maybe someone can explain it to me.

Third layer: ROS

Inside the Docker environment is a beautiful framework called the Robot Operating System. Its biggest job is to manage and distribute all the information between the many parts of our system. Our version of ROS, ROS2 Dashing, is part of a major rebuild of ROS that's still taking place (hence the "2").

ROS contains many other useful tools, like the RViz data visualizer. Most open-source robotics software either supports or outright depends on ROS. It's the biggest software project that you've probably never heard of.

The best part about ROS is that all the code you use is modular and language-agnostic. A program written in C++ can easily understand the results of another program written in Python, or even data sent through a web server as JSON. So programmers don't all need to learn one specific language; they can simply use the tools they're most comfortable with.
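To make the publish/subscribe idea concrete, here's a toy sketch in plain Python. Real ROS2 nodes use rclpy (Python) or rclcpp (C++) with typed messages carried over DDS, not JSON; the topic name and message fields below are made up purely for illustration.

```python
import json

# A toy topic "bus" mimicking ROS's publish/subscribe model.
# Publishers and subscribers never call each other directly --
# they just agree on a topic name and a message format.

subscribers = {}  # topic name -> list of callback functions

def subscribe(topic, callback):
    subscribers.setdefault(topic, []).append(callback)

def publish(topic, message):
    # Serialize to JSON so any consumer -- C++, Python, or a web
    # page -- could parse the same payload. (ROS2 actually ships
    # typed messages over DDS, but the principle is the same.)
    payload = json.dumps(message)
    for callback in subscribers.get(topic, []):
        callback(json.loads(payload))

received = []
subscribe("/vehicle/speed", lambda msg: received.append(msg))
publish("/vehicle/speed", {"mps": 4.2})
print(received)  # [{'mps': 4.2}]
```

The publisher never knows (or cares) who is listening, which is exactly what makes ROS nodes swappable.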

I highly recommend that you watch this crash-course video about ROS. Once you understand ROS, I also encourage you to walk through the official ROS2 tutorials (you should probably install Ubuntu for this, though).

Fourth layer: Autoware.Auto

The Autoware Foundation is a collection of companies, non-profits, and a few universities that develops open-source tools for self-driving cars. Their crown jewel is Autoware.Auto. It's still very much a work in progress, and is a bit messy in places, but Autoware.Auto at the moment provides a number of useful tools, including:

  • An implementation of the Normal Distribution Transform localization algorithm
  • A couple controllers for following steering curves
  • Some tools for handling coordinate transforms (physical reference frames for our software)
  • A lot more... it's still growing. Maybe we can contribute!
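To give a flavor of what the coordinate-transform tools do, here's a minimal 2D example in plain Python. The sensor pose and numbers are invented for illustration; in practice Autoware.Auto and ROS's tf2 handle full 3D transforms between many frames for us.

```python
import math

# Convert a point measured in a sensor's frame into the vehicle's
# frame, given the sensor's pose (yaw + translation) in that frame.
# This rotate-then-translate step is the core of a frame transform.

def transform_point(x, y, frame_yaw, frame_x, frame_y):
    cos_t, sin_t = math.cos(frame_yaw), math.sin(frame_yaw)
    vx = cos_t * x - sin_t * y + frame_x
    vy = sin_t * x + cos_t * y + frame_y
    return vx, vy

# Hypothetical LIDAR mounted 1 m forward of the vehicle origin and
# rotated 90 degrees: a point 2 m "ahead" of the LIDAR ends up
# beside the vehicle, not in front of it.
px, py = transform_point(2.0, 0.0, math.pi / 2, 1.0, 0.0)
print(round(px, 6), round(py, 6))  # 1.0 2.0
```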

Autoware.Auto exists as a Docker image, loaded through ADE. It depends on ROS2, and uses it heavily.

Note: software projects that use ROS exist within ROS "workspaces," which are special directories with a fixed structure. Autoware.Auto has its own workspace. Our custom code (below) has its own workspace. ROS build tools like colcon and catkin can build and test entire workspaces at once. You'll learn more in that ROS crash-course video that you already watched, right?

Final layer: Custom code

In many cases, it's useful to add code on top of Autoware.Auto without modifying it directly. That way we can keep using the "official" releases of Autoware.Auto while still enjoying our secret sauce.

For example, Autoware.Auto includes an interface (called a "bridge") with our driving simulator, LGSVL. But the bridge they provide is a bit funky: it omits lots of useful data from the simulator, like the speed reported by the vehicle's speedometer. I added some simple code that reports this missing data.[^1]

Other pieces: LGSVL and Dash

LGSVL is a realistic and feature-rich driving simulator developed by the formerly-named LG Silicon Valley Lab (hence the name). It simulates LIDAR data, GPS, stereo cameras, and the CAN bus. It has a good physics engine, NPCs, weather simulation, and more. It's so good, in fact, that the Autoware Foundation used it exclusively to develop their parking demonstration, only testing in the real world two weeks before the demo.

LGSVL publishes all its results directly into ROS2. We run it outside the Docker container.[^2] We run our Docker environment in a privileged mode that allows it to share the local network of the host. This is how LGSVL can communicate with software running in the container.

I also wrote a very basic (almost trivial) program called Dash. It's a web application that can connect to ROS and display data from the system. Right now it only displays the steering, throttle, and brake values from the simulated vehicle. Later on, we'll likely want to extend this to create a full-featured web dashboard that could not only visualize data but send commands. Dash is written in TypeScript using Angular.
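For a sense of how a dashboard like Dash gets data out of ROS, here's a sketch of the kind of JSON payload a ROS-to-web bridge (e.g. the rosbridge protocol) might push to the browser. The topic and field names are illustrative assumptions, not Dash's actual API, and the sketch is in Python rather than Dash's TypeScript.

```python
import json

# Package vehicle control values as a JSON string that a browser-side
# dashboard could parse and render. The "/vehicle/state" topic and the
# field names are hypothetical.

def vehicle_state_message(steering, throttle, brake):
    return json.dumps({
        "topic": "/vehicle/state",
        "msg": {"steering": steering, "throttle": throttle, "brake": brake},
    })

payload = vehicle_state_message(steering=-0.1, throttle=0.4, brake=0.0)
print(payload)
```

Because everything crossing the bridge is plain JSON, the web side never needs to link against ROS at all, which is what makes a browser dashboard practical.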


[^1]: I put all this code in a separate ROS workspace called "Luna." The Autoware.Auto workspace and the Luna workspace coexist quite nicely. You can change the name from Luna, by the way.

[^2]: Some of Autoware's documentation will say to run a special version of LGSVL within the Docker container. This modified version is outdated, and it simply doesn't run as well as the latest version running outside of Docker. It also uses a custom bridge that doesn't work very well.
