
Fall Orientation

Introduction

The first thing to know about autonomous cars is that they're hard. Beyond the challenge of building individual components like the path planner or localizer, there's also the complexity of juggling all the pieces at once. Voltron can, and usually will, cause us to scratch our heads over all sorts of things. And it may cause us to grind our teeth. But in embracing this challenge, we can look forward to a few months from now, when UTD's first self-driving car makes its premiere tour, with you in the seat.

Software overview

[Figure: General structure of an ADS, simplified]

[Figure: Demo 1 structure, simplified, showing the parts of our car]

From input to output

Voltron starts with raw sensor data. Right now, this consists purely of Lidar streams.

This data is filtered, downsampled, fused, and formatted to create a clean sensor stream to work with.

The sensor input is processed by our perception algorithms, which take the current sensor data and some stored data (like maps) and calculate the states of the vehicle and surrounding area. Examples of states include the location of the car, the distance between us and a stop sign, our speed, and so on.

These states are fed into our behavior algorithms, which decide what the car should do: where it should drive, when it should turn on the blinker, when it should slow down or speed up, and so on. In other words, while the perception algorithms gauge the current states, the behavior algorithms request state changes.

These state change requests are sent to our controllers, which convert the requests into raw actions, like signals to our EPAS or pedals.
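
To make this flow concrete, here's a toy sketch in Python. All names and numbers below are made up for illustration; in the real stack, each stage is implemented as one or more ROS nodes.

# Toy sketch of the input-to-output flow described above (hypothetical
# names and numbers; the real stack splits these stages into ROS nodes).

def perceive(sensors, hd_map):
    # Perception: combine sensor data with stored data (maps) to get states.
    return {
        "speed_mps": sensors["wheel_speed_mps"],
        "stop_sign_dist_m": hd_map["next_stop_sign_m"] - sensors["odometer_m"],
    }

def behave(states):
    # Behavior: request state changes based on the current states.
    if states["stop_sign_dist_m"] < 20.0:
        return {"target_speed_mps": 0.0}  # request a stop for the sign
    return {"target_speed_mps": states["speed_mps"]}

def control(request):
    # Control: convert the state change request into raw actuator commands.
    braking = request["target_speed_mps"] == 0.0
    return {"throttle": 0.0 if braking else 0.2, "brake": 1.0 if braking else 0.0}

sensors = {"wheel_speed_mps": 5.0, "odometer_m": 90.0}
hd_map = {"next_stop_sign_m": 100.0}
print(control(behave(perceive(sensors, hd_map))))  # {'throttle': 0.0, 'brake': 1.0}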

The Robot Operating System

We split the functions of our system into many components. Each component has its own inputs and outputs, and we can combine these components to create a full system.

We accomplish this using a standard communication framework called the Robot Operating System, or ROS. In ROS, system components are called "nodes," and each node can either publish or subscribe to information on "topics" using standardized data structures called "messages." Nodes with similar functions can be grouped into "packages."

Example: Our raw Lidar data is filtered using a node called point_cloud_filter_transform_node_exe (it's a mouthful). This node subscribes to our raw Lidar stream on the topic lidar_front/points_raw, which publishes 3D data as a PointCloud2 message. It then publishes the filtered data on lidar_front/points_filtered, again as a PointCloud2 message. Each topic and node can have multiple publishers and subscribers.
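
To make "node," "topic," and "message" concrete, here's a minimal sketch of a ROS 2 node in Python. It's hypothetical (our actual filter node comes from Autoware.Auto and does real filtering); this one just subscribes to the raw Lidar topic and republishes each cloud unchanged.

# Minimal sketch of a ROS 2 node: a passthrough "filter" (hypothetical).
import rclpy
from rclpy.node import Node
from sensor_msgs.msg import PointCloud2

class PassthroughFilter(Node):
    def __init__(self):
        super().__init__('passthrough_filter')
        # Publish filtered clouds and subscribe to the raw stream.
        self.pub = self.create_publisher(PointCloud2, 'lidar_front/points_filtered', 10)
        self.sub = self.create_subscription(
            PointCloud2, 'lidar_front/points_raw', self.callback, 10)

    def callback(self, msg):
        # A real filter would crop and downsample here before republishing.
        self.pub.publish(msg)

def main():
    rclpy.init()
    rclpy.spin(PassthroughFilter())

if __name__ == '__main__':
    main()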

In order to easily launch all of the nodes we need, we write a special file called a "launch" file. It's basically a script that configures and runs a set of nodes. For example, this launch file runs everything we need for autonomous steering.
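
For illustration, a bare-bones ROS 2 launch file looks something like this. The package and executable names below are made up; see our real launch files for the actual configuration.

# Hypothetical minimal launch file (package/executable names are made up).
from launch import LaunchDescription
from launch_ros.actions import Node

def generate_launch_description():
    return LaunchDescription([
        Node(package='lidar_driver', executable='driver_node'),
        Node(
            package='steering_controller',  # hypothetical package
            executable='controller_node',
            parameters=[{'max_steering_angle_deg': 30.0}],  # example param
        ),
    ])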

Finally, ROS also includes a number of useful GUI and command-line tools, including:

  • Rviz2, which visualizes our sensor and decision messages ($ rviz2)
  • rqt_graph, which shows the network of nodes and topics currently running ($ rqt_graph)
  • ros2 topic list and ros2 topic info, for inspecting topics (see the examples below)
  • ros2 node list and ros2 node info, for inspecting nodes
  • rqt_console, a GUI tool for viewing console messages
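
For example, to poke around a running system from the command line (topic names as in the Lidar example above):

$ ros2 topic list
$ ros2 topic info /lidar_front/points_filtered
$ ros2 topic echo /lidar_front/points_filtered
$ ros2 node list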

VDE

We store all of the ROS packages we use, both the ones ported from autoware.auto and the ones we've developed on our own, in a single Git repository. This repository also stores our maps and param files. As of Oct 7, 2021, this repo is called VDE. Since all of our project's code is stored in a single location, we call VDE a monorepo.

VDE is a fork of autoware.auto. Custom packages include our steering controller, visualizer, simulation bridge, and message library, but we'd like to add much more.

VDE is structured as a standard ROS workspace. This means that in order to run our stack, you simply need to build it with colcon, source the workspace, then run our launch file to start everything:

$ cd vde
$ colcon build
$ . install/setup.bash
$ ros2 launch main.launch.py

VDE also includes a Docker image that can be used where a host-based approach is impractical, such as on the actual vehicle. Use of the Docker image is discouraged, since building takes longer and the Dockerfile might not always be kept up to date. Usage goes something like this:

$ cd vde
$ docker build -t vde:latest .
$ docker run vde:latest

Hardware overview

Our vehicle is equipped with all of the hardware we need, including cameras, motors, and a powerful computer. The best part: our hardware work is basically complete, and the car is self-driving ready.

Sensors

Our sensors include front and rear Velodyne Pucks (VLP-16s), which are high-powered, 16-line Lidar sensors. Each sensor is connected to its own interface box, which sends the Lidar packets to our computer over UDP.

We also have three stereoscopic (3D) cameras: Two ZED cameras and one ZED 2i. These connect to the computer over USB. They aren't currently used.

Actuators

Our car is equipped with an EPAS system: a motor and computer that together physically turn the steering wheel.

We also have two linear actuators that together manage the throttle and brake pedals. These still need to be fully connected, though.

The Jetson

Our onboard computer is an NVIDIA Jetson AGX Xavier, which we simply call the "Jetson." It's a top-of-the-line embedded computer with powerful computational abilities and energy efficiency.

What's the catch? The Jetson is built for industrial use, which means it's not designed to run things like web browsers and text editors (though it can). Instead, programs should be developed on the Quad, then transferred and built on the Jetson (or built and transferred using cross-compilation).

CAN buses

Voltron uses two CAN buses, each of which allows us to communicate with external hardware.

The "GEM Bus" connects us to the vehicle's factory-installed CAN bus, which publishes things like the vehicle's speed and battery status.

The "Robot Bus" is a CAN bus that we've added to the stock vehicle. It connects to the EPAS, though we're working on connecting it to the peddles as well.

The CAN buses are monitored using two CANable Pros.
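
If you ever need to watch raw traffic on one of these buses, something like the following works. This is a sketch that assumes the CANable is brought up as a socketcan interface named can0; the actual interface name and setup may differ.

# Sketch: print raw CAN frames with python-can (assumes a socketcan
# interface named "can0"; the real interface name may differ).
import can

bus = can.interface.Bus(channel='can0', bustype='socketcan')
for _ in range(10):
    frame = bus.recv(timeout=1.0)  # one CAN frame, or None on timeout
    if frame is not None:
        print(hex(frame.arbitration_id), frame.data.hex())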

Additional hardware

Our car also has a:

  • Powered USB hub
  • Touchscreen monitor
  • Wi-Fi antenna (not configured)
  • GPS chip and antenna (not connected)
  • Audio amplifier (not connected)
  • Haptic feedback motor and driver (x2, not connected)
  • Large LED matrix displays for front and back (not connected)
  • RGB LED strips (not connected)

(I'm sure I'm missing something...)

Electrical diagram

A simplified wiring diagram of the car is available at https://Nova-UTD.github.io/static/electrical.html.

General (boring) tools we use

We use Teams for chats, calls, and announcements.

We use GitHub for code storage and version control.

We use GitHub Pages to host general information for the public, including blog posts, contact information, and videos. The website can be easily edited by members here.

We use Jira for project management: basically a souped-up to-do list. It tracks our progress, current tasks, and backlog.

We use UTD's Box service to store large or miscellaneous files that don't fit within our monorepo. These include everything from rosbags to promotional material.

The Quad, NoMachine, and SVL

Each member has remote access to the Quad, our powerful GPU workstation from Lambda Labs. The Quad is eye-meltingly fast, which makes it useful for simulation, ML training, and other computation-heavy tasks.

We use an open-source, photorealistic driving simulator called SVL to test our code before running it on the actual vehicle. SVL requires a free account from LG. SVL communicates with ROS using a special node called lgsvl_bridge. This node is automatically started by our launch file.

The Quad can be accessed over plain SSH, but it also runs a NoMachine server, which allows users to access graphical applications like SVL. You can download the NoMachine client from https://www.nomachine.com/download.

Note that you need to be connected to UTD's network to access the Quad. Remote users can connect to UTD's VPN for access (instructions here).

Development cycle: Scrum

We're trying out scrum, which is an approach to Agile software development. We don't claim to be scrum experts, so we'll use a simplified version.

All of our Issues (tasks) start in the Backlog, which is a big list of everything that needs to get done.

Once every 3 weeks (this can vary), some Issues are moved from the Backlog into a Sprint, which is a focused, fast-paced set of Issues that should be completed together. All Issues that aren't in the Sprint should be ignored until future sprints whenever possible.

During weekly meetings, members give updates on how their assigned Issues within the Sprint are progressing. At the end of the Sprint (after 3 weeks), we discuss how the Sprint went, how we can improve, and then we start the next Sprint by choosing more issues from the Backlog. The cycle starts again.

Issues within the current Sprint are organized on a board with four simple columns: To-Do, In Progress, Done, and Deferred. As each Issue progresses, the assignee moves the Issue to the appropriate column and adds comments that update the team on their progress.

Member expectations

  • Weekly meetings are mandatory.
    • If you can't come, let Will know ahead of time.
    • Those absent need to share their updates in writing.
  • Each member should contribute at least 5 hours of work per week.
    • This includes meeting attendance.
    • Reasonable exceptions will be made for final exams, holidays, and so on. Just let Will know.
  • Check Teams at least twice daily.
  • Your work should be clearly documented.
    • Inline comments, thorough READMEs
    • Occasional blog posts

Documenting your work

Communicating how your code works is an important way to help your teammates and the public. There are four key parts to this:

Clean code

Autonomous cars are safety-critical systems. Needless to say, clean code is important! This doesn't just mean safe and stable operation, but also proper formatting and inline comments. Don't take shortcuts; write your code properly the first time.

Package documentation

Each ROS package should contain a completed package.xml, along with a README.md that contains the package's purpose, inputs, outputs, and a summary of operation.
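
A README.md that covers those four points might be laid out like this (a suggested skeleton, not a mandated template; the topic names are illustrative):

# my_package

## Purpose
One or two sentences on what this package does and why it exists.

## Inputs
- lidar_front/points_raw (sensor_msgs/PointCloud2)

## Outputs
- lidar_front/points_filtered (sensor_msgs/PointCloud2)

## Summary of operation
A few sentences on how the node(s) inside actually work.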

External posts

On occasion, the PR Director will ask you to write a simple blog post to explain what you're working on. This could mean a general explanation of state machines, your approach to stop sign detection, and so on. These posts can be as informal or technical as you like. Communicating your ideas to a general audience is a good habit.

Posts are generated automatically using Jekyll from Markdown files. These files have a special header at the top called the "front matter." Refer to existing posts for examples. Creating a new post is as simple as adding one of these Markdown files to our posts folder.
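
A typical front matter block looks something like this (the fields below are common Jekyll defaults; copy the exact fields from an existing post):

---
layout: post
title: "How our stop sign detection works"
date: 2021-10-30
---

The body of the post, written in ordinary Markdown, goes below the front matter.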

Wiki pages

Topics that apply to more than one package but aren't relevant to the general public can be added to our team wiki. Every member has full access to this, so simply create or edit as many pages as you want.

Glossary

ADS: Autonomous Driving System

base_link: A reference frame located at the midpoint of the car's rear axle, oriented toward the front of the vehicle.

CAN: Controller Area Network, a standard communication protocol for automotive hardware. Comparable to I2C, SPI.

colcon: A build tool that wraps multiple build systems, including CMake. Think of it as a very high-level compiler.

EPAS: Electronic Power Assisted Steering, the motor and computer that turn the steering wheel.

Lanelet: A standard data type representing a single map element, such as a road segment. Part of the broader Lanelet2 format for labelled maps, and of the Lanelet2 library.

NDT: Normal Distributions Transform, a point cloud registration algorithm we use for localization

PCD: Point Cloud Data, a file format for storing point clouds (collections of points)

Rosbag: A recording of all the messages sent through ROS over a certain time. This allows us to record things like Lidar data and camera streams in the real world, then play them back later on the Quad.

SVL: Photorealistic driving simulator developed by LG's Silicon Valley Lab.

URDF: Unified Robot Description Format. Describes the visual, structural, and kinematic properties of the vehicle.
