Writing and Running Tests

Joshua Williams edited this page Nov 12, 2021 · 1 revision

Like any good software project, we strive to automatically test as much of our code as possible. Our primary tool for this is colcon, the same tool we use to build.

Writing Tests

CMakeLists.txt

Many examples of how to write tests already exist in our various packages. Here's the general format of how to include tests in your CMakeLists.txt file:

if(BUILD_TESTING)
  find_package(ament_lint_auto REQUIRED)   # General test dependency
  ament_lint_auto_find_test_dependencies() # Find any test dependencies declared in package.xml

  ament_add_gtest(tests                    # Create a test by linking test files
    test/test_file_1.cpp                   # These files contain gtests
    test/test_file_2.cpp)                  # See later in this page for information on how to create them

  ament_target_dependencies(tests          # Declare your dependencies
    rclcpp voltron_msgs                    # This list will change from project to project!
    voltron_test_utils std_msgs)
  
  target_link_libraries(tests              # Link to the code you are testing as well as
    your_project_lib gtest_main)           # our testing framework, GoogleTest
endif()

package.xml

Not much needs to be done in package.xml, but make sure to declare your test dependencies. Do this in the same section where build and runtime dependencies are declared, using test_depend tags, like so:

<test_depend>ament_lint_auto</test_depend>
<test_depend>ament_cmake_gtest</test_depend>
<test_depend>voltron_test_utils</test_depend>

Those first two are dependencies of almost all packages. voltron_test_utils is a common dependency, but not required. Make sure to add any other dependencies your project has.

Test Files

Test source files should be placed at package_root/test/test_name.cpp. These are Google Test files; see the Google Test documentation for more information on the tools available to you in these files. Alternatively, many simple examples already exist in our project.

There's no hard and fast rule on what constitutes a single test file. However, a good rule of thumb is that for each testable class in the main source, you should have one test source file.

Running Tests

Luckily, colcon automates our testing for us much like it automates our build process. However, the process can be a bit arcane and difficult to get used to.

Step 1 - Source ROS

If you have not already sourced ROS in your current terminal, do so before continuing. If you don't do this, you'll just get lots of failures and obscure error messages:

. /opt/ros/foxy/setup.bash

Note: I have not found it necessary to source the navigator workspace before testing. However, as a troubleshooting step if things are not working, you can try running . <path_to_navigator>/install/setup.bash to source our workspace.

Step 2 - Build

If you haven't built your package since the last changes were made, do that now using one of the following commands:

colcon build                                  # Builds all packages. Takes a while! 
colcon build --packages-select your_package   # Builds only your_package. Usually much faster. 

Step 3 - Test your package

To test a single package, use the following (somewhat obscure) command:

colcon test --packages-select your_package --event-handlers console_direct+

The --event-handlers console_direct+ argument tells colcon to stream the tests' output directly to your console, rather than only displaying a summary of whether tests passed or failed. You will still get quite a lot of output, but if you look closely, the actual errors encountered will be embedded in it. It takes some getting used to, but eventually becomes quite natural.

Step 4 (Optional) - Test all packages at once

If you're just looking to find which packages are failing, you can use the following command:

colcon test

This will test all of our packages and report which ones fail. You can then use the above command to dig deeper into what went wrong with each of them.
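If you want to review results after the run rather than scrolling through live output, colcon also records test results that can be inspected afterwards. A sketch of that workflow, assuming a standard colcon installation run from the workspace root:

```shell
# Run all tests; colcon writes result files under build/ as it goes.
colcon test

# Summarize the recorded results, printing details for any failures.
colcon test-result --verbose
```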

At the time of writing, seven of our packages have failing tests that do not negatively impact our vehicle's ability to drive. If you run this command, expect these packages to fail, but don't worry - you didn't do anything wrong. These packages are:

behavior_planner_nodes controller_common_nodes ndt_nodes voltron_can vt_steering_controller vt_vehicle_bridge vt_viz

We will look into this issue in the future and hopefully get these failures resolved, but for now, it's just a minor annoyance and therefore a low priority.
