/!\ Catkin section is under development.

Obtaining gtest

Gtest has been converted to a rosdep and is available in ros_comm.

Google Test (gtest)

We use GoogleTest (gtest) to write unit tests in C++. The official gtest documentation lives in the GoogleTest repository: https://github.com/google/googletest

Also refer to http://www.ibm.com/developerworks/aix/library/au-googletestingframework.html

These pages give some tips and examples for writing and running unit tests in ROS code.

For language-independent policy and strategy for testing, see UnitTesting.

Code structure

By convention, test programs for a package go in a test subdirectory. For a simple package, it is usually sufficient to write a single test file, say test/utest.cpp.

Writing tests

The basic structure of a test looks like this:

// Bring in my package's API, which is what I'm testing
#include "foo/foo.h"
// Bring in ROS, since the main() below initializes a node
#include <ros/ros.h>
// Bring in gtest
#include <gtest/gtest.h>

// Declare a test
TEST(TestSuite, testCase1)
{
  // <test things here, calling EXPECT_* and/or ASSERT_* macros as needed>
}

// Declare another test
TEST(TestSuite, testCase2)
{
  // <test things here, calling EXPECT_* and/or ASSERT_* macros as needed>
}

// Run all the tests that were declared with TEST()
int main(int argc, char **argv){
  testing::InitGoogleTest(&argc, argv);
  ros::init(argc, argv, "tester");
  ros::NodeHandle nh;
  return RUN_ALL_TESTS();
}

Note that you only need to initialize ROS (the ros::init() and ros::NodeHandle lines above) if your tests are using ROS.

Test naming conventions

Each test is a "test case," and test cases are grouped into "test suites." It's up to you to declare and use test suites appropriately. Many packages will need just one test suite, but you can use more if it makes sense.
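
For example, a package with two distinct areas of functionality might group its tests into two suites. A minimal, self-contained sketch (the suite and test names here are invented for illustration):

#include <string>
#include <gtest/gtest.h>

// Tests for one area of the package go in one suite...
TEST(StringSuite, emptyStringHasZeroLength)
{
  EXPECT_EQ(0u, std::string("").length());
}

// ...and tests for another area go in a second suite.
TEST(NumberSuite, integerDivisionTruncates)
{
  EXPECT_EQ(3, 10 / 3);
}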

Building and running tests

If you are using catkin, add your test to your package's CMakeLists.txt like so:

catkin_add_gtest(utest test/utest.cpp)

This call will cause the utest executable to be built during the main build (a simple make) and will put it in TBD. Note that, unlike rosbuild, you may not specify a directory hierarchy in the target name.

You can run the test with make test. You can also just run the test executable directly, e.g.:

./bin/test/utest

Remember that roscore needs to be running on your machine; otherwise, the test will silently wait until it is.

If you are using catkin tools, see this page for more information.

For more information on the catkin_add_gtest() macro, see catkin/CMakeLists.txt.


NOTE: Do not declare a dependency on gtest in your package manifest. The catkin_add_gtest() macro will pull in the necessary flags.

NOTE: If you want to build an executable against gtest, but not declare it to be a test on its own (e.g., when you intend the executable to be run by rostest), use catkin_add_executable_with_gtest().

NOTE: On Ubuntu 11.10 you will get an "undefined reference to `pthread_getspecific'" error. To fix it, add the "-pthread" compiler flag by placing the following line just above the catkin_add_gtest call:

SET(CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} -pthread")

If you are using rosbuild instead, add your test to your package's CMakeLists.txt like so:

rosbuild_add_gtest(test/utest test/utest.cpp)

This call will cause the utest executable to be built during the main build (a simple make) and will put it in the bin/test subdirectory (you will need to create this directory if it doesn't exist).

You can run the test with TBD. See this ROS answers post for more information.

For more information on the rosbuild_add_gtest macro, see rosbuild/CMakeLists.


NOTE: Do not declare a dependency on gtest in your package manifest. The rosbuild_add_gtest() macro will pull in the necessary flags.

NOTE: If you want to build an executable against gtest, but not declare it to be a test on its own (e.g., when you intend the executable to be run by rostest), use rosbuild_add_executable() followed by rosbuild_add_gtest_build_flags(). See rosbuild/CMakeLists for details.

NOTE: On Ubuntu 11.10 you will get an "undefined reference to `pthread_getspecific'" error. To fix it, add the "-pthread" compiler flag by placing the following line just above the rosbuild_add_gtest call:

SET(CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} -pthread")

Test output

The console output from a test will look something like this:

[==========] Running 3 tests from 1 test case.
[----------] Global test environment set-up.
[----------] 3 tests from MapServer
[ RUN      ] MapServer.loadValidPNG
[       OK ] MapServer.loadValidPNG
[ RUN      ] MapServer.loadValidBMP
[       OK ] MapServer.loadValidBMP
[ RUN      ] MapServer.loadInvalidFile
[       OK ] MapServer.loadInvalidFile
[----------] Global test environment tear-down
[==========] 3 tests from 1 test case ran.
[  PASSED  ] 3 tests.

When run as make test, each test will also generate an XML file, in:

  • $ROS_ROOT/test/test_results/<package-name> if you're running ROS cturtle

  • ~/.ros/test_results if you're running ROS Diamondback.

  • To check where they are generated on your system, you can run: rosrun rosunit test_results_dir.py

The generated XML looks something like this:

<?xml version="1.0" encoding="UTF-8"?>
<testsuite tests="3" failures="1" disabled="0" errors="0" time="25" name="AllTests">
  <testsuite name="MapServer" tests="3" failures="1" disabled="0" errors="0" time="25">
    <testcase name="loadValidPNG" status="run" time="24" classname="MapServer">
      <failure message="/Users/gerkey/code/ros-pkg/world_models/map_server/test/utest.cpp:56&#x0A;Failed&#x0A;Uncaught exception : This is OK on OS X" type=""/>
    </testcase>
    <testcase name="loadValidBMP" status="run" time="0" classname="MapServer" />
    <testcase name="loadInvalidFile" status="run" time="1" classname="MapServer" />
  </testsuite>
</testsuite>

We will eventually have a way of parsing and rendering these results into a web-based dashboard.

Examples

Simple case: calling functions

The following example is taken from the math_utils package. It shows how to test for correct results from various function calls.

Note the use of EXPECT_* macros instead of ASSERT_* macros. Whereas an ASSERT_* macro records a failure and immediately aborts the current test, the EXPECT_* version records the failure and continues. The latter behavior is usually what you want, because you want to see every failure in a single run. The exception is when a later check depends on an earlier one having succeeded; see the short sketch after the example.

#include "math_utils/MathExpression.h"
#include "math_utils/math_utils.h"
#include <gtest/gtest.h>

// Compare the compiler's evaluation of an expression with the parser's
// evaluation of the same expression, stringized by the preprocessor (#a).
#define TEST_EXPRESSION(a) EXPECT_EQ((a), meval::EvaluateMathExpression(#a))

TEST(MathExpressions, operatorRecognition){
  EXPECT_TRUE(meval::ContainsOperators("+"));
  EXPECT_TRUE(meval::ContainsOperators("-"));
  EXPECT_TRUE(meval::ContainsOperators("/"));
  EXPECT_TRUE(meval::ContainsOperators("*"));
  EXPECT_FALSE(meval::ContainsOperators("1234567890qwertyuiop[]asdfghjkl;'zxcvbnm,._=?8"));
}

TEST(MathExpressions, basicOperations){
  EXPECT_EQ(5, meval::EvaluateMathExpression("2+3"));
  EXPECT_EQ(5, meval::EvaluateMathExpression("2 + 3"));
  EXPECT_EQ(10, meval::EvaluateMathExpression("20/2"));
  EXPECT_EQ(-4, meval::EvaluateMathExpression("6 - 10"));
  EXPECT_EQ(24, meval::EvaluateMathExpression("6 * 4"));
}

TEST(MathExpressions, complexOperations){
  TEST_EXPRESSION(((3 + 4) / 2.0) + 10);
  TEST_EXPRESSION(7 * (1 + 2 + 3 - 2 + 3.4) / 12.7);
  TEST_EXPRESSION((1 + 2 + 3) - (8.0 / 10));
}

TEST(MathExpressions, UnaryMinus){
  TEST_EXPRESSION(-5);
}

TEST(MathExpressions, badInput){
  //TODO - figure out what good error behavior is and test for it properly
  //EXPECT_EQ(0, meval::EvaluateMathExpression("4.1.3 - 4.1"));
  //EXPECT_EQ(0, meval::EvaluateMathExpression("4.1.3"));
}

TEST(MathUtils, basicOperations){
  EXPECT_EQ(math_utils::clamp<int>(-10, 10, 20), 10);
  EXPECT_EQ(math_utils::clamp<int>(15, 10, 20), 15);
  EXPECT_EQ(math_utils::clamp<int>(25, 10, 20), 20);
}

int main(int argc, char **argv){
  testing::InitGoogleTest(&argc, argv);
  return RUN_ALL_TESTS();
}
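
As noted above, a fatal ASSERT_* is the right choice when a later check would be meaningless, or unsafe, after an earlier failure. A minimal, self-contained sketch (not from the math_utils package):

#include <vector>
#include <gtest/gtest.h>

TEST(VectorSuite, accessAfterPush)
{
  std::vector<int> v;
  v.push_back(42);
  // Fatal: if the size is wrong, indexing v below would be invalid,
  // so abort this test immediately.
  ASSERT_EQ(1u, v.size());
  // Non-fatal: report a wrong value, but keep running later checks.
  EXPECT_EQ(42, v[0]);
}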

Handling exceptions

Because Google doesn't use exceptions, gtest doesn't handle them.

If you're testing code that can throw exceptions, you need to try/catch them yourself and then use the ADD_FAILURE and/or FAIL macros as appropriate: ADD_FAILURE records a non-fatal failure, while FAIL records a fatal failure.

The following example is taken from the map_server package. It shows how to test a library that can throw exceptions. Note how we test both for unexpected exceptions (fail if we catch one) and expected exceptions (fail if we don't catch one).

Also note that the SUCCEED macro is purely documentary; if you want to terminate the test with success partway through, you must also explicitly call return.

#include <stdexcept> // for std::runtime_error
#include <gtest/gtest.h>
#include "map_server/image_loader.h"
#include "test_constants.h"

/* Try to load a valid PNG file.  Succeeds if no exception is thrown, and if
 * the loaded image matches the known dimensions and content of the file.
 *
 * This test can fail on OS X, due to an apparent limitation of the
 * underlying SDL_Image library. */
TEST(MapServer, loadValidPNG)
{
  try
  {
    std_srvs::StaticMap::response map_resp;
    map_server::loadMapFromFile(&map_resp, g_valid_png_file, g_valid_image_res, false);
    EXPECT_FLOAT_EQ(map_resp.map.resolution, g_valid_image_res);
    EXPECT_EQ(map_resp.map.width, g_valid_image_width);
    EXPECT_EQ(map_resp.map.height, g_valid_image_height);
    for(unsigned int i=0; i < map_resp.map.width * map_resp.map.height; i++)
      EXPECT_EQ(g_valid_image_content[i], map_resp.map.data[i]);
  }
  catch(...)
  {
    ADD_FAILURE() << "Uncaught exception : " << "This is OK on OS X";
  }
}

/* Try to load a valid BMP file.  Succeeds if no exception is thrown, and if
 * the loaded image matches the known dimensions and content of the file. */
TEST(MapServer, loadValidBMP)
{
  try
  {
    std_srvs::StaticMap::response map_resp;
    map_server::loadMapFromFile(&map_resp, g_valid_bmp_file, g_valid_image_res, false);
    EXPECT_FLOAT_EQ(map_resp.map.resolution, g_valid_image_res);
    EXPECT_EQ(map_resp.map.width, g_valid_image_width);
    EXPECT_EQ(map_resp.map.height, g_valid_image_height);
    for(unsigned int i=0; i < map_resp.map.width * map_resp.map.height; i++)
      EXPECT_EQ(g_valid_image_content[i], map_resp.map.data[i]);
  }
  catch(...)
  {
    ADD_FAILURE() << "Uncaught exception";
  }
}

/* Try to load an invalid file.  Succeeds if a std::runtime_error exception
 * is thrown. */
TEST(MapServer, loadInvalidFile)
{
  try
  {
    std_srvs::StaticMap::response map_resp;
    map_server::loadMapFromFile(&map_resp, "foo", 0.1, false);
  }
  catch(std::runtime_error &e)
  {
    SUCCEED();
    return;
  }
  catch(...)
  {
    FAIL() << "Uncaught exception";
  }
  ADD_FAILURE() << "Didn't throw exception as expected";
}

int main(int argc, char **argv)
{
  testing::InitGoogleTest(&argc, argv);
  return RUN_ALL_TESTS();
}
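
gtest also provides exception assertions (EXPECT_THROW, EXPECT_ANY_THROW, EXPECT_NO_THROW, and their ASSERT_* counterparts), which can replace the manual try/catch pattern when your gtest version supports them. A minimal sketch, reusing the map_server API from the example above:

TEST(MapServer, loadInvalidFileThrows)
{
  std_srvs::StaticMap::response map_resp;
  // Fails unless loadMapFromFile throws std::runtime_error.
  EXPECT_THROW(map_server::loadMapFromFile(&map_resp, "foo", 0.1, false),
               std::runtime_error);
}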

Using test fixtures for persistent data

You can create a persistent data object by creating a "test fixture": a class that inherits from testing::Test. Tests that use a test fixture are declared with the TEST_F macro instead of the TEST macro. The Google Test primer is highly recommended reading for understanding the powerful uses of test fixtures.

The most common reason to use a test fixture is when testing a function requires a tedious setup of variables and conditions that several test cases could reuse. Below is an example of a class that can add and multiply Vector3 geometry messages.

#ifndef TALKER_H
#define TALKER_H

#include <ros/ros.h>
#include "geometry_msgs/Vector3.h"

class Talker{
    ros::NodeHandle nh;
    ros::Publisher pub;
public:
    Talker();
    void pubMsg(geometry_msgs::Vector3&& v);
    geometry_msgs::Vector3 add(const geometry_msgs::Vector3& a, const geometry_msgs::Vector3& b);
    geometry_msgs::Vector3 mult(const geometry_msgs::Vector3& a, const geometry_msgs::Vector3& b);
};

#endif // TALKER_H

Because we would like to test the two functions separately with the same parameters, we can take advantage of a test fixture to set up and tear down our variables between each test.

#include <gtest/gtest.h>
#include "talker/talker.h"

// Helper function to create a geometry_msgs::Vector3
auto createVec = [](double x, double y, double z) {
  geometry_msgs::Vector3 v;
  v.x = x;
  v.y = y;
  v.z = z;
  return v;
};

class TalkerFixture : public ::testing::Test {
protected:
  geometry_msgs::Vector3 a;
  geometry_msgs::Vector3 b;

  // Setup: the constructor runs before each TEST_F below
  TalkerFixture() {
    a = createVec(1,2,3);
    b = createVec(2,2,2);
  }
};

TEST_F(TalkerFixture, TestAdder) {
  Talker talk;
  // a and b are inherited from TalkerFixture
  geometry_msgs::Vector3 vec = talk.add(a,b);
  geometry_msgs::Vector3 ans = createVec(3,4,5);

  ASSERT_EQ(ans.x, vec.x) << "Talker add x's failed";
  ASSERT_EQ(ans.y, vec.y) << "Talker add y's failed";
  ASSERT_EQ(ans.z, vec.z) << "Talker add z's failed";
}

TEST_F(TalkerFixture, TestMult) {
  Talker talk;
  // a and b are inherited from TalkerFixture
  geometry_msgs::Vector3 vec = talk.mult(a,b);
  geometry_msgs::Vector3 ans = createVec(2,4,6);

  ASSERT_EQ(ans.x, vec.x) << "Talker mult x's failed";
  ASSERT_EQ(ans.y, vec.y) << "Talker mult y's failed";
  ASSERT_EQ(ans.z, vec.z) << "Talker mult z's failed";
}

int main(int argc, char** argv) {
  ::testing::InitGoogleTest(&argc, argv);
  ros::init(argc, argv, "test_talker");
  ros::NodeHandle nh;
  return RUN_ALL_TESTS();
}
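
Instead of (or in addition to) the fixture's constructor, gtest lets a fixture override the virtual SetUp() and TearDown() methods, which run before and after each test; TearDown() is the place for cleanup that must happen even when a test aborts early. A minimal sketch of the same fixture using these hooks:

class TalkerFixture : public ::testing::Test {
protected:
  geometry_msgs::Vector3 a;
  geometry_msgs::Vector3 b;

  // Runs before each TEST_F that uses this fixture
  void SetUp() override {
    a = createVec(1,2,3);
    b = createVec(2,2,2);
  }

  // Runs after each TEST_F, even if the test failed
  void TearDown() override {
    // release any resources acquired in SetUp() here
  }
};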

Update package.xml

<package format="2">
  <test_depend>rosunit</test_depend>
</package>

Update CMakeLists.txt

Notice that the gtest target is added as an executable like any other and must be linked against its dependencies, i.e. the Talker library.

## Add gtest based cpp test target and link libraries
catkin_add_gtest(${PROJECT_NAME}-test test/talker_tests.cpp)
if(TARGET ${PROJECT_NAME}-test)
  target_link_libraries(${PROJECT_NAME}-test ${PROJECT_NAME})
endif()

Run tests

If your gtest initializes a ROS node via ros::init(), you must start a roscore separately, then run all tests:

roscore
catkin_make run_tests

Or, to run a specific package's tests, tab-complete to find the test target you'd like to run:

roscore
catkin_make run_tests<TAB><TAB>

For more information, see the gtest documentation.
