
Package Summary

Released Continuous integration Documented

The robot_localization package provides nonlinear state estimation through sensor fusion of an arbitrary number of sensors.

Use GitHub to report bugs or submit feature requests.

NEWS:

  • If you're new to robot_localization, check out the 2015 ROSCon talk for some pointers on getting started.

What is robot_localization?

robot_localization is a collection of state estimation nodes, each of which is an implementation of a nonlinear state estimator for robots moving in 3D space. It contains two state estimation nodes, ekf_localization_node and ukf_localization_node. In addition, robot_localization provides navsat_transform_node, which aids in the integration of GPS data.

robot_localization was developed by Charles River Analytics, and is part of the cra-ros-pkg package collection.

Further details can be found in this paper:

@inproceedings{MooreStouchKeneralizedEkf2014,
  author    = {T. Moore and D. Stouch},
  title     = {A Generalized Extended Kalman Filter Implementation for the Robot Operating System},
  year      = {2014},
  month     = {July},
  booktitle = {Proceedings of the 13th International Conference on Intelligent Autonomous Systems (IAS-13)},
  publisher = {Springer}
}

Features

All the state estimation nodes in robot_localization share common features, namely:

  • Fusion of an arbitrary number of sensors. The nodes do not restrict the number of input sources. If, for example, your robot has multiple IMUs or multiple sources of odometry information, the state estimation nodes within robot_localization can support all of them.

  • Support for multiple ROS message types. All state estimation nodes in robot_localization can take in nav_msgs/Odometry, sensor_msgs/Imu, geometry_msgs/PoseWithCovarianceStamped, or geometry_msgs/TwistWithCovarianceStamped messages.

  • Per-sensor input customization. If a given sensor message contains data that you don't want to include in your state estimate, the state estimation nodes in robot_localization allow you to exclude that data on a per-sensor basis.

  • Continuous estimation. Each state estimation node in robot_localization begins estimating the vehicle's state as soon as it receives a single measurement. If there is a gap in the sensor data (i.e., a long period in which no data is received), the filter will continue to estimate the robot's state via an internal motion model.

All state estimation nodes track the 15-dimensional state of the vehicle: x, y, z, roll, pitch, yaw, their respective velocities, and x, y, and z linear acceleration.

Installing robot_localization

Groovy, via package manager:

sudo apt-get install ros-groovy-robot-localization

From source:

source /opt/ros/groovy/setup.bash
cd catkin_ws/src
git clone https://github.com/cra-ros-pkg/robot_localization --branch groovy-devel
cd ..
catkin_make -DCMAKE_BUILD_TYPE=Release
source devel/setup.bash

Hydro, via package manager:

sudo apt-get install ros-hydro-robot-localization

From source:

source /opt/ros/hydro/setup.bash
cd catkin_ws/src
git clone https://github.com/cra-ros-pkg/robot_localization --branch hydro-devel
cd ..
catkin_make -DCMAKE_BUILD_TYPE=Release
source devel/setup.bash

Indigo, via package manager:

sudo apt-get install ros-indigo-robot-localization

From source:

source /opt/ros/indigo/setup.bash
cd catkin_ws/src
git clone https://github.com/cra-ros-pkg/robot_localization --branch indigo-devel
cd ..
catkin_make -DCMAKE_BUILD_TYPE=Release
source devel/setup.bash

Jade, via package manager:

sudo apt-get install ros-jade-robot-localization

From source:

source /opt/ros/jade/setup.bash
cd catkin_ws/src
git clone https://github.com/cra-ros-pkg/robot_localization --branch jade-devel
cd ..
catkin_make -DCMAKE_BUILD_TYPE=Release
source devel/setup.bash

Kinetic, via package manager:

sudo apt-get install ros-kinetic-robot-localization

From source:

source /opt/ros/kinetic/setup.bash
cd catkin_ws/src
git clone https://github.com/cra-ros-pkg/robot_localization --branch kinetic-devel
cd ..
catkin_make -DCMAKE_BUILD_TYPE=Release
source devel/setup.bash

State Estimation Nodes

ekf_localization_node

ekf_localization_node is an implementation of an extended Kalman filter. It uses a 3D vehicle model to project the state forward in time, and corrects that projected estimate using perceived sensor data.

ukf_localization_node

ukf_localization_node is an implementation of an unscented Kalman filter. It uses a set of carefully selected sigma points to project the state through the nonlinear transition function, and then uses those projected sigma points to recover the state estimate and covariance. This eliminates the use of Jacobians and makes the filter more stable. However, it is also more computationally taxing than ekf_localization_node.

Using robot_localization's State Estimation Nodes

The relatively large number of parameters available to the nodes in robot_localization makes launch files the preferred method for starting any of them. The package contains template launch files to help users get started.

Parameters

The state estimation nodes in the robot_localization package have a number of parameters that can be used to configure their behavior. The parameters are described below. In addition, we have provided sample ekf_template.launch and ukf_template.launch files in robot_localization/launch. The files are heavily commented for clarity.
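To illustrate, a minimal launch file for ekf_localization_node fusing a single wheel odometry source might look like the sketch below (the topic name and fused variables are placeholders; adapt them to your robot):

<launch>
  <node pkg="robot_localization" type="ekf_localization_node" name="ekf_localization">
    <param name="frequency" value="30"/>
    <param name="sensor_timeout" value="0.1"/>
    <param name="odom0" value="wheel_encoder/odometry"/>
    <!-- Fuse x velocity, y velocity, and yaw velocity from the odometry source -->
    <rosparam param="odom0_config">[false, false, false,
                                    false, false, false,
                                    true,  true,  false,
                                    false, false, true,
                                    false, false, false]</rosparam>
  </node>
</launch>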

Standard Parameters

  • ~frequency - The real-valued frequency, in Hz, at which the filter produces a state estimate. Note that the filter will not begin computation until it receives at least one message from one of the inputs.

  • ~sensor_timeout - The real-valued period, in seconds, after which we consider any sensor to have timed out. In this event, we carry out a prediction cycle on the filter without correcting it. This parameter can be thought of as the inverse of the minimum frequency at which the filter will generate new output.

  • ~two_d_mode - If your robot is operating in a planar environment and you're comfortable with ignoring the subtle variations in the ground (as reported by an IMU), then set this to true. It will fuse 0 values for all 3D variables (Z, roll, pitch, and their respective velocities and accelerations). This keeps the covariances for those values from exploding while ensuring that your robot's state estimate remains affixed to the X-Y plane.
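    For example, a robot operating on flat ground might enable this mode as follows:

        <param name="two_d_mode" value="true"/>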

  • ~map_frame, ~odom_frame, ~base_link_frame, ~world_frame - These parameters define the operating "mode" for robot_localization. REP-105 specifies three principal coordinate frames: map, odom, and base_link. base_link is the coordinate frame that is affixed to the robot. The robot's position in the odom frame will drift over time, but is accurate in the short term and should be continuous. The map frame, like the odom frame, is a world-fixed coordinate frame, and while it contains the most globally accurate position estimate for your robot, it is subject to discrete jumps, e.g., due to the fusion of GPS data. Here is how to use these parameters:

    1. Set the map_frame, odom_frame, and base_link_frame parameters to the appropriate frame names for your system.

      • If your system does not have a map_frame, just remove it, and make sure world_frame is set to the value of odom_frame.

    2. If you are only fusing continuous position data such as wheel encoder odometry, visual odometry, or IMU data, set world_frame to your odom_frame value. This is the default behavior for the state estimation nodes in robot_localization, and the most common use for it.

    3. If you are fusing global absolute position data that is subject to discrete jumps (e.g., GPS or position updates from landmark observations) then:
      1. Set your world_frame to your map_frame value

      2. MAKE SURE something else is generating the odom->base_link transform. Note that this can even be another instance of a robot_localization state estimation node. However, that instance should not fuse the global data.

    The default values for map_frame, odom_frame, and base_link_frame are map, odom, and base_link, respectively. The world_frame parameter defaults to the value of odom_frame.
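    For example, an instance that fuses GPS data (and therefore operates in the map frame) might use the following configuration, shown here with the default frame names:

        <param name="map_frame" value="map"/>
        <param name="odom_frame" value="odom"/>
        <param name="base_link_frame" value="base_link"/>
        <param name="world_frame" value="map"/>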

  • ~transform_time_offset - Some packages require that your transforms are future-dated by a small time offset. The value of this parameter will be added to the timestamp of the map->odom or odom->base_link transform being generated by the state estimation nodes in robot_localization.

  • ~odomN, ~twistN, ~imuN, ~poseN - For each sensor, users need to define one such parameter, named according to the message type. For example, if we define one source of Imu messages and two sources of Odometry messages, the configuration would look like this:

        <param name="imu0" value="robot/imu/data"/>
        <param name="odom0" value="wheel_encoder/odometry"/>
        <param name="odom1" value="visual_odometry/odometry"/>

    The index for each parameter name is 0-based (e.g., odom0, odom1, etc.) and must be defined sequentially (e.g., do NOT use pose0 and pose2 if you have not defined pose1). The values for each parameter are the topic name for that sensor.

  • ~odomN_config, ~twistN_config, ~imuN_config, ~poseN_config - For each of the sensor messages defined above, users must specify what variables of those messages should be fused into the final state estimate. An example odometry configuration might look like this:

        <rosparam param="odom0_config">[true,  true,  false,
                                        false, false, true,
                                        true,  false, false,
                                        false, false, true,
                                        false, false, false]</rosparam>

    The order of the boolean values is X, Y, Z, roll, pitch, yaw, X velocity, Y velocity, Z velocity, roll velocity, pitch velocity, yaw velocity, X acceleration, Y acceleration, and Z acceleration. In this example, we are fusing X and Y position, yaw, X velocity, and yaw velocity. Note that the specification is done in the frame_id of the sensor, not in the world_frame or base_link_frame. Please see the Sensor Integration tutorial for more information.

  • ~odomN_queue_size, ~twistN_queue_size, ~imuN_queue_size, ~poseN_queue_size - Users can use these parameters to adjust the callback queue sizes for each sensor. This is useful if your frequency parameter value is much lower than your sensor's frequency, as it allows the filter to incorporate all measurements that arrived in between update cycles.
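    For example, to buffer up to 10 measurements from odom0 between update cycles (the value shown is illustrative):

        <param name="odom0_queue_size" value="10"/>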

  • ~odomN_differential, ~imuN_differential, ~poseN_differential - For each of the sensor messages defined above that contain pose information, users can specify whether the pose variables should be integrated differentially. If a given value is set to true, then for a measurement at time t from the sensor in question, we first subtract the measurement at time t-1, and convert the resulting value to a velocity. This setting is especially useful if your robot has two sources of absolute pose information, e.g., yaw measurements from odometry and an IMU. In that case, if the variances on the input sources are not configured correctly, these measurements may get out of sync with one another and cause oscillations in the filter, but by integrating one or both of them differentially, we avoid this scenario.

        <param name="odom0_differential" value="true"/>

    Users should take care when using this parameter for orientation data, as the conversion to velocity means that the covariance for orientation state variables will grow without bound (unless another source of absolute orientation data is being fused). If you simply want all of your pose variables to start at 0, then please use the _relative parameter. Please note that if you are fusing GPS information via navsat_transform_node or utm_transform_node, you should make sure that the _differential setting is false.

  • ~odomN_relative, ~imuN_relative, ~poseN_relative - If this parameter is set to true, then any measurements from this sensor will be fused relative to the first measurement received from that sensor. This is useful if, for example, you want your state estimate to always start at (0, 0, 0) and with roll, pitch, and yaw values of (0, 0, 0). It is similar to the _differential parameter, but instead of removing the measurement at time t-1, we always remove the measurement at time 0, and the measurement is not converted to a velocity.
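    For example, to make all measurements from pose0 relative to that sensor's first measurement:

        <param name="pose0_relative" value="true"/>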

  • ~imuN_remove_gravitational_acceleration - If fusing accelerometer data from IMUs, this parameter determines whether or not acceleration due to gravity is removed from the acceleration measurement before fusing it. Please note, however, that it assumes that the IMU that is providing the acceleration data is also producing an absolute orientation. The orientation data is required to correctly remove gravitational acceleration.
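    For example, assuming imu0 provides an absolute orientation alongside its accelerometer data:

        <param name="imu0_remove_gravitational_acceleration" value="true"/>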

  • ~print_diagnostics - If true, the state estimation nodes in robot_localization will publish diagnostic messages to the /diagnostics topic. This is useful for debugging your configuration and sensor data.

Advanced Parameters

  • ~odomN_pose_rejection_threshold, ~odomN_twist_rejection_threshold, ~poseN_rejection_threshold, ~twistN_rejection_threshold, ~imuN_pose_rejection_threshold, ~imuN_angular_velocity_rejection_threshold, ~imuN_linear_acceleration_rejection_threshold - If your data is subject to outliers, use these threshold settings, expressed as Mahalanobis distances, to control how far away from the current vehicle state a sensor measurement is permitted to be. Each defaults to numeric_limits<double>::max() if unspecified.
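    As an illustration, the fragment below rejects odom0 pose measurements whose Mahalanobis distance from the current state exceeds 2, and twist measurements beyond 0.2 (both thresholds are arbitrary; tune them for your sensors):

        <param name="odom0_pose_rejection_threshold" value="2"/>
        <param name="odom0_twist_rejection_threshold" value="0.2"/>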

  • ~debug - Boolean flag that specifies whether or not to run in debug mode. WARNING: setting this to true will generate a massive amount of data. The data is written to the file specified by the debug_out_file parameter.

  • ~debug_out_file - If debug is true, the file to which debug output is written.
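    For example (the output path is illustrative):

        <param name="debug" value="true"/>
        <param name="debug_out_file" value="/tmp/ekf_debug.txt"/>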

  • ~process_noise_covariance - The process noise covariance, commonly denoted Q, is used to model uncertainty in the prediction stage of the filtering algorithms. It can be difficult to tune, and has been exposed as a parameter for easier customization. This parameter can be left alone, but you will achieve superior results by tuning it.

  • ~initial_estimate_covariance - The estimate covariance, commonly denoted P, defines the error in the current state estimate. The parameter allows users to set the initial value for the matrix, which will affect how quickly the filter converges. For example, if users set the value at position [0, 0] to a very small value, e.g., 1e-12, and then attempt to fuse measurements of X position with a high variance value for X, then the filter will be very slow to "trust" those measurements, and the time required for convergence will increase. Again, users should take care with this parameter. When only fusing velocity data (e.g., no absolute pose information), users will likely not want to set the initial covariance values for the absolute pose variables to large numbers. This is because those errors are going to grow without bound (owing to the lack of absolute pose measurements to reduce the error), and starting them with large values will not benefit the state estimate.

Node-specific Parameters

The standard and advanced parameters are common to all state estimation nodes in robot_localization. This section details parameters that are unique to their respective state estimation nodes.

ukf_localization_node

The parameters for ukf_localization_node follow the nomenclature of the original paper and wiki article.

  • ~alpha - Controls the spread of sigma points. Unless you are familiar with unscented Kalman filters, it's probably best for this setting to remain at its default value (0.001).

  • ~kappa - Also controls the spread of sigma points. Unless you are familiar with unscented Kalman filters, it's probably best for this setting to remain at its default value (0).

  • ~beta - Relates to the distribution of the state vector. The default value of 2 implies that the distribution is Gaussian. Like the other parameters, this should remain unchanged unless the user is familiar with unscented Kalman filters.
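To make these defaults explicit in a launch file for ukf_localization_node:

<param name="alpha" value="0.001"/>
<param name="kappa" value="0"/>
<param name="beta" value="2"/>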

Published Topics

Published Transforms

  • If the user's world_frame parameter is set to the value of odom_frame, a transform is published from the frame given by the odom_frame parameter to the frame given by the base_link_frame parameter.

  • If the user's world_frame parameter is set to the value of map_frame, a transform is published from the frame given by the map_frame parameter to the frame given by the odom_frame parameter. This mode assumes that another node is broadcasting the transform from the frame given by the odom_frame parameter to the frame given by the base_link_frame parameter. This can be another instance of a robot_localization state estimation node.

Manual State Reset

By issuing a geometry_msgs/PoseWithCovarianceStamped message to the set_pose topic, users can manually set the state of the filter. This is useful for resetting the filter during testing, and allows for interaction with rviz. Alternatively, the state estimation nodes advertise a SetPose service, whose type is robot_localization/SetPose.

Other Nodes

navsat_transform_node takes as input a nav_msgs/Odometry message (usually the output of ekf_localization_node or ukf_localization_node), a sensor_msgs/Imu containing an accurate estimate of your robot's heading, and a sensor_msgs/NavSatFix message containing GPS data. It produces an odometry message in coordinates that are consistent with your robot's world frame. This value can be directly fused into your state estimate. Please note that if you fuse the output of this node with any of the state estimation nodes in robot_localization, you should make sure that the odomN_differential setting is false for that input.

Parameters

  • ~magnetic_declination_radians - Enter the magnetic declination for your location. If you don't know it, see http://www.ngdc.noaa.gov/geomag-web/ (make sure to convert the value to radians). This parameter is needed if your IMU provides its orientation with respect to magnetic north.

  • ~delay - The time, in seconds, to wait before calculating the transform from GPS coordinates to your robot's world frame.

  • ~yaw_offset - Your IMU should read 0 for yaw when facing east. If it doesn't, enter the offset here (desired_value = offset + sensor_raw_value). For example, if your IMU reports 0 when facing north, as most of them do, this parameter would be pi/2 (approx 1.5707963). This parameter changed in version 2.2.1. Previously, robot_localization assumed that IMUs read 0 when facing north, so yaw_offset was used accordingly.

  • ~zero_altitude - If this is true, the nav_msgs/Odometry message produced by this node has its pose Z value set to 0.

  • ~publish_filtered_gps - If true, navsat_transform_node will also transform your robot's world frame (e.g., map) position back to GPS coordinates, and publish a sensor_msgs/NavSatFix message on the /gps/filtered topic.

  • ~broadcast_utm_transform - If this is true, navsat_transform_node will broadcast the transform between the UTM grid and the frame of the input odometry data. See Published Transforms below for more information.

  • ~use_odometry_yaw - If true, navsat_transform_node will not get its heading from the IMU data, but from the input odometry message. Users should take care to only set this to true if your odometry message has orientation data specified in an earth-referenced frame, e.g., as produced by a magnetometer. Additionally, if the odometry source is one of the state estimation nodes in robot_localization, the user should have at least one source of absolute orientation data being fed into the node, with the _differential and _relative parameters set to false.
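Putting several of these parameters together, a sketch of a navsat_transform_node launch fragment might look like this (all values and topic remappings are illustrative, and assume an IMU that reads 0 when facing north):

<node pkg="robot_localization" type="navsat_transform_node" name="navsat_transform">
  <param name="magnetic_declination_radians" value="0"/>
  <param name="yaw_offset" value="1.5707963"/>
  <param name="zero_altitude" value="true"/>
  <param name="publish_filtered_gps" value="true"/>
  <remap from="imu/data" to="robot/imu/data"/>
  <remap from="gps/fix" to="robot/gps/fix"/>
</node>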

Subscribed Topics

  • imu/data - A sensor_msgs/Imu message with orientation data.

  • odometry/filtered - A nav_msgs/Odometry message of your robot's current position. This is needed in the event that your first GPS reading comes after your robot has attained some non-zero pose.

  • gps/fix - A sensor_msgs/NavSatFix containing your robot's GPS coordinates.

Published Topics

  • odometry/gps - A nav_msgs/Odometry message containing the GPS coordinates of your robot, transformed into its world coordinate frame. This message can be directly fused into robot_localization's state estimation nodes.

  • gps/filtered - A sensor_msgs/NavSatFix message containing your robot's world frame position, transformed into GPS coordinates.

Published Transforms

  • world_frame->utm (optional) - If the broadcast_utm_transform parameter is set to true, navsat_transform_node calculates a transform from the utm frame to the frame_id of the input odometry data. However, to avoid re-parenting the odometry frame (and avoid making assumptions about the user's setup), we instead define the inverse transform so that the utm frame is a child of the odometry frame.

Forthcoming Features

A list of enhancements and (say it ain't so!) bugs can be found on the GitHub repository issues page.

Tutorials

Please see the Tutorials page.

Troubleshooting

Please see the Troubleshooting page.

Wiki: robot_localization (last edited 2016-09-01 11:44:13 by MarcBosch)