This package contains two nodes that talk to fovis (which is built by the fovis package): mono_depth_odometer and stereo_odometer. Both estimate camera motion based on incoming rectified images from calibrated cameras. The first needs a registered depth image to associate a depth value with each pixel of the incoming image; the second calculates this depth from a calibrated stereo system. Both odometers provide full 6DOF incremental motion estimates and should work out of the box.
Please read REP 105 for an explanation of odometry frame ids.
The chain of transforms relevant for visual odometry is as follows:
world → odom → base_link → camera
Visual odometry algorithms generally calculate camera motion. To derive robot motion from camera motion, the transformation from the camera frame to the robot frame has to be known. This implementation therefore needs to know the tf base_link → camera in order to publish odom → base_link.
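The relationship above can be sketched numerically: given the previous odom → base_link pose, the static base_link → camera extrinsic, and an incremental camera motion estimate, the new odom → base_link pose follows by conjugating the camera motion with the extrinsic. This is a minimal sketch with made-up example values, not the package's internal code:

```python
import numpy as np

def update_base_pose(T_odom_base, T_base_cam, T_cam_motion):
    """Propagate odom -> base_link given an incremental camera motion.
    All transforms are 4x4 homogeneous matrices."""
    # Camera pose in odom: T_odom_cam = T_odom_base @ T_base_cam.
    # Apply the incremental camera motion, then map back to base_link.
    return T_odom_base @ T_base_cam @ T_cam_motion @ np.linalg.inv(T_base_cam)

# Example extrinsic: camera 0.1 m above base_link, rotated into the
# optical convention (camera z = base x, camera x = -base y, camera y = -base z).
T_base_cam = np.eye(4)
T_base_cam[:3, :3] = np.array([[ 0,  0, 1],
                               [-1,  0, 0],
                               [ 0, -1, 0]])
T_base_cam[2, 3] = 0.1

# Odometer reports 0.5 m of motion along the optical z axis (straight ahead).
T_cam_motion = np.eye(4)
T_cam_motion[2, 3] = 0.5

T = update_base_pose(np.eye(4), T_base_cam, T_cam_motion)
print(T[:3, 3])  # base_link has moved 0.5 m along the base x axis
```

This also illustrates why the extrinsic matters: the same camera motion maps to a different robot motion depending on how the camera is mounted.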
The name of the camera frame is taken from the incoming images, so be sure your camera driver publishes it correctly.
NOTE: The coordinate frame of the camera is expected to be the optical frame, which means x points right, y points down, and z points from the camera into the scene. The origin is where the camera's principal axis hits the image plane (as given in sensor_msgs/CameraInfo).
To learn how to publish the required tf base_link → camera, please refer to the tf tutorials. If the required tf is not available, the odometer assumes the identity transform, which means the robot frame and the camera frame are treated as identical.
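For a rigidly mounted camera, the required tf can be published with tf's static_transform_publisher from a launch file. The frame names and offsets below are placeholders for your robot, not package defaults; the arguments are x y z yaw pitch roll parent_frame child_frame period_in_ms, and the rotation shown is the usual base (x forward) to optical (z forward) convention:

```xml
<launch>
  <!-- Example: camera 0.1 m forward and 0.2 m up from base_link,
       rotated into the optical frame convention. Adjust to your robot. -->
  <node pkg="tf" type="static_transform_publisher" name="base_to_camera"
        args="0.1 0 0.2 -1.5708 0 -1.5708 base_link camera_rgb_optical_frame 100" />
</launch>
```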
fovis was designed to estimate the motion of a MAV (micro aerial vehicle) using a Kinect sensor. As the used feature descriptors are not rotation invariant, the odometer needs to work at high frequencies to estimate in-plane rotations correctly.
Common for mono_depth_odometer and stereo_odometer

Published Topics
~pose (geometry_msgs/PoseStamped)
- The robot's current pose according to the odometer.
~odometry (nav_msgs/Odometry)
- Odometry information that was calculated, contains pose and twist. NOTE: pose and twist covariance is not published.
~features (sensor_msgs/Image)
- Image showing feature matches as well as some internal information.
~info (fovis_ros/FovisInfo)
- Message containing internal information such as number of features, matches, timing etc.

Parameters
~odom_frame_id (string)
- Name of the world-fixed frame where the odometer lives.
~base_link_frame_id (string)
- Name of the moving frame whose pose the odometer should report.
~publish_tf (bool)
- If true, the odometer publishes tfs (see above).
Odometry Parameters
Please see this page for a list of all parameters and their meanings. NOTE: To comply with ROS naming standards, replace hyphens with underscores when setting the parameters through ROS. All parameters are strings; even numeric parameters have to be given as strings.
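As an illustration of both rules, the libfovis option feature-window-size could be set from a launch file as follows (node name and value are examples only):

```xml
<node pkg="fovis_ros" type="stereo_odometer" name="stereo_odometer">
  <!-- libfovis option "feature-window-size": the hyphens become
       underscores, and the numeric value is passed as a string. -->
  <param name="feature_window_size" value="9" type="str" />
</node>
```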
Required tf Transforms
~base_link_frame_id → <frame_id attached to image messages>
- Transformation from the robot's reference point (base_link in most cases) to the camera's optical frame.
Provided tf Transforms
~odom_frame_id → ~base_link_frame_id
- Transformation from the odometry's origin (e.g. odom) to the robot's reference point (e.g. base_link).
mono_depth_odometer

Subscribed Topics
<camera>/rgb/image_rect (sensor_msgs/Image)
- The rectified input image. There must be a corresponding camera_info topic as well.
<camera>/depth_registered/image_rect (sensor_msgs/Image)
- The corresponding depth image. There must be a corresponding camera_info topic as well. Values must be given in floating point format (distance in meters).
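Assuming the <camera> prefix is resolved via remapping, as is common for ROS camera pipelines, a minimal launch sketch for the mono_depth node could look like this (names are illustrative, not defaults):

```xml
<launch>
  <!-- Map the <camera> placeholder onto an OpenNI-style driver namespace. -->
  <node pkg="fovis_ros" type="mono_depth_odometer" name="mono_depth_odometer">
    <remap from="camera" to="/camera" />
  </node>
</launch>
```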
stereo_odometer

Subscribed Topics
<stereo>/left/<image> (sensor_msgs/Image)
- Left rectified input image.
<stereo>/right/<image> (sensor_msgs/Image)
- Right rectified input image.
<stereo>/left/camera_info (sensor_msgs/CameraInfo)
- Camera info for left image.
<stereo>/right/camera_info (sensor_msgs/CameraInfo)
- Camera info for right image.
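Assuming the <stereo> and <image> placeholders are resolved via remapping, the stereo node can be pointed at a stereo driver's namespace like this (namespace and topic names are examples that depend on your camera driver):

```xml
<launch>
  <!-- <stereo> resolves to the driver namespace, <image> to the
       rectified image topic within left/ and right/. -->
  <node pkg="fovis_ros" type="stereo_odometer" name="stereo_odometer">
    <remap from="stereo" to="/my_stereo_camera" />
    <remap from="image" to="image_rect" />
  </node>
</launch>
```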
If you have a problem, please look at ROS Answers (FAQ link above) and post a question if you cannot find an answer.
Please use the stack's issue tracker on GitHub to submit bug reports and feature requests regarding the ROS wrapper of fovis: https://github.com/srv/fovis/issues/new.