<> <>

== Overview ==

This package provides the functionality of libelas in the form of a ROS node. In addition it provides simple point cloud construction functionality.

{{http://cyphy-elas-ros.googlecode.com/files/prev.png|St Lucia Point Cloud|width="600"}}

This shows an example point cloud that can be constructed. It uses the St Lucia dataset, which is available at https://wiki.qut.edu.au/display/cyphy/UQ+St+Lucia. The visualisation uses [[rviz]].

 * Source: https://github.com/jeffdelmerico/cyphy-elas-ros

== Relevant Publications ==

{{{
LATEX BIBTEX CITATION ENTRY:
@INPROCEEDINGS{Geiger10,
  author = {Andreas Geiger and Martin Roser and Raquel Urtasun},
  title = {Efficient Large-Scale Stereo Matching},
  booktitle = {Asian Conference on Computer Vision},
  year = {2010},
  month = {November},
  address = {Queenstown, New Zealand}
}
}}}

== Nodes ==

{{{
#!clearsilver CS/NodeAPI
node.0 {
 name = elas_ros
 desc = This node performs the disparity matching and generates point clouds in the camera's local reference frame.
 sub {
  0.name = /left/
  0.type = sensor_msgs/Image
  0.desc = Left rectified stereo image.
  1.name = /right/
  1.type = sensor_msgs/Image
  1.desc = Right rectified stereo image.
  2.name = /left/camera_info
  2.type = sensor_msgs/CameraInfo
  2.desc = Camera info for the left camera.
  3.name = /right/camera_info
  3.type = sensor_msgs/CameraInfo
  3.desc = Camera info for the right camera.
 }
 pub {
  0.name = disparity
  0.type = sensor_msgs/Image
  0.desc = Image representing the calculated disparity at each point. Note that the values are scaled to be in the range [0, 255].
  1.name = point_cloud
  1.type = sensor_msgs/PointCloud2
  1.desc = The point cloud associated with the calculated disparity, in the left camera's local reference frame.
  2.name = frame_data
  2.type = elas_ros/ElasFrameData
  2.desc = Frame data used for the point cloud concatenation process. See node pc_construction.
 }
 param {
  0.name = ~queue_size
  0.type = int
  0.desc = Length of the input queues for left and right camera synchronization.
  0.default = 5
  1.name = ~approximate_sync
  1.type = bool
  1.desc = Whether the node should use approximate synchronization of incoming messages. Set to true if the cameras do not have synchronised timestamps.
  1.default = false
 }
}
node.1 {
 name = pc_construction
 desc = This node concatenates the point clouds generated by the elas_ros node.
 sub {
  0.name =
  0.type = elas_ros/ElasFrameData
  0.desc = The frame data generated by the elas_ros node.
  1.name =
  1.type = geometry_msgs/PoseStamped
  1.desc = The pose transformation from the base frame to the pose frame. This accounts for motion of the camera and can typically be provided by a visual odometry node. See the usage notes below.
 }
 pub {
  0.name = point_cloud
  0.type = sensor_msgs/PointCloud2
  0.desc = The concatenated point cloud.
 }
 param {
  0.name = ~queue_size
  0.type = int
  0.desc = Length of the input queues for frame data and pose synchronization.
  0.default = 5
  1.name = ~approximate_sync
  1.type = bool
  1.desc = Whether the node should use approximate synchronization of incoming messages. Set to true if the frame data and pose do not have synchronised timestamps.
  1.default = false
  2.name = base_frame_id
  2.type = string
  2.desc = The name of the base frame that the concatenated point cloud should be cast into.
  3.name = pose_frame_id
  3.type = string
  3.desc = The name of the frame that the given pose transforms into.
 }
}
}}}

== Usage ==

This code should be used in conjunction with visual odometry to concatenate point clouds.
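Before the detailed notes, the sketch below shows one way the pieces can be wired together in a single launch file. It is only an illustrative sketch, not the package's own launch file: the executable names are assumed to match the node names documented above, the remappings are left as comments because they depend on your camera driver and odometry node, and the frame ids mirror the example frames described later in this section.

{{{
#!xml
<launch>
  <!-- Illustrative sketch only: node names, topic names and frame ids below
       are assumptions and must be adapted to your own setup. -->

  <!-- Static parts of the frame chain base_footprint -> base_link -> camera_0.
       Args: x y z yaw pitch roll parent_frame child_frame period_in_ms.
       The identity transforms are placeholders for the real camera mounting. -->
  <node pkg="tf" type="static_transform_publisher" name="footprint_to_base"
        args="0 0 0 0 0 0 base_footprint base_link 100" />
  <node pkg="tf" type="static_transform_publisher" name="base_to_camera"
        args="0 0 0 0 0 0 base_link camera_0 100" />

  <!-- A visual odometry node (e.g. from viso2) must also be running; it provides
       the odom_combined -> base_footprint transform and the geometry_msgs/PoseStamped
       pose consumed by pc_construction. -->

  <!-- Disparity matching and per-frame point clouds
       (executable name assumed to match the node name documented above) -->
  <node pkg="elas_ros" type="elas_ros" name="elas" output="screen">
    <!-- Remap the left/right image and camera_info subscriptions to your
         camera topics, e.g. /stereo/left/image_rect and /stereo/left/camera_info. -->
    <!-- Only needed if the cameras' timestamps are not synchronised. -->
    <param name="approximate_sync" value="true" />
  </node>

  <!-- Concatenation of the per-frame clouds along the odometry trajectory -->
  <node pkg="elas_ros" type="pc_construction" name="elas_pc" output="screen">
    <!-- Remap the frame-data and pose subscriptions to the elas_ros node's
         frame_data topic and to your odometry node's pose topic. -->
    <param name="base_frame_id" value="odom_combined" />
    <param name="pose_frame_id" value="base_footprint" />
  </node>
</launch>
}}}

The node names `elas` and `elas_pc` are used here only because they are consistent with the output topics mentioned later in this section (`/elas/point_cloud` and `/elas_pc/point_cloud`).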
Testing so far has used the [[viso2|ROS wrapper for libviso]], but any visual odometry node that publishes a `geometry_msgs/PoseStamped` message is suitable.

Usage steps:

 1. Publish transforms from the pose frame to the camera frame.
 1. Publish left/right images and camera info.
 1. Run [[viso2|libviso]] or similar on the images; this generates the pose topic used by the `pc_construction` node.
 1. Run `elas_ros` on the images; this generates the elas frame data used by the `pc_construction` node.
 1. Run `pc_construction` to generate the reconstructed 3D point clouds.

A sample launch file, assuming that images are published on `/stereo/left/image_rect` and `/stereo/right/image_rect` and camera info is published on `/stereo/left/camera_info` and `/stereo/right/camera_info`, is supplied below. The relevant frames are:

`odom_combined` → `base_footprint` → `base_link` → `camera_0`

libviso publishes the transform from `odom_combined` → `base_footprint`; in this example the other two transforms were static (and published using [[static_transform_publisher]]).

The current frame's point cloud is published to `/elas/point_cloud`, and the concatenated point cloud is published to `/elas_pc/point_cloud`.

{{{
}}}

== Contacts ==

'''Patrick Ross''': patrick.ross@connect.qut.edu.au (Code maintainer)

'''Andreas Geiger''': geiger@kit.edu (libelas author)

'''Ben Upcroft''': ben.upcroft@qut.edu.au

'''David Ball''': david.ball@qut.edu.au

## AUTOGENERATED DON'T DELETE
## CategoryPackage