A simple utility for converting pixels on an image to 3D points using point cloud data.
- Author: Kelsey Hawkins / email@example.com, Advisor: Prof. Charlie Kemp (Healthcare Robotics Lab at Georgia Tech)
- License: BSD
- Source: git https://code.google.com/p/gt-ros-pkg.hrl-sensor-utils/ (branch: master)
The current algorithm takes advantage of the RGB-D ordering of the point cloud to find the relevant point quickly and reliably. This ordering is found natively in the point clouds produced by the openni_kinect stack. The hrl_clickable_display package provides an easy interface for using this code.
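The "RGB-D ordering" above refers to an organized point cloud: points are stored in row-major image order, so pixel (u, v) maps directly to an array index with no search required. A minimal sketch of that lookup (the names `cloud`, `width`, and `pixel_to_point` are illustrative, not this package's actual API):

```python
def pixel_to_point(cloud, width, u, v):
    """Return the 3D point stored at image pixel (u, v) of an organized cloud.

    In an organized (RGB-D ordered) cloud, the point for pixel (u, v) sits at
    flat index v * width + u, so the lookup is O(1) rather than a search.
    """
    return cloud[v * width + u]

# Tiny 4x2 organized cloud for illustration: point k is (k, k, k).
width, height = 4, 2
cloud = [(float(k), float(k), float(k)) for k in range(width * height)]

point = pixel_to_point(cloud, width, u=2, v=1)  # flat index 1*4 + 2 = 6
```

In practice a real organized cloud also contains NaN points where the depth sensor returned no reading, which a robust lookup must handle.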
These launch files, properly modified, will publish a pose to the /pixel3d topic with every mouse click on the image.
roslaunch pixel_2_3d pixel_2_3d_run.launch   # modify the remaps in this file
roslaunch hrl_clickable_display kinect_clickable_UI.launch
pixel_2_3d
Converts a pixel on the image to a pose in 3D by projecting the point cloud onto the image and finding the projected point closest in image space to the given pixel. The orientation is found by computing the surface normal and orienting the z-axis along the normal, pointing toward the camera.
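The orientation step described above can be sketched as follows: flip the estimated normal so it faces the camera, then build an orthonormal frame whose z-axis is that normal. This is an illustrative numpy sketch, not the node's actual implementation:

```python
import numpy as np

def orientation_from_normal(normal, camera_pos, point):
    """Rotation matrix whose z-axis lies along `normal`, facing the camera.

    Illustrative sketch of the idea in the node description: the x- and
    y-axis choice is arbitrary, only the z-axis direction is constrained.
    """
    n = np.asarray(normal, dtype=float)
    n = n / np.linalg.norm(n)

    # Flip the normal so it points from the surface toward the camera.
    to_cam = np.asarray(camera_pos, dtype=float) - np.asarray(point, dtype=float)
    if np.dot(n, to_cam) < 0.0:
        n = -n

    # Pick any reference vector not parallel to n to complete the frame.
    ref = np.array([1.0, 0.0, 0.0])
    if abs(np.dot(ref, n)) > 0.9:
        ref = np.array([0.0, 1.0, 0.0])
    x = np.cross(ref, n)
    x = x / np.linalg.norm(x)
    y = np.cross(n, x)

    # Columns are the frame's x-, y-, and z-axes.
    return np.column_stack([x, y, n])
```

For example, a surface point at (0, 0, 1) with normal (0, 0, -1) and the camera at the origin yields a frame whose z-axis is (0, 0, -1), i.e. pointing back at the camera.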
Subscribed Topics
/l_mouse_click (geometry_msgs/PointStamped)
- Along with /pixel3d, performs the same functionality as the /pixel_2_3d service. The point's x and y values are used as the u and v pixel values. Designed for use with hrl_clickable_display.
- Image to reference for pixel values.
- Point cloud for 3D data.
Published Topics
/pixel3d (geometry_msgs/PoseStamped)
- Output 3D pose published with both the service call and subscription call.
Services
/pixel_2_3d
- Input: a pixel in the /image frame. Output: a pose in the /base_footprint frame, plus an error flag indicating the result of the call; a non-zero value indicates failure.
Parameters
~normal_radius (double, default: 0.03)
- The search radius used to estimate the surface normal. Higher values take longer to compute but are less sensitive to noise; lower values are noise-sensitive but return quickly.
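To illustrate the role ~normal_radius plays: a common way to estimate a surface normal (e.g. as PCL does) is to gather all neighbors within the radius and take the eigenvector of their covariance matrix with the smallest eigenvalue. A larger radius pulls in more points (slower, smoother); a smaller one uses fewer (faster, noisier). A hedged sketch of that technique, not the node's actual code:

```python
import numpy as np

def estimate_normal(points, query, radius):
    """Estimate the surface normal at `query` from neighbors within `radius`.

    Covariance/PCA-based normal estimation: the normal is the direction of
    least variance among the neighboring points. Illustrative sketch only.
    """
    pts = np.asarray(points, dtype=float)
    dists = np.linalg.norm(pts - np.asarray(query, dtype=float), axis=1)
    nbrs = pts[dists <= radius]  # the radius controls this neighborhood size

    cov = np.cov(nbrs.T)                     # 3x3 covariance of the neighbors
    eigvals, eigvecs = np.linalg.eigh(cov)   # eigenvalues in ascending order
    return eigvecs[:, 0]                     # smallest-variance direction

# Points sampled from the z = 0 plane; the estimated normal should be +/- z.
grid = [(x * 0.01, y * 0.01, 0.0) for x in range(-3, 4) for y in range(-3, 4)]
n = estimate_normal(grid, np.zeros(3), radius=0.03)
```

With the default radius of 0.03 m and a sensor at typical Kinect noise levels, this neighborhood is small enough to be fast while averaging out per-point depth noise.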