Only released in EOL distros:

pr2_object_manipulation: interactive_perception_msgs | manipulation_worlds | object_recognition_gui | pr2_create_object_model | pr2_gripper_grasp_controller | pr2_gripper_grasp_planner_cluster | pr2_gripper_reactive_approach | pr2_gripper_sensor_action | pr2_gripper_sensor_controller | pr2_gripper_sensor_msgs | pr2_interactive_gripper_pose_action | pr2_interactive_manipulation | pr2_interactive_manipulation_frontend | pr2_interactive_object_detection | pr2_interactive_object_detection_frontend | pr2_manipulation_controllers | pr2_marker_control | pr2_navigation_controllers | pr2_object_manipulation_launch | pr2_object_manipulation_msgs | pr2_pick_and_place_demos | pr2_tabletop_manipulation_launch | pr2_wrappers | rgbd_assembler | robot_self_filter_color | segmented_clutter_grasp_planner | tabletop_collision_map_processing | tabletop_object_detector | tf_throttle

Package Summary

Performs object segmentation and simple object recognition for constrained scenes.

Object Perception

There are two main components of object perception:

  • object segmentation: which part of the sensed data corresponds to the object we want to pick up?
  • object recognition: which object is it?

Our manipulation pipeline always requires segmentation. If recognition is performed and is successful, this further informs the grasp point selection mechanism. If not, we can still grasp the unknown object based only on perceived data.

Note that it is possible to avoid segmentation as well and select grasp points without knowing object boundaries. The current pipeline does not have this functionality, but it might gain it in the future.

To perform these tasks we rely on the following assumptions:

  • the objects are resting on a table, which is the dominant plane in the scene
  • the minimum distance between two objects exceeds a given threshold (3 cm in our demo)
  • in addition, object recognition can only handle 2 degrees of freedom: translation along the X and Y axes (with the Z axis assumed to point "up" relative to the table). Therefore, in order to be recognized, an object:
    • must be rotationally symmetric
    • must be in a known orientation, such as a glass or a bowl sitting "upright" on the table.

The sensor data that we use consists of a point cloud from the narrow stereo or Kinect cameras. We perform the following steps:

  • segmentation
    • the table is detected by finding the dominant plane in the point cloud using RANSAC;
    • points above the table are considered to belong to graspable objects. A clustering algorithm is used to identify individual objects; we refer to the set of points corresponding to an individual object as a cluster. (A minimal sketch of both segmentation steps follows this list.)

  • recognition
    • for each cluster, a simple iterative fitting technique (a distant cousin of ICP) is used to see how well it corresponds to each mesh in our database of models. If a good fit is found, the database id of the model is returned along with the cluster. (A toy version of this fit is sketched after the output description below.)
    • note that our current fitting method operates in 2D: based on our assumptions (the object resting upright on the known surface of the table), the other 4 degrees of freedom are fixed
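
The two segmentation steps can be illustrated with a small, self-contained sketch. The numpy-only code below is a simplification for exposition, not the package's actual PCL-based implementation; the 1 cm plane inlier threshold and 3 cm clustering distance mirror the defaults documented further down this page.

    import numpy as np

    def fit_table_plane(points, n_iters=500, dist_thresh=0.01, seed=0):
        """RANSAC: find the dominant plane (the table) in an Nx3 point array."""
        rng = np.random.default_rng(seed)
        best_model, best_count = None, 0
        for _ in range(n_iters):
            # Hypothesize a plane from 3 random points.
            p0, p1, p2 = points[rng.choice(len(points), 3, replace=False)]
            normal = np.cross(p1 - p0, p2 - p0)
            if np.linalg.norm(normal) < 1e-9:
                continue                      # degenerate (collinear) sample
            normal /= np.linalg.norm(normal)
            d = -normal.dot(p0)
            count = np.count_nonzero(np.abs(points @ normal + d) < dist_thresh)
            if count > best_count:
                best_model, best_count = (normal, d), count
        return best_model

    def cluster_objects(points, min_gap=0.03):
        """Single-linkage clustering: points within min_gap of each other connect."""
        unvisited = set(range(len(points)))
        clusters = []
        while unvisited:
            frontier, members = [unvisited.pop()], []
            while frontier:
                i = frontier.pop()
                members.append(i)
                near = [j for j in unvisited
                        if np.linalg.norm(points[i] - points[j]) < min_gap]
                unvisited.difference_update(near)
                frontier.extend(near)
            clusters.append(points[members])
        return clusters

    # Usage: keep points 1-50 cm above the table, then split them into objects.
    # (The real node additionally handles the sign of the plane normal, voxel
    # downsampling, and filtering the cloud to a region of interest.)
    def segment(cloud):
        normal, d = fit_table_plane(cloud)
        height = cloud @ normal + d           # signed distance to the table plane
        return cluster_objects(cloud[(height > 0.01) & (height < 0.5)])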

The output of this component consists of the location of the table, the identified point clusters, and, for each cluster found to be similar to a database object, the corresponding database object id and fitting pose.
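
The 2D fitting step can be sketched in the same spirit. This toy, translation-only variant illustrates the general idea rather than the package's actual algorithm: it alternates nearest-point matching with a mean-residual shift and returns a lower-is-better score, loosely analogous to the quality_threshold parameter documented below.

    import numpy as np

    def fit_model_2d(cluster_xy, model_xy, n_iters=20):
        """Translation-only iterative fit of a model outline to a cluster.

        cluster_xy, model_xy: (N, 2) and (M, 2) arrays of x/y coordinates.
        Returns the (x, y) offset of the model and a mean-distance score
        (lower is better).
        """
        offset = cluster_xy.mean(axis=0) - model_xy.mean(axis=0)  # centroid init
        for _ in range(n_iters):
            shifted = model_xy + offset
            # Distance from every cluster point to every model point (N x M).
            dists = np.linalg.norm(cluster_xy[:, None, :] - shifted[None, :, :],
                                   axis=2)
            residual = cluster_xy - shifted[dists.argmin(axis=1)]
            offset += residual.mean(axis=0)   # shift the model toward the points
        dists = np.linalg.norm(cluster_xy[:, None, :]
                               - (model_xy + offset)[None, :, :], axis=2)
        return offset, dists.min(axis=1).mean()

A detector built along these lines would run the fit against every candidate model in the database and accept results whose score beats a threshold.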

detect_1.png

Narrow Stereo image of a table and three objects

detect_2.png

Detection result: table plane has been detected (note the yellow contour). Objects have been segmented (note the different color point clouds superimposed on them). The bottle and the glass have also been recognized (as shown by the cyan mesh further superimposed on them)

Nodes

tabletop_segmentation

Segmentation is performed by the tabletop_segmentation node. It implements the TabletopSegmentation service.

Subscribed Topics

cloud_in (sensor_msgs/PointCloud2)
  • The pointcloud input that is processed to find the table and tabletop objects.

Published Topics

markers_out (visualization_msgs/Marker)
  • Markers for visualization of objects segmented in rviz.

Services

segmentation_srv (tabletop_object_detector/TabletopSegmentation)
  • The service to call to get the segmented objects.
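
A minimal rospy client for this service might look as follows. The fully resolved service name (assumed here to be /tabletop_segmentation) depends on your launch files, and the request is assumed to be empty, with the node segmenting the most recent message received on cloud_in; check the .srv definition shipped with your release.

    #!/usr/bin/env python
    import rospy
    from tabletop_object_detector.srv import TabletopSegmentation

    rospy.init_node("segmentation_client")
    # Resolved service name is an assumption; check `rosservice list`.
    name = "/tabletop_segmentation"
    rospy.wait_for_service(name)
    segment = rospy.ServiceProxy(name, TabletopSegmentation)

    # Request assumed empty: the node segments the latest cloud_in message.
    resp = segment()
    if resp.result == resp.SUCCESS:
        rospy.loginfo("table found; %d clusters", len(resp.clusters))
    else:
        rospy.logwarn("segmentation failed (code %d)", resp.result)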

Parameters

quality_threshold (float, default: 0.005)
  • a unit-less threshold that is compared to the results of the model recognition to decide if a model recognition is good enough to be used. Lower numbers indicate a better fit. Reduce this if you want the recognition to be more strict.
clustering_distance (float, default: 0.03)
  • the min distance between objects, provided as input to the segmentation algorithm. Objects closer to each other than this threshold will be merged into the same cluster.
processing_frame (string, default: empty)
  • if non-empty, any received point cloud is first transformed into this frame before any processing. Useful for operating in a frame where intuitive filters can be applied (e.g. tables can usually be found between z=0.5m and z=1.5m when operating in a frame with its origin on the floor and z pointing up, such as the base_link frame of the PR2). An illustrative parameter setup for this case is sketched at the end of the Filtering Parameters section below.
inlier_threshold (float)
plane_detection_voxel_size (float)
flatten_table (bool)
table_padding (float)

Filtering Parameters

All of these parameters are applied to decide what part of the point cloud the detector should focus on. They must make sense in the processing_frame. If processing_frame is unspecified, all processing takes place in the native frame of the incoming cloud, which on the PR2 robot is the camera frame. Therefore, all these parameters have default values that make sense in the camera frame.

z_filter_min (float, default: 0.4)
  • The incoming point cloud is first filtered so that only points with a z coordinate greater than this value are kept.
z_filter_max (float, default: 1.0)
  • The incoming point cloud is first filtered so that only points with a z coordinate less than this value are kept.
x_filter_min (float)
  • The incoming point cloud is first filtered so that only points with an x coordinate greater than this value are kept.
x_filter_max (float)
  • The incoming point cloud is first filtered so that only points with an x coordinate less than this value are kept.
y_filter_min (float)
  • The incoming point cloud is first filtered so that only points with a y coordinate greater than this value are kept.
y_filter_max (float)
  • The incoming point cloud is first filtered so that only points with a y coordinate less than this value are kept.
table_z_filter_min (float, default: 0.01)
  • after the table is detected, only points whose height above the table is at least this value are kept
table_z_filter_max (float, default: 0.5)
  • after the table is detected, only points whose height above the table is at most this value are kept
up_direction (float, default: -1.0)
  • whether the natural "up" direction (opposite to gravity) points along the positive (+1.0) or negative (-1.0) z axis of the processing frame
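
For example, when processing in base_link on a PR2-like robot, the filters can be set before the node starts. The snippet below is illustrative: the node name tabletop_segmentation and the numeric values are assumptions about your setup, not documented defaults.

    import rospy

    # Illustrative values for a base_link processing frame (z up, origin on
    # the floor); adapt the node name and numbers to your own configuration.
    rospy.set_param("tabletop_segmentation/processing_frame", "base_link")
    rospy.set_param("tabletop_segmentation/z_filter_min", 0.35)  # above the base
    rospy.set_param("tabletop_segmentation/z_filter_max", 1.20)  # below head height
    rospy.set_param("tabletop_segmentation/up_direction", 1.0)   # +z is up here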

tabletop_object_recognition

Recognition is performed by the tabletop_object_recognition node, which implements the TabletopObjectRecognition service. This service takes as its direct input the result of the segmentation service above. Object recognition also requires a connection to the database of known models, documented in household_objects_database: the recognition node looks for and uses the services provided by the household_objects_database_node, whose names are supplied as parameters.

Published Topics

markers_out (visualization_msgs/Marker)
  • Markers for visualization of objects recognized in rviz.

Services

object_recognition_srv (tabletop_object_detector/TabletopObjectRecognition)
  • Runs object recognition.
clear_exclusions_srv (tabletop_object_detector/ClearExclusionsList)
  • Clears the exclusions list.
add_exclusion_srv (tabletop_object_detector/AddModelExclusion)
  • Adds the given model to the exclusion list.
negate_exclusions_srv (tabletop_object_detector/NegateExclusions)
  • Negates the exclusions list, so that only the models on the list are considered for recognition.
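
Because the recognition service consumes the segmentation result directly, a client typically chains the two calls. The sketch below assumes the resolved service names /tabletop_segmentation and /tabletop_object_recognition and request fields (table, clusters, num_models, perform_fit_merge) matching the .srv definitions shipped with the package; verify both against your installed release.

    import rospy
    from tabletop_object_detector.srv import (TabletopSegmentation,
                                              TabletopObjectRecognition)

    rospy.init_node("recognition_client")
    for name in ("/tabletop_segmentation", "/tabletop_object_recognition"):
        rospy.wait_for_service(name)   # names assumed; see `rosservice list`
    segment = rospy.ServiceProxy("/tabletop_segmentation", TabletopSegmentation)
    recognize = rospy.ServiceProxy("/tabletop_object_recognition",
                                   TabletopObjectRecognition)

    seg = segment()
    # Pass the segmentation result straight in; ask for the single best model
    # fit per cluster and let nearby fits of the same model be merged.
    rec = recognize(table=seg.table, clusters=seg.clusters,
                    num_models=1, perform_fit_merge=True)
    rospy.loginfo("%d model fits returned", len(rec.models))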

Parameters

fit_merge_threshold (float, default: 0.05)
  • if two clusters are fit by the same model at poses closer than this distance, they are assumed to belong to the same object and their fits are merged.
min_marker_quality (float, default: 0.003)
model_set (string)
  • the name of the database model set to restrict recognition to.
get_model_list_srv (string)
  • Service name for get_model_list from the object database node.
get_model_mesh_srv (string)
  • Service name for get_model_mesh from the object database node.

tabletop_complete

Most often, segmentation and recognition are used together. To simplify this, the tabletop_complete node pipes them together, passing the results of segmentation directly to recognition so that clients do not have to. It implements the TabletopDetection service.

Services

(tabletop_object_detector/TabletopDetection)
  • Runs segmentation followed by recognition in a single call.
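
Calling the combined service from Python might look like this. The resolved service name (here /object_detection) is an assumption about your launch configuration, and the request fields follow the TabletopDetection.srv definition (return_clusters, return_models, num_models).

    import rospy
    from tabletop_object_detector.srv import TabletopDetection

    rospy.init_node("detection_client")
    name = "/object_detection"   # assumed resolved name; check `rosservice list`
    rospy.wait_for_service(name)
    detect = rospy.ServiceProxy(name, TabletopDetection)

    resp = detect(return_clusters=True, return_models=True, num_models=1)
    det = resp.detection         # a TabletopDetectionResult
    rospy.loginfo("result code %d, %d clusters, %d model fits",
                  det.result, len(det.clusters), len(det.models))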

Running the Manipulation Pipeline

To launch the manipulation pipeline, complete with the sensor processing described here, and to execute pick and place tasks using the PR2 robot, see the tutorials and launch files provided in pr2_tabletop_manipulation_apps.
