tabletop_object_detector

Only released in EOL distros:

pr2_object_manipulation: interactive_perception_msgs | manipulation_worlds | object_recognition_gui | pr2_create_object_model | pr2_gripper_grasp_controller | pr2_gripper_grasp_planner_cluster | pr2_gripper_reactive_approach | pr2_gripper_sensor_action | pr2_gripper_sensor_controller | pr2_gripper_sensor_msgs | pr2_interactive_gripper_pose_action | pr2_interactive_manipulation | pr2_interactive_manipulation_frontend | pr2_interactive_object_detection | pr2_interactive_object_detection_frontend | pr2_manipulation_controllers | pr2_marker_control | pr2_navigation_controllers | pr2_object_manipulation_launch | pr2_object_manipulation_msgs | pr2_pick_and_place_demos | pr2_tabletop_manipulation_launch | pr2_wrappers | rgbd_assembler | robot_self_filter_color | segmented_clutter_grasp_planner | tabletop_collision_map_processing | tabletop_object_detector | tf_throttle

Package Summary

Performs object segmentation and simple object recognition for constrained scenes.

Object Perception

There are two main components of object perception:

- segmentation: identifying which points in the scene belong to individual objects, separating them from each other and from the background
- recognition: matching each segmented cluster against a database of known object models to obtain an object id and pose

Our manipulation pipeline always requires segmentation. If recognition is performed and is successful, this further informs the grasp point selection mechanism. If not, we can still grasp the unknown object based only on perceived data.

Note that it is possible to avoid segmentation as well and select grasp points without knowing object boundaries. The current pipeline does not have this functionality, but it might gain it in the future.

To perform these tasks we rely on the following assumptions:

- objects rest on a table, which is the dominant plane in the scene
- neighboring objects are separated by at least the clustering distance (3 cm by default)

The sensor data that we use consists of a point cloud from the narrow stereo or Kinect cameras. We perform the following steps (a minimal sketch of the two core geometric steps follows the list):

- filter the point cloud down to the region of interest, using the filtering parameters described below
- detect the table as the dominant plane in the filtered cloud
- segment the points above the table into individual clusters, one per object
- optionally, recognize each cluster by fitting database models to it
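The plane detection and clustering steps can be illustrated with a minimal numpy sketch. This is not the node's actual implementation (which is built on PCL); the function names and the inlier threshold are illustrative, while the 3 cm cluster separation mirrors the clustering_distance default documented below.

    import numpy as np

    def fit_plane_ransac(points, n_iters=500, inlier_thresh=0.01):
        """Find the dominant plane: returns (normal, d) and an inlier mask."""
        best_plane, best_inliers = None, None
        rng = np.random.default_rng(0)
        for _ in range(n_iters):
            # hypothesize a plane from three random points
            p0, p1, p2 = points[rng.choice(len(points), 3, replace=False)]
            normal = np.cross(p1 - p0, p2 - p0)
            norm = np.linalg.norm(normal)
            if norm < 1e-9:
                continue  # degenerate sample (collinear points)
            normal = normal / norm
            d = -normal.dot(p0)
            inliers = np.abs(points @ normal + d) < inlier_thresh
            if best_inliers is None or inliers.sum() > best_inliers.sum():
                best_plane, best_inliers = (normal, d), inliers
        return best_plane, best_inliers

    def cluster_points(points, min_gap=0.03):
        """Single-linkage clustering: points closer than min_gap share a label."""
        labels = -np.ones(len(points), dtype=int)
        next_label = 0
        for seed in range(len(points)):
            if labels[seed] != -1:
                continue
            stack, labels[seed] = [seed], next_label
            while stack:  # flood-fill all points within min_gap of the cluster
                j = stack.pop()
                near = np.linalg.norm(points - points[j], axis=1) < min_gap
                for k in np.flatnonzero(near & (labels == -1)):
                    labels[k] = next_label
                    stack.append(k)
            next_label += 1
        return labels

In the node itself, clustering is only applied to the points above the detected table plane, within the limits set by table_z_filter_min and table_z_filter_max.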

The output of these components consists of the location of the table, the identified point clusters, and, for those clusters found to be similar to database objects, the corresponding database object id and fitting pose.

detect_1.png

Narrow Stereo image of a table and three objects

detect_2.png

Detection result: the table plane has been detected (note the yellow contour). Objects have been segmented (note the different-colored point clouds superimposed on them). The bottle and the glass have also been recognized (as shown by the cyan meshes superimposed on them).

Nodes

tabletop_segmentation

Segmentation is performed by the tabletop_segmentation node. It implements the TabletopSegmentation service.
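A minimal rospy client for this service might look like the sketch below. The service name segmentation_srv matches the name listed under Services, but it may be remapped in a running system, so check the resolved name. The response fields (result, table, clusters) and the SUCCESS constant follow the TabletopSegmentation srv definition; verify them against your installed version.

    #!/usr/bin/env python
    import rospy
    from tabletop_object_detector.srv import TabletopSegmentation

    rospy.init_node('segmentation_client')
    rospy.wait_for_service('segmentation_srv')
    segment = rospy.ServiceProxy('segmentation_srv', TabletopSegmentation)

    resp = segment()  # the request side of TabletopSegmentation is empty
    if resp.result == resp.SUCCESS:
        rospy.loginfo('table found, %d clusters', len(resp.clusters))
    else:
        rospy.logwarn('segmentation failed, error code %d', resp.result)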

Subscribed Topics

cloud_in (sensor_msgs/PointCloud2)

Published Topics

markers_out (visualization_msgs/Marker)

Services

segmentation_srv (tabletop_object_detector/TabletopSegmentation)

Parameters

quality_threshold (float, default: 0.005)
clustering_distance (float, default: 0.03)
processing_frame (string, default: empty)
inlier_threshold (float)
plane_detection_voxel_size (float)
flatten_table (bool)
table_padding (float)

Filtering Parameters

All of these parameters are applied to decide which part of the point cloud the detector should focus on. They must make sense in the processing_frame. If processing_frame is unspecified, all processing takes place in the native frame of the incoming cloud, which on the PR2 robot is the camera frame. The default values of these parameters are therefore chosen to make sense in the camera frame. A hedged launch-file sketch showing how to override them appears after the parameter list below.

z_filter_min (float, default: 0.4)
z_filter_max (float, default: 1.0)
x_filter_min (float)
x_filter_max (float)
y_filter_min (float)
y_filter_max (float)
table_z_filter_min (float, default: 0.01)
table_z_filter_max (float, default: 0.5)
up_direction (float, default: -1.0)
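As an example, the sketch below shows how these parameters could be overridden in a launch file when processing in a robot-fixed frame. This is a hedged sketch, not a shipped launch file: the node binary name, input topic, and frame are assumptions to adapt to your setup.

    <launch>
      <node pkg="tabletop_object_detector" type="tabletop_segmentation"
            name="tabletop_segmentation" output="screen">
        <!-- process in a robot-fixed frame so the limits below are upright and metric -->
        <param name="processing_frame" value="base_link"/>
        <!-- in base_link, +z points up, so override the camera-frame default of -1.0 -->
        <param name="up_direction" value="1.0"/>
        <param name="z_filter_min" value="0.2"/>
        <param name="z_filter_max" value="1.2"/>
        <remap from="cloud_in" to="/narrow_stereo_textured/points2"/>
      </node>
    </launch>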

tabletop_object_recognition

Recognition is performed by the tabletop_object_recognition node, which implements the TabletopObjectRecognition service. This service takes as its input the result of the segmentation service above. Object recognition also requires a connection to the database of known models, documented in household_objects_database: the recognition node looks for and uses the services provided by the household_objects_database_node, and the names of these services are supplied as parameters.
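Since the recognition request is built directly from the segmentation response, the two services are typically chained as in the hedged sketch below. The request fields (table, clusters, num_models, perform_fit_merge) follow the TabletopObjectRecognition srv definition; verify them, and the resolved service names, against your installation.

    import rospy
    from tabletop_object_detector.srv import (TabletopSegmentation,
                                              TabletopObjectRecognition)

    rospy.init_node('recognition_client')
    for srv in ('segmentation_srv', 'object_recognition_srv'):
        rospy.wait_for_service(srv)
    segment = rospy.ServiceProxy('segmentation_srv', TabletopSegmentation)
    recognize = rospy.ServiceProxy('object_recognition_srv',
                                   TabletopObjectRecognition)

    seg = segment()
    rec = recognize(table=seg.table, clusters=seg.clusters,
                    num_models=1, perform_fit_merge=True)
    # rec.models holds database model fits; rec.cluster_model_indices maps
    # each input cluster to the corresponding entry in rec.models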

Published Topics

markers_out (visualization_msgs/Marker)

Services

object_recognition_srv (tabletop_object_detector/TabletopObjectRecognition)
clear_exclusions_srv (tabletop_object_detector/ClearExclusionsList)
add_exclusion_srv (tabletop_object_detector/AddModelExclusion)
negate_exclusions_srv (tabletop_object_detector/NegateExclusions)

Parameters

fit_merge_threshold (float, default: 0.05)
min_marker_quality (float, default: 0.003)
model_set (string)
get_model_list_srv (string)
get_model_mesh_srv (string)

tabletop_complete

Most often, segmentation and recognition are used together. To simplify this process, the tabletop_complete node will pipe them together, passing the results of segmentation directly to recognition so you don't have to worry about that. It implements the TabletopDetection service.
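A client for the combined service can therefore be as short as the sketch below. The service name object_detection is an assumption (the advertised name is set at launch), and the request flags follow the TabletopDetection srv definition; verify both against your setup.

    import rospy
    from tabletop_object_detector.srv import TabletopDetection

    rospy.init_node('detection_client')
    rospy.wait_for_service('object_detection')  # assumed service name
    detect = rospy.ServiceProxy('object_detection', TabletopDetection)

    resp = detect(return_clusters=True, return_models=True, num_models=1)
    det = resp.detection  # a TabletopDetectionResult
    if det.result == det.SUCCESS:
        rospy.loginfo('%d clusters, %d model fits',
                      len(det.clusters), len(det.models))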

Services

(tabletop_object_detector/TabletopDetection)

Running the Manipulation Pipeline

Tutorials and launch files for running the complete manipulation pipeline, including the sensor processing described here, and for executing pickup and place tasks on the PR2 robot are provided in pr2_tabletop_manipulation_apps.
