
Only released in EOL distros:  

rail_pick_and_place: graspdb | rail_grasp_collection | rail_pick_and_place_msgs | rail_pick_and_place_tools | rail_recognition

Package Summary

Construction and Use of a Recognition Database for Grasping Purposes

About

The rail_recognition package contains nodes for object recognition and demonstration grasp selection using models from a grasp database, handled by graspdb. It also contains a node for generating object models from grasp demonstrations collected with the rail_grasp_collection package. The recognizer supports running a 2D image recognizer first in the full recognition pipeline, significantly reducing the runtime of the point cloud recognizer by limiting the number of candidate classes tested with point cloud registration. This package also contains nodes for data collection, training, and testing of the 2D recognizer.

Nodes

model_generator

Generate object models for recognition and manipulation using demonstrations from the grasp database.

Action Goal

model_generator/generate_models (rail_pick_and_place_msgs/GenerateModelsGoal)

Action Result

model_generator/generate_models (rail_pick_and_place_msgs/GenerateModelsResult)

Published Topics

model_generator/debug_pc (sensor_msgs/PointCloud2)
model_generator/debug_poses (geometry_msgs/PoseArray)

Parameters

/graspdb/host (string, default: "127.0.0.1")
/graspdb/port (int, default: graspdb::Client::DEFAULT_PORT)
/graspdb/user (string, default: "ros")
/graspdb/password (string, default: "")
/graspdb/db (string, default: "graspdb")
debug (bool, default: false)
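
As a sketch of how these might be set (the host, user, and password values below are placeholders, and the executable is assumed to share the node's name), the global graspdb connection parameters can be set before starting the node:

    # set the global grasp database connection parameters (placeholder values)
    rosparam set /graspdb/host "127.0.0.1"
    rosparam set /graspdb/user "ros"
    rosparam set /graspdb/password "mypassword"
    rosparam set /graspdb/db "graspdb"
    # start the model generator node
    rosrun rail_recognition model_generator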

object_recognizer

Object recognition for a passed-in segmented object; returns the recognized object with its name, database ID, and grasps filled in.

Action Goal

rail_recognition/recognize_object/goal (rail_manipulation_msgs/RecognizeObjectGoal)

Action Result

rail_recognition/recognize_object/result (rail_manipulation_msgs/RecognizeObjectResult)

Parameters

/graspdb/host (string, default: "127.0.0.1")
/graspdb/port (int, default: graspdb::Client::DEFAULT_PORT)
/graspdb/user (string, default: "ros")
/graspdb/password (string, default: "")
/graspdb/db (string, default: "graspdb")

object_recognition_listener

Object recognition of segmented objects, specified by object index in the segmented object list.

Action Goal

rail_recognition/recognize (rail_manipulation_msgs/RecognizeGoal)
rail_recognition/recognize_all (rail_manipulation_msgs/RecognizeAllGoal)

Action Result

rail_recognition/recognize (rail_manipulation_msgs/RecognizeResult)
rail_recognition/recognize_all (rail_manipulation_msgs/RecognizeAllResult)

Subscribed Topics

Topic specified by the object_topic parameter (rail_manipulation_msgs/SegmentedObjectList)

Published Topics

object_recognition_listener/recognized_objects (rail_manipulation_msgs/SegmentedObjectList)
object_recognition_listener/debug (geometry_msgs/PoseArray)

Parameters

/graspdb/host (string, default: "127.0.0.1")
/graspdb/port (int, default: graspdb::Client::DEFAULT_PORT)
/graspdb/user (string, default: "ros")
/graspdb/password (string, default: "")
/graspdb/db (string, default: "graspdb")
object_topic (string, default: "/rail_segmentation/segmented_objects")
debug (bool, default: false)
use_image_recognition (bool, default: true)

rail_grasp_model_retriever

Grasp model retrieval from the grasp database via action server.

Action Goal

rail_grasp_model_retriever/retrieve_grasp_model/goal (rail_pick_and_place_msgs/RetrieveGraspModelGoal)

Action Result

rail_grasp_model_retriever/retrieve_grasp_model/result (rail_pick_and_place_msgs/RetrieveGraspModelResult)

Published Topics

rail_grasp_model_retriever/point_cloud (sensor_msgs/PointCloud2)
rail_grasp_model_retriever/poses (geometry_msgs/PoseArray)

Parameters

/graspdb/host (string, default: "127.0.0.1")
/graspdb/port (int, default: graspdb::Client::DEFAULT_PORT)
/graspdb/user (string, default: "ros")
/graspdb/password (string, default: "")
/graspdb/db (string, default: "graspdb")

metric_trainer

Generate data sets for training registration metric decision trees used in classifying pairwise point cloud registrations for the object model generation and recognition pipelines.

Action Goal

metric_trainer/train_metrics/goal (rail_pick_and_place_msgs/TrainMetricsGoal)

Action Result

metric_trainer/train_metrics/result (rail_pick_and_place_msgs/TrainMetricsResult)

Actions Called

metric_trainer/get_yes_no_feedback (rail_pick_and_place_msgs/GetYesNoFeedback)

Published Topics

metric_trainer/base_pc (sensor_msgs/PointCloud2)
metric_trainer/aligned_pc (sensor_msgs/PointCloud2)

Parameters

/graspdb/host (string, default: "127.0.0.1")
/graspdb/port (int, default: graspdb::Client::DEFAULT_PORT)
/graspdb/user (string, default: "ros")
/graspdb/password (string, default: "")
/graspdb/db (string, default: "graspdb")

collect_images

Run an instance of the 2D image recognizer to save 2D images as training/test data at every segmentation call.

Subscribed Topics

rail_segmentation/segmented_objects (rail_manipulation_msgs/SegmentedObjectList)

Parameters

collect_images/saved_data_dir (string, default: ros::package::getPath("rail_recognition")/data/saveddata)
collect_images/images_dir (string, default: ros::package::getPath("rail_recognition")/data/images)
collect_images/test_images_dir (string, default: ros::package::getPath("rail_recognition")/data/testimages)
collect_images/new_images_dir (string, default: ros::package::getPath("rail_recognition")/data/newimages)
collect_images/save_new_images (bool, default: false)
collect_images/segmented_objects_topic (string, default: /rail_segmentation/segmented_objects)

save_image_recognizer_features

Run an instance of the 2D image recognizer to build the dictionaries for the recognizer, then exit.

Subscribed Topics

rail_segmentation/segmented_objects (rail_manipulation_msgs/SegmentedObjectList)

Parameters

save_image_recognizer_features/saved_data_dir (string, default: ros::package::getPath("rail_recognition")/data/saveddata)
save_image_recognizer_features/images_dir (string, default: ros::package::getPath("rail_recognition")/data/images)
save_image_recognizer_features/test_images_dir (string, default: ros::package::getPath("rail_recognition")/data/testimages)
save_image_recognizer_features/new_images_dir (string, default: ros::package::getPath("rail_recognition")/data/newimages)
save_image_recognizer_features/save_new_images (bool, default: false)
save_image_recognizer_features/segmented_objects_topic (string, default: /rail_segmentation/segmented_objects)

train_image_recognizer

Run an instance of the 2D image recognizer to train the neural net on the training dataset, then exit.

Subscribed Topics

rail_segmentation/segmented_objects (rail_manipulation_msgs/SegmentedObjectList)

Parameters

train_image_recognizer/saved_data_dir (string, default: ros::package::getPath("rail_recognition")/data/saveddata)
train_image_recognizer/images_dir (string, default: ros::package::getPath("rail_recognition")/data/images)
train_image_recognizer/test_images_dir (string, default: ros::package::getPath("rail_recognition")/data/testimages)
train_image_recognizer/new_images_dir (string, default: ros::package::getPath("rail_recognition")/data/newimages)
train_image_recognizer/save_new_images (bool, default: false)
train_image_recognizer/segmented_objects_topic (string, default: /rail_segmentation/segmented_objects)

test_image_recognizer

Run an instance of the 2D image recognizer to test the trained neural net on the test dataset, then exit. Results of the test are printed to the terminal.

Subscribed Topics

rail_segmentation/segmented_objects (rail_manipulation_msgs/SegmentedObjectList)

Parameters

test_image_recognizer/saved_data_dir (string, default: ros::package::getPath("rail_recognition")/data/saveddata)
test_image_recognizer/images_dir (string, default: ros::package::getPath("rail_recognition")/data/images)
test_image_recognizer/test_images_dir (string, default: ros::package::getPath("rail_recognition")/data/testimages)
test_image_recognizer/new_images_dir (string, default: ros::package::getPath("rail_recognition")/data/newimages)
test_image_recognizer/save_new_images (bool, default: false)
test_image_recognizer/segmented_objects_topic (string, default: /rail_segmentation/segmented_objects)

The Recognition Pipeline

[recognition_pipeline.png: diagram of the full recognition pipeline]

The full recognition pipeline takes in a list of unrecognized segmented objects, such as what the rail_segmentation package provides. Each unrecognized object is first classified by the 2D image classifier to determine a set of high-probability candidate object classes. Models for these candidate classes are then retrieved from the object model database, handled by graspdb, and used by the point cloud recognizer to produce a final object label and a set of example grasps for picking up the now-recognized object.

Note that the initial 2D image recognition is an optional step. It can be disabled by setting the use_image_recognition parameter of the object_recognition_listener node to false. Disabling this step will significantly increase the runtime of the algorithm, however, as every model in the database will be considered by the point cloud recognizer.
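
To verify that the pipeline is producing results, the recognized objects topic listed above can be echoed while the listener is running, for example:

    # print recognized objects as they are published (default topic name)
    rostopic echo /object_recognition_listener/recognized_objects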

Setting up the 2D Image Recognizer

The 2D image recognizer can be trained in a few steps. First, an image set representative of the objects in the environment must be collected. This can be accomplished by running a segmentation node, such as the one found in rail_segmentation, and the collect_images node as follows:
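
A minimal sketch of this step, assuming rail_segmentation provides a launch file of the same name and that the collect_images executable matches the node name:

    # start a segmentation node (launch file name assumed)
    roslaunch rail_segmentation rail_segmentation.launch
    # in a second terminal, start the image collection node
    rosrun rail_recognition collect_images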

The images will be saved in the directory specified to the collect_images node. These must then be sorted by hand into a training dataset organized by object class. See the data directory for an example. The collected images, or a subset of them, can also be sorted into the test dataset.

The recognizer uses a bag-of-words model with a neural network classifier. The next step is to train the dictionary for the bag-of-words model, which can be done by running the node below. Note that if the recognizer is being retrained with added objects or data, the existing dictionaries can be reused as long as the dataset hasn't changed dramatically, in which case this step can be skipped.
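
For example (assuming the executable matches the node name):

    # build the bag-of-words dictionaries from the sorted training images, then exit
    rosrun rail_recognition save_image_recognizer_features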

Next, the neural net itself needs to be trained:
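
Again assuming the executable matches the node name:

    # train the neural network classifier on the training dataset, then exit
    rosrun rail_recognition train_image_recognizer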

The 2D image recognizer is now trained and ready to use. For verification of its accuracy, the test dataset can be used for evaluation:
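
For example (executable name assumed to match the node name):

    # evaluate the trained classifier on the test dataset; results print to the terminal
    rosrun rail_recognition test_image_recognizer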

Setting up the Point Cloud Recognizer

See the tutorials section of rail_pick_and_place for details on how to set up an object model database, provide grasp demonstrations, train new object models, and refine existing object models.

Installation

The rail_pick_and_place packages can be installed from source with the following commands:
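
A typical from-source install into a catkin workspace might look like the following; the repository URL and workspace path are assumptions:

    # clone the source into a catkin workspace (repository URL assumed)
    cd ~/catkin_ws/src
    git clone https://github.com/GT-RAIL/rail_pick_and_place.git
    # install dependencies and build the workspace
    cd ~/catkin_ws
    rosdep install --from-paths src --ignore-src -y
    catkin_make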

Startup

The rail_recognition package includes a launch file that starts the object_recognition_listener node with the various database, segmentation, and recognition parameters set, executed with:
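
Assuming the launch file is named after the node, as described below for the other launch files:

    roslaunch rail_recognition object_recognition_listener.launch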

Other major nodes in the package have similar launch files with their parameters set, listed below; each launch file corresponds to the node of the same name.
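
For reference, the corresponding launch files would be (names assumed to follow the node names):

    model_generator.launch
    rail_grasp_model_retriever.launch
    metric_trainer.launch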

The set of nodes used for training the 2D image recognizer can be run as follows:
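
Assuming launch files named after the nodes described in the setup section above, for example:

    roslaunch rail_recognition collect_images.launch
    roslaunch rail_recognition save_image_recognizer_features.launch
    roslaunch rail_recognition train_image_recognizer.launch
    roslaunch rail_recognition test_image_recognizer.launch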

Model generation can be run with an rviz plugin for visualization found in rail_pick_and_place_tools.
