## For instruction on writing tutorials
## http://www.ros.org/wiki/WritingTutorials
####################################
##FILL ME IN
####################################
## for a custom note with links:
## note = Some of the files in here haven't been released yet and may not work. Sorry!
## for the canned note of "This tutorial assumes that you have completed the previous tutorials:" just add the links
## note.0= [[arm_navigation/Tutorials/Planning Scene/Adding Virtual Objects to the Planning Scene | Adding Virtual Objects]]
## descriptive title for the tutorial
## title = Attaching Virtual Objects to the Robot
## multi-line description to be displayed in search
## description = In this tutorial we describe attaching virtual objects to the robot's body - these are objects that are assumed to move with the robot instead of being static in the environment.
## the next tutorial description (optional)
## next =
## links to next tutorial (optional)
## next.0.link=
## next.1.link=
## what level user is this tutorial for
## level= IntermediateCategory
## keywords = attached_objects planning_scene electric
####################################
<>
<>

== Setting up the tutorial ==

In one terminal run the following:
{{{
roslaunch pr2_arm_navigation_tutorials pr2_floorobj_world.launch
}}}
In another terminal run:
{{{
roslaunch pr2_arm_navigation_tutorials planning_scene_architecture.launch
}}}
This will bring up an Rviz window.

== Running the tutorial ==

This tutorial uses the executable in ''pr2_arm_navigation_tutorials'' called ''get_state_validity_virtual_attached_object''. In another terminal, run the following command:
{{{
rosrun pr2_arm_navigation_tutorials get_state_validity_virtual_attached_object
}}}
You should see something like the following (note that the object may flicker between green and brown in Rviz):

{{attachment:virtual_attached_object_one.png | current state | width=800}}

In this example we've attached a pole to one of the robot's right fingers. The link to which the object is attached is very important. When we attach the object, a fixed transform is computed between the attachment link and the object - the object is not required to actually touch the attachment link, though it is assumed to be ok if the two do touch. Whenever forward kinematics subsequently updates the position of the attachment link, the position of the attached object is updated along with it, using that fixed transform. Thus if we chose to attach the object to the base link of the robot and then moved the robot's arm, the object would not move, as the position of the base link would not have changed. Generally the appropriate link to attach to is one that is actually in contact with the object, like the fingertip pads.
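Conceptually, the attachment amounts to caching a fixed link-to-object transform at attach time and composing it with each new forward-kinematics result. The sketch below illustrates only the math; the struct and helper functions are hypothetical, not the actual planning_environment internals:

{{{
#!cplusplus
#include <tf/transform_datatypes.h>

// Hypothetical bookkeeping for an attached object - the real logic lives
// inside planning_environment/planning_models; this just shows the idea.
struct AttachedObjectSketch {
  // Fixed offset computed once at attach time:
  //   T_link_object = T_world_link^-1 * T_world_object
  tf::Transform link_to_object;
};

AttachedObjectSketch attach(const tf::Transform& world_to_link,
                            const tf::Transform& world_to_object) {
  AttachedObjectSketch a;
  a.link_to_object = world_to_link.inverse() * world_to_object;
  return a;
}

// After forward kinematics yields a new world pose for the attachment
// link, the object's world pose follows from the cached fixed transform.
tf::Transform objectPoseInWorld(const tf::Transform& world_to_link,
                                const AttachedObjectSketch& a) {
  return world_to_link * a.link_to_object;
}
}}}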
The second important part of the attached object specification is the touch links - the set of links that the object is allowed to touch. To illustrate, try the following command:
{{{
rosrun pr2_arm_navigation_tutorials get_state_validity_virtual_attached_object 0
}}}
You should get the following:

{{attachment:virtual_attached_object_two.png | no touch links | width=800}}

In this case we're in the same position, but by passing 0 as the first argument we haven't added any touch links. This means that the attached object is now in collision with other parts of the gripper. We need to tell the collision checking algorithm that it's ok if the object contacts the gripper, though it's not ok if it touches other parts of the right arm - we don't want to swing the pole into the upper arm, for instance. By default the only link added to the touch links is the attachment link itself. Now quit and try the following:
{{{
rosrun pr2_arm_navigation_tutorials get_state_validity_virtual_attached_object 1 .8
}}}
You should get something that looks like this:

{{attachment:virtual_attached_object_three.png | ok pose | width=800}}

Now we're adding the entire right gripper to the touch links and also specifying a second argument, which maps to a position for the ''r_wrist_roll_joint'' - different values spin the wrist around. In this case we've attached the object to the robot's gripper, and when we update the position of the ''r_wrist_roll_joint'', which is further up the kinematic tree, new positions are computed for the fingers, and thus a new position for the object. The above pose for the wrist results in a valid state for the pole, but this command won't:
{{{
rosrun pr2_arm_navigation_tutorials get_state_validity_virtual_attached_object 1 .9
}}}
You should get something that looks like this:

{{attachment:virtual_attached_object_four.png | collision environment | width=800}}

Rotating the wrist a little further brings the attached object into contact with the pole in the environment, which isn't allowed. Finally, quit and try this:
{{{
rosrun pr2_arm_navigation_tutorials get_state_validity_virtual_attached_object 1 -1.55
}}}
You should get something that looks like this:

{{attachment:virtual_attached_object_five.png | self-collision | width=800}}

Spinning the other way brings us into contact with the other gripper. Though we've specified that it's ok for the attached object to touch the right gripper, it's not ok if it touches the left.
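If you don't want to whitelist a whole group, touch links can also be listed link by link. Here's a minimal sketch - the individual gripper link names are assumptions about the PR2 URDF, and the tutorial code below uses the ''r_end_effector'' group instead:

{{{
#!cplusplus
#include <arm_navigation_msgs/AttachedCollisionObject.h>

// Sketch: naming touch links individually instead of using the
// r_end_effector group (link names assumed from the PR2 URDF).
arm_navigation_msgs::AttachedCollisionObject makeAttachedPole() {
  arm_navigation_msgs::AttachedCollisionObject att_object;
  att_object.link_name = "r_gripper_r_finger_tip_link";
  // Contact between the attached object and these links is ignored
  // by collision checking:
  att_object.touch_links.push_back("r_gripper_palm_link");
  att_object.touch_links.push_back("r_gripper_l_finger_link");
  att_object.touch_links.push_back("r_gripper_l_finger_tip_link");
  att_object.touch_links.push_back("r_gripper_r_finger_link");
  return att_object;
}
}}}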
== The code ==

{{{
#!cplusplus block=tree_block
#include <ros/ros.h>
#include <arm_navigation_msgs/SetPlanningSceneDiff.h>
#include <planning_environment/models/collision_models.h>

static const std::string SET_PLANNING_SCENE_DIFF_NAME = "/environment_server/set_planning_scene_diff";

int main(int argc, char **argv){
  ros::init (argc, argv, "get_state_validity_test");
  ros::NodeHandle rh;

  ros::Publisher vis_marker_publisher_;
  ros::Publisher vis_marker_array_publisher_;

  vis_marker_publisher_ = rh.advertise<visualization_msgs::Marker>("state_validity_markers", 128);
  vis_marker_array_publisher_ = rh.advertise<visualization_msgs::MarkerArray>("state_validity_markers_array", 128);

  ros::service::waitForService(SET_PLANNING_SCENE_DIFF_NAME);
  ros::ServiceClient set_planning_scene_diff_client = rh.serviceClient<arm_navigation_msgs::SetPlanningSceneDiff>(SET_PLANNING_SCENE_DIFF_NAME);

  arm_navigation_msgs::SetPlanningSceneDiff::Request planning_scene_req;
  arm_navigation_msgs::SetPlanningSceneDiff::Response planning_scene_res;

  bool add_touch = true;
  if(argc > 1) {
    std::stringstream s(argv[1]);
    s >> add_touch;
  }

  //add a cylinder into the collision space attached to the r_gripper_r_finger_tip_link
  arm_navigation_msgs::AttachedCollisionObject att_object;
  att_object.link_name = "r_gripper_r_finger_tip_link";
  att_object.object.id = "attached_pole";
  att_object.object.operation.operation = arm_navigation_msgs::CollisionObjectOperation::ADD;
  att_object.object.header.frame_id = "r_gripper_r_finger_tip_link";
  att_object.object.header.stamp = ros::Time::now();
  arm_navigation_msgs::Shape object;
  object.type = arm_navigation_msgs::Shape::CYLINDER;
  object.dimensions.resize(2);
  object.dimensions[0] = .02;
  object.dimensions[1] = 1.2;
  geometry_msgs::Pose pose;
  pose.position.x = 0.0;
  pose.position.y = 0.0;
  pose.position.z = 0.0;
  pose.orientation.x = 0;
  pose.orientation.y = 0;
  pose.orientation.z = 0;
  pose.orientation.w = 1;
  att_object.object.shapes.push_back(object);
  att_object.object.poses.push_back(pose);
  if(add_touch) {
    att_object.touch_links.push_back("r_end_effector");
  }
  planning_scene_req.planning_scene_diff.attached_collision_objects.push_back(att_object);

  if(!set_planning_scene_diff_client.call(planning_scene_req, planning_scene_res)) {
    ROS_WARN("Can't get planning scene");
    return -1;
  }

  planning_environment::CollisionModels collision_models("robot_description");
  planning_models::KinematicState* state = collision_models.setPlanningScene(planning_scene_res.planning_scene);

  std::vector<std::string> arm_names = collision_models.getKinematicModel()->getModelGroup("right_arm")->getUpdatedLinkModelNames();
  std::vector<std::string> joint_names = collision_models.getKinematicModel()->getModelGroup("right_arm")->getJointModelNames();

  if(argc > 2) {
    std::stringstream s(argv[2]);
    double val;
    s >> val;
    std::map<std::string, double> nvalues;
    nvalues["r_wrist_roll_joint"] = val;
    state->setKinematicState(nvalues);
  }

  std_msgs::ColorRGBA good_color, collision_color, joint_limits_color;
  good_color.a = collision_color.a = joint_limits_color.a = .8;
  good_color.g = 1.0;
  collision_color.r = 1.0;
  joint_limits_color.b = 1.0;

  std_msgs::ColorRGBA point_markers;
  point_markers.a = 1.0;
  point_markers.r = 1.0;
  point_markers.g = .8;

  std_msgs::ColorRGBA color;
  visualization_msgs::MarkerArray arr;
  if(!state->areJointsWithinBounds(joint_names)) {
    color = joint_limits_color;
  } else if(collision_models.isKinematicStateInCollision(*state)) {
    color = collision_color;
    collision_models.getAllCollisionPointMarkers(*state, arr, point_markers, ros::Duration(0.2));
  } else {
    color = good_color;
  }

  collision_models.getRobotMarkersGivenState(*state, arr, color, "right_arm", ros::Duration(0.2), &arm_names);
  collision_models.getAttachedCollisionObjectMarkers(*state, arr, "right_arm", color, ros::Duration(0.2), false, &arm_names);

  while(ros::ok()) {
    vis_marker_array_publisher_.publish(arr);
    ros::spinOnce();
    ros::Duration(.1).sleep();
  }
  collision_models.revertPlanningScene(state);
  ros::shutdown();
}
}}}
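If you later want to get rid of the attached object, the same service accepts a detach operation. Below is a hedged sketch, not part of the tutorial executable, that reuses the ''set_planning_scene_diff_client'' from the code above; ''DETACH_AND_ADD_AS_OBJECT'' asks the environment server to detach the object from the robot and re-add it to the static collision environment at its current location:

{{{
#!cplusplus
// Sketch only: detach the pole attached earlier. Reuses the
// set_planning_scene_diff_client declared in the tutorial code above.
arm_navigation_msgs::SetPlanningSceneDiff::Request detach_req;
arm_navigation_msgs::SetPlanningSceneDiff::Response detach_res;

arm_navigation_msgs::AttachedCollisionObject det_object;
det_object.object.id = "attached_pole";
det_object.link_name = "r_gripper_r_finger_tip_link";
// Detach from the robot and put the object back into the static
// collision environment at its current location:
det_object.object.operation.operation =
    arm_navigation_msgs::CollisionObjectOperation::DETACH_AND_ADD_AS_OBJECT;
detach_req.planning_scene_diff.attached_collision_objects.push_back(det_object);

if(!set_planning_scene_diff_client.call(detach_req, detach_res)) {
  ROS_WARN("Detach request failed");
}
}}}

Using ''CollisionObjectOperation::REMOVE'' instead would delete the attached object outright rather than leaving it in the environment.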
=== Understanding the Code ===

The code is largely identical to the previous tutorial except for a couple of places. Here's where we set up the attached object:

<>

The attached object specification contains an object as well as a few additional fields for link_name and touch_links. In this case we set the link name to ''r_gripper_r_finger_tip_link'', which is one of the links that will generally be in contact with an object. We also specify the attachment pose in the frame of the attachment link, though any other frame can be used - the fixed attachment transform will be computed given the planning scene robot state. Finally, we set the entire ''r_end_effector'' group for the touch links - we could also specify individual links if we wanted to.

'''NOTE''': if porting this tutorial to another robot, be sure that the link to which you attach has a collision model; otherwise attachment will fail.

<>

Finally, at the bottom we add the markers for attached objects to our marker array. The brown marker for the attached object is produced by the planning_scene_validity_server, while this program produces the green and red markers.

## AUTOGENERATED DO NOT DELETE
## TutorialCategory
## FILL IN THE STACK TUTORIAL CATEGORY HERE