Note: This tutorial assumes that you have completed the previous tutorial: Add Kinect camera frame to the PR2 urdf tree.
Please ask about problems and questions regarding this tutorial on answers.ros.org. Don't forget to include in your question the link to this page, the versions of your OS & ROS, and also add appropriate tags.
Calibrate camera pose to the map frame
Description: Suppose you have a camera mounted somewhere in a room; this tutorial teaches you how to find the camera's pose relative to the map frame.
Tutorial Level: BEGINNER
Next Tutorial: Simplify the calibration process for multiple cameras
In a room, a camera is rigidly mounted on the wall, and we have a map of the room. We want to localize the camera in the room, i.e. find the pose of the camera relative to the room's map frame. The strategy is to use the PR2 as a bridge: with the PR2 2D navigation stack and robot_state_publisher, we can find the transformation between the map frame and any camera frame on board the PR2. Then, with camera_pose_calibration and camera_pose_toolkits, we can find the transformation between any PR2 camera frame and the wall camera frame.
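The bridging step above is just a composition of homogeneous transforms: the map-to-wall-camera transform is the map-to-PR2-camera transform (from localization) multiplied by the PR2-camera-to-wall-camera transform (from calibration). A minimal sketch with made-up numbers, using plain Python lists rather than tf:

```python
import math

def make_transform(yaw, tx, ty, tz):
    """Build a 4x4 homogeneous transform: rotation about z by `yaw`
    (radians), then translation (tx, ty, tz)."""
    c, s = math.cos(yaw), math.sin(yaw)
    return [[c, -s, 0.0, tx],
            [s,  c, 0.0, ty],
            [0.0, 0.0, 1.0, tz],
            [0.0, 0.0, 0.0, 1.0]]

def compose(a, b):
    """Matrix product a * b: apply b first, then a."""
    return [[sum(a[i][k] * b[k][j] for k in range(4)) for j in range(4)]
            for i in range(4)]

# Illustrative numbers only: map -> PR2 head camera comes from AMCL plus
# robot_state_publisher; head camera -> wall camera comes from
# camera_pose_calibration.
T_map_headcam = make_transform(math.pi / 2, 2.0, 1.0, 1.2)
T_headcam_wallcam = make_transform(0.0, 0.5, 0.0, 0.3)

# map -> wall camera, which is what this tutorial ultimately produces.
T_map_wallcam = compose(T_map_headcam, T_headcam_wallcam)
print([round(v, 3) for v in (T_map_wallcam[0][3],
                             T_map_wallcam[1][3],
                             T_map_wallcam[2][3])])  # → [2.0, 1.5, 1.5]
```

In practice tf performs this composition for you once both transforms are being published; the sketch only shows why localizing the robot and calibrating the cameras together pins down the wall camera in the map frame.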
Run PR2 Navigation
ssh to a PR2 computer, bring up the robot, then run:
$ roslaunch wg_pr2_2dnav pr2_2dnav.launch
This launches everything the PR2 needs to localize and autonomously navigate in the pre-mapped Willow Garage facility. In our case, however, we only need the localization function.
Launch the Kinect on PR2's head
Suppose the head-mounted Kinect's RGB optical frame is already in the PR2's URDF tree. Remember to modify the default Kinect launch file, as we did in the previous tutorial. We will use the Kinect RGB camera on the PR2's head as the urdf camera. Launch it from the PR2 computer.
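As a rough sketch of what such a modified launch file might look like (the argument names follow openni_launch, and the namespace and frame id below are illustrative; use whatever you set up in the previous tutorial):

```xml
<launch>
  <!-- Illustrative sketch only: run the head Kinect under its own
       namespace, so its topics appear under /kinect_pr2/rgb/..., and stamp
       its images with the RGB optical frame that was added to the PR2 URDF
       in the previous tutorial. -->
  <include file="$(find openni_launch)/launch/openni.launch">
    <arg name="camera" value="kinect_pr2"/>
    <arg name="rgb_frame_id" value="/head_mount_kinect_rgb_optical_frame"/>
  </include>
</launch>
```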
Launch the Kinect on the wall
You still need the modified version of the Kinect launch file, but make sure to use a different namespace so that you can tell it apart from the Kinect on the PR2's head. Launch the wall Kinect from a local machine, with ROS_MASTER_URI pointed at the PR2's ROS master.
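For example, on the local machine driving the wall-mounted Kinect (the hostname `c1` here is a stand-in; substitute your own robot's name):

```shell
# Point this shell at the PR2's ROS master before launching anything,
# so the wall Kinect's topics and tf frames join the robot's ROS graph.
export ROS_MASTER_URI=http://c1:11311
echo "$ROS_MASTER_URI"
```

Then, in the same shell, run your modified Kinect launch file under its own namespace (e.g. kinect_wall) so its topics do not collide with the head-mounted Kinect's.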
On your local machine,
$ export ROS_MASTER_URI=http://<your_pr2_name>:11311
$ rosrun rviz rviz
In the RViz Displays panel, add TF, Map, Laser Scan (base scanner), Pose Array (particles), and Robot Model. Click the '2D Pose Estimate' button to give an initial guess of the robot's current location, then drive the robot around the room so that AMCL can find the pose of the robot against the known 2D map.
After the robot is localized in the map (or more or less so), drive it in front of the wall-mounted Kinect, as the picture at the beginning of the page shows. Now in RViz, you should be able to see something similar to the left image below:
Launch camera pose calibration along with camera pose toolkits
First, copy urdf_updater.launch from the camera_pose_toolkits/launch folder (the one you used in the previous tutorial) to your own directory and rename it 'single_cam_localization.launch'. Then, since there is no input URDF file this time, delete the following lines from it:
...
<arg name="urdf_input_file" />
<arg name="urdf_output_file" />
...
<param name="urdf_output_filename" value="$(arg urdf_output_file)" />
<param name="robot_description" textfile="$(arg urdf_input_file)" />
...
<node pkg="camera_pose_toolkits" type="urdf_writer_node" name="urdf_writer" output="screen"/>
Then, on the PR2 computer:
$ cd [your_directory]
$ roslaunch single_cam_localization.launch new_cam_ns:=kinect_wall/rgb urdf_cam_ns:=/kinect_pr2/rgb mounting_frame:=/map checker_rows:=4 checker_cols:=5 checker_size:=0.0249 headless:=1
You can pick your own Kinect namespaces, as long as the two are different.
After you measure the checkerboard and receive the "successfully ran optimization" message, the wall-mounted camera frame will pop up in RViz (the right image above).
Here is the same view from a different angle in RViz: