
How to collect data

Description: How to use the person_data package to collect data for the person detection and tracking data set.

This tutorial explains how to use the person_data package to collect data for the person detection and tracking data set.

Data Recorded

  1. narrow and wide stereo raw images
  2. TF frames (including localization on a map)
  3. base_scan
  4. tilting laser scans and the tilting laser scanner signal
  5. mechanism state (recorded as a precaution; it may turn out to be unnecessary)

Set-up

This tutorial assumes that you have set up your .bashrc on the robot to include the following (replace prX1 with the name of your robot's primary computer):

export ROS_MASTER_URI=http://prX1:11311
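Once the robot's master is up, you can quickly confirm that a machine can reach it. rostopic is a standard ROS tool; the topic list you see will depend on what is running:

# run from any machine whose ROS_MASTER_URI points at the robot
rostopic list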

Compile Offboard

rosmake stereo_image_proc rviz image_view pr2_bringup pr2_dashboard person_data

Compile Onboard

rosmake pr2_bringup person_data

Make A Map

Prep on the robot (roscore, pr2.launch, and the teleop launch each need their own terminal):

roscore
roscd pr2_bringup
roslaunch pr2.launch
roscd pr2_teleop
roslaunch teleop_joystick.launch

Off the robot, to view the map:

rosmake nav_view
rosrun nav_view nav_view /static_map:=/dynamic_map

On the robot, to create the map:

rosrun gmapping slam_gmapping scan:=base_scan _delta:=0.025

To save the raw laser data as a backup, run the following on the robot:

rosrecord -f maplaserdata /base_scan /tf

Save the map using the following (map_saver writes the map as an image, map.pgm, plus a metadata file, map.yaml, in the current directory):

rosrun map_server map_saver map:=dynamic_map

To record

Run each of the following command blocks in its own terminal:

roscore

roscd pr2_bringup
roslaunch pr2.launch

Look at the images and make sure the brightness is OK; change the camera settings if necessary:

ROS_NAMESPACE=narrow_stereo rosrun image_view image_view
rosrun dynamic_reconfigure reconfigure_gui

In collecting/person_data.launch, change the map name to point at the newly created map.
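If you are unsure where the map is referenced, one way to find the line to edit (a sketch; it assumes the launch file refers to the map by name, which this page does not confirm):

roscd person_data
grep -n map collecting/person_data.launch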

roscd person_data
roslaunch collecting/person_data.launch

Localize the robot on the map before you start recording!
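One way to check that localization is working, assuming person_data.launch starts a localizer such as amcl (an assumption; check the launch file): set the robot's initial pose in nav_view, then watch the pose estimate update as you drive:

# prints the current pose estimate if amcl is running (assumption)
rostopic echo /amcl_pose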

Collect data for possible future calibration:

  • a stationary chessboard
  • a color calibration target

Data will be recorded to the /removable drive on prX1, with names of the form: location-<mm>-<dd>-<yyyy>-s<#>. Rename the files to replace location with the actual location, as in the example below.
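For example, to rename a session recorded in a (hypothetical) cafeteria; the .bag extension is an assumption based on the rosrecord output:

mv /removable/location-01-14-2010-s1.bag /removable/cafeteria-01-14-2010-s1.bag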

Joystick controls

To drive the robot, use the usual controls.

The bottom-left shoulder button on the joystick has been overridden to control recording. Push and hold it to enable the recording controls; while holding it, press the circle button to start recording and the square button to stop. The joy_record node prints console output on each recording state change. Note that you only need to hold the shoulder button while starting or stopping a recording, not for the duration of the recording, and you should not press the driving buttons at the same moment.
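If the buttons do not behave as described, you can watch the raw joystick messages to check the button mapping (this assumes the teleop joystick driver publishes on the joy topic, which is the usual set-up):

rostopic echo joy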

After recording

  1. Copy data from the robot to /wg/osx/person_data

  2. The map is an image whose location can be found in <your_bag_location>/map_server.xml. Copy the image to <your_bag_location> and edit <your_bag_location>/map_server.xml to include the new image location.

  3. Check the bag (one way is sketched below).
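One way to sanity-check a bag (a sketch; rosplay is the playback tool from the rosrecord era, and the image topic is taken from the list in the next section):

# terminal 1: play the bag back
rosplay <your_bag_location>/<bag_name>.bag
# terminal 2: confirm that images were recorded and arrive at a sensible rate
rostopic hz /wide_stereo/left/image_raw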

Topics

The following topics should be recorded:

  1. /base_scan
  2. /tf
  3. /narrow_stereo/left/image_raw
  4. /narrow_stereo/left/camera_info
  5. /narrow_stereo/right/image_raw
  6. /narrow_stereo/right/camera_info
  7. /wide_stereo/left/image_raw
  8. /wide_stereo/left/camera_info
  9. /wide_stereo/right/image_raw
  10. /wide_stereo/right/camera_info
  11. /tilt_scan
  12. /laser_tilt_controller/laser_scanner_signal
  13. /mechanism_state
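Recording is normally triggered through the joystick (see above), but if you ever need to record these topics by hand, the invocation mirrors the one used for the map laser data earlier (the file prefix here is just an example):

rosrecord -f persondata /base_scan /tf \
  /narrow_stereo/left/image_raw /narrow_stereo/left/camera_info \
  /narrow_stereo/right/image_raw /narrow_stereo/right/camera_info \
  /wide_stereo/left/image_raw /wide_stereo/left/camera_info \
  /wide_stereo/right/image_raw /wide_stereo/right/camera_info \
  /tilt_scan /laser_tilt_controller/laser_scanner_signal /mechanism_state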

The map is stored as an image, which can be served during playback using <your_bag_location>/map_server.xml.

TO DO

  • Align the stereo point cloud and the laser point cloud manually for now. Perhaps collect data so that they can be properly calibrated later.
  • Calibration check for the stereo? The camera_offsetter needs to be rewritten to work on point clouds and to write the TF frames for the cameras directly.
  • Extract the laser snapshots corresponding to images (the last x seconds before each image time stamp; to find x, look at pr2_mechanism_controllers/scripts/send_laser_traj_cmd_ms2.py).
  • Scripts for evaluating results.

Using Mech Turk to Generate Labels

Information about the (internal) process of setting up a Mechanical Turk interface for labeling can be found here.

Converting to standard formats

  • Play back a bag, process the stereo images, and write out the left image.
    • IPR
  • Converting uint8 images to png: person_data/launch/save_images.launch

    • Runs: person_data/scripts/save_images.py

    • Will save all of the images as PNGs in <bag_dir>/<topic_with_underscores>/png/<time>.png (a usage sketch follows below)
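A usage sketch (the launch file name comes from the item above; any parameters it needs, such as the bag directory, are not documented on this page, so check the launch file before running):

roslaunch person_data save_images.launch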
