
Making collision maps from self-filtered laser data

Description: This tutorial introduces the processing pipeline that takes scans from the tilting laser on the PR2, self-filters the robot from the data, and constructs a collision map that can then be used for checking potential collisions.

Tutorial Level: INTERMEDIATE

Next Tutorial: Checking a given state for collisions

Note: The collision map is generated by a node from the collision_map package, which requires input of type PointCloud. If using an OpenNI device that outputs only PointCloud2, you should use a node from collider or octomap_server instead.

This tutorial focuses on using several important packages in the ROS sensor and collision environment pipeline to populate the collision environment with filtered 3D sensor data. We use the PR2's tilting laser as the primary sensor: we start a node that makes the tilting laser nod, filter the raw data for shadow points while forming a point cloud, pass the data through a self-filter to remove points that belong to the robot's own body, take full-scan snapshots, and finally form the snapshotted point clouds into collision maps. These collision maps will be used in the collision environment in subsequent tutorials to check potential positions of the robot for collisions.

ROS Setup

Make sure to set an environment variable called ROBOT to sim. E.g., in a bash shell,

 export ROBOT=sim

Finally, make sure that you have run rosmake on pr2_arm_navigation_tutorials, as this will build all the packages used in this tutorial:
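 rosmake pr2_arm_navigation_tutorials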

Launching the PR2 in simulation

We'll be using the PR2 to illustrate all the concepts in this tutorial, so the first step is to launch the PR2 in simulation. Assuming that the pr2_arm_navigation_tutorials package is in your ROS_PACKAGE_PATH, run:

roslaunch pr2_arm_navigation_tutorials pr2_floorobj_world.launch

Launching rviz with preconfigured topics

In order to view the data topics used in this tutorial, run the following command to launch rviz with a preconfigured set of topics:

roslaunch pr2_arm_navigation_tutorials rviz_collision_tutorial_1.launch

In rviz you should see a list of topics associated with the laser. If you enable the third topic, called "Laser Scan", you should see laser hits on the pole; they do not move because we have not started the laser nodding yet.

Composing the tilt laser processing file

In this section we discuss the laser-perception launch file, which you can find in pr2_arm_navigation_tutorials/launch/laser-perception.launch. It launches a number of nodes that culminate in forming a self- and shadow-filtered collision map from a full laser scan snapshot.

The file laser-perception.launch contains the following code:

   1 <launch>
   2 
   3   <!-- set laser tilt rate -->
   4   <node pkg="pr2_mechanism_controllers" type="send_periodic_cmd_srv.py" name="laser_tilt_controller_3dnav_params" args="laser_tilt_controller linear 3 .75 .25" />
   5   
   6 
   7   <!-- convert tilt laser scan to pointcloud -->
   8   <node pkg="laser_filters" type="scan_to_cloud_filter_chain" output="screen" name="scan_to_cloud_filter_chain_tilt_laser">
   9     <remap from="scan" to="tilt_scan"/>
  10     <remap from="cloud_filtered" to="tilt_scan_cloud"/>
  11     <param name="target_frame" type="string" value="base_link"/>
  12     <rosparam command="load" file="$(find pr2_arm_navigation_tutorials)/config/shadow_filter.yaml" />
  13   </node>
  14 
  15   <!-- need to clear known objects from scans -->
  16   <node pkg="planning_environment" type="clear_known_objects" name="laser_clear_objects" output="screen">
  17     <remap from="cloud_in" to="tilt_scan_cloud" />
  18     <remap from="cloud_out" to="tilt_scan_cloud_known" />
  19     <param name="sensor_frame" type="string" value="laser_tilt_mount_link" />              
  20 
  21     <param name="fixed_frame" type="string" value="base_link" />
  22     <param name="object_padd" type="double" value="0.02" />
  23     <param name="object_scale" type="double" value="1.0" />
  24     
  25 
  26   </node>
  27 
  28   <node pkg="robot_self_filter" type="self_filter" respawn="true" name="tilt_laser_self_filter" output="screen">
  29 
  30      <!-- The topic for the input cloud -->
  31      <remap from="cloud_in" to="tilt_scan_cloud_known" />
  32      
  33 
  34      <!-- The topic for the output cloud -->
  35      <remap from="cloud_out" to="tilt_scan_cloud_filtered" />
  36 
  37      <!-- The frame of the sensor used to obtain the data to be
  38        filtered; This parameter is optional. If it is not specified,
  39        shadow points will be considered outside -->
  40      <param name="sensor_frame" type="string" value="laser_tilt_link" />
  41 
  42      <param name="min_sensor_dist" type="double" value=".05"/>
  43      <param name="self_see_default_padding" type="double" value=".01"/>
  44      <param name="self_see_default_scale" type="double" value="1.0"/>
  45 
  46    </node>
  47    
  48 
  49   <!-- assemble pointcloud into a full world view -->
  50   <node pkg="laser_assembler" type="point_cloud_assembler" output="screen"  name="point_cloud_assembler">
  51     <remap from="cloud" to="tilt_scan_cloud_filtered"/>
  52     <param name="tf_cache_time_secs" type="double" value="10.0" />
  53     <param name="tf_tolerance_secs" type="double" value="0.0" />
  54     <param name="max_clouds" type="int" value="400" />
  55     <param name="ignore_laser_skew" type="bool" value="true" />
  56     <param name="fixed_frame" type="string" value="base_link" />
  57   </node>
  58   
  59 
  60   <!-- make snapshots whenever laser reaches the top or bottom of a nod -->
  61   <node pkg="pr2_arm_navigation_perception" type="pr2_laser_snapshotter" output="screen" name="snapshotter">
  62     <remap from="laser_scanner_signal" to="laser_tilt_controller/laser_scanner_signal"/>
  63     <remap from="build_cloud" to="point_cloud_assembler/build_cloud" />
  64     <remap from="full_cloud" to="full_cloud_filtered" />
  65   </node>
  66 
  67   <!-- make point cloud snapshots into collision maps -->
  68   <node pkg="collision_map" type="collision_map_self_occ_node" name="collision_map_self_occ_node" respawn="true" output="screen">
  69     
  70     <!-- The default padding to be added for the body parts the robot can see -->
  71     <param name="self_see_default_padding" type="double" value="0.02" />
  72     
  73     <!-- The default scaling to be added for the body parts the robot can see -->
  74     <param name="self_see_default_scale" type="double" value="1.0" />
  75      
  76     <!-- if someone asks for a stable map, publish the static map on the dynamic
  77     map topic and no longer publish dynamic maps -->
  78     <param name="publish_static_over_dynamic_map" type="bool" value="true" />
  79     <param name="fixed_frame" type="string" value="base_link" />
  80 
  81     <!-- define a box of size 2x3x4 around (1.1,  0, 0) in the robot frame -->
  82     <param name="robot_frame" type="string" value="base_link" />
  83 
  84     <param name="origin_x" type="double" value="1.1" />
  85     <param name="origin_y" type="double" value="0.0" />
  86     <param name="origin_z" type="double" value="0.0" />
  87     
  88     <param name="dimension_x" type="double" value="1.0" />
  89     <param name="dimension_y" type="double" value="1.5" />
  90     <param name="dimension_z" type="double" value="2.0" />
  91 
  92     <!-- set the resolution (1.0 cm) -->
  93     <param name="resolution" type="double" value="0.01" />
  94     
  95     <!-- cloud sources -->
  96     <rosparam command="load" file="$(find pr2_arm_navigation_tutorials)/config/collision_map_sources.yaml" />   
  97   </node>
  98 
  99 </launch>

Launching the file

With this file in pr2_arm_navigation_tutorials, run the following command:

roslaunch pr2_arm_navigation_tutorials laser-perception.launch

When you launch this file the laser will begin to nod and the rest of the processing pipeline will start up. The sections below walk through each node in the launch file.

Making the laser tilt

   3   <!-- set laser tilt rate -->
   4   <node pkg="pr2_mechanism_controllers" type="send_periodic_cmd_srv.py" name="laser_tilt_controller_3dnav_params" args="laser_tilt_controller linear 3 .75 .25" />

The first node in the launch file doesn't have anything to do with sensor processing; it just makes the laser nod with a particular specification. See pr2_mechanism_controllers/LaserScannerTrajController for more information.
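Under the hood, this script just calls the tilt controller's set_periodic_cmd service. For illustration, a minimal rospy client doing the same thing might look like the sketch below; the service and message types (pr2_msgs/SetPeriodicCmd and pr2_msgs/PeriodicCmd) and the meaning of each field reflect our reading of the controller's interface, so treat this as a sketch rather than a drop-in replacement:

   #!/usr/bin/env python
   # Sketch: command the tilting laser via the controller's
   # set_periodic_cmd service (what send_periodic_cmd_srv.py does for us).
   import rospy
   from pr2_msgs.msg import PeriodicCmd
   from pr2_msgs.srv import SetPeriodicCmd

   rospy.init_node('laser_tilt_commander')
   rospy.wait_for_service('laser_tilt_controller/set_periodic_cmd')
   set_cmd = rospy.ServiceProxy('laser_tilt_controller/set_periodic_cmd',
                                SetPeriodicCmd)

   cmd = PeriodicCmd()
   cmd.header.stamp = rospy.get_rostime()
   cmd.profile = 'linear'   # sweep profile, as in the launch file args
   cmd.period = 3.0         # seconds per full nod
   cmd.amplitude = 0.75     # radians swept
   cmd.offset = 0.25        # radians, center of the sweep
   set_cmd(command=cmd)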

If you look in rviz you should now see the robot's laser tilting, and the data associated with the laser should be moving up and down over the scene.

Turning the tilt laser data into a shadow-filtered point cloud

   7   <!-- convert tilt laser scan to pointcloud -->
   8   <node pkg="laser_filters" type="scan_to_cloud_filter_chain" output="screen" name="scan_to_cloud_filter_chain_tilt_laser">
   9     <remap from="scan" to="tilt_scan"/>
  10     <remap from="cloud_filtered" to="tilt_scan_cloud"/>
  11     <param name="target_frame" type="string" value="base_link"/>
  12     <rosparam command="load" file="$(find pr2_arm_navigation_tutorials)/config/shadow_filter.yaml" />
  13   </node>

The next node in the launch file serves two functions. First, the raw data from the laser scanner is not in the form of a point cloud; the scan_to_cloud_filter_chain takes each raw laser scan and forms it into a point cloud. Second, as part of this filter chain we can apply additional filters to do useful things. One form of noise in the raw laser scans is shadow points, which occur when the laser hits an edge in the scene and returns an average value between the edge and whatever object is behind it. We can filter these points out by loading a shadow filter. The resulting filtered point cloud is published on the topic tilt_scan_cloud. Enabling the fourth topic "Tilt Scan Cloud" in rviz will allow you to visualize the resulting point cloud.
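For intuition, the shadow test is purely geometric: a return is discarded when the segment connecting it to its neighboring return is nearly parallel to the laser ray, which is what happens for points "hanging" between an edge and the background. The following Python sketch is our paraphrase of the criterion used by laser_filters/ScanShadowsFilter (the exact formula and the min_angle/max_angle parameters live in that package, so consider this illustrative):

   import math

   # r1, r2: consecutive range readings; gamma: angular increment between
   # them. A grazing angle between the ray and the segment joining the two
   # returns indicates a shadow ("veiling") point.
   def is_shadow_point(r1, r2, gamma, min_angle_deg=10.0, max_angle_deg=170.0):
       angle = math.degrees(abs(math.atan2(r2 * math.sin(gamma),
                                           r1 - r2 * math.cos(gamma))))
       return angle < min_angle_deg or angle > max_angle_deg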

Clearing known objects from the laser data

  16   <node pkg="planning_environment" type="clear_known_objects" name="laser_clear_objects" output="screen">
  17     <remap from="cloud_in" to="tilt_scan_cloud" />
  18     <remap from="cloud_out" to="tilt_scan_cloud_known" />
  19     <param name="sensor_frame" type="string" value="laser_tilt_mount_link" />              
  20 
  21     <param name="fixed_frame" type="string" value="base_link" />
  22     <param name="object_padd" type="double" value="0.02" />
  23     <param name="object_scale" type="double" value="1.0" />
  24     
  25 
  26   </node>

This node is not used in this tutorial, but will be used in tutorial TODO. It serves to filter out points from the point cloud that are associated with objects that have already been recognized in the scene.

Self-filtering laser data

  28   <node pkg="robot_self_filter" type="self_filter" respawn="true" name="tilt_laser_self_filter" output="screen">
  29 
  30      <!-- The topic for the input cloud -->
  31      <remap from="cloud_in" to="tilt_scan_cloud_known" />
  32      
  33 
  34      <!-- The topic for the output cloud -->
  35      <remap from="cloud_out" to="tilt_scan_cloud_filtered" />
  36 
  37      <!-- The frame of the sensor used to obtain the data to be
  38        filtered; This parameter is optional. If it is not specified,
  39        shadow points will be considered outside -->
  40      <param name="sensor_frame" type="string" value="laser_tilt_link" />
  41 
  42      <param name="min_sensor_dist" type="double" value=".05"/>
  43      <param name="self_see_default_padding" type="double" value=".01"/>
  44      <param name="self_see_default_scale" type="double" value="1.0"/>
  45 
  46    </node>

If you look in rviz at the topic tilt_scan_cloud you will see that the robot's body shows up in the scans. For the purposes of composing a collision map that lets us avoid obstacles in the environment, we don't want the robot's own arms to appear in this obstacle map; we have other mechanisms for avoiding self-collisions. Thus we need to filter the robot's body out of the scans, which is what the robot_self_filter does. To read more about the parameters associated with this node, see the robot_self_filter page.
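As a quick sanity check outside rviz, you can watch both the input and output topics of the self filter and compare point counts. This hypothetical helper node just subscribes to the two clouds from the launch file above and logs their sizes:

   #!/usr/bin/env python
   # Hypothetical sanity check: log how many points the self filter removes.
   import rospy
   from sensor_msgs.msg import PointCloud

   def report(msg, label):
       rospy.loginfo('%s: %d points', label, len(msg.points))

   rospy.init_node('self_filter_check')
   rospy.Subscriber('tilt_scan_cloud_known', PointCloud,
                    report, callback_args='before self filter')
   rospy.Subscriber('tilt_scan_cloud_filtered', PointCloud,
                    report, callback_args='after self filter')
   rospy.spin()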

If you enable the fifth entry (Tilt scan self) in rviz you should see that the robot's body no longer appears in the scans.

Assembling point clouds into composite point clouds

  49   <!-- assemble pointcloud into a full world view -->
  50   <node pkg="laser_assembler" type="point_cloud_assembler" output="screen"  name="point_cloud_assembler">
  51     <remap from="cloud" to="tilt_scan_cloud_filtered"/>
  52     <param name="tf_cache_time_secs" type="double" value="10.0" />
  53     <param name="tf_tolerance_secs" type="double" value="0.0" />
  54     <param name="max_clouds" type="int" value="400" />
  55     <param name="ignore_laser_skew" type="bool" value="true" />
  56     <param name="fixed_frame" type="string" value="base_link" />
  57   </node>

The point clouds we have seen thus far each come from a single 2D laser scan. To get a full 3D world view we need to create composite point clouds that aggregate many consecutive scans. We can do this using an assembler, which essentially maintains a compact buffer of point cloud data. When the node is told to assemble scans it produces a composite point cloud containing the scans from a particular indicated time period. For more information on creating composite point clouds or laser scans see the tutorial How To Assemble Laser Scans.
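The assembler exposes this functionality as a service that takes a begin and end time and returns everything buffered in between. A minimal client might look like the sketch below; we assume the laser_assembler BuildCloud service type (AssembleScans is its newer equivalent), and the service name follows the remapping used later in this launch file, so check rosservice list on your system:

   #!/usr/bin/env python
   # Sketch: ask the assembler for everything accumulated so far.
   import rospy
   from laser_assembler.srv import BuildCloud

   rospy.init_node('assembler_client')
   rospy.wait_for_service('point_cloud_assembler/build_cloud')
   build = rospy.ServiceProxy('point_cloud_assembler/build_cloud', BuildCloud)
   resp = build(rospy.Time(0), rospy.get_rostime())  # begin, end
   rospy.loginfo('assembled cloud has %d points', len(resp.cloud.points))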

Taking composite point cloud snapshots

  60   <!-- make snapshots whenever laser reaches the top or bottom of a nod -->
  61   <node pkg="pr2_arm_navigation_perception" type="pr2_laser_snapshotter" output="screen" name="snapshotter">
  62     <remap from="laser_scanner_signal" to="laser_tilt_controller/laser_scanner_signal"/>
  63     <remap from="build_cloud" to="point_cloud_assembler/build_cloud" />
  64     <remap from="full_cloud" to="full_cloud_filtered" />
  65   </node>

Different applications may have the laser nod at different speeds. For the purposes of obstacle avoidance we generally want to update the collision map whenever we have a full new scan consisting of a sweep from bottom to top or top to bottom. The snapshotter will do exactly this: when a signal indicating that the laser has reached the top or bottom of a sweep has been generated, the snapshotter will call out to the cloud assembler to get all scans produced during the sweep. This composite point cloud is then published as a single point cloud.
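The snapshotter's logic is simple enough to paraphrase in a few lines. In this rough sketch (message and service types are the same assumptions as in the assembler example above), every time a scanner signal arrives we assemble the clouds seen since the previous signal and republish them as one snapshot:

   #!/usr/bin/env python
   # Rough paraphrase of pr2_laser_snapshotter: assemble one full sweep
   # between consecutive scanner signals and republish it as a snapshot.
   import rospy
   from pr2_msgs.msg import LaserScannerSignal
   from sensor_msgs.msg import PointCloud
   from laser_assembler.srv import BuildCloud

   rospy.init_node('simple_snapshotter')
   rospy.wait_for_service('point_cloud_assembler/build_cloud')
   build = rospy.ServiceProxy('point_cloud_assembler/build_cloud', BuildCloud)
   pub = rospy.Publisher('full_cloud_filtered', PointCloud)
   last_time = [None]  # mutable cell so the callback can update it

   def on_signal(signal):
       if last_time[0] is not None:
           resp = build(last_time[0], signal.header.stamp)
           pub.publish(resp.cloud)
       last_time[0] = signal.header.stamp

   rospy.Subscriber('laser_tilt_controller/laser_scanner_signal',
                    LaserScannerSignal, on_signal)
   rospy.spin()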

Enable the sixth topic (Full Laser Cloud) in the rviz display to see the laser snapshots, which will only be updated when the laser has completed a full sweep.

Composing the laser collision map

  67   <!-- make point cloud snapshots into collision maps -->
  68   <node pkg="collision_map" type="collision_map_self_occ_node" name="collision_map_self_occ_node" respawn="true" output="screen">
  69     
  70     <!-- The default padding to be added for the body parts the robot can see -->
  71     <param name="self_see_default_padding" type="double" value="0.02" />
  72     
  73     <!-- The default scaling to be added for the body parts the robot can see -->
  74     <param name="self_see_default_scale" type="double" value="1.0" />
  75      
  76     <!-- if someone asks for a stable map, publish the static map on the dynamic
  77     map topic and no longer publish dynamic maps -->
  78     <param name="publish_static_over_dynamic_map" type="bool" value="true" />
  79     <param name="fixed_frame" type="string" value="base_link" />
  80 
  81     <!-- define a box of size 2x3x4 around (1.1,  0, 0) in the robot frame -->
  82     <param name="robot_frame" type="string" value="base_link" />
  83 
  84     <param name="origin_x" type="double" value="1.1" />
  85     <param name="origin_y" type="double" value="0.0" />
  86     <param name="origin_z" type="double" value="0.0" />
  87     
  88     <param name="dimension_x" type="double" value="1.0" />
  89     <param name="dimension_y" type="double" value="1.5" />
  90     <param name="dimension_z" type="double" value="2.0" />
  91 
  92     <!-- set the resolution (1.0 cm) -->
  93     <param name="resolution" type="double" value="0.01" />
  94     
  95     <!-- cloud sources -->
  96     <rosparam command="load" file="$(find pr2_arm_navigation_tutorials)/config/collision_map_sources.yaml" />   
  97   </node>

The final piece of the laser processing pipeline turns the processed laser data into a collision map. The collision_map_self_occ node can also do a number of other kinds of filtering: it can self-filter the robot out of the data, preserve points from previous data that are now occluded by parts of the robot that have moved, and take static collision maps of the environment at the request of the user. Note that the dimension_x/y/z parameters are half-extents, which is why values of 1.0, 1.5, and 2.0 yield the 2x3x4 box described in the comment. The node produces a binary occupancy collision map which can be passed into collision checking nodes.
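If you want to inspect the map programmatically rather than in rviz, you can subscribe to the node's CollisionMap output. The topic and message names below (collision_map_occ and mapping_msgs/CollisionMap) are assumptions about this node's interface; verify them with rostopic list and rostopic type before relying on them:

   #!/usr/bin/env python
   # Sketch: watch the collision map and report how many occupied boxes
   # it contains. Topic and message type are assumptions; verify them.
   import rospy
   from mapping_msgs.msg import CollisionMap

   def on_map(cmap):
       rospy.loginfo('collision map has %d boxes in frame %s',
                     len(cmap.boxes), cmap.header.frame_id)

   rospy.init_node('collision_map_watcher')
   rospy.Subscriber('collision_map_occ', CollisionMap, on_map)
   rospy.spin()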

Enable the seventh and final topic (Collision Map) in the rviz display to see the collision map. In the next tutorial we will actually use this collision map to check robot positions for collisions with the environment.
