
Mapping an environment with Volta simulation

Description: Use of gmapping with Volta simulation

Keywords: Volta

Tutorial Level: INTERMEDIATE

Next Tutorial: Running AMCL with Volta simulation

Mapping the Environment

The following tutorial illustrates the use of gmapping with the Volta simulation. You will require the physical robot repository (here) and the Volta simulation repository (here) for this tutorial. You can also refer to the guide on installing the repositories here.

To learn more about move_base, gmapping and the navigation stack, refer to the Navigation tutorials.
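
The gmapping:=true argument used in the instructions below enables SLAM inside volta_navigation's navigation.launch. As a rough sketch of what that typically amounts to (the actual file and frame names in volta_navigation may differ), a gmapping launch usually starts a slam_gmapping node along these lines:

    <!-- Hypothetical sketch only; check volta_navigation/launch for the real parameters -->
    <launch>
        <node pkg="gmapping" type="slam_gmapping" name="slam_gmapping" output="screen">
            <param name="base_frame" value="base_link"/>   <!-- robot base frame (assumed name) -->
            <param name="odom_frame" value="odom"/>        <!-- odometry frame (assumed name) -->
            <param name="map_frame"  value="map"/>         <!-- frame the map is published in -->
            <remap from="scan" to="scan"/>                 <!-- laser scan topic (assumed name) -->
        </node>
    </launch>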

Instructions:

  • Once the Gazebo world is launched and the Volta robot is loaded into the simulation environment, the mapping node can be launched by running:
    $ roslaunch volta_navigation navigation.launch gmapping:=true
  • This will launch the gmapping node. On a separate terminal, launch the rviz visualization tool by running:
    $ rosrun rviz rviz
  • You can then load the Volta rviz configuration by opening the config file, located under volta_navigation->rviz_config->navigation.rviz, from within rviz.

    [Image: Tutorial_2a]

  • In order to control the robot, launch the teleoperation node by running:
    $ roslaunch volta_teleoperator teleoperator.launch keyboard:=true
  • As the robot moves, a grey static map will grow.

    https://lh3.googleusercontent.com/pw/ACtC-3f2b5frLl5ajMrpqznZeev3-zf62KNWAsjFTLQAFFyv7LR-H0ag1S9zTCCQz18rW8L24YbX2OgVDsVTikhiL5n4fU1qmfIa0VWnx_LBbZbhpTWXvQx4W0qXVsugyS42MbiLF2I33zGTCbrtTi4q9Yo=w544-h306-no?authuser=6

  • Once the mapping of the entire environment is completed, the map can be saved by running:
    $ rosrun map_server map_saver -f <filename>
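
map_saver writes two files: <filename>.pgm (the occupancy grid image) and <filename>.yaml (its metadata). The metadata typically looks like the sketch below; the exact resolution, origin and thresholds depend on your map and gmapping configuration, so treat the values as an example only:

    image: <filename>.pgm
    resolution: 0.050000
    origin: [-50.000000, -50.000000, 0.000000]
    negate: 0
    occupied_thresh: 0.65
    free_thresh: 0.196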

With Depth Camera

The following tutorial illustrates adding a depth camera to Volta and visualizing the depth cloud in rviz.

To learn more about visualization in rviz and Gazebo plugin models, you can refer to the tutorials on Gazebo Plugins.

Instructions:

  • To include the depth point cloud in the simulation, the following changes have to be made to the camera urdf files.
  • First, open the Volta urdf file, located under volta_description->urdf->volta.urdf.xacro, in the file explorer.

  • Change the camera argument so that it reads: arg name="camera_enable" default="true"
  • Next, open the camera urdf file, located under volta_description->urdf->accessories->camera_volta.urdf.xacro, in the file explorer.

  • Insert the following lines after the camera frame link (<link name="${frame}"> ... </link>):

    <joint name="trans_joint" type="fixed">
                <origin xyz="0 0 0" rpy="-1.570796 0 -1.570796 "/>
                <parent link="${frame}"/>
                <child link="trans_link"/>
            </joint>
            <link name="trans_link"> </link>
    
            <gazebo reference="${frame}">
                <sensor type="depth" name="${frame}">
                    <always_on>1</always_on>
                    <update_rate>${update_rate}</update_rate>
                    <visualize>true</visualize>
                    <camera name="head">
                        <horizontal_fov>1.3962634</horizontal_fov>
                        <image>
                            <width>1080</width>
                            <height>720</height>
                            <format>R8G8B8</format>
                        </image>
                        <clip>
                        <near>0.02</near>
                        <far>300</far>
                        </clip>
                        <noise>
                          <type>gaussian</type>
                          <mean>0.0</mean>
                          <stddev>0.007</stddev>
                        </noise>
                    </camera>
                    <plugin name="${frame}_controller" filename="libgazebo_ros_openni_kinect.so">
                        <baseline>0.0</baseline>
                        <alwaysOn>true</alwaysOn>
                            <updateRate>${update_rate}</updateRate>
                            <cameraName>${frame}</cameraName>
                            <frameName>trans_link</frameName>
                        <imageTopicName>${frame}/color/image_raw</imageTopicName>
                        <cameraInfoTopicName>${frame}/color/camera_info</cameraInfoTopicName>
                        <depthImageTopicName>${frame}/depth/image_raw</depthImageTopicName>
                        <depthImageInfoTopicName>${frame}/camera/depth/camera_info</depthImageInfoTopicName>
                        <pointCloudTopicName>${frame}/depth/points</pointCloudTopicName>
                        <pointCloudCutoff>0.5</pointCloudCutoff>
                            <hackBaseline>0</hackBaseline>
                            <distortionK1>0.0</distortionK1>
                            <distortionK2>0.0</distortionK2>
                            <distortionK3>0.0</distortionK3>
                            <distortionT1>0.0</distortionT1>
                            <distortionT2>0.0</distortionT2>
                        <CxPrime>0</CxPrime>
                        <Cx>0</Cx>
                        <Cy>0</Cy>
                        <focalLength>0</focalLength>
    
                    </plugin>
                </sensor>
            </gazebo>
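
The trans_link frame added above is what the plugin's frameName points at; its rotation (roll -90°, yaw -90°) is the usual adjustment into the optical-frame convention so that the point cloud shows up with the correct orientation in rviz. Once the simulation is relaunched with the modified urdf, the plugin should publish the topics named above. Assuming the camera frame is called camera (this depends on your urdf), a quick sanity check is:

    $ rostopic list | grep camera
    $ rostopic hz /camera/depth/points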
  • Once the gazebo world is launched and the Volta robot is loaded into the simulation environment, the mapping node can be launched by running:
    $ roslaunch volta_navigation navigation.launch gmapping:=true 
  • This will launch the gmapping node. On a separate terminal, launch the rviz visualization tool by running:
    $ rosrun rviz rviz 
  • You can then load the Volta rviz configuration by opening the config file, located under volta_navigation->rviz_config->navigation.rviz, from within rviz.

  • To visualize the depth cloud, select the "Add" button -> By Topic -> /camera -> /camera -> /depth -> /points -> PointCloud2, as illustrated in the picture below

    [Image: Tutorial_2b]

  • Once you are done, the visualization should look similar to the image below, with the depth of the wall reflected:

    [Image: Tutorial_2c]

  • In order to control the robot, launch the teleoperation node by running:
    $ roslaunch volta_teleoperator teleoperator.launch keyboard:=true
  • As the robot moves, a grey static map will grow.

    https://lh3.googleusercontent.com/pw/ACtC-3dAoqnmXr8XGP6ZQo6aS0o5uKsfMuNsyC9xFEXDMG8VNTLeS-IfI6oV_Jb12WyPF7lzuNPFkWjDOzdesmsh2fosgn_RssancO05o9lbCMdF-f9VlODFsHduy21XOSI3ZdmU67qpn6nScqILI2tlZRs=w544-h306-no?authuser=6

  • Once the mapping of the entire environment is completed, the map can be saved by running:
    $ rosrun map_server map_saver -f <filename>
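
The saved map is what the next tutorial (AMCL) will localize against. If you want to inspect it first, the map can be served again with map_server and viewed in rviz by adding a Map display on the /map topic:

    $ rosrun map_server map_server <filename>.yaml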
