
Overview

Note: The ARIAC 2017 competition is complete. If you are interested in competing in an active ARIAC competition, you are probably in the wrong place: this page is kept for archival purposes only.

The purpose of this tutorial is to introduce you to the sensors available to you in the Agile Robotics for Industrial Automation Competition (ARIAC) and how to interface with them from the command-line. See the Hello World tutorial for an example of how to programmatically read the sensor data.

Note: this tutorial will only work with ROS Indigo (Ubuntu Trusty 14.04) or ROS Kinetic (Ubuntu Xenial 16.04). ROS Indigo is recommended.

Prerequisites

You should have already completed the GEAR interface tutorial.

Reading sensor data

As described in the competition specifications, there are sensors available for you to place in the environment. How to select which sensors to use is covered in the competition configuration specifications.

To start with, launch ARIAC with a sample work cell environment configuration that contains a UR10 arm and some sensors in various locations:

  • $ rosrun osrf_gear gear.py --development-mode -f `catkin_find --share osrf_gear`/config/sample.yaml

Break beam

This is a simulated photoelectric sensor, such as the Sick W9L-3. The sensor has a detection range of 1 meter, and its binary output tells you whether an object is crossing the beam. Two ROS topics expose the output of the sensor: /ariac/{sensor_name} and /ariac/{sensor_name}_change. A message with the current output of the sensor is published periodically on /ariac/{sensor_name}. Once GEAR has been started, you can display the output of the break_beam sensor with:

  • $ rostopic echo /ariac/break_beam

You'll receive periodic updates showing whether an object is detected. Alternatively, you can subscribe to the /ariac/{sensor_name}_change topic, which publishes a message only when the output transitions from object not detected to object detected, or vice versa:

  • $ rostopic echo /ariac/break_beam_change

For demonstration purposes, let's spawn a part that interrupts the beam:

  • $ rosrun gazebo_ros spawn_model -sdf -model pulley_part_0 -x 1.11 -y 2.28 -z 0.9 -file `catkin_find osrf_gear --share`/models/pulley_part_ariac/model.sdf

You should see the break beam sensor report that it detects an object.
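
If you want to react to these detections programmatically rather than watching the topic, a minimal subscriber sketch is shown below. It assumes the topic publishes osrf_gear/Proximity messages with an object_detected field; confirm the actual message type with rostopic info /ariac/break_beam_change before relying on it.

  • #!/usr/bin/env python
    import rospy
    # Assumed message type for the ARIAC break beam topics; verify with rostopic info.
    from osrf_gear.msg import Proximity
    
    def break_beam_callback(msg):
        # object_detected is True while something interrupts the beam.
        if msg.object_detected:
            rospy.loginfo("Break beam triggered")
        else:
            rospy.loginfo("Break beam cleared")
    
    if __name__ == '__main__':
        rospy.init_node('break_beam_listener')
        # The _change topic publishes only on transitions, so the callback
        # fires once per change rather than at the sensor update rate.
        rospy.Subscriber('/ariac/break_beam_change', Proximity, break_beam_callback)
        rospy.spin()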

Proximity sensor

This is a simulated ultrasound proximity sensor, such as the SU2-A0-0A. It has a detection range of approximately 0.15 meters, and its output tells you how far an object is from the sensor. The ROS topic /ariac/{sensor_name} publishes the output of the sensor using the sensor_msgs/Range message type. We have installed a proximity sensor in the grill next to the belt. Once GEAR has been started, you can subscribe to the proximity sensor topic with:

  • $ rostopic echo /ariac/proximity_sensor

For demonstration purposes, let's spawn a part that trips the sensor:

  • $ rosrun gazebo_ros spawn_model -sdf -model pulley_part_1 -x 1.11 -y 2.58 -z 0.9 -file `catkin_find osrf_gear --share`/models/pulley_part_ariac/model.sdf

You should see that the sensor detects the part.

  • ---
    header: 
      seq: 16
      stamp: 
        secs: 8
        nsecs: 950000000
      frame_id: proximity_sensor_frame
    radiation_type: 0
    field_of_view: 0.125
    min_range: 0.00999999977648
    max_range: 0.15000000596
    range: 0.0201679486781
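
Since this topic uses the standard sensor_msgs/Range type, reading it in code is straightforward. A minimal sketch:

  • #!/usr/bin/env python
    import rospy
    from sensor_msgs.msg import Range
    
    def proximity_callback(msg):
        # A reading at (or very near) max_range means nothing is in range.
        if msg.range < msg.max_range:
            rospy.loginfo("Object at %.3f m", msg.range)
    
    if __name__ == '__main__':
        rospy.init_node('proximity_listener')
        rospy.Subscriber('/ariac/proximity_sensor', Range, proximity_callback)
        rospy.spin()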

Laser profiler

This is a simulated 3D laser profiler, such as the Cognex DS1300. The output of the sensor is an array of ranges and intensities; the size of the array is equal to the number of beams in the sensor, and the maximum range of each beam is ~0.725 m. The output of the sensor is published periodically on the topic /ariac/{sensor_name}. Once GEAR has been started, you can subscribe to the laser profiler topic with:

  • $ rostopic echo /ariac/laser_profiler

Spawn a model under the sensor:

  • $ rosrun gazebo_ros spawn_model -sdf -x 1.192 -y 3.92 -z 0.9 -model gear_part_0 -file `catkin_find osrf_gear --share`/models/gear_part_ariac/model.sdf

You should observe a change in the output when an object passes beneath the sensor. Note that the ranges reported will vary over time as there is noise in the sensor output.
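
To process the profile in code, you can subscribe with the standard sensor_msgs/LaserScan type (the same type RViz displays below). As a minimal sketch, the following reports the closest valid return:

  • #!/usr/bin/env python
    import rospy
    from sensor_msgs.msg import LaserScan
    
    def profiler_callback(msg):
        # Beams that return nothing are reported outside [range_min, range_max].
        hits = [r for r in msg.ranges if msg.range_min < r < msg.range_max]
        if hits:
            rospy.loginfo("%d beams hit; closest return %.3f m", len(hits), min(hits))
    
    if __name__ == '__main__':
        rospy.init_node('laser_profiler_listener')
        rospy.Subscriber('/ariac/laser_profiler', LaserScan, profiler_callback)
        rospy.spin()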

To visualize the laser scan, launch RViz:

  • $ rosrun rviz rviz -d `catkin_find osrf_gear --share`/rviz/ariac.rviz

Add the LaserScan to RViz by selecting Add > By topic > LaserScan in the Displays panel. You should see a scan relative to the laser profiler's TF frame.

Logical camera

This is a simulated camera with a built-in object classification and localization system. The sensor reports the position and orientation of the camera in the world, as well as a collection of the objects detected within its frustum. For each object detected, the camera reports its type and its pose in the camera reference frame. In the sample environment, there is a logical camera above the bins that store parts. Run ARIAC and subscribe to the logical camera topic (/ariac/logical_camera):

  • $ rostopic echo /ariac/logical_camera

You should see that the logical camera reports the pose of multiple piston rod parts, and its own pose in the world. Note that there is some noise in the poses reported by the camera.

  • models: 
      - 
        type: piston_rod_part
        pose: 
          position: 
            x: 1.20364758775
            y: 0.0126666966062
            z: 0.200562155962
          orientation: 
            x: -0.284407173974
            y: -0.659465287473
            z: 0.270614620537
            w: 0.641081758563
      - 
        type: piston_rod_part
        pose: 
          position: 
            x: 1.20478950671
            y: 0.145987651878
            z: 0.199396440747
          orientation: 
            x: -0.269646111218
            y: -0.65241583998
            z: 0.267911007071
            w: 0.655643377713
    
    ...
    
      - 
        type: piston_rod_part
        pose: 
          position: 
            x: 1.20478962954
            y: 0.280495576443
            z: 0.000606280476748
          orientation: 
            x: -0.269002886946
            y: -0.653896854825
            z: 0.268697106146
            w: 0.654108718191
      - 
        type: piston_rod_part
        pose: 
          position: 
            x: 1.20405892542
            y: -0.119528825027
            z: 0.199229954389
          orientation: 
            x: -0.2785943689
            y: -0.652678882037
            z: 0.276029529097
            w: 0.648230787318
    pose: 
      position: 
        x: -0.3
        y: 0.15
        z: 1.93
      orientation: 
        x: 0.0
        y: 0.707108079859
        z: 0.0
        w: 0.707105482511
    ---
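
A subscriber for this output might look like the following sketch. It assumes the message type is osrf_gear/LogicalCameraImage, i.e. a models array of type/pose pairs plus the camera's own pose, matching the output above; verify with rostopic info /ariac/logical_camera.

  • #!/usr/bin/env python
    import rospy
    # Assumed message type; verify with rostopic info /ariac/logical_camera.
    from osrf_gear.msg import LogicalCameraImage
    
    def camera_callback(msg):
        # msg.pose is the camera's pose in the world; each model's pose is
        # expressed in the camera reference frame.
        for model in msg.models:
            p = model.pose.position
            rospy.loginfo("%s at (%.2f, %.2f, %.2f) in the camera frame",
                          model.type, p.x, p.y, p.z)
    
    if __name__ == '__main__':
        rospy.init_node('logical_camera_listener')
        rospy.Subscriber('/ariac/logical_camera', LogicalCameraImage, camera_callback)
        rospy.spin()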

TF transforms

In addition to publishing on the /ariac/{logical_camera_name} ROS topic, logical cameras also publish transforms on the /tf ROS topic that can be used by the TF2 library. The transforms published by cameras have the following form:

  • $ rostopic echo /tf
    ---
    transforms: 
      - 
        header: 
          seq: 0
          stamp: 
            secs: 383
            nsecs:   1000000
          frame_id: world
        child_frame_id: logical_camera_frame
        transform: 
          translation: 
            x: -0.3
            y: 0.15
            z: 1.93
          rotation: 
            x: 0.0
            y: 0.707108079859
            z: 0.0
            w: 0.707105482511
    ---
    transforms: 
      - 
        header: 
          seq: 0
          stamp: 
            secs: 383
            nsecs:   1000000
          frame_id: logical_camera_frame
        child_frame_id: logical_camera_piston_rod_part_2_frame
        transform: 
          translation: 
            x: 1.20499854605
            y: 0.0136336106964
            z: -0.199954151589
          rotation: 
            x: -0.269260316766
            y: -0.655361050369
            z: 0.266052028024
            w: 0.653618461993

The TF2 library can be used to calculate the pose of the parts detected by the logical cameras in the world co-ordinate frame (see http://wiki.ros.org/tf2). It does this by combining the transform from the /world frame to /logical_camera_frame with the transform from /logical_camera_frame to the frame of the detected parts.

Here's an example of using TF2 command-line tools to do the conversion to the world co-ordinate frame. Note that the frame of the detected parts is prefixed by the name of the camera that provides the transform, so that transforms provided by multiple cameras that see the same parts are unique.

  • $ rosrun tf tf_echo /world /logical_camera_piston_rod_part_1_frame
    At time 2041.126
    - Translation: [-0.499, 0.030, 0.724]
    - Rotation: in Quaternion [-0.002, -0.002, 0.381, 0.925]
                in RPY (radian) [-0.006, -0.002, 0.781]
                in RPY (degree) [-0.344, -0.134, 44.748]

As another example, let's use the logical camera that is positioned above AGV1 (named logical_camera_over_agv1). To find the pose of the kit tray on AGV1 in world co-ordinates, using information reported by logical_camera_over_agv1, run:

  • $ rosrun tf tf_echo /world /logical_camera_over_agv1_kit_tray_1_frame
    At time 42.900
    - Translation: [-0.501, 0.029, 0.724]
    - Rotation: in Quaternion [-0.002, -0.007, 0.379, 0.926]
                in RPY (radian) [-0.008, -0.011, 0.777]
                in RPY (degree) [-0.485, -0.636, 44.507]

There are many ROS tools for interacting with TF frames. For example, the pose of the models detected by the logical camera can be visualized in RViz:

  • $ rosrun rviz rviz -d `catkin_find osrf_gear --share`/rviz/ariac.rviz

For more information on working with TF frames programmatically see the tf2 tutorials.

Note that GEAR uses tf2_msgs and not the deprecated tf_msgs. Accordingly, you should use the tf2 package instead of tf.
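
For example, the lookup performed by tf_echo above can be done programmatically with tf2_ros along these lines (the part frame name is just the one from the earlier example; it will differ in your environment):

  • #!/usr/bin/env python
    import rospy
    import tf2_ros
    
    if __name__ == '__main__':
        rospy.init_node('part_pose_lookup')
        tf_buffer = tf2_ros.Buffer()
        tf_listener = tf2_ros.TransformListener(tf_buffer)  # fills the buffer from /tf
        rate = rospy.Rate(1.0)
        while not rospy.is_shutdown():
            try:
                # Pose of the part frame expressed in the world frame;
                # rospy.Time(0) requests the latest available transform.
                t = tf_buffer.lookup_transform(
                    'world', 'logical_camera_piston_rod_part_1_frame', rospy.Time(0))
                tr = t.transform.translation
                rospy.loginfo("Part at (%.3f, %.3f, %.3f) in world", tr.x, tr.y, tr.z)
            except (tf2_ros.LookupException, tf2_ros.ConnectivityException,
                    tf2_ros.ExtrapolationException):
                rospy.loginfo("Transform not yet available")
            rate.sleep()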

Next steps

Now that you are familiar with the sensors available, continue to the Hello World tutorial to see how to programmatically interface with ARIAC.
