Revision 1 as of 2016-10-10 15:12:32


Author: Jordi Pages < jordi.pages@pal-robotics.com >

Maintainer: Jordi Pages < jordi.pages@pal-robotics.com >

Support: tiago-support@pal-robotics.com

Source: https://github.com/pal-robotics/tiago_tutorials.git

(!) Please ask about problems and questions regarding this tutorial on answers.ros.org. Don't forget to include in your question the link to this page, the versions of your OS & ROS, and also add appropriate tags.

Planar object detection and pose estimation (C++)

Description: Planar textured object detection based on feature matching between the live video feed and a reference image of the object. The pose of the object is then determined by homography estimation, given the known size of the object.

Keywords: OpenCV

Tutorial Level: ADVANCED

Purpose

This tutorial presents a ROS node that subscribes to the live video feed of TIAGo and looks for keypoints in order to detect a known planar textured object. When found, the homography between the current view and the reference view is estimated. Then, using the known width and height of the object, its 3D pose is also estimated. OpenCV is used to extract and match keypoints and to estimate the homography.
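As a rough illustration of the pose estimation step, the following sketch (plain Python, not the tutorial's C++ node; the helper names and the intrinsics matrix K are made up for the example) decomposes a plane-to-image homography H ≈ K·[r1 r2 t] into a rotation and a translation, which is the standard way to recover the pose of a planar object of known size:

```python
import math

# Small 3x3 helpers; a real node would use OpenCV/Eigen instead.
def mat_vec(M, v):
    return [sum(M[i][j] * v[j] for j in range(3)) for i in range(3)]

def norm(v):
    return math.sqrt(sum(x * x for x in v))

def cross(a, b):
    return [a[1]*b[2] - a[2]*b[1],
            a[2]*b[0] - a[0]*b[2],
            a[0]*b[1] - a[1]*b[0]]

def inv_intrinsics(K):
    # K is upper triangular: [[fx, 0, cx], [0, fy, cy], [0, 0, 1]]
    fx, cx, fy, cy = K[0][0], K[0][2], K[1][1], K[1][2]
    return [[1.0/fx, 0.0, -cx/fx],
            [0.0, 1.0/fy, -cy/fy],
            [0.0, 0.0, 1.0]]

def pose_from_homography(H, K):
    """Decompose H ~ K [r1 r2 t] into rotation R and translation t.

    H maps points on the object plane (metric coordinates, z=0) to pixels,
    which is why the object's real width and height must be known.
    """
    Kinv = inv_intrinsics(K)
    h1 = mat_vec(Kinv, [H[i][0] for i in range(3)])
    h2 = mat_vec(Kinv, [H[i][1] for i in range(3)])
    h3 = mat_vec(Kinv, [H[i][2] for i in range(3)])
    lam = 1.0 / norm(h1)            # scale fixed so rotation columns are unit
    r1 = [lam * x for x in h1]
    r2 = [lam * x for x in h2]
    r3 = cross(r1, r2)              # third column completes the rotation
    t = [lam * x for x in h3]
    R = [[r1[i], r2[i], r3[i]] for i in range(3)]
    return R, t
```

In practice r1 and r2 recovered from a noisy homography are not exactly orthonormal, so implementations typically re-orthogonalize R (e.g. via SVD) before using the pose.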

Pre-Requisites

First, make sure that the tutorials are properly installed along with the TIAGo simulation, as shown in the Tutorials Installation Section.

Download texture detector

In order to execute the demo, first we need to download the source code of the texture detector into the public simulation workspace. In a console:

$ cd ~/tiago_public_ws/src
$ git clone https://github.com/pal-robotics/pal_texture_detector.git

Building the workspace

Now we need to build the workspace

$ cd ~/tiago_public_ws
$ catkin build

Execution

Open three consoles and source the workspace in each one

$ cd ~/tiago_public_ws
$ source ./devel/setup.bash

In the first console launch the simulation

$ roslaunch tiago_gazebo tiago_gazebo.launch public_sim:=true robot:=steel world:=tutorial_office

Note that in this simulation world there are several person models

In the second console run the person detector node as follows

$ roslaunch pal_person_detector_opencv detector.launch image:=/xtion/rgb/image_raw

In the third console we may run an image visualizer to see the person detections

$  rosrun image_view image_view image:=/person_detector/debug

Note that detected persons are outlined with a ROI (region of interest). The detections are not only represented in the /person_detector/debug Image topic but also in /person_detector/detections, which contains a vector with the image ROIs of all the detected persons. In order to see the contents of the topic we may use

$ rostopic echo -n 1 /person_detector/detections

which will print a single message from the topic. When a person is detected it will look like

header: 
  seq: 34
  stamp: 
    secs: 2646
    nsecs: 967000000
  frame_id: ''
detections: 
  - 
    x: 152
    y: 100
    width: 160
    height: 320
camera_pose: 
  header: 
    seq: 0
    stamp: 
      secs: 0
      nsecs:         0
    frame_id: ''
  child_frame_id: ''
  transform: 
    translation: 
      x: 0.0
      y: 0.0
      z: 0.0
    rotation: 
      x: 0.0
      y: 0.0
      z: 0.0
      w: 0.0
---
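A small sketch of how the detection ROIs above might be consumed (plain Python, not part of the tutorial's code; the pinhole intrinsics fx, fy, cx, cy are illustrative values, and the field layout is taken from the sample message):

```python
def roi_center(x, y, width, height):
    """Pixel coordinates of the centre of a detection ROI."""
    return (x + width / 2.0, y + height / 2.0)

def pixel_to_ray(u, v, fx, fy, cx, cy):
    """Back-project a pixel to a (non-normalised) viewing ray in the
    camera frame, using the standard pinhole model."""
    return ((u - cx) / fx, (v - cy) / fy, 1.0)

# Using the sample detection shown above (x=152, y=100, width=160, height=320):
u, v = roi_center(152, 100, 160, 320)               # centre at (232.0, 260.0)
ray = pixel_to_ray(u, v, fx=500.0, fy=500.0, cx=320.0, cy=240.0)
```

Such a ray could, for instance, drive the robot's head to gaze toward a detected person; the intrinsics of the real camera are published on the corresponding camera_info topic.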

Moving around

In order to better see how the person detector performs, we can open a new console and run the key_teleop node to move the robot around

$ source /opt/ros/indigo/setup.bash
$ rosrun key_teleop key_teleop.py

Then, using the arrow keys of the keyboard, we can make TIAGo move around the simulated world, gaze toward the different persons, and see how the detector works.