Author: Jordi Pages < jordi.pages@pal-robotics.com >

Maintainer: Sara Cooper < sara.cooper@pal-robotics.com >

Support: ari-support@pal-robotics.com

Source: https://github.com/pal-robotics/ari_tutorials.git

(!) Please ask about problems and questions regarding this tutorial on answers.ros.org. Don't forget to include in your question the link to this page, the versions of your OS & ROS, and also add appropriate tags.

Planar object detection and pose estimation (C++)

Description: Planar textured object detection based on feature matching between the live video feed and a reference image of the object. The pose of the object is then determined by homography estimation, given the known size of the object.

Keywords: OpenCV

Tutorial Level: ADVANCED

homography_title.jpg

Purpose

This tutorial presents a ROS node that subscribes to the live video feed of ARI and extracts keypoints in order to detect a known planar textured object. When the object is found, the homography between the current view and the reference view is estimated. Then, using the known width and height of the object, its 3D pose is also estimated. OpenCV is used to extract and match keypoints and to estimate the homography.
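As an illustration of this pipeline, below is a minimal sketch using plain OpenCV. It is not the actual pal_texture_detector source code: the function names are illustrative, ORB is assumed as the feature type, and the real node performs additional validation.

    #include <opencv2/core.hpp>
    #include <opencv2/features2d.hpp>
    #include <opencv2/calib3d.hpp>
    #include <vector>

    // Estimate the homography that maps reference-image pixels onto the
    // current frame from matched ORB keypoints. Empty matrix on failure.
    cv::Mat estimateObjectHomography(const cv::Mat& reference, const cv::Mat& frame)
    {
      cv::Ptr<cv::ORB> orb = cv::ORB::create();
      std::vector<cv::KeyPoint> refKps, frameKps;
      cv::Mat refDesc, frameDesc;
      orb->detectAndCompute(reference, cv::noArray(), refKps, refDesc);
      orb->detectAndCompute(frame, cv::noArray(), frameKps, frameDesc);

      // Brute-force Hamming matching; cross-check discards ambiguous matches
      cv::BFMatcher matcher(cv::NORM_HAMMING, true);
      std::vector<cv::DMatch> matches;
      matcher.match(refDesc, frameDesc, matches);
      if (matches.size() < 10)  // too few correspondences to be reliable
        return cv::Mat();

      std::vector<cv::Point2f> refPts, framePts;
      for (const cv::DMatch& m : matches)
      {
        refPts.push_back(refKps[m.queryIdx].pt);
        framePts.push_back(frameKps[m.trainIdx].pt);
      }
      // RANSAC rejects outlier matches while fitting the homography
      return cv::findHomography(refPts, framePts, cv::RANSAC, 3.0);
    }

    // Recover the object's 3D pose from the homography: project the reference
    // corners into the frame and run PnP against the object's metric corners.
    bool estimateObjectPose(const cv::Mat& H, const cv::Size2f& objectSize,
                            const cv::Size& refSize, const cv::Mat& K,
                            cv::Mat& rvec, cv::Mat& tvec)
    {
      std::vector<cv::Point2f> refCorners = {
          {0.f, 0.f},
          {float(refSize.width), 0.f},
          {float(refSize.width), float(refSize.height)},
          {0.f, float(refSize.height)}};
      std::vector<cv::Point2f> imgCorners;
      cv::perspectiveTransform(refCorners, imgCorners, H);

      // Corners in meters in the object plane (Z = 0), using the known size
      std::vector<cv::Point3f> objCorners = {
          {0.f, 0.f, 0.f},
          {objectSize.width, 0.f, 0.f},
          {objectSize.width, objectSize.height, 0.f},
          {0.f, objectSize.height, 0.f}};
      return cv::solvePnP(objCorners, imgCorners, K, cv::noArray(), rvec, tvec);
    }

The resulting rvec and tvec describe the object pose in the camera frame; the node exposes the equivalent pose as a geometry_msgs/PoseStamped, as described later in this tutorial.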

Pre-Requisites

First, make sure that the tutorials are properly installed along with the ARI simulation, as shown in the Tutorials Installation Section.

Download texture detector

In order to run the demo we first need to download the source code of the texture detector into the public simulation workspace. In a console:

  • $ cd ~/ari_public_ws/src
    $ git clone https://github.com/pal-robotics/pal_texture_detector.git

Building the workspace

Now we need to build the workspace:

  • $ cd ~/ari_public_ws
    $ catkin build

Execution

Open three consoles and in each one source the workspace:

  • $ cd ~/ari_public_ws
    $ source ./devel/setup.bash

In the first console, launch the simulation:

  • $ roslaunch ari_gazebo ari_gazebo.launch public_sim:=true world:=tutorial_office gzpose:="-x -0.13 -y 2.88 -z -0.003 -R 0 -P 0 -Y 2.05"

ARI will be spawned in front of a wall with a textured poster.

gazebo_poster.jpg

Then, in the second console run the following command:

  • $ roslaunch pal_texture_detector texture_detector.launch rectified_image:=/head_front_camera/image_raw camera_info:=/head_front_camera/camera_info

The texture detector will inspect the live video feed from ARI's RGBD camera, looking for matches of the reference image in

  • `rospack find pal_texture_detector`/objects/pal_poster.png, which is shown below

pal_poster.png

Three debug images will appear: one showing the keypoint matches found between the live video feed and the reference image; one with the current image warped according to the estimated homography in order to align it with the reference image; and one with the keypoint matches found between the warped image and the reference image, used to validate the homography estimation.

texture_detector_debug_imgs.jpg
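The warping used for this validation step can be reproduced with cv::warpPerspective. A minimal sketch, assuming H maps reference pixels to frame pixels as in the earlier example:

    #include <opencv2/core.hpp>
    #include <opencv2/imgproc.hpp>

    // Warp the current frame back into the reference view; since H maps
    // reference pixels to frame pixels, the inverse warp undoes it.
    cv::Mat warpFrameToReference(const cv::Mat& frame, const cv::Mat& H,
                                 const cv::Size& referenceSize)
    {
      cv::Mat warped;
      cv::warpPerspective(frame, warped, H, referenceSize,
                          cv::INTER_LINEAR | cv::WARP_INVERSE_MAP);
      return warped;
    }

If the homography is accurate, keypoints matched between the warped image and the reference image should lie at nearly identical pixel coordinates; large displacements indicate a bad estimate.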

If enough matches are found between both images, the following topics will be published:

  • /texture_detector/debug
    /texture_detector/detection
    /texture_detector/pose

The /texture_detector/debug topic contains an image with the detected object marked with a green rectangle. One way to inspect this image is:

  • $ rosrun image_view image_view image:=/texture_detector/debug

gazebo_planar_object_detection.jpg

The /texture_detector/detection topic contains a copy of the processed image and the pixel coordinates of the 4 corners enclosing the detection.
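One way to inspect this topic from the command line, suppressing the raw image array (the exact message type is defined by the pal_texture_detector package):

  • $ rostopic echo --noarr /texture_detector/detection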

Finally, /texture_detector/pose is a topic of type geometry_msgs/PoseStamped containing the 3D pose of the object estimated from the homography. This pose can be visualized as a 3D frame in rviz, launched as follows:

  • $ rosrun rviz rviz -d `rospack find pal_texture_detector`/config/rviz.rviz

rviz_texture_detection_pose.jpg
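To consume the estimated pose programmatically, a minimal subscriber node can be written as sketched below; the node and callback names here are illustrative, not part of the package:

    #include <ros/ros.h>
    #include <geometry_msgs/PoseStamped.h>

    // Print the detected object's pose each time the detector publishes it
    void poseCallback(const geometry_msgs::PoseStamped::ConstPtr& msg)
    {
      ROS_INFO("Object at (%.2f, %.2f, %.2f) in frame %s",
               msg->pose.position.x, msg->pose.position.y,
               msg->pose.position.z, msg->header.frame_id.c_str());
    }

    int main(int argc, char** argv)
    {
      ros::init(argc, argv, "texture_pose_listener");
      ros::NodeHandle nh;
      ros::Subscriber sub = nh.subscribe("/texture_detector/pose", 1, poseCallback);
      ros::spin();
      return 0;
    }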
