
Note: This tutorial assumes that you have completed the previous tutorials: First Steps, Parameters.
(!) Please ask about problems and questions regarding this tutorial on answers.ros.org. Don't forget to include in your question the link to this page, the versions of your OS & ROS, and also add appropriate tags.

Camera Frames with Multiple Cameras

Description: This tutorial explains how to use the different frames defined in the ensenso node or ensenso mono node when using multiple cameras.

Tutorial Level: INTERMEDIATE

Next Tutorial: Several Nodelets

Camera Frames with Multiple Cameras

In a multi camera setup, you have two choices of how to use the camera tf frames: you can either chain the cameras together, forming a kinematic chain, or link each camera independently to one joint coordinate system. Both ways are explained in the following two chapters.

The general way of setting up a multi camera setup with the EnsensoSDK is described here.

The Link Calibration is not yet available within the ensenso_camera package. If you want to link cameras together, you will have to do that within the NxLib, as described here.

Cameras linked together

If two cameras are linked together via the Calibration command of the NxLib (a link calibration must have been performed beforehand, see here), a transformation link between the two cameras is stored internally. That link is stored in one of the cameras' Link node and describes the transformation to the other camera used in the link calibration.

The parameter link_frame takes the other camera's frame as argument. If a link is stored internally (which is always the case after a link calibration) and the parameter link_frame is given, the camera will publish its transformation (camera_frame to link_frame) to tf. The following graph illustrates the resulting kinematic chain of cameras:

Camera 1
      |
      |  Camera 1's link_frame: Camera 2's frame
      v
Camera 2
      |
      |  Camera 2's link_frame: Camera 3's frame
      v
Camera 3
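
In a launch file, this chaining amounts to setting each camera's link_frame parameter to the next camera's frame. A minimal sketch of the relevant parameters (the frame names are placeholders chosen for illustration):

```xml
<!-- Camera 1: publishes camera_1_frame -> camera_2_frame to tf. -->
<param name="camera_frame" type="string" value="camera_1_frame" />
<param name="link_frame" type="string" value="camera_2_frame" />

<!-- Camera 2: publishes camera_2_frame -> camera_3_frame to tf. -->
<param name="camera_frame" type="string" value="camera_2_frame" />
<param name="link_frame" type="string" value="camera_3_frame" />
```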

Cameras linked together for a multi camera setup

If you want to use the kinematic chain of cameras for a multi camera setup, one of the cameras must have the target_frame as link_frame, which would look like this:

Camera 1
      |
      |  Camera 1's link_frame: Camera 2's frame
      v
Camera 2
      |
      |  Camera 2's link_frame: Camera 3's frame
      v
Camera 3
      |
      |  Camera 3's link_frame: Workspace
      v
Workspace (or target_frame)

This camera is then the root of the chain, because it is the camera pointing to the Workspace. Since link_frame defaults to target_frame, only camera 3 does not need to define a link_frame. All other cameras, however, need their link_frame to be the next camera's frame. Internally, all 3D data from all cameras is then transformed to the target_frame.
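
As a sketch, such a setup could be written in a launch file like the following (the serial numbers and frame names are placeholders; the nodelet setup follows the conventions of the ensenso_camera package):

```xml
<launch>
  <node pkg="nodelet" type="nodelet" name="manager" args="manager" output="screen" />

  <!-- Camera 1: link_frame points to camera 2's frame. -->
  <node pkg="nodelet" type="nodelet" name="camera_1"
        args="load ensenso_camera/nodelet manager" output="screen">
    <param name="serial" type="string" value="111111" />
    <param name="camera_frame" type="string" value="camera_1_frame" />
    <param name="link_frame" type="string" value="camera_2_frame" />
  </node>

  <!-- Camera 2: link_frame points to camera 3's frame. -->
  <node pkg="nodelet" type="nodelet" name="camera_2"
        args="load ensenso_camera/nodelet manager" output="screen">
    <param name="serial" type="string" value="222222" />
    <param name="camera_frame" type="string" value="camera_2_frame" />
    <param name="link_frame" type="string" value="camera_3_frame" />
  </node>

  <!-- Camera 3: root of the chain. No link_frame is given,
       so it defaults to the target_frame (the Workspace). -->
  <node pkg="nodelet" type="nodelet" name="camera_3"
        args="load ensenso_camera/nodelet manager" output="screen">
    <param name="serial" type="string" value="333333" />
    <param name="camera_frame" type="string" value="camera_3_frame" />
    <param name="target_frame" type="string" value="Workspace" />
  </node>
</launch>
```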

An example of linked cameras is given in the launch file of the next tutorial, where a mono camera is linked to a stereo camera, which in turn is linked to a Workspace.

Cameras linked to a joint Workspace

You can also set up a multi camera system with all cameras independently linked to a global shared coordinate system. This is known as a Workspace calibration. If you perform a Workspace calibration beforehand, the camera's internal link will store the transformation to that coordinate system.

Such a joint frame is given with the parameter target_frame. It also acts as the frame to which the data will be transformed. If, for example, you have two cameras linked to a target_frame, both will publish their results in the target_frame. The following graph illustrates the setup:

Camera 1                                        Camera 2
      |                                              |
      |  Camera 1's target_frame: Workspace          |    Camera 2's target_frame: Workspace
      v                                              v
                        Workspace (or target_frame)
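
For this variant, each camera simply sets the same target_frame and no link_frame. A minimal sketch (serial numbers and frame names are placeholders; both cameras are assumed to have been workspace calibrated beforehand):

```xml
<!-- Camera 1: links directly to the common Workspace. -->
<node pkg="nodelet" type="nodelet" name="camera_1"
      args="load ensenso_camera/nodelet manager" output="screen">
  <param name="serial" type="string" value="111111" />
  <param name="camera_frame" type="string" value="camera_1_frame" />
  <param name="target_frame" type="string" value="Workspace" />
</node>

<!-- Camera 2: links directly to the common Workspace as well. -->
<node pkg="nodelet" type="nodelet" name="camera_2"
      args="load ensenso_camera/nodelet manager" output="screen">
  <param name="serial" type="string" value="222222" />
  <param name="camera_frame" type="string" value="camera_2_frame" />
  <param name="target_frame" type="string" value="Workspace" />
</node>
```

Because both cameras transform their 3D data into the same target_frame, their point clouds can be combined directly in the Workspace coordinate system.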

Wiki: ensenso_driver/Tutorials/CameraFramesMultiple (last edited 2019-08-07 14:10:01 by YasinGuenduez)