Note: This tutorial assumes that you have completed the previous tutorials: First Steps, Parameters.
Please ask about problems and questions regarding this tutorial on answers.ros.org. Don't forget to include in your question the link to this page, the versions of your OS & ROS, and also add appropriate tags.

Camera Frames

Description: This tutorial explains the different frames defined in the ensenso_camera_node or ensenso_camera_mono_node and how the 3D data is transformed.

Tutorial Level: INTERMEDIATE

Next Tutorials: Calibration Patterns, Workspace Calibration, Hand-Eye Calibration

We introduce three kinds of frames: the camera_frame, the target_frame and the link_frame, which are explained in the following. In general, the camera node automatically manages the transformation of the 3D data between these frames.

The camera_frame (specified by the camera node's camera_frame parameter) is the tf frame in which the data is captured by the camera. By default, this frame is set to optical_frame_<serial>, where <serial> is a placeholder for the camera's serial number.

The link_frame is a helper frame. Internally, the NxLib stores a camera link for each camera. This camera link stores, for example, the transformation of a fixed hand-eye calibration from the camera to the robot base. It can also store a transformation to another camera or the transformation to a workspace, obtained from a workspace calibration. This internal camera link is static and is published with tf, with the link_frame as parent frame and the camera_frame as child frame. If the link_frame parameter is not given, it defaults to the target_frame (if that is given); otherwise the link_frame is the same as the camera_frame.

The target_frame is the tf frame in which you want to receive data from the camera. All data is automatically transformed using the current tf transformation between the link_frame and the target_frame (this includes the point cloud as well as the poses of calibration patterns). If not given, the target_frame defaults to the camera_frame.

If neither the link_frame nor the target_frame is given, no transformation is applied to the data; it remains as captured by the camera and is published in the camera's camera_frame.

If a camera link is stored in the camera's EEPROM (after a hand-eye calibration, a workspace calibration or a link calibration to another camera), this transformation is always applied. If you do not want any transformation to be applied to the 3D data, make sure the transformation in the camera's Link node is empty or at least the identity transform.

Camera Frame (data as it is seen from the camera)
      |
      |  Static internal Camera Link (hand-eye/workspace calibration or link calibration)
      |  Gets published with tf from NxLib to ROS.
      v
Link Frame
      |
      |  Dynamic Global Link (defined from outside and fetched from current tf Tree)
      |  Is Updated with current tf Transformations from ROS to NxLib
      v
Target Frame (data from camera published in this frame)

The link calibration is not yet available within ROS. See our Multi-Camera-Setups documentation for details on how to calibrate a link between two cameras using the NxLib calibration wizard.

Camera Frames Usage Examples

Here are some launch files that illustrate which frames have to be given in order to receive the 3D data with the correct transform in the desired frame.

Camera workspace calibrated, you want to receive data in the workspace frame

If the camera has been workspace calibrated before, the camera's internal Link node stores the transformation from the camera to the workspace. That transform is published if the target_frame or the link_frame is given. Because the link_frame defaults to the target_frame if the latter is given, we only need to set the target_frame to the workspace's frame.

<launch>
  <arg name="serial" doc="The serial number of the camera."/>
  <arg name="target_frame" doc="The workspace frame you want to receive data in."/>

  <node pkg="ensenso_camera" type="ensenso_camera_node" name="ensenso_camera_node">
    <param name="serial" value="$(arg serial)"/>
    <param name="target_frame" value="$(arg target_frame)"/>
    <!-- The link_frame is not needed here, but could be given: -->
    <!-- <param name="link_frame" value="$(arg target_frame)"/> -->
  </node>
</launch>
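
For illustration, assuming the launch file above is saved as workspace.launch in a package called my_ensenso_config (both names are placeholders) and the camera was calibrated to a workspace frame named Workspace, it could be started like this (replace 123456 with your camera's serial number):

roslaunch my_ensenso_config workspace.launch serial:=123456 target_frame:=Workspace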

Camera workspace calibrated, you want to receive data in another frame

In this case you want to receive the data in a frame other than the workspace frame that the camera is calibrated to. Since the NxLib only knows the internal camera link, which holds the transformation from the camera to the workspace, the transform from the workspace to the target frame cannot be provided by the NxLib; you have to publish it with tf yourself (e.g. with a static transform publisher; see the sketch after the launch file below). Besides publishing the transform, you have to give both the target_frame and the link_frame as shown below:

<launch>
  <arg name="serial" doc="The serial number of the camera."/>
  <arg name="link_frame" doc="The workspace frame the camera was calibrated to."/>
  <arg name="target_frame" doc="The frame you want to receive data in."/>

  <node pkg="ensenso_camera" type="ensenso_camera_node" name="ensenso_camera_node">
    <param name="serial" value="$(arg serial)"/>
    <param name="target_frame" value="$(arg target_frame)"/>
    <!-- The link_frame IS needed here: -->
    <param name="link_frame" value="$(arg link_frame)"/>
  </node>
</launch>

The NxLib publishes the known transform from the camera to the workspace (camera_frame -> link_frame) and will fetch the remaining transform from tf (link_frame -> target_frame). The 3D data will then be transformed from the camera_frame to the target_frame and also published in the target_frame.
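
The following is a minimal sketch of how the missing link_frame -> target_frame transform could be published, using the standard tf2_ros static_transform_publisher. The frame names (Workspace and target) and the offset values are placeholders for your actual setup; in practice this transform might also come from your robot description or another calibration:

<launch>
  <!-- Publishes a constant transform with the target frame as parent and the
       workspace (the camera's link_frame) as child.
       Arguments: x y z yaw pitch roll parent_frame child_frame (all placeholders). -->
  <node pkg="tf2_ros" type="static_transform_publisher" name="workspace_broadcaster"
        args="0.5 0.0 0.2 0 0 0 target Workspace"/>
</launch>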

Camera is not calibrated to any extrinsic point or any other camera

In this case you want to receive the 3D data in the camera_frame. Because both the target_frame and the link_frame default to the camera_frame if neither of them is given, no frame has to be given, as shown below.

<launch>
  <arg name="serial" doc="The serial number of the camera."/>

  <node pkg="ensenso_camera" type="ensenso_camera_node" name="ensenso_camera_node">
    <param name="serial" value="$(arg serial)"/>
    <!-- Neither of the frames below is needed: -->
    <!-- <param name="target_frame" value="$(arg target_frame)"/> -->
    <!-- <param name="link_frame" value="$(arg link_frame)"/> -->
  </node>
</launch>

The camera has been hand-eye calibrated

There are two different types of hand-eye calibration: a moving and a fixed one. In a moving hand-eye calibration (the camera is mounted on the robot and moves with it), the transform between the camera and its mounting position is stored in the camera's EEPROM, whereas in the fixed variant the transform between the camera and the robot base is stored.

After a moving hand-eye calibration

Since the camera's internal link only stores the transform between the camera and the mounting position (e.g. the robot wrist), you have to publish the remaining transforms with tf yourself; they will then be fetched by the camera node.

<launch>
  <arg name="serial" doc="The serial number of the camera."/>
  <arg name="link_frame" doc="The frame the camera is mounted on (e.g. wrist)."/>
  <arg name="target_frame" doc="The frame you want to receive data in (e.g. robot base)."/>

  <node pkg="ensenso_camera" type="ensenso_camera_node" name="ensenso_camera_node">
    <param name="serial" value="$(arg serial)"/>
    <param name="target_frame" value="$(arg target_frame)"/>
    <param name="link_frame" value="$(arg link_frame)"/>
  </node>
</launch>

The NxLib publishes the known transform from the camera to the mounting position (camera_frame -> link_frame) and will fetch the remaining transform from tf (link_frame -> target_frame). The 3D data will then be transformed from the camera_frame to the target_frame and also published in the target_frame.
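
In the moving case the link_frame -> target_frame transform (e.g. robot wrist -> robot base) changes while the robot moves and is therefore usually provided by the robot driver. A minimal sketch of one common way to get it onto tf, assuming your robot driver publishes its joint states on /joint_states and you have a URDF file for the robot (the file path below is a placeholder), is to run robot_state_publisher:

<launch>
  <!-- Load the robot model; replace the path with your own URDF file. -->
  <param name="robot_description" textfile="/path/to/my_robot.urdf"/>

  <!-- Publishes the tf transforms between the robot links (including the
       transform between the wrist and the robot base) based on the URDF
       and the current joint states. -->
  <node pkg="robot_state_publisher" type="robot_state_publisher" name="robot_state_publisher"/>
</launch>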

After a fixed hand-eye calibration

In this case the camera's internal link stores the transform between the camera and the robot base, which is essentially the same situation as after a workspace calibration. So if you want to receive data relative to the robot base, you only need to set the target_frame to the robot's base frame.

<launch>
  <arg name="serial" doc="The serial number of the camera."/>
  <arg name="target_frame" doc="The robot base frame you want to receive data in."/>

  <node pkg="ensenso_camera" type="ensenso_camera_node" name="ensenso_camera_node">
    <param name="serial" value="$(arg serial)"/>
    <param name="target_frame" value="$(arg target_frame)"/>
  </node>
</launch>

In this case the NxLib also publishes the known transform from the camera to the target_frame, i.e. the robot base frame. To be exact: the whole transform is given between the camera_frame and the link_frame (which is published to tf), and the transform between the link_frame and the target_frame is the identity transform.
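
To check that everything is connected as expected, you can, for example, print the resulting transform between the robot base frame and the camera's optical frame (the base frame name is a placeholder; replace <serial> with your camera's serial number):

rosrun tf tf_echo robot_base optical_frame_<serial>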
