Note: This tutorial assumes that you have completed the previous tutorials: First Steps, Parameters.
Camera Frames with Multiple Cameras
Description: This tutorial explains how to use the different frames defined in the ensenso_camera_node or ensenso_camera_mono_node when using multiple cameras.
Tutorial Level: INTERMEDIATE
Next Tutorial: Several Nodelets
Camera Frames with Multiple Cameras
In a multi camera setup, you have different choices for using the camera tf frames: you can either chain the cameras together, forming a kinematic chain, or you can link each camera independently to one joint coordinate system. Both ways are explained in the following two sections.
The general way of setting up multiple cameras with the EnsensoSDK is described in the multi camera setup guide.
The ensenso_camera ROS package does not yet provide a way to set up a multi camera system. If you want to link cameras together, please use the NxLib directly, as described in the guide linked above.
Cameras linked together
If two cameras are linked together via the Calibration command of the NxLib, a transformation link between the two cameras is stored internally. That link is stored in one of the cameras' Link node and describes the transformation to the other camera used in the link calibration.
The link_frame parameter takes the other camera's frame as its argument. If a link is stored internally (which is always the case after a link, workspace, or hand-eye calibration) and the link_frame parameter is given, the camera publishes its transformation (camera_frame to link_frame) to tf. The following graph illustrates the resulting kinematic chain of cameras:
Camera 1
   |
   |  Camera 1's link_frame: Camera 2's frame
   v
Camera 2
   |
   |  Camera 2's link_frame: Workspace
   v
Workspace (or target_frame)
Camera 2 is then the root of that chain, being the camera that points directly to the Workspace. Because the link_frame defaults to the target_frame, camera 2 only has to define its target_frame. All other cameras, however, need to define both the link_frame and the target_frame.
The target_frame is the frame in which the data should be represented for each camera. The link_frame is a helper frame that is used to build a tf tree from each camera's camera_frame to the target_frame.
In this graphical example, Camera 1 fetches the transformation from the tf tree between its link_frame (Camera 2's frame) and the target_frame (Workspace) and publishes its data in the target_frame. That way Camera 1 has a fully defined tf tree, which stores the transformation from the camera_frame to the link_frame and from the link_frame to the target_frame. Internally, all 3D data from all cameras is transformed into the target_frame.
In order to use the link kinematic to transform data into a different coordinate system, the cameras have to be started as nodelets.
An example of linked cameras is given in the launch file of the next tutorial, where a mono camera is linked to a stereo camera, which in turn is linked to a Workspace.
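To make the chain above concrete, the following is a minimal launch file sketch for two linked stereo cameras. The serials (111111, 222222) and frame names are hypothetical placeholders; the sketch assumes the ensenso_camera/nodelet nodelet type from the ensenso_camera package and a shared nodelet manager, as described in the next tutorial.

<launch>
  <!-- Shared nodelet manager, so that both cameras run in the same process. -->
  <node pkg="nodelet" type="nodelet" name="manager" args="manager" output="screen" />

  <!-- Camera 2: the root of the chain. Its internal link points directly to
       the Workspace, and link_frame defaults to target_frame, so only the
       target_frame has to be set. -->
  <node pkg="nodelet" type="nodelet" name="camera_2"
        args="load ensenso_camera/nodelet manager" output="screen">
    <param name="serial" type="string" value="222222" />  <!-- hypothetical serial -->
    <param name="camera_frame" type="string" value="camera_2_frame" />
    <param name="target_frame" type="string" value="Workspace" />
  </node>

  <!-- Camera 1: linked to camera 2 via a link calibration. Its link_frame is
       camera 2's frame, and its data is represented in the Workspace. -->
  <node pkg="nodelet" type="nodelet" name="camera_1"
        args="load ensenso_camera/nodelet manager" output="screen">
    <param name="serial" type="string" value="111111" />  <!-- hypothetical serial -->
    <param name="camera_frame" type="string" value="camera_1_frame" />
    <param name="link_frame" type="string" value="camera_2_frame" />
    <param name="target_frame" type="string" value="Workspace" />
  </node>
</launch>

With this setup, camera 1 publishes the camera_1_frame to camera_2_frame transformation and camera 2 publishes the camera_2_frame to Workspace transformation, completing the tf tree from both cameras to the target_frame.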
Cameras linked to a joint Workspace
You can also set up a multi camera system with all cameras independently linked to a global shared coordinate system. That is known as a workspace calibration. If you perform a workspace calibration beforehand, the internal link of each camera will store the transformation to that coordinate system.
With this independent transformation, only the camera's target_frame has to be provided. In this example it is the Workspace.
Camera 1                                  Camera 2
   |                                         |
   |  Camera 1's target_frame: Workspace     |  Camera 2's target_frame: Workspace
   v                                         v
            Workspace (or target_frame)
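Since each camera's internal link already points to the Workspace after a workspace calibration, a launch file for this setup only needs to set the target_frame. The following is a minimal sketch, again with hypothetical serials; it assumes that standalone ensenso_camera_node instances suffice here, since no kinematic chain across cameras is involved.

<launch>
  <!-- Both cameras carry an internal link to the Workspace from the
       workspace calibration, so only the target_frame has to be set. -->
  <node pkg="ensenso_camera" type="ensenso_camera_node" name="camera_1" output="screen">
    <param name="serial" type="string" value="111111" />  <!-- hypothetical serial -->
    <param name="target_frame" type="string" value="Workspace" />
  </node>

  <node pkg="ensenso_camera" type="ensenso_camera_node" name="camera_2" output="screen">
    <param name="serial" type="string" value="222222" />  <!-- hypothetical serial -->
    <param name="target_frame" type="string" value="Workspace" />
  </node>
</launch>

Each camera then publishes its camera_frame to Workspace transformation directly to tf, without depending on the other camera.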