This node opens an Ensenso camera and provides actions to configure this camera and get data from it.
The node is also provided as a nodelet with the name ensenso_camera/nodelet. For optimal performance, you should use this nodelet and receive the point clouds as a pcl::PointCloud through the published topic. This avoids serialization and unnecessary copying. Note that serialization cannot be avoided when you call the request_data action with the include_results_in_response flag set.
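A minimal roslaunch sketch for loading the nodelet could look like the following. The manager and node names are placeholders; only the nodelet name ensenso_camera/nodelet is taken from this documentation:

```xml
<launch>
  <!-- Start a nodelet manager that will host the camera nodelet. -->
  <node pkg="nodelet" type="nodelet" name="manager" args="manager" output="screen" />

  <!-- Load the camera nodelet into the manager. -->
  <node pkg="nodelet" type="nodelet" name="ensenso_camera"
        args="load ensenso_camera/nodelet manager" output="screen" />
</launch>
```

Subscribers running as nodelets in the same manager can then receive the point cloud without serialization.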
When using multiple camera nodes at the same time, you should use ROS namespaces to push the node's actions and published topics into separate scopes. This can be done with the ROS_NAMESPACE environment variable or by using the ns attribute in roslaunch.
Please refer to this page for more information on name resolution in ROS.
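As a sketch of the namespacing approach, two cameras could be pushed into separate namespaces with the ns attribute in roslaunch. The executable name ensenso_camera_node and the namespace names are assumptions for illustration:

```xml
<launch>
  <!-- Each camera gets its own namespace, so their actions and topics do not clash. -->
  <group ns="left_camera">
    <node pkg="ensenso_camera" type="ensenso_camera_node" name="ensenso_camera" output="screen" />
  </group>
  <group ns="right_camera">
    <node pkg="ensenso_camera" type="ensenso_camera_node" name="ensenso_camera" output="screen" />
  </group>
</launch>
```

Alternatively, setting the ROS_NAMESPACE environment variable before starting a node has the same effect for that node.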
- The serial number of the camera that is controlled by this node. If this is not specified, the node will try to open the first camera in the NxLib tree.
- A JSON file from which the camera parameters are loaded.
- The path to a file camera directory or zip file. When the camera with the given serial does not exist, the node will automatically create a file camera with this path. When the node is shut down, the camera will be deleted again.
- Whether the camera is fixed or moves with a robot. This only has an effect on the hand-eye calibration.
- The number of threads that are used by the NxLib instance for this node.
- The camera's TF frame.
This frame is also used as the robot's wrist frame for the hand-eye calibration. After the calibration is done, the camera link is updated so that this is actually correct and all of the camera's data is returned in wrist coordinates (or transformed into the target frame, if one is given).
- TF frame in which the camera data will be returned.
- This TF frame is needed if you want to link one camera to another, e.g. a mono camera to a stereo camera. In that case the mono camera must have a link_frame defined, which is the stereo camera's camera_frame.
- The robot's base frame for the hand-eye calibration.
For a fixed camera, this defaults to camera_frame; for a moving camera, it needs to be specified if you want to perform a hand-eye calibration.
- The robot's wrist frame for the hand-eye calibration.
For a moving camera, this defaults to camera_frame; for a fixed camera, it needs to be specified if you want to perform a hand-eye calibration.
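Putting the parameters above together, a launch file could look like the following sketch. The parameter names and values are assumptions derived from the descriptions above; check the node's actual parameter names before use, and replace the serial number with your camera's:

```xml
<node pkg="ensenso_camera" type="ensenso_camera_node" name="ensenso_camera" output="screen">
  <!-- Parameter names below are illustrative placeholders. -->
  <param name="serial" type="string" value="123456" />          <!-- camera serial number -->
  <param name="camera_frame" type="string" value="camera_optical_frame" />
  <param name="target_frame" type="string" value="workspace" /> <!-- frame in which data is returned -->
  <param name="fixed" type="bool" value="true" />               <!-- camera does not move with the robot -->
</node>
```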
Almost all interactions with the camera node are provided through actions (see the actionlib documentation for information on how actions work). See the definition of the actions and the corresponding tutorials for more information on how the different actions are used.
- Read camera parameters.
- Set camera parameters.
- Request data from the camera. Depending on the flags given in the action, the results will be included in the action result or published on the topics that are listed below.
- Locate a calibration pattern that is observed by the camera.
- Project a calibration pattern from an arbitrary pose into the camera and get the resulting image points.
- Perform a workspace calibration.
- Perform a hand-eye calibration.
- Fit primitives (planes, spheres, cylinders) to the current camera point cloud.
- Render the point cloud orthographically into the given view pose.
Only usable with a mono (RGB) camera linked to a stereo camera:
Project the point cloud into the mono camera's camera_frame, then apply the pixel values of the mono camera's image to the point cloud. The result is a colored point cloud.
In addition to these actions, the node provides two further actions that give direct access to the NxLib tree. You should not use these unless you need NxLib functionality that is not wrapped by this node.
Read and write JSON values in the NxLib tree.
Execute an NxLib command.
- The status of the camera node.
All of the following topics are only published when the request_data action is called with the publish_results flag set.
- The raw left image. Only the first image when Flex View is enabled.
- The camera info for the raw left images.
- The raw right image. Only the first image when Flex View is enabled.
- The camera info for the raw right images.
- The rectified left image. Only the first image when Flex View is enabled.
- The camera info for the rectified left images.
- The rectified right image. Only the first image when Flex View is enabled.
- The camera info for the rectified right images.
- The disparity map.
- The point cloud. Includes normals if they were requested.
- The rectified depth image in a canonical format (see REP 118).
- The camera info for the rectified depth image.