Note: This tutorial assumes that you have completed the previous tutorial: Running several Nodelets.
(!) Please ask about problems and questions regarding this tutorial on answers.ros.org. Don't forget to include in your question the link to this page, the versions of your OS & ROS, and also add appropriate tags.

Generating a colored Point Cloud with Nodelets

Description: This tutorial shows how to run a stereo and mono nodelet and how to generate a colored point cloud.

Tutorial Level: INTERMEDIATE

Next Tutorial: Running a C-Series Camera with two Nodelets

Prerequisites

This tutorial requires you to have a calibrated multi camera setup consisting of (at least) one stereo camera and one mono camera. For more information see the tutorial on multi camera setup.

In general the nodes of the ensenso_camera ROS package, as well as this tutorial, make extensive use of ROS actions. If you are not familiar with ROS actions yet, we recommend you do the ROS action tutorials first.

Workflow

To receive a colored point cloud from an Ensenso stereo camera that is linked to a mono camera, we have to perform the following steps:

  1. Receive a point cloud from the stereo camera
  2. Receive an image from the linked mono camera
  3. Project the point cloud into the mono camera's frame
  4. Apply the pixel values of the mono camera image to the projected point cloud

Steps 1 and 2 are done with the corresponding RequestData actions defined for stereo and mono cameras (ensenso_camera_msgs/RequestData and ensenso_camera_msgs/RequestDataMono).

Steps 3 and 4 are done by the TexturedPointCloud action for stereo cameras (ensenso_camera_msgs/TexturedPointCloud).

In the next section we will write a Python ROS node that will define action clients for the corresponding action servers and make calls to them via action goals.
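
If you need a refresher, the basic actionlib client pattern that the node uses throughout looks roughly like this (a generic sketch, not part of the ensenso_camera package):

   import rospy
   import actionlib
   from actionlib_msgs.msg import GoalStatus

   def call_action(client, goal):
       # Send the goal to the action server and block until a result arrives.
       client.send_goal(goal)
       client.wait_for_result()
       if client.get_state() != GoalStatus.SUCCEEDED:
           rospy.logwarn("Action was not successful.")
       return client.get_result()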

Python Node

The following node calls the action servers of the mono and the stereo camera, respectively. It is already defined in the ensenso_camera package as color_point_cloud, so you do not have to write it yourself or copy it to your workspace. Below is the complete script; after it, we will go through it in detail:

   1 #!/usr/bin/env python
   2 import rospy
   3 import sys
   4 
   5 import actionlib
   6 from actionlib_msgs.msg import GoalStatus
   7 from ensenso_camera_msgs.msg import RequestDataAction, RequestDataGoal
   8 from ensenso_camera_msgs.msg import RequestDataMonoAction, RequestDataMonoGoal
   9 from ensenso_camera_msgs.msg import TexturedPointCloudAction, TexturedPointCloudGoal
  10 
  11 
  12 def main():
  13     loop_rate = rospy.get_param("~rate", 2)
  14     mono_serial = rospy.get_param("~mono_serial")
  15     stereo_namespace = rospy.get_param("~stereo_ns", "stereo")
  16     mono_namespace = rospy.get_param("~mono_ns", "mono")
  17     timeout = rospy.get_param("~timeout", 60)
  18 
  19     # Use different namespaces in order to prevent duplicate topic names.
  20     request_stereo_data_client_name = stereo_namespace + "/request_data"
  21     request_mono_data_client_name = mono_namespace + "/request_data"
  22     texture_point_cloud_client_name = stereo_namespace + "/texture_point_cloud"
  23 
  24     request_stereo_data_client = actionlib.SimpleActionClient(
  25         request_stereo_data_client_name, RequestDataAction)
  26     request_mono_data_client = actionlib.SimpleActionClient(
  27         request_mono_data_client_name, RequestDataMonoAction)
  28     texture_point_cloud_client = actionlib.SimpleActionClient(
  29         texture_point_cloud_client_name, TexturedPointCloudAction)
  30 
  31     clients_names = [
  32         request_stereo_data_client_name,
  33         request_mono_data_client_name,
  34         texture_point_cloud_client_name
  35     ]
  36     clients = [
  37         request_stereo_data_client,
  38         request_mono_data_client,
  39         texture_point_cloud_client
  40     ]
  41     for client_name, client in zip(clients_names, clients):
  42         rospy.loginfo("Trying to connect to: {}".format(client_name))
  43         if not client.wait_for_server(rospy.Duration(timeout)):
  44             rospy.logerr("The camera node is not running!")
  45             sys.exit()
  46         rospy.loginfo("Connected")
  47 
  48     request_stereo_data_goal = RequestDataGoal()
  49     request_stereo_data_goal.request_point_cloud = True
  50 
  51     request_mono_data_goal = RequestDataMonoGoal()
  52     request_mono_data_goal.request_rectified_images = True
  53 
  54     rate = rospy.Rate(loop_rate)
  55     while not rospy.is_shutdown():
  56         request_stereo_data_client.send_goal(request_stereo_data_goal)
  57         request_mono_data_client.send_goal(request_mono_data_goal)
  58 
  59         request_stereo_data_client.wait_for_result()
  60         request_mono_data_client.wait_for_result()
  61 
  62         if request_stereo_data_client.get_state() != GoalStatus.SUCCEEDED:
  63             rospy.logwarn("Stereo action was not successful.")
  64         else:
  65             error = request_stereo_data_client.get_result().error
  66             if error.code != 0:
  67                 rospy.logerr("Stereo error {}: {}".format(error.code, error.message))
  68 
  69         if request_mono_data_client.get_state() != GoalStatus.SUCCEEDED:
  70             rospy.logwarn("Mono action was not successful.")
  71         else:
  72             error = request_mono_data_client.get_result().error
  73             if error.code != 0:
  74                 rospy.logerr("Mono error {}: {}".format(error.code, error.message))
  75 
  76         texture_goal = TexturedPointCloudGoal()
  77         texture_goal.use_opengl = True
  78         texture_goal.publish_results = True
  79         texture_goal.mono_serial = mono_serial
  80         texture_goal.far_plane = 4000.0
  81         texture_goal.near_plane = 100.0
  82 
  83         texture_point_cloud_client.send_goal(texture_goal)
  84         texture_point_cloud_client.wait_for_result()
  85 
  86         rate.sleep()
  87 
  88 
  89 if __name__ == "__main__":
  90     try:
  91         rospy.init_node("color_point_cloud")
  92         main()
  93     except rospy.ROSInterruptException:
  94         pass

The first few lines import the necessary actions for generating a colored point cloud. With the RequestDataAction you receive data from the stereo camera. RequestDataMonoAction is the mono camera equivalent. The TexturedPointCloudAction combines the data of both mono and stereo requests and generates the colored point cloud.

   5 import actionlib
   6 from actionlib_msgs.msg import GoalStatus
   7 from ensenso_camera_msgs.msg import RequestDataAction, RequestDataGoal
   8 from ensenso_camera_msgs.msg import RequestDataMonoAction, RequestDataMonoGoal
   9 from ensenso_camera_msgs.msg import TexturedPointCloudAction, TexturedPointCloudGoal

At the beginning of the main function the node's private parameters are read and stored in variables. If a parameter is not set, the given default value is used; mono_serial has no default and therefore must always be provided. Please note that the namespaces for the stereo and the mono camera must differ, because otherwise the topics and actions of the two cameras cannot be distinguished.

  12 def main():
  13     loop_rate = rospy.get_param("~rate", 2)
  14     mono_serial = rospy.get_param("~mono_serial")
  15     stereo_namespace = rospy.get_param("~stereo_ns", "stereo")
  16     mono_namespace = rospy.get_param("~mono_ns", "mono")
  17     timeout = rospy.get_param("~timeout", 60)

The following block of code defines the action clients, which will each send a specific goal to their connected action server. The SimpleActionClient constructor takes the action's topic namespace and the action type as arguments. The servers run within the camera nodes, which we will start later using a launch file.

The three clients then wait for their corresponding servers to come online. If a server does not respond within the previously defined timeout, the node exits with an error.

  19     # Use different namespaces in order to prevent duplicate topic names.
  20     request_stereo_data_client_name = stereo_namespace + "/request_data"
  21     request_mono_data_client_name = mono_namespace + "/request_data"
  22     texture_point_cloud_client_name = stereo_namespace + "/texture_point_cloud"
  23 
  24     request_stereo_data_client = actionlib.SimpleActionClient(
  25         request_stereo_data_client_name, RequestDataAction)
  26     request_mono_data_client = actionlib.SimpleActionClient(
  27         request_mono_data_client_name, RequestDataMonoAction)
  28     texture_point_cloud_client = actionlib.SimpleActionClient(
  29         texture_point_cloud_client_name, TexturedPointCloudAction)
  30 
  31     clients_names = [
  32         request_stereo_data_client_name,
  33         request_mono_data_client_name,
  34         texture_point_cloud_client_name
  35     ]
  36     clients = [
  37         request_stereo_data_client,
  38         request_mono_data_client,
  39         texture_point_cloud_client
  40     ]
  41     for client_name, client in zip(clients_names, clients):
  42         rospy.loginfo("Trying to connect to: {}".format(client_name))
  43         if not client.wait_for_server(rospy.Duration(timeout)):
  44             rospy.logerr("The camera node is not running!")
  45             sys.exit()
  46         rospy.loginfo("Connected")

Lines 48 to 52 define the variables that hold the goals of the request actions. If you want to know which parameters the goals can take, please refer to ensenso_camera_msgs/RequestData and ensenso_camera_msgs/RequestDataMono.

In this case the point cloud is requested from the stereo camera and the rectified (undistorted) image is requested from the mono camera.

  48     request_stereo_data_goal = RequestDataGoal()
  49     request_stereo_data_goal.request_point_cloud = True
  50 
  51     request_mono_data_goal = RequestDataMonoGoal()
  52     request_mono_data_goal.request_rectified_images = True
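
If you want to inspect all fields these goals offer, you can print the generated goal messages directly, assuming the ensenso_camera_msgs package is installed:

rosmsg show ensenso_camera_msgs/RequestDataGoal
rosmsg show ensenso_camera_msgs/RequestDataMonoGoal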

The rest of the program is an endless loop. Its rate is defined by the variable loop_rate, which defaults to 2 Hz.

The clients send their previously defined action goals to their servers and wait for the results. In the following two if-statements the results are checked for errors that might have occurred during the request.

  54     rate = rospy.Rate(loop_rate)
  55     while not rospy.is_shutdown():
  56         request_stereo_data_client.send_goal(request_stereo_data_goal)
  57         request_mono_data_client.send_goal(request_mono_data_goal)
  58 
  59         request_stereo_data_client.wait_for_result()
  60         request_mono_data_client.wait_for_result()
  61 
  62         if request_stereo_data_client.get_state() != GoalStatus.SUCCEEDED:
  63             rospy.logwarn("Stereo action was not successful.")
  64         else:
  65             error = request_stereo_data_client.get_result().error
  66             if error.code != 0:
  67                 rospy.logerr("Stereo error {}: {}".format(error.code, error.message))
  68 
  69         if request_mono_data_client.get_state() != GoalStatus.SUCCEEDED:
  70             rospy.logwarn("Mono action was not successful.")
  71         else:
  72             error = request_mono_data_client.get_result().error
  73             if error.code != 0:
  74                 rospy.logerr("Mono error {}: {}".format(error.code, error.message))
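
The goal states compared against in these if-statements are defined by actionlib. If you are unsure which states exist, you can list them with:

rosmsg show actionlib_msgs/GoalStatus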

If all requests have been successful, the texturing goal is defined and filled with parameters. Again, if you want to know which parameters the action's goal can take, please refer to ensenso_camera_msgs/TexturedPointCloud.

The action itself projects the previously requested point cloud into the mono camera's frame. For this we have to provide the mono camera's serial and choose whether to use OpenGL for rendering. We also have to define far and near clipping planes via far_plane and near_plane: all points behind the far plane, as well as all points in front of the near plane, are ignored during rendering. For further information on rendering and clipping, refer to the EnsensoSDK manual, which describes the RenderPointMap command that this action uses internally.

Afterwards this goal is sent to the TexturedPointCloud action server of the stereo camera.

  76         texture_goal = TexturedPointCloudGoal()
  77         texture_goal.use_opengl = True
  78         texture_goal.publish_results = True
  79         texture_goal.mono_serial = mono_serial
  80         texture_goal.far_plane = 4000.0
  81         texture_goal.near_plane = 100.0
  82 
  83         texture_point_cloud_client.send_goal(texture_goal)
  84         texture_point_cloud_client.wait_for_result()
  85 
  86         rate.sleep()
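
As with the request goals, all parameters of the texturing goal can be listed from the generated message, assuming ensenso_camera_msgs is installed:

rosmsg show ensenso_camera_msgs/TexturedPointCloudGoal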

Running the Node

One way to run the node is to execute the following command in a terminal, while the roscore, the stereo camera node and the mono camera node are already running:

rosrun ensenso_camera color_point_cloud
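
Note that mono_serial has no default value, so when starting the node with rosrun you have to pass it as a private parameter, for example (fill in your own serial):

rosrun ensenso_camera color_point_cloud _mono_serial:=<serial of mono camera>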

Alternatively, and to make things easier, we can create and run a launch file that opens the cameras as nodelets in the same nodelet manager and runs the previously defined color_point_cloud node. In the previous tutorial we learned how to start the camera nodelets within the same nodelet manager. As a reminder, you can launch the cameras via the predefined mono_stereo_nodelets.launch file, but you have to provide the camera serials as well as the namespaces to the nodelets. The following launch file does exactly this:

   1 <!-- Open both an Ensenso stereo camera and a mono camera within the same
   2 nodelet manager and run the color_point_cloud Python node. -->
   3 
   4 <launch>
   5   <arg name="serial_stereo" default=""
   6        doc="Serial of the stereo camera (default: first found stereo camera)" />
   7   <arg name="serial_mono" default=""
   8        doc="Serial of the mono camera (default: first found mono camera)" />
   9   <arg name="settings_stereo" default=""
  10        doc="JSON file with stereo camera parameters" />
  11   <arg name="settings_mono" default=""
  12        doc="JSON file with mono camera parameters" />
  13   <arg name="stereo_ns" default="stereo"
  14        doc="Namespace of the stereo camera" />
  15   <arg name="mono_ns" default="mono"
  16        doc="Namespace of the mono camera" />
  17   <arg name="camera_frame" default=""
  18        doc="The stereo camera's tf frame (default: optical_frame_SERIAL_STEREO)" />
  19   <arg name="link_frame" default=""
  20        doc="The stereo camera's link tf frame (default: camera_frame)" />
  21   <arg name="target_frame" default=""
  22        doc="The tf frame the data will be returned in (default: camera_frame)" />
  23   <arg name="tcp_port" default="-1"
  24        doc="The TCP port to open on the NxLib (-1=off, 0=autoSelect, >0=portNumber)" />
  25   <arg name="threads" default="-1"
  26        doc="The number of threads used by the NxLib for this node (-1=autoDetect)" />
  27   <arg name="wait_for_camera" default="false"
  28        doc="Whether this node should wait for the camera to become available" />
  29 
  30   <include file="$(find ensenso_camera)/launch/mono_stereo_nodelets.launch">
  31     <arg name="serial_stereo" value="$(arg serial_stereo)" />
  32     <arg name="serial_mono" value="$(arg serial_mono)" />
  33     <arg name="settings_stereo" value="$(arg settings_stereo)" />
  34     <arg name="settings_mono" value="$(arg settings_mono)" />
  35     <arg name="stereo_ns" value="$(arg stereo_ns)" />
  36     <arg name="mono_ns" value="$(arg mono_ns)" />
  37     <arg name="camera_frame" value="$(arg camera_frame)" />
  38     <arg name="link_frame" value="$(arg link_frame)" />
  39     <arg name="target_frame" value="$(arg target_frame)" />
  40     <arg name="tcp_port" value="$(arg tcp_port)" />
  41     <arg name="threads" value="$(arg threads)" />
  42     <arg name="wait_for_camera" value="$(arg wait_for_camera)" />
  43   </include>
  44 
  45   <node pkg="ensenso_camera" type="color_point_cloud" name="color_point_cloud">
  46     <param name="mono_serial" type="string" value="$(arg serial_mono)" />
  47     <param name="stereo_ns" type="string" value="$(arg stereo_ns)" />
  48     <param name="mono_ns" type="string" value="$(arg mono_ns)" />
  49   </node>
  50 </launch>

The launch file accepts all the arguments that the ensenso_camera_mono_node and the ensenso_camera_node accept, and it provides default values for each argument. If you do not provide camera serials, the wrapped nodes will select and open the first mono or stereo camera they find, respectively.

   5   <arg name="serial_stereo" default=""
   6        doc="Serial of the stereo camera (default: first found stereo camera)" />
   7   <arg name="serial_mono" default=""
   8        doc="Serial of the mono camera (default: first found mono camera)" />
   9   <arg name="settings_stereo" default=""
  10        doc="JSON file with stereo camera parameters" />
  11   <arg name="settings_mono" default=""
  12        doc="JSON file with mono camera parameters" />
  13   <arg name="stereo_ns" default="stereo"
  14        doc="Namespace of the stereo camera" />
  15   <arg name="mono_ns" default="mono"
  16        doc="Namespace of the mono camera" />
  17   <arg name="camera_frame" default=""
  18        doc="The stereo camera's tf frame (default: optical_frame_SERIAL_STEREO)" />
  19   <arg name="link_frame" default=""
  20        doc="The stereo camera's link tf frame (default: camera_frame)" />
  21   <arg name="target_frame" default=""
  22        doc="The tf frame the data will be returned in (default: camera_frame)" />
  23   <arg name="tcp_port" default="-1"
  24        doc="The TCP port to open on the NxLib (-1=off, 0=autoSelect, >0=portNumber)" />
  25   <arg name="threads" default="-1"
  26        doc="The number of threads used by the NxLib for this node (-1=autoDetect)" />
  27   <arg name="wait_for_camera" default="false"
  28        doc="Whether this node should wait for the camera to become available" />

The following block of code includes the mono_stereo_nodelets.launch file. This file runs the two camera nodelets for the mono and stereo camera with all of the arguments defined above.

  30   <include file="$(find ensenso_camera)/launch/mono_stereo_nodelets.launch">
  31     <arg name="serial_stereo" value="$(arg serial_stereo)" />
  32     <arg name="serial_mono" value="$(arg serial_mono)" />
  33     <arg name="settings_stereo" value="$(arg settings_stereo)" />
  34     <arg name="settings_mono" value="$(arg settings_mono)" />
  35     <arg name="stereo_ns" value="$(arg stereo_ns)" />
  36     <arg name="mono_ns" value="$(arg mono_ns)" />
  37     <arg name="camera_frame" value="$(arg camera_frame)" />
  38     <arg name="link_frame" value="$(arg link_frame)" />
  39     <arg name="target_frame" value="$(arg target_frame)" />
  40     <arg name="tcp_port" value="$(arg tcp_port)" />
  41     <arg name="threads" value="$(arg threads)" />
  42     <arg name="wait_for_camera" value="$(arg wait_for_camera)" />
  43   </include>

The last block of code will run the color_point_cloud node from above.

  45   <node pkg="ensenso_camera" type="color_point_cloud" name="color_point_cloud">
  46     <param name="mono_serial" type="string" value="$(arg serial_mono)" />
  47     <param name="stereo_ns" type="string" value="$(arg stereo_ns)" />
  48     <param name="mono_ns" type="string" value="$(arg mono_ns)" />
  49   </node>

To run the launch file, which already exists in the ensenso_camera package, execute the following command in a terminal and fill in your camera serials:

roslaunch ensenso_camera \
    color_point_cloud.launch \
    serial_stereo:=<serial of Ensenso stereo camera> \
    serial_mono:=<serial of mono camera>
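
Once everything is running, you can check that a textured point cloud is actually being published and visualize it in rviz (the exact topic name depends on the chosen stereo namespace):

rostopic list | grep point_cloud
rosrun rviz rviz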
