Note: This tutorial assumes that you have completed the previous tutorial: Running several Nodelets.
(!) Please ask about problems and questions regarding this tutorial on answers.ros.org. Don't forget to include in your question the link to this page, the versions of your OS & ROS, and also add appropriate tags.

Generating a Coloured Point Cloud with Nodelets

Description: This tutorial shows how to run a stereo and mono nodelet and how to generate a coloured point cloud.

Tutorial Level: INTERMEDIATE

General Idea

To receive a coloured point cloud from an Ensenso stereo camera, we have to perform several steps, which are briefly:

  1. Receive a point cloud from the stereo camera
  2. Receive an image from the mono camera
  3. Project the point cloud into the mono camera's frame
  4. Apply the pixel values of the mono camera image to the projected point cloud.

Steps 1 and 2 are done with the RequestData and RequestDataMono actions, defined for stereo and mono cameras respectively. Steps 3 and 4 are done by the TexturedPointCloud action for stereo cameras. In the next section we will write a ROS node that defines clients to these action servers and calls them via action goals.

Define Clients to Action Servers in one Node

In general, the nodes and nodelets of the ensenso_camera package, as well as this node, make heavy use of ROS actions. If you do not know what actions are, we recommend doing the ROS action tutorials first.

The following node calls the action servers of the mono and the stereo camera. We will go through the node in detail below, but first of all the complete script:

   1 #!/usr/bin/env python
   2 import rospy
   3 import sys
   4 
   5 import actionlib
   6 from actionlib_msgs.msg import GoalStatus
   7 from ensenso_camera_msgs.msg import RequestDataAction, RequestDataGoal
   8 from ensenso_camera_msgs.msg import RequestDataMonoAction, RequestDataMonoGoal
   9 from ensenso_camera_msgs.msg import TexturedPointCloudAction, TexturedPointCloudGoal
  10 
  11 
  12 def main():
  13     loop_rate = rospy.get_param("~rate", 2)
  14     rgb_serial = rospy.get_param("~rgb_serial")
  15     stereo_namespace = rospy.get_param("~stereo_ns", "/stereo_camera")
  16     rgb_namespace = rospy.get_param("~rgb_ns", "/rgb_camera")
  17     timeout = rospy.get_param("~timeout", 60)
  18 
  19     # The servers are scoped into the namespaces of the different cameras. Otherwise the topics would interfere.
  20     request_data_client_stereo = actionlib.SimpleActionClient(stereo_namespace + "/request_data",
  21                                                               RequestDataAction)
  22     request_data_client_mono = actionlib.SimpleActionClient(rgb_namespace + "/request_data",
  23                                                             RequestDataMonoAction)
  24 
  25     texture_point_cloud_client = actionlib.SimpleActionClient(stereo_namespace + "/texture_point_cloud",
  26                                                               TexturedPointCloudAction)
  27 
  28     for client in [request_data_client_stereo, request_data_client_mono, texture_point_cloud_client]:
  29         if not client.wait_for_server(rospy.Duration(timeout)):
  30             rospy.logerr("The camera node is not running!")
  31             sys.exit()
  32 
  33     request_data_goal_stereo = RequestDataGoal()
  34     request_data_goal_stereo.request_point_cloud = True
  35 
  36     request_data_goal_mono = RequestDataMonoGoal()
  37     request_data_goal_mono.request_rectified_images = True
  38 
  39     rate = rospy.Rate(loop_rate)
  40     while not rospy.is_shutdown():
  41         request_data_client_stereo.send_goal(request_data_goal_stereo)
  42         request_data_client_mono.send_goal(request_data_goal_mono)
  43         request_data_client_stereo.wait_for_result()
  44         request_data_client_mono.wait_for_result()
  45 
  46         if request_data_client_stereo.get_state() != GoalStatus.SUCCEEDED:
  47             rospy.logwarn("Stereo request action was not successful.")
  48             result = request_data_client_stereo.get_result()
  49             message = result.error
  50             rospy.logwarn(message)
  51         if request_data_client_mono.get_state() != GoalStatus.SUCCEEDED:
  52             rospy.logwarn("Mono request action was not successful.")
  53             result = request_data_client_mono.get_result()
  54             message = result.error
  55             rospy.logwarn(message)
  56 
  57         # Request the textured PC after getting mono rgb image and stereo point cloud
  58         texture_goal = TexturedPointCloudGoal()
  59         texture_goal.use_openGL = True
  60         texture_goal.publish_results = True
  61         texture_goal.mono_serial = rgb_serial
  62         texture_goal.far_plane = 4000.0
  63         texture_goal.near_plane = 100.0
  64 
  65         texture_point_cloud_client.send_goal(texture_goal)
  66         texture_point_cloud_client.wait_for_result()
  67 
  68         rate.sleep()
  69 
  70 
  71 if __name__ == "__main__":
  72     try:
  73         rospy.init_node("color_point_cloud")
  74         main()
  75     except rospy.ROSInterruptException:
  76         pass

The Node in Detail

The first few lines import the actions needed for generating a coloured point cloud. With the RequestDataAction you generally receive data from the stereo camera. The RequestDataMonoAction serves the same purpose for the mono camera. The TexturedPointCloudAction combines the data of both requests and generates the coloured point cloud.

   5 import actionlib
   6 from actionlib_msgs.msg import GoalStatus
   7 from ensenso_camera_msgs.msg import RequestDataAction, RequestDataGoal
   8 from ensenso_camera_msgs.msg import RequestDataMonoAction, RequestDataMonoGoal
   9 from ensenso_camera_msgs.msg import TexturedPointCloudAction, TexturedPointCloudGoal

In our main() function, several variables are defined at the beginning. Some of them fall back to default values if no value is assigned to them. Crucial, however, are the namespaces of the stereo camera (stereo_ns) and the mono camera (rgb_ns): both cameras offer actions with the same names, so the namespaces are needed to distinguish the actions of the mono camera from those of the stereo camera. Note also that rgb_serial has no default value; if the parameter is not set, rospy.get_param raises a KeyError and the node aborts.

  12 def main():
  13     loop_rate = rospy.get_param("~rate", 2)
  14     rgb_serial = rospy.get_param("~rgb_serial")
  15     stereo_namespace = rospy.get_param("~stereo_ns", "/stereo_camera")
  16     rgb_namespace = rospy.get_param("~rgb_ns", "/rgb_camera")
  17     timeout = rospy.get_param("~timeout", 60)

The following block of code defines the action clients, which will send a specific goal to the action servers. The SimpleActionClient takes the namespace of the action and the action type as arguments. The servers themselves run inside the camera nodelets, which we will launch from a launch file later on.

The three clients wait, with a timeout, for their servers to come online. If a server does not respond within the previously defined timeout, the node exits with an error.

  19     # The servers are scoped into the namespaces of the different cameras. Otherwise the topics would interfere.
  20     request_data_client_stereo = actionlib.SimpleActionClient(stereo_namespace + "/request_data",
  21                                                               RequestDataAction)
  22     request_data_client_mono = actionlib.SimpleActionClient(rgb_namespace + "/request_data",
  23                                                             RequestDataMonoAction)
  24 
  25     texture_point_cloud_client = actionlib.SimpleActionClient(stereo_namespace + "/texture_point_cloud",
  26                                                               TexturedPointCloudAction)
  27 
  28     for client in [request_data_client_stereo, request_data_client_mono, texture_point_cloud_client]:
  29         if not client.wait_for_server(rospy.Duration(timeout)):
  30             rospy.logerr("The camera node is not running!")
  31             sys.exit()
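
If you are unsure whether the servers are actually up, you can list the topics that actionlib creates for every action. For the stereo request action in the default namespace, the output should look similar to this:

$ rostopic list | grep /stereo_camera/request_data
/stereo_camera/request_data/cancel
/stereo_camera/request_data/feedback
/stereo_camera/request_data/goal
/stereo_camera/request_data/result
/stereo_camera/request_data/status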

Lines 33 to 37 define the variables that hold the goals of the request actions. If you want to know which parameters the goals can take, please refer to RequestData or RequestDataMono. In this case the point cloud is requested from the stereo camera, and the rectified (undistorted) image from the mono camera.

  33     request_data_goal_stereo = RequestDataGoal()
  34     request_data_goal_stereo.request_point_cloud = True
  35 
  36     request_data_goal_mono = RequestDataMonoGoal()
  37     request_data_goal_mono.request_rectified_images = True
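
If you do not want to open the action definitions, you can also inspect the available request flags interactively. Every generated ROS message class lists its fields in __slots__, so a short snippet like the following (not part of the node) prints them:

   # Print the request flags generated from the .action definitions.
   print(RequestDataGoal().__slots__)
   print(RequestDataMonoGoal().__slots__)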

The rest of the program is an endless loop whose rate is defined by the variable loop_rate, which defaults to 2 Hz. The clients send their previously defined request goals to their action servers and wait for the results. The following two if-statements test the results for errors that might have occurred during the request.

  39     rate = rospy.Rate(loop_rate)
  40     while not rospy.is_shutdown():
  41         request_data_client_stereo.send_goal(request_data_goal_stereo)
  42         request_data_client_mono.send_goal(request_data_goal_mono)
  43         request_data_client_stereo.wait_for_result()
  44         request_data_client_mono.wait_for_result()
  45 
  46         if request_data_client_stereo.get_state() != GoalStatus.SUCCEEDED:
  47             rospy.logwarn("Stereo request action was not successful.")
  48             result = request_data_client_stereo.get_result()
  49             message = result.error
  50             rospy.logwarn(message)
  51         if request_data_client_mono.get_state() != GoalStatus.SUCCEEDED:
  52             rospy.logwarn("Mono request action was not successful.")
  53             result = request_data_client_mono.get_result()
  54             message = result.error
  55             rospy.logwarn(message)
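
Since both clients are checked with the same pattern, the two if-statements could be factored into a small helper. This is only a sketch built from the calls already shown above; it is not part of the shipped node:

   def warn_on_failure(client, name):
       # After wait_for_result(), log a warning and the reported error message.
       if client.get_state() != GoalStatus.SUCCEEDED:
           rospy.logwarn("%s request action was not successful." % name)
           rospy.logwarn(client.get_result().error)

   # Inside the loop, after both wait_for_result() calls:
   # warn_on_failure(request_data_client_stereo, "Stereo")
   # warn_on_failure(request_data_client_mono, "Mono")

Note that the node deliberately sends both goals before waiting for either result, so the two cameras capture their data in parallel.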

If all requests have been successful, the texturing goal is defined and filled with parameters. Again, if you want to know which parameters the goal of this action can take, please refer to the TexturedPointCloud action. The action projects the previously requested point cloud into the mono camera's frame. We therefore have to define the mono camera serial and whether we want to use OpenGL for rendering. We also have to define a far and a near clipping plane via far_plane and near_plane: all points behind the far plane, as well as all points in front of the near plane, are ignored when rendering.

Afterwards the goal is sent to the TexturedPointCloud action server of the stereo camera.

  57         # Request the textured PC after getting mono rgb image and stereo point cloud
  58         texture_goal = TexturedPointCloudGoal()
  59         texture_goal.use_openGL = True
  60         texture_goal.publish_results = True
  61         texture_goal.mono_serial = rgb_serial
  62         texture_goal.far_plane = 4000.0
  63         texture_goal.near_plane = 100.0
  64 
  65         texture_point_cloud_client.send_goal(texture_goal)
  66         texture_point_cloud_client.wait_for_result()
  67 
  68         rate.sleep()
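
The clipping planes above are hard-coded to 100.0 and 4000.0, which matches the NxLib convention of millimetres, so everything between roughly 10 cm and 4 m in front of the camera is textured. If your scene has a different depth range, a small variation of the node could read the planes from the parameter server instead. This is only a sketch; the parameter names ~near_plane and ~far_plane are made up for this example:

   # Hypothetical private parameters, defaulting to the tutorial's values.
   texture_goal.near_plane = rospy.get_param("~near_plane", 100.0)
   texture_goal.far_plane = rospy.get_param("~far_plane", 4000.0)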

For further information regarding rendering and clipping, refer to the EnsensoSDK manual, which describes the RenderPointMap command that this action uses internally.

The last lines of code are needed to define a ROS node written in Python. This is the entry point of the program: the node is initialized and our main() function is called.

  71 if __name__ == "__main__":
  72     try:
  73         rospy.init_node("color_point_cloud")
  74         main()
  75     except rospy.ROSInterruptException:
  76         pass

Running the Node

This node is already defined in the ensenso_camera package, so you do not have to write it yourself. In order to run this node, type

$ rosrun ensenso_camera color_point_cloud

into your terminal. But beware: the timeout will be triggered (see lines 28-31) if the camera nodelets are not running. We will come back to this in the last section of this tutorial.
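
The node reads its configuration from private parameters, so you can set them directly on the command line. Note that rgb_serial must always be given; the placeholders stand for your actual values:

$ rosrun ensenso_camera color_point_cloud _rgb_serial:=<mono serial> _stereo_ns:=/stereo_camera _rgb_ns:=/rgb_camera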

Running the Cameras as Nodelets

In the previous tutorial we learned how to start the camera nodelets within the same nodelet manager. As a reminder, you can launch the cameras via the predefined launch file

$ roslaunch ensenso_camera mono_stereo_nodelets.launch

but you will have to define the camera serials as well as the namespaces of the nodelets, as shown below.
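
With the arguments used in this tutorial, such a call could look like this (replace the placeholders with your camera serials):

$ roslaunch ensenso_camera mono_stereo_nodelets.launch serial_stereo:=<stereo serial> serial_mono:=<mono serial> stereo_ns:=stereo mono_ns:=mono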

Combining all Steps in a Launch File

To make things easier, we can create and run a launch file that opens the cameras as nodelets in the same nodelet manager and runs the previously defined color_point_cloud node. The following launch file does both:

   1 <launch>
   2   <arg name="serial_stereo" doc="Serial of the Ensenso stereo camera" />
   3   <arg name="serial_mono" doc="Serial of a mono camera supported by the NxLib" />
   4   <arg name="stereo_ns" default="stereo" />
   5   <arg name="mono_ns" default="mono" />
   6 
   7   <include file="$(find ensenso_camera)/launch/mono_stereo_nodelets.launch">
   8     <arg name="serial_stereo" value="$(arg serial_stereo)" />
   9     <arg name="serial_mono" value="$(arg serial_mono)" />
  10     <arg name="stereo_ns" value="$(arg stereo_ns)" />
  11     <arg name="mono_ns" value="$(arg mono_ns)" />
  12   </include>
  13 
  14   <node pkg="ensenso_camera" type="color_point_cloud" name="color_point_cloud" >
  15     <param name="rgb_serial" type="string" value="$(arg serial_mono)" />
  16     <param name="stereo_ns" type="string" value="$(arg stereo_ns)" />
  17     <param name="rgb_ns" type="string" value="$(arg mono_ns)" />
  18   </node>
  19 </launch>

The Launch File in Detail

The beginning of the file defines the arguments used by the subsequent blocks: the serials of the stereo camera and the mono camera, and the namespaces of the two camera nodelets. The serials have no default value and therefore have to be supplied when launching.

   1 <launch>
   2   <arg name="serial_stereo" doc="Serial of the Ensenso stereo camera" />
   3   <arg name="serial_mono" doc="Serial of a mono camera supported by the NxLib" />
   4   <arg name="stereo_ns" default="stereo" />
   5   <arg name="mono_ns" default="mono" />

The following block of code includes the mono_stereo_nodelets.launch file, which runs the two camera nodelets for the mono and the stereo camera, using the given namespaces and camera serials.

   7   <include file="$(find ensenso_camera)/launch/mono_stereo_nodelets.launch">
   8     <arg name="serial_stereo" value="$(arg serial_stereo)" />
   9     <arg name="serial_mono" value="$(arg serial_mono)" />
  10     <arg name="stereo_ns" value="$(arg stereo_ns)" />
  11     <arg name="mono_ns" value="$(arg mono_ns)" />
  12   </include>

The last block of code runs the color_point_cloud node (the node we created earlier) and passes the mono camera serial and the two namespaces to it as private parameters.

  14   <node pkg="ensenso_camera" type="color_point_cloud" name="color_point_cloud" >
  15     <param name="rgb_serial" type="string" value="$(arg serial_mono)" />
  16     <param name="stereo_ns" type="string" value="$(arg stereo_ns)" />
  17     <param name="rgb_ns" type="string" value="$(arg mono_ns)" />
  18   </node>
  19 </launch>

Running the Launch File

In order to run the launch file, which already exists in the ensenso_camera package, enter the following line into your terminal of choice.

$ roslaunch ensenso_camera color_point_cloud.launch

Do not forget to pass the camera serials defined in lines 2 and 3; the namespaces in lines 4 and 5 already have default values.
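
For example:

$ roslaunch ensenso_camera color_point_cloud.launch serial_stereo:=<stereo serial> serial_mono:=<mono serial>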
