Depthcloud encoder that enables ros3d.js-based point cloud streaming


This package subscribes to the depth and RGB image topics of the openni_camera node and generates a single combined image stream that can be used to render 3D point clouds in a web browser.

Published Topics

depthcloud_encoded (sensor_msgs/Image)
  • Combined depth and color image

Parameters


~depth (string, default: /camera/depth/image)
  • Depth image topic to subscribe to
~rgb (string, default: /camera/rgb/image)
  • Color image topic to subscribe to


The depthcloud-encoded images can be streamed to a web browser with the help of the ros_web_video package. To increase the dynamic range of the streamed depth image, it is split into two individual frames that encode the captured depth information from 0 to 3 meters and from 3 to 6 meters, respectively. Furthermore, compression artifacts are reduced by filling areas of unknown depth with interpolated sample data; a binary mask is used to detect and omit these interpolated samples during decoding.

Once the video stream is received by the web browser, it is assigned to a WebGL texture object, which allows for fast rendering of the point cloud on the GPU. A vertex shader reassembles the depth and color data and generates a colored point cloud. In addition, a filter based on local depth variance further reduces the impact of video compression distortion.
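The two-range depth split described above can be illustrated with a short sketch. This is plain Python rather than the package's actual C++ implementation, and the exact 8-bit quantization (rounding, channel layout) is an assumption; it only shows how splitting the 0–6 m span across two frames doubles the effective depth resolution.

```python
# Sketch: encode a metric depth value into two 8-bit samples,
# one frame covering 0-3 m and a second covering 3-6 m.
# Quantization details are illustrative, not the package's exact scheme.

NEAR_MAX = 3.0  # meters covered by the near frame
FAR_MAX = 6.0   # meters covered by both frames together

def encode_depth(depth_m):
    """Map a depth value in meters to a (near_byte, far_byte) pair."""
    d = max(0.0, min(depth_m, FAR_MAX))
    near = min(d, NEAR_MAX) / NEAR_MAX                    # 0..1 over 0-3 m
    far = max(d - NEAR_MAX, 0.0) / (FAR_MAX - NEAR_MAX)  # 0..1 over 3-6 m
    return round(near * 255), round(far * 255)

def decode_depth(near_byte, far_byte):
    """Invert encode_depth, using the far frame once the near one saturates."""
    if near_byte < 255:
        return near_byte / 255.0 * NEAR_MAX
    return NEAR_MAX + far_byte / 255.0 * (FAR_MAX - NEAR_MAX)
```

Each frame spends its full 8-bit range on only half of the 6 m span, so per-sample depth resolution is roughly 3 m / 255 ≈ 12 mm instead of 6 m / 255 ≈ 24 mm for a single 8-bit frame.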

Source Code

Source code is available at


Please send bug reports to the GitHub Issue Tracker. Feel free to contact us at any point with questions and comments.

Wiki: depthcloud_encoder (last edited 2013-05-29 17:19:04 by DavidGossow)