If you are a new user of the ROS OpenNI drivers (on Electric or later), see openni_launch and openni_camera. This page is provided to help users of the old driver migrate their code, and for historical interest.

openni_camera_deprecated is unmaintained and likely to be removed in Groovy.


ROS has long had two distinct OpenNI camera drivers:

  • The older, monolithic node.
  • The new, minimal driver nodelet, with 2D & 3D processing split off into separate nodelets. Launch files tie it all together.

The question of which to use has caused plenty of confusion: there are slight differences in the ROS APIs of the two drivers, and the APIs have moved between packages across ROS releases.


Microsoft released the Kinect in November 2010. Almost immediately the open-source libfreenect community deciphered the USB protocol and provided basic access to the depth and RGB image streams. We in the ROS community participated in the early development of libfreenect and wrote an experimental ROS driver for the Kinect.

In December 2010, PrimeSense (the company behind the Kinect's depth-sensing technology) released the OpenNI natural interaction framework. It provided fuller access to the Kinect's hardware as well as software depth registration and skeleton tracking. We quickly ported our ROS driver to OpenNI.

The openni_kinect stack, including the openni_camera driver package, was first released in ROS Diamondback in March 2011 following months of feverish development.


openni_camera provides:

  • bin/openni_node

  • Nodelet openni_camera/OpenNINodelet

  • Example launch file launch/openni_node.launch


The new driver (discussed below) was backported to the Diamondback-only package openni_camera_unstable. It provides:

  • bin/openni_node

  • Nodelet openni_camera/driver

  • The nodelets from Electric's depth_image_proc

  • Launch file launch/openni.launch from Electric's openni_launch


The original openni_camera driver was developed rapidly and organically, as multiple developers hacked new features onto the same monolithic node. Over time we discovered drawbacks of the original ROS API, regretted its lack of flexibility, and found the codebase more and more difficult to extend.

In Electric, we introduced a new "unstable" version of the driver, intended to eventually replace the old monolithic one. It greatly slimmed down the driver node(let), splitting most of the device-independent data processing into separate nodelets in depth_image_proc. At the same time it added features such as calibration, support for Asus Xtion devices, access to the IR image stream, and registration of the depth stream with any (even external) RGB camera.


In Electric, the old driver is accessed exactly as in Diamondback.


openni_camera also provides the new, minimal driver:

  • bin/openni_node_unstable

  • Nodelet openni_camera/driver

depth_image_proc provides various nodelets for 2D/3D processing.

openni_launch provides launch/openni.launch, which composes the driver and processing nodelets into a unified system.


In Fuerte, the new API has officially reached a stable state. There should not be any breaking ROS API changes from Electric. However, the openni_kinect stack has been reorganized, resulting in a couple of name changes.


The old driver no longer lives in openni_camera. It has been split off into openni_camera_deprecated, which still provides:

  • bin/openni_node

  • Nodelet openni_camera/OpenNINodelet

  • Example launch file launch/openni_node.launch


openni_camera (now a unary stack) provides only the new, minimal driver:

  • bin/openni_node (note the renaming)

  • Nodelet openni_camera/driver

depth_image_proc has been moved to image_pipeline.

openni_launch (also a unary stack) still provides launch/openni.launch. It has gained some flexibility since Electric.

Migration guide

Updating from deprecated to stable API

In the stable API, the driver node(let) is minimal, publishing only the device outputs. The deprecated node(let) API instead corresponds closely to the launch file API of openni_launch. If you need processed outputs such as point clouds, replace uses of openni_camera[_deprecated]/openni_node and openni_camera/OpenNINodelet with openni_launch.

Note the following ROS API changes between openni_camera_deprecated and openni_launch:

  • Namespace camera/depth/ now contains only unregistered (in the original depth/IR camera frame) outputs. Outputs registered to the RGB camera frame (including the XYZRGB point cloud) are published in namespace camera/depth_registered/.

  • Topic camera/rgb/points is replaced by camera/depth_registered/points.

  • There is no longer a mechanism for publishing an indexed subset of points. As far as we are aware, the only use case in practice was selecting some (possibly down-sampled) region(s)-of-interest in the depth image. One or more image_proc/crop_decimate nodelets are a superior solution for that.
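The renames above can be captured in a small lookup table, which is handy when porting launch files or remap arguments. This is an illustrative sketch covering only the renames described on this page, not a complete migration tool:

```python
# Deprecated openni_camera topic names mapped to their openni_launch
# equivalents (only the renames described above are covered).
TOPIC_RENAMES = {
    # The registered XYZRGB point cloud moved out of the rgb/ namespace.
    "camera/rgb/points": "camera/depth_registered/points",
}


def migrate_topic(topic, depth_registration=False):
    """Return the openni_launch name for a deprecated openni_camera topic.

    With ~depth_registration enabled, outputs the deprecated driver
    published under camera/depth/ now appear under
    camera/depth_registered/ instead.
    """
    if topic in TOPIC_RENAMES:
        return TOPIC_RENAMES[topic]
    prefix = "camera/depth/"
    if depth_registration and topic.startswith(prefix):
        return "camera/depth_registered/" + topic[len(prefix):]
    return topic  # topics without a rename pass through unchanged


print(migrate_topic("camera/rgb/points"))
# -> camera/depth_registered/points
print(migrate_topic("camera/depth/image", depth_registration=True))
# -> camera/depth_registered/image
```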

Diamondback/Electric to Fuerte (deprecated API)

For users of the deprecated API, the only change is the package name. Instead of:

# Diamondback, Electric
rosrun openni_camera openni_node
roslaunch openni_camera openni_node.launch

use:

# Fuerte
rosrun openni_camera_deprecated openni_node
roslaunch openni_camera_deprecated openni_node.launch

The node(let) ROS API remains unchanged. The nodelet can still be loaded by the same name, openni_camera/OpenNINodelet.

This package is scheduled for removal in Groovy. Please update your packages to use the stable openni_camera and openni_launch APIs.



openni_node

Deprecated OpenNI camera driver.

Subscribed Topics

camera/depth/indices (pcl/PointIndices)
  • If ~use_indices is set, the subset of points to include when publishing a point cloud.

Published Topics

RGB camera
camera/rgb/camera_info (sensor_msgs/CameraInfo)
  • Camera calibration and metadata.
camera/rgb/image_raw (sensor_msgs/Image)
  • Raw image from device. Format is Bayer GRBG for Kinect, YUV422 for PSDK.
camera/rgb/image_mono (sensor_msgs/Image)
  • Monochrome unrectified image.
camera/rgb/image_color (sensor_msgs/Image)
  • Color unrectified image.
camera/rgb/points (sensor_msgs/PointCloud2)
  • Registered XYZRGB point cloud. If using PCL, subscribe as PointCloud<PointXYZRGB>. Published only if ~depth_registration is on.
Depth camera
If ~depth_registration is off, all images are in the original IR camera frame. If on, all images are registered to the RGB camera frame.
camera/depth/camera_info (sensor_msgs/CameraInfo)
  • Camera calibration and metadata.
camera/depth/image_raw (sensor_msgs/Image)
  • Raw image from device. Contains uint16 depths in mm.
camera/depth/image (sensor_msgs/Image)
  • Unrectified depth image. Contains float depths in m.
camera/depth/disparity (stereo_msgs/DisparityImage)
  • Disparity image (inversely related to depth), for interop with stereo processing nodes.
camera/depth/points (sensor_msgs/PointCloud2)
  • Unregistered XYZ point cloud. If using PCL, subscribe as PointCloud<PointXYZ>. Published only if ~depth_registration is off.
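The relationship between the raw and floating-point depth topics above can be illustrated with a short sketch. The treatment of a raw value of 0 as "no measurement" (published as NaN) and the focal length/baseline values are typical-Kinect assumptions used only for illustration, not part of this API description:

```python
import math


def raw_to_metric(depth_raw_mm):
    """Convert a raw uint16 depth sample in millimeters (as published on
    camera/depth/image_raw) to float meters (as on camera/depth/image).
    A raw value of 0 conventionally means 'no measurement' -> NaN.
    """
    if depth_raw_mm == 0:
        return float("nan")
    return depth_raw_mm / 1000.0


def depth_to_disparity(depth_m, focal_px=525.0, baseline_m=0.075):
    """Synthetic disparity in pixels, inversely related to depth as on
    camera/depth/disparity: disparity = focal * baseline / depth.
    525 px and 7.5 cm are typical Kinect values, assumed for illustration.
    """
    return focal_px * baseline_m / depth_m


print(raw_to_metric(1500))      # -> 1.5 (meters)
print(depth_to_disparity(1.5))  # -> 26.25 (pixels)
```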


Parameters

~device_id (string)
  • Specifies which device to open. The following formats are recognized:
    • #1: use the first device found
    • 2@3: use the device on USB bus 2 at address 3
    • B00367707227042B: use the device with the given serial number
~rgb_frame_id (string, default: /openni_rgb_optical_frame)
  • The tf frame of the RGB camera.
~depth_frame_id (string, default: /openni_depth_optical_frame)
  • The tf frame of the IR/depth camera.
~use_indices (bool, default: false)
  • If true, listen on camera/depth/indices and publish point clouds containing only the requested points.
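The three ~device_id formats above can be disambiguated with simple pattern checks. This is an illustrative sketch of the classification logic, not the driver's actual parsing code:

```python
import re


def parse_device_id(device_id):
    """Classify a ~device_id string into one of the three documented forms.

    Returns a tuple tagging the form:
      '#1'               -> ('index', 1)
      '2@3'              -> ('bus', 2, 3)
      'B00367707227042B' -> ('serial', 'B00367707227042B')
    """
    m = re.fullmatch(r"#(\d+)", device_id)
    if m:  # '#N': the N-th device found during enumeration
        return ("index", int(m.group(1)))
    m = re.fullmatch(r"(\d+)@(\d+)", device_id)
    if m:  # 'bus@address': a fixed USB location
        return ("bus", int(m.group(1)), int(m.group(2)))
    # Anything else is treated as a device serial number.
    return ("serial", device_id)


print(parse_device_id("#1"))
print(parse_device_id("2@3"))
print(parse_device_id("B00367707227042B"))
```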
Dynamically Reconfigurable Parameters
See the dynamic_reconfigure package for details on dynamically reconfigurable parameters.
~image_mode (int, default: 2)
  • Image output mode for the color/grayscale image. Possible values are: SXGA_15Hz (1): 1280x1024@15Hz, VGA_30Hz (2): 640x480@30Hz, VGA_25Hz (3): 640x480@25Hz, QVGA_25Hz (4): 320x240@25Hz, QVGA_30Hz (5): 320x240@30Hz, QVGA_60Hz (6): 320x240@60Hz, QQVGA_25Hz (7): 160x120@25Hz, QQVGA_30Hz (8): 160x120@30Hz, QQVGA_60Hz (9): 160x120@60Hz
~debayering (int, default: 0)
  • Bayer-to-RGB conversion algorithm. Possible values are: Bilinear (0): fast debayering using bilinear interpolation, EdgeAware (1): edge-aware debayering, EdgeAwareWeighted (2): weighted edge-aware debayering
~depth_mode (int, default: 2)
  • Depth output mode. Possible values are: SXGA_15Hz (1): 1280x1024@15Hz, VGA_30Hz (2): 640x480@30Hz, VGA_25Hz (3): 640x480@25Hz, QVGA_25Hz (4): 320x240@25Hz, QVGA_30Hz (5): 320x240@30Hz, QVGA_60Hz (6): 320x240@60Hz, QQVGA_25Hz (7): 160x120@25Hz, QQVGA_30Hz (8): 160x120@30Hz, QQVGA_60Hz (9): 160x120@60Hz
~depth_registration (bool, default: false)
  • Whether to register the depth data to the RGB camera frame
~depth_time_offset (double, default: 0.0)
  • Depth image time offset in seconds. Range: -1.0 to 1.0
~image_time_offset (double, default: 0.0)
  • Image time offset in seconds. Range: -1.0 to 1.0
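The ~image_mode/~depth_mode values above map onto resolutions and frame rates; transcribed as a lookup (values taken directly from the mode table above, with the helper function added only for illustration):

```python
# (width, height, fps) for each ~image_mode / ~depth_mode value.
VIDEO_MODES = {
    1: (1280, 1024, 15),  # SXGA_15Hz
    2: (640, 480, 30),    # VGA_30Hz (the default)
    3: (640, 480, 25),    # VGA_25Hz
    4: (320, 240, 25),    # QVGA_25Hz
    5: (320, 240, 30),    # QVGA_30Hz
    6: (320, 240, 60),    # QVGA_60Hz
    7: (160, 120, 25),    # QQVGA_25Hz
    8: (160, 120, 30),    # QQVGA_30Hz
    9: (160, 120, 60),    # QQVGA_60Hz
}


def pixel_rate(mode):
    """Pixels per second pushed by a given mode; useful for comparing
    the relative USB bandwidth demands of the modes."""
    w, h, fps = VIDEO_MODES[mode]
    return w * h * fps


print(VIDEO_MODES[2])  # -> (640, 480, 30)
print(pixel_rate(2))   # -> 9216000
```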

openni_camera/OpenNINodelet nodelet

Nodelet version of the deprecated OpenNI driver. Has the same ROS API as openni_node above.

Launch files


openni_node.launch

A simple example launch file. It opens the first enumerated device with OpenNI depth registration enabled. The ROS API is as described above.


Publishes default transforms relating the IR and RGB cameras to tf. Included by openni_node.launch.

Wiki: openni_camera_deprecated (last edited 2012-04-23 21:44:04 by PatrickMihelich)