rc_visard_driver

Package Summary

The rc_visard_driver provides data from a Roboception rc_visard 3D sensor on several ROS topics.

  • Maintainer status: developed
  • Maintainer: Heiko Hirschmueller <heiko.hirschmueller AT roboception DOT de>
  • Author: Heiko Hirschmueller <heiko.hirschmueller AT roboception DOT de>, Christian Emmerich <christian.emmerich AT roboception DOT de>, Felix Ruess <felix.ruess AT roboception DOT de>
  • License: BSD
  • External website: http://roboception.com/rc_visard
  • Source: git https://github.com/roboception/rc_visard_ros.git (branch: 1.2.0)

Official ROS driver for Roboception rc_visard 3D sensor.

See http://rc-visard.com and http://doc.rc-visard.com for more details.

This package is in maintenance mode. Unless you need the dynamics functionality, please use rc_genicam_driver, which also supports the rc_cube.

Overview

The rc_visard is the world’s first 3D sensor that allows robots to perceive their environment in 3D and localize themselves in space.

  • Passive perception enables recognition of the environment both in natural light outdoors and in enclosed spaces with poor lighting.
  • Global shutter technology enables accurate image data acquisition even during fast movements.
  • With an integrated graphics card, depth images are processed directly in the sensor and do not require any external computing power. Point clouds can be easily generated out of the depth images.
  • Camera images (image of the left camera) serve as reference images for the image data sets.
  • Depth images are computed through triangulation of the stereo images (images of the left and right camera) using the SGM (Semi-Global Matching) method.
  • Confidence images show the confidence in each point in the depth images.
  • Using visual-inertial odometry algorithms, the rc_visard tracks its current position and orientation with high precision.

[Product images: rc_visard with 65 mm baseline (color) and rc_visard with 160 mm baseline (color)]

The rc_visard_driver is the official ROS driver for the rc_visard. It provides ROS parameters for configuration, ROS services for controlling the rc_visard's dynamics module, and ROS topics for sensor data: images, stereo data, point clouds, dynamic state (i.e. poses and IMU data), and TF.

Configuration

Parameters

The following parameters must be set on the ROS parameter server before the node starts (see the example after this list).

  • device: The ID of the device, i.e. Roboception rc_visard sensor. This can be either the

    • serial number, e.g. 02912345

      • IMPORTANT: precede the serial number with a colon (:02912345) when passing it on the command line or setting it via rosparam (see https://github.com/ros/ros_comm/issues/1339). This is not necessary when specifying it as a string in a launch file.

    • user defined name (factory default is the name of the rc_visard's model), must be unique among all reachable sensors
    • internal ID, which is generated by the used GenTL producer. Often, this ID contains the MAC address in some way. This ID can change with the implementation of the transport layer.

      See https://github.com/roboception/rc_genicam_api#device-id for more details. If this parameter is left at its default (empty), the connection only succeeds if exactly one rc_visard is reachable.

  • gev_access: The gev_access mode, i.e.:

    • control: Configuration and streaming, while other clients may still read GenICam parameters. This is the default.

    • exclusive: Exclusive access to the sensor. This prevents other clients from reading GenICam parameters.

    • off: Switches GeV access off completely. The node then only streams pose information, if that is switched on.

  • max_reconnects: Maximum number of consecutive recovery attempts in case the driver loses the connection to the device or another error occurs, e.g. during data streaming.

    • If 0, no recovery is tried at all. If negative, the driver keeps trying to re-connect forever until a connection is re-established. Default: 5.
  • enable_tf: If true, the node subscribes to the rc_visard's dynamics-pose stream and publishes the poses via tf. Default: false

  • enable_visualization_markers: If true, additional visualization markers are published that visualize the rc_visard's dynamics state (velocities and accelerations), see /dynamics_visualization_markers topic. Default: false

  • autostart_dynamics: If true, the rc_visard's dynamics module is turned on when this ROS node starts up. Default: false

  • autostart_dynamics_with_slam: If true, the rc_visard's dynamics module tries to turn on SLAM when this ROS node starts up. If SLAM is not available (no license), only a warning is printed. Default: false

  • autostop_dynamics: If true, the rc_visard's dynamics module is turned off when this ROS node shuts down. Default: false

  • autopublish_trajectory: If true, results of the get_trajectory service calls are automatically published to /trajectory topic. Default: false
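
As a minimal sketch, these parameters can be passed as private parameters on the command line when starting the driver (the node name rc_visard_driver and the serial number 02912345 are placeholders; note the leading colon before the serial number):

  rosrun rc_visard_driver rc_visard_driver _device:=:02912345 _enable_tf:=True _max_reconnects:=-1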

Dynamic-reconfigure Parameters

These parameters can be changed at runtime via dynamic reconfigure (see the command-line example at the end of this section):

  • ptp_enabled: Enable PTP slave (Precision Time Protocol, IEEE 1588)

  • camera_fps: Frames per second that are published by this nodelet. Publishing of frames is throttled to this rate. Setting it higher than the real frame rate of the device has no effect.

  • camera_exp_auto: If true, then the exposure time is chosen automatically, up to exp_max as maximum. If false, then exp_value is used as exposure time in seconds.

  • camera_exp_auto_mode: Auto-exposure mode, i.e. Normal, Out1High or AdaptiveOut1.

  • camera_exp_max: Maximum exposure time in seconds if exp_auto is true.

  • camera_exp_value: Exposure time in seconds if exp_auto is false.

  • camera_gain_value: Gain factor in decibel if exp_auto is false.

  • Auto exposure region: Definition of a region in the left image. If the region has zero size or lies outside the image, the whole left and right images are used to determine the auto exposure.
    • camera_exp_width: Width of auto exposure region. 0 for whole image.

    • camera_exp_height: Height of auto exposure region. 0 for whole image.

    • camera_exp_offset_x: First column of auto exposure region

    • camera_exp_offset_y: First row of auto exposure region

  • depth_acquisition_mode: Can be either SingleFrame or Continuous. Only the first letter is checked, so specifying S or C is sufficient.

  • depth_quality: Quality can be Low, Medium, High or Full. Only the first letter is checked, so specifying L, M, H or F is sufficient. The quality setting effectively determines the resolution used for stereo matching:

    • Full does not downscale the image, i.e. factor is 1 (e.g. 1280x960).

      • NOTE: This mode requires the 'stereo_plus' license on the rc_visard.
    • High quality means computation with 640x480 pixel.

    • Medium quality means computation with 320x240 pixel.

    • Low quality means computation with 214x160 pixel.

    • Default: High.

  • depth_static_scene: This parameter can be set to true if the scene and camera are static. It only has an effect if quality is either High or Full.

    • If active, input images are accumulated and averaged for 300 ms to reduce noise. This limits the frame rate to a maximum of 3 Hz. The timestamp of the disparity image is taken from the first image that was used for accumulation.
  • depth_double_shot: This parameter can be set to true to combine disparity images from two subsequent stereo image pairs. This is meant to be used in conjunction with a projector and ExposureAlternateActive mode.

  • depth_fill: Higher numbers fill gaps with measurements with potentially higher errors.

  • depth_seg: Maximum size of isolated disparity regions that will be invalidated, related to full resolution.

  • depth_smooth: Switches smoothing of disparities on or off. NOTE: Smoothing requires the 'stereo_plus' license on the rc_visard.

  • depth_minconf: Minimal confidence. All disparities with lower confidence will be set to invalid.

  • depth_mindepth: Minimum depth in meters. All disparities with lower depth will be set to invalid.

  • depth_maxdepth: Maximum depth in meters. All disparities with higher depth will be set to invalid.

  • depth_maxdeptherr: Maximum depth error in meters. All disparities with a higher depth error will be set to invalid.

  • depth_exposure_adapt_timeout: Maximum time in seconds to wait after triggering in SingleFrame modes until auto exposure has finished adjustments.

  • out1_mode: Mode for the digital GPIO out1. Possible values are:

    • Low for switching out1 permanently off.

    • High for switching out1 permanently on.

    • ExposureActive for switching out1 on for the exposure time of every image.

    • ExposureAlternateActive for switching out1 on for the exposure time of every second image.

    The value of out1_mode can only be changed if the rc_visard has an IO Control license. The default is ExposureActive.

  • out2_mode: Mode for the digital GPIO out2. The functionality is the same as for out1_mode. The default is Low.

For color sensors, the following dynamic-reconfigure parameters are additionally available:

  • camera_wb_auto: If true, then white balancing is done automatically. If false, then the red and blue to green ratios can be chosen manually.

  • camera_wb_ratio_red: Red to green ratio for color balancing if camera_wb_auto is false.

  • camera_wb_ratio_blue: Blue to green ratio for color balancing if camera_wb_auto is false.
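
As a sketch, these runtime parameters can be inspected and changed with the standard dynamic_reconfigure command-line tool or GUI (the node name rc_visard_driver is an assumption; prepend your namespace if one is used):

  # list the current values of all dynamic-reconfigure parameters
  rosrun dynamic_reconfigure dynparam get /rc_visard_driver
  # change the stereo matching quality (Full requires the 'stereo_plus' license)
  rosrun dynamic_reconfigure dynparam set /rc_visard_driver depth_quality Full
  # or adjust parameters interactively
  rosrun rqt_reconfigure rqt_reconfigure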

Provided Topics

The following topics are provided. The nodelet only requests data (e.g., images, poses) from the sensor if there is a subscriber to the corresponding topic (see the quick-check commands below).

Images, Stereo Data, Point Clouds

  • /stereo/left/camera_info (sensor_msgs::CameraInfo)

  • /stereo/right/camera_info (sensor_msgs::CameraInfo)

  • /stereo/left/camera_param (rc_common_msgs::CameraParam)

  • /stereo/right/camera_param (rc_common_msgs::CameraParam)

  • /stereo/left/image_rect (sensor_msgs::Image, MONO8)
  • /stereo/right/image_rect (sensor_msgs::Image, MONO8)
  • /stereo/disparity (stereo_msgs::DisparityImage)

  • /stereo/disparity_color (sensor_msgs::Image, RGB8, visually pleasing)
  • /stereo/depth (sensor_msgs::Image, TYPE_32FC1)
  • /stereo/confidence (sensor_msgs::Image, TYPE_32FC1, values between 0 and 1)
  • /stereo/error_disparity (sensor_msgs::Image, TYPE_32FC1)
  • /stereo/error_depth (sensor_msgs::Image, TYPE_32FC1)
  • /stereo/points2 (sensor_msgs::PointCloud2)

The proprietary CameraParam messages are sent for every image and contain information like the exposure time, gain and values of digital inputs and outputs at the time of image capture.
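
For a quick check that data is arriving, the standard ROS command-line tools can be used (topic names as listed above; prepend the namespace if the driver runs in one):

  # check that rectified images arrive and at which rate
  rostopic hz /stereo/left/image_rect
  # print the calibration of the left camera once
  rostopic echo -n1 /stereo/left/camera_info
  # visualize the point cloud, e.g. in rviz with the fixed frame set to "camera"
  rosrun rviz rviz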

For color sensors, the following topics are additionally available:

  • /stereo/left/image_rect_color (sensor_msgs::Image, format: RGB8)
  • /stereo/right/image_rect_color (sensor_msgs::Image, format: RGB8)

If the connected rc_visard has an IO Control license, the following topics are additionally provided for images where the GPIO out1 is either low or high. These topics are only useful if out1_mode is set to the special mode ExposureAlternateActive.

  • /stereo/left/image_rect_out1_low (sensor_msgs::Image, MONO8)
  • /stereo/left/image_rect_out1_high (sensor_msgs::Image, MONO8)
  • /stereo/right/image_rect_out1_low (sensor_msgs::Image, MONO8)
  • /stereo/right/image_rect_out1_high (sensor_msgs::Image, MONO8)

For color sensors with an IO Control license, the following topics are additionally available:

  • /stereo/left/image_rect_color_out1_low (sensor_msgs::Image, format: RGB8)
  • /stereo/left/image_rect_color_out1_high (sensor_msgs::Image, format: RGB8)
  • /stereo/right/image_rect_color_out1_low (sensor_msgs::Image, format: RGB8)
  • /stereo/right/image_rect_color_out1_high (sensor_msgs::Image, format: RGB8)

Dynamic State (i.e. poses, IMU data, etc.)

These topics deliver the rc_visard's estimated dynamic state such as its position, orientation, and velocity. For these topics to work properly, the rc_visard's dynamics module must be turned on (see the respective service calls or startup parameters, and the example after this list).

  • /pose (geometry_msgs/PoseStamped; same data as provided via tf if enable_tf is set to true)

  • /pose_ins (geometry_msgs/PoseStamped)
  • /pose_rt (geometry_msgs/PoseStamped)
  • /pose_rt_ins (geometry_msgs/PoseStamped)
  • /dynamics (nav_msgs/Odometry)
  • /dynamics_ins (nav_msgs/Odometry)

This topic delivers raw measurements from the on-board IMU sensor:

  • /imu (sensor_msgs/Imu)
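
A minimal sketch for obtaining pose estimates from the command line, assuming the driver node is named rc_visard_driver and its services are resolved under that name:

  # switch the dynamics module on (not needed if autostart_dynamics is true)
  rosservice call /rc_visard_driver/dynamics_start
  # watch the estimated pose in the world frame
  rostopic echo /pose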

TF

If the parameter enable_tf is set to true, the node subscribes to the rc_visard's pose stream (the same data published on the /pose topic) and publishes the poses via tf (see the tf_echo example after the coordinate frame list below).

Relevant Coordinate Frames

The following coordinate frames are relevant for interpreting the data provided by the rc_visard:

  • camera: The pupil's center of the rc_visard's left camera. All stereo-camera data such as images and point clouds are given in this frame.

  • world: Relevant for navigation applications. The world frame's origin is located at the origin of the rc_visard's IMU coordinate frame at the instant when state estimation is switched on. Estimated poses of the rc_visard are given in this frame, i.e. as the rc_visard moves in the world and state estimation is running, the camera frame changes w.r.t. this frame.

  • imu: The IMU coordinate frame is inside the rc_visard's housing. The raw IMU measurements are given in this frame.
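
With enable_tf set to true, the relation between these frames can be inspected with the standard tf tools, e.g. (frame names without a namespace prefix are assumed):

  # print the current pose of the camera frame in the world frame
  rosrun tf tf_echo world camera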

Running multiple rc_visards in one ROS environment

For operating multiple rc_visards in one ROS environment, each ROS node must be started in a separate namespace, e.g., my_visard. As a result, all frame_ids in all ROS messages will be prefixed, e.g., to my_visard_world or my_visard_camera.
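
A sketch of starting two drivers in separate namespaces (both serial numbers and namespace names are placeholders):

  # first rc_visard
  ROS_NAMESPACE=front_visard rosrun rc_visard_driver rc_visard_driver _device:=:02912345
  # second rc_visard, in another terminal
  ROS_NAMESPACE=rear_visard rosrun rc_visard_driver rc_visard_driver _device:=:02912346

The frame_ids are then prefixed accordingly, e.g., front_visard_camera and rear_visard_camera.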

Services

The following service is offered to trigger stereo matching in SingleFrame mode. It returns an error if the depth_acquisition_mode is Continuous.

  • depth_acquisition_trigger

The following services are offered to start, stop, and restart the rc_visard's dynamics module, which must be running for dynamic-state estimates to work (see the call examples at the end of this section).

  • dynamics_start

  • dynamics_restart

  • dynamics_stop

  • dynamics_start_slam

  • dynamics_restart_slam

  • dynamics_stop_slam

The trajectory constructed and stored by the rc_slam node can be retrieved by

  • slam_get_trajectory

The onboard map of the rc_slam node can be saved on the rc_visard so that it can be loaded again after a SLAM restart or power cycle:

  • slam_save_map

  • slam_load_map

  • slam_remove_map

The onboard rc_slam node can be "reset" (clears the internal state of the SLAM component, including the trajectory) to free the memory with

  • slam_reset
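
Assuming the driver runs as node rc_visard_driver, these services can be called from the command line, e.g.:

  # trigger stereo matching once (only valid in SingleFrame mode)
  rosservice call /rc_visard_driver/depth_acquisition_trigger
  # start and later stop the dynamics module
  rosservice call /rc_visard_driver/dynamics_start
  rosservice call /rc_visard_driver/dynamics_stop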

Diagnostics

The rc_visard_driver uses the diagnostic_updater class from the ROS diagnostics stack to regularly publish a DiagnosticStatus message.

The regular publishing rate can be set via the ~diagnostic_period parameter and defaults to 1 second.

Currently, two statuses are published:

  • Device: Information about the device that the driver is connected to. It covers the rc_visard's serial number, MAC address, user-defined GeV ID, and the firmware image version.

  • Connection: Status of the current connection between rc_visard_driver and rc_visard. It reports one of four different states:

    • Disconnected (Error): The driver is currently not (yet) connected to the sensor and might try to reconnect several times according to the max_reconnects parameter.

    • Idle (Ok): The driver is connected but not publishing any data because no one is subscribed to any.

    • No data (Warning): The driver is connected and expected to publish data, but does not receive any data from the sensor.

    • Streaming (Ok): The driver is connected and properly streaming data.

The published status values are connection_loss_total, incomplete_buffers_total, image_receive_timeouts_total, and current_reconnect_trial. If not Disconnected, additionally the current ip_address and gev_packet_size are published.
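
A minimal way to inspect these diagnostics from the command line (the standard /diagnostics topic used by diagnostic_updater is assumed):

  # print one diagnostics message
  rostopic echo -n1 /diagnostics
  # or watch the status in a GUI
  rosrun rqt_runtime_monitor rqt_runtime_monitor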

Launching

  • Using command line parameters:

rosrun rc_visard_driver rc_visard_driver _device:=:02912345 _enable_tf:=True _autostart_dynamics:=True _autostop_dynamics:=True
  • As a nodelet, and in a separate namespace:

ROS_NAMESPACE=my_visard rosrun nodelet nodelet standalone rc_visard_driver _device:=:02912345

Note that in this setup all frame_ids in all ROS messages (including tf messages) will be prefixed with my_visard, e.g., the frame_id of the published camera images will be my_visard_camera, the frame_id of the poses will be my_visard_world, and the frame_id of the Imu messages will be my_visard_imu.
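
One way to verify the prefixed frame_ids is to inspect a message header, e.g. (assuming the topics resolve under the my_visard namespace as above):

  rostopic echo -n1 /my_visard/stereo/left/camera_info | grep frame_id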

Report a Bug

Use GitHub to report bugs or submit feature requests.

Acknowledgements

Supported by ROSIN - ROS-Industrial Quality-Assured Robot Software Components.
More information: rosin-project.eu

This project has received funding from the European Union’s Horizon 2020
research and innovation programme under grant agreement No. 732287.
