Overview
This stack describes the ROS interface to the Visual-Inertial (VI-) Sensor developed by Skybotix and the Autonomous Systems Lab (ASL), ETH Zurich. The sensor provides fully time-synchronized and factory-calibrated IMU and stereo-camera data streams. A detailed spec sheet of the sensor can be found here.
Powering the Sensor
You can use the provided USB 3.0 micro cable to power your sensor. The USB controller on your computer should be able to provide at least 500 mA; if your USB port provides less, the sensor might still start up, but the image stream will flicker.
Alternatively, you can connect your sensor to a 12 V power supply using the pre-crimped cable. Please refer to the figure below for polarity information. The input voltage can vary from 10 V to 15 V, so connecting the sensor directly to a battery is feasible.
Connecting the Sensor
Using the Ethernet cable that came with your sensor, you can connect the VI-Sensor to a computer or to a network switch. Please note that the network and the computer should support Gigabit Ethernet, as a 100 Mbit/s LAN does not provide sufficient bandwidth for the sensor data. If you don't have a Gigabit Ethernet port but do have a USB 3.0 port (for example on the Odroid-XU board), a USB 3.0 to GigE adapter should work fine. We successfully tested the ASIX AX88179 chipset, which is widely available; any other standards-compliant USB 3.0 dongle should do the job. As the Ethernet port on the sensor does not follow Ethernet standards, use only the provided cable to connect your sensor to either your PC or a network hub/switch.
When the sensor is powered up (boot-up can take 20-30 s) and connected directly to your computer, the sensor acts as a DHCP server and your computer will obtain an IP address (most likely 10.0.0.5). If the sensor is connected to a network with an existing DHCP server, the sensor should automatically disable its own DHCP server and obtain an IP address from the network.
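To verify that the link actually negotiated Gigabit speed, you can query the adapter with ethtool (a quick check; the interface name eth0 is an assumption, adjust it to your setup):
# confirm the link negotiated at 1000Mb/s (interface name is an assumption)
sudo ethtool eth0 | grep Speed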
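You can check which address your interface actually received; a minimal check, assuming the interface is eth0 and the sensor is connected directly:
# show the address obtained from the sensor's DHCP server (expect something like 10.0.0.5)
ip addr show eth0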
If you want to run the sensor driver in a virtual machine (this is possible), please refer to the FAQ.
Interfacing the Sensor From ROS
You can either create a standalone executable for accessing the VI-Sensor, or you can stream the VI-Sensor output to ROS (Robot Operating System, www.ros.org) using the corresponding ROS frontend. For the standalone interface, refer to this section.
You can download the ROS frontend and the low-level driver from GitHub into your rosbuild workspace (for information on how to set up your rosbuild workspace, please refer to this tutorial):
# make sure you have all the necessary dependencies
sudo apt-get install cmake libboost-dev libboost-thread-dev libboost-system-dev libboost-chrono-dev libboost-regex-dev libcurl3-dev libssh2-1-dev libopencv-highgui-dev libopencv-core-dev
# change directory into the rosbuild folder
roscd
# check out the low-level driver
git clone https://github.com/ethz-asl/libvisensor.git
# check out the ROS frontend
git clone https://github.com/ethz-asl/visensor_node.git
# build visensor_node
rosmake visensor_node
Assuming that the sensor is powered and connected, you can run the driver as
rosrun visensor_node visensor_node
If there is an error (most likely failed autodiscovery), please refer to the FAQ.
If the ROS frontend started properly, you can check the image streams using
rosrun image_view image_view image:=/cam0/image_raw
rosrun image_view image_view image:=/cam1/image_raw
The IMU stream is published on the topic /imu0:
rostopic echo /imu0
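To verify that both streams arrive at the expected rates, rostopic can report the publishing frequency of each topic:
rostopic hz /imu0
rostopic hz /cam0/image_raw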
If you have downloaded and properly installed the vi-sensor calibration files (see this section), you can launch the ROS block-matcher stereo pipeline using
roslaunch visensor_node stereo_ros.launch
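The resulting disparity image can be inspected with image_view's dedicated disparity viewer; note that the topic name below is an assumption and depends on the namespace configured in the launch file:
# topic name is an assumption; check `rostopic list` for the actual disparity topic
rosrun image_view disparity_view image:=/disparity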
If the disparity output is very noisy, please refer to the FAQ.
The VI-Sensor comes preconfigured with standard settings such as automatic gain and exposure control. If you want to change these parameters, you can do so using rqt_reconfigure. This tool allows fine-grained control of the imaging sensors (e.g. gain and exposure settings) and the IMU.
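rqt_reconfigure is started as a regular ROS node while the driver is running:
rosrun rqt_reconfigure rqt_reconfigure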
Running the VI Sensor Standalone (Without ROS)
You can create your own applications without ROS by linking them against the low-level sensor driver library. You can download the library as
# check out the low-level driver
git clone https://github.com/ethz-asl/libvisensor.git
You can build and install the library using the build scripts:
# change into the scripts directory
cd libvisensor/build_scripts
./build_and_install
If you don't want to install the library system-wide, you will find the corresponding script in the same folder. In this case, don't forget to set the proper include and library search paths for your application. We provide a stand-alone sample application that you can use as a starting point for the development of your algorithms. You can download and build it as
# check out the sample applications
git clone https://github.com/skybotix/visensor_sample_applications
# build one of them
cd visensor_sample_applications/visensor_interface
mkdir build
cd build
cmake ..
make -j8
If the sensor is connected, you can start the application with
../bin/visensor_interface
and you should now see both image streams displayed. There is also a sample application that allows fine-grained control of the imaging sensors (e.g. gain and exposure settings), as well as one that performs image rectification and dense block-matching. Both are part of the visensor_sample_applications stack.
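If you built against a non-system-wide install of libvisensor, you have to pass the include and library paths explicitly when compiling your own application. A minimal compile sketch, where the install prefix $HOME/visensor, the source file my_app.cpp and the library name are placeholders:
# paths, file name and library name are placeholders; adjust to your local install
g++ -I$HOME/visensor/include -L$HOME/visensor/lib my_app.cpp -lvisensor -o my_app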
Sensor Calibration Data
The VI-Sensor is calibrated for camera intrinsics as well as IMU-camera and camera-camera extrinsics. The calibration follows the standard OpenCV camera calibration (pinhole camera model + radial-tangential lens model). You can download the calibration parameters from our webserver as
wget http://www.skybotix.com/downloads/vi-calibration/p<serial number>.zip
You can find the serial number on the GigE port of your sensor (e.g. the filename could be p21005.zip). The files are documented and should be self-explanatory. For the ROS frontend to stream the calibration data in the corresponding camera_info message, copy the contents of the zip file to the calibration folder of the visensor_node package in your rosbuild workspace:
# change to the calibration directory of the driver
roscd visensor_node/calibration/
wget http://www.skybotix.com/downloads/vi-calibration/p<serial number>.zip
unzip p<serial number>.zip
If the VI-Sensor ROS node is already running, please restart it to load the new calibration files.
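A quick way to do this from the command line (the node name /visensor_node is an assumption; check rosnode list for the actual name):
# node name is an assumption; verify with `rosnode list`
rosnode kill /visensor_node
rosrun visensor_node visensor_node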
Running the Visual-Odometry Framework FOVIS
You can download the fovis framework into your ROS catkin workspace by
cd ~/<CATKIN_WS>/src
git clone https://github.com/skybotix/fovis.git
git clone https://github.com/skybotix/libfovis.git
cd ..
catkin_make
This should build the fovis framework. You can run it with
roslaunch fovis_ros fovis_stereo.launch
Voila, you should now have a working VO framework running. You can check the feature correspondences with
rosrun image_view image_view image:=/stereo_odometer/features
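The odometry estimate itself is published on a separate topic; the name below is an assumption based on the node's namespace, so verify it first:
# topic name is an assumption; check `rostopic list`
rostopic echo /stereo_odometer/odometry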
Please note that displaying and rendering this video stream consumes considerable CPU load. If you want to run fovis with a considerably reduced CPU load (at the cost of sacrificing a bit of accuracy), you can remove the # from
#add_definitions( -DUSE_FAST_SETTINGS )
in the CMakeLists.txt in the libfovis directory. On an Intel i7, this reduces the CPU load for a single core from approximately 35% to less than 20%. For more information on how to tune the gains, please refer to the fovis paper.
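A one-line way to apply this change and rebuild (the workspace path is a placeholder, as above):
# uncomment the define in libfovis' CMakeLists.txt
sed -i 's|^#add_definitions( -DUSE_FAST_SETTINGS )|add_definitions( -DUSE_FAST_SETTINGS )|' ~/<CATKIN_WS>/src/libfovis/CMakeLists.txt
# rebuild the workspace
cd ~/<CATKIN_WS>
catkin_make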
Running the Visual-Odometry Framework VISO2
Analogously to fovis, you can download the viso2 framework into your ROS catkin workspace by
cd ~/<CATKIN_WS>/src
git clone https://github.com/skybotix/viso2.git
cd ..
catkin_make
This should build the viso2 framework. You can run it with
roslaunch viso2_ros demo.launch
Voila, you should now have a working VO framework running. In order to tune the framework for speed or accuracy, check out the viso2 ROS wiki page.
Accessing The Sensor Inside a Virtual Machine
You should be able to access the sensor inside a virtual machine. By default, VMs usually use a NAT network configuration, which blocks the sensor data stream. Instead, set the network configuration of the Gigabit Ethernet adapter to "bridged". With this setting, the VM has direct access to the adapter and thus to the sensor. If you connect the sensor directly to your PC, the network card inside your VM should have the IP 10.0.0.5 if everything is configured correctly.
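With VirtualBox, for example, bridging can also be configured from the host command line; the VM name and the host adapter below are placeholders:
# run while the VM is powered off; VM name and adapter name are placeholders
VBoxManage modifyvm "VI-Sensor-VM" --nic1 bridged --bridgeadapter1 eth0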
FAQ
Can I run the sensor in Windows?
The driver currently only supports Ubuntu 12.04 or higher. If you want to use the sensor in Windows, we encourage you to use a virtual machine; refer to this section for more details.
Note that the host driver is open source, and it should be fairly easy to port it to Windows as it only uses a set of standard libraries such as Boost and OpenSSH.
Why Is My Sensor Not Detected?
If you have problems detecting the sensor, connect it directly to your PC's Ethernet port and power-cycle it. It is important that your Ethernet port is configured in DHCP mode so that it automatically receives an IP address. A few seconds after the sensor has booted up, you should get an IP address from the sensor, namely 10.0.0.5. If not, try running
sudo dhclient eth0
assuming that eth0 is the network interface of your Ethernet adapter. If you still do not get an IP address, please contact our support at info@skybotix.com.