Overview
This package contains a single node, mcl3d_node, which estimates the robot's pose by fusing incoming visual odometry, 3D point clouds, radio-based range measurements and a prior 3D map of the environment in a Monte Carlo Localization (MCL) algorithm.
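The following is a minimal conceptual sketch of one MCL iteration (odometry-driven prediction, range-based weighting, resampling). It is not the mcl3d_node implementation; the particle state, the `range_likelihood` sensor model and all parameter values are illustrative assumptions only.

```python
import numpy as np

def predict(particles, delta, noise=(0.02, 0.02, 0.02, 0.01)):
    """Propagate (x, y, z, yaw) particles with the odometry increment plus Gaussian noise."""
    return particles + delta + np.random.normal(0.0, noise, particles.shape)

def range_likelihood(particle, anchors, ranges, sigma=0.5):
    """Weight a particle by how well predicted anchor distances match the measured ranges."""
    pred = np.linalg.norm(anchors - particle[:3], axis=1)
    return np.exp(-0.5 * np.sum(((pred - ranges) / sigma) ** 2))

def update(particles, anchors, ranges):
    """Re-weight the particle set and resample it in proportion to the weights."""
    w = np.array([range_likelihood(p, anchors, ranges) for p in particles])
    w /= np.sum(w)
    idx = np.random.choice(len(particles), size=len(particles), p=w)
    return particles[idx]

# Example: 500 particles and two hypothetical radio anchors
particles = np.zeros((500, 4))
anchors = np.array([[0.0, 0.0, 2.0], [10.0, 0.0, 2.0]])
particles = predict(particles, delta=np.array([0.1, 0.0, 0.0, 0.0]))
particles = update(particles, anchors, ranges=np.array([5.0, 5.1]))
```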
Tf tree
The transforms tree (following REP 105) is as follows:
map → odom → base_link → camera
Visual odometry algorithms generally estimate camera motion with respect to an initial reference frame, which here corresponds to odom. To recover robot motion from camera motion, the transform from the camera frame to the robot frame must be known. This implementation therefore needs the tf base_link → camera and the tf odom → base_link in order to publish map → odom and correct the odometry drift. The node currently uses default values taken from the sensor setup on the AscTec Neo Research platform.
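As a sketch of how the map → odom correction can be obtained (an assumption about the usual tf pattern, not necessarily how mcl3d_node computes it internally): the MCL estimate of map → base_link is composed with the inverse of the odometry-provided odom → base_link, so that the published map → odom absorbs the accumulated drift.

```python
import numpy as np

def se3(R, t):
    """Build a 4x4 homogeneous transform from a rotation matrix and a translation vector."""
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = t
    return T

# T_map_base: pose of base_link in map, estimated by the particle filter (example values)
# T_odom_base: pose of base_link in odom, reported by visual odometry (example values)
T_map_base = se3(np.eye(3), np.array([2.0, 1.0, 0.5]))
T_odom_base = se3(np.eye(3), np.array([1.8, 1.1, 0.5]))

# map → odom = (map → base_link) ∘ (odom → base_link)^-1
T_map_odom = T_map_base @ np.linalg.inv(T_odom_base)
```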
Nodes
Citing
If you use viodom in an academic context, please cite the following publication: http://ieeexplore.ieee.org/document/7502653/
@INPROCEEDINGS{7502653,
  author={F. J. Perez-Grau and F. R. Fabresse and F. Caballero and A. Viguria and A. Ollero},
  booktitle={2016 International Conference on Unmanned Aircraft Systems (ICUAS)},
  title={Long-term aerial robot localization based on visual odometry and radio-based ranging},
  year={2016},
  month={June},
  pages={608-614},
  doi={10.1109/ICUAS.2016.7502653},
}