ROS for Human-Robot Interaction
ROS for Human-Robot Interaction (or ROS4HRI) is an umbrella term for all the ROS packages, conventions and tools that help develop interactive robots with ROS.
The ROS REP-155 (aka ROS4HRI) defines a set of topics, naming conventions and frames that are important for HRI applications. It was originally introduced in the paper 'ROS for Human-Robot Interaction', presented at IROS 2021.
The REP-155 is still evolving. Ongoing changes can be submitted and discussed on the ros-infrastructure/rep GitHub repository.
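To give a feel for the naming conventions, here is a small sketch of how REP-155 topic names are structured. The helper only builds names following the `/humans/<kind>/<id>/<feature>` pattern; the specific feature names used below (e.g. `roi`) are illustrative, and the normative list of topics is defined in the REP itself.

```python
# Sketch of the REP-155 topic layout: human features are grouped under
# /humans/{faces,bodies,voices,persons}, with one sub-namespace per
# tracked id, plus a per-kind 'tracked' topic listing the current ids.

def hri_topic(kind, id=None, feature=None):
    """Build a REP-155-style topic name.

    kind: one of 'faces', 'bodies', 'voices', 'persons'
    """
    parts = ["", "humans", kind]
    if id is not None:
        parts.append(id)
    if feature is not None:
        parts.append(feature)
    return "/".join(parts)

print(hri_topic("faces", feature="tracked"))  # /humans/faces/tracked
print(hri_topic("faces", "f4c3", "roi"))      # /humans/faces/f4c3/roi
```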
Common ROS packages
hri_msgs: base ROS messages for Human-Robot Interaction
human_description: a parametric kinematic model of a human, in URDF format
libhri: a C++ library to easily access human-related topics
pyhri: a Python library to easily access human-related topics
hri_rviz: a collection of RViz plugins to visualise faces, facial landmarks, 3D kinematic models...
These packages are all available in ROS Noetic (e.g., as the ros-noetic-hri-msgs Debian package).
The source code for these packages (as well as several others) can be found on github.com/ros4hri.
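As an illustration of how pyhri wraps these topics, here is a minimal sketch that lists the currently tracked faces. The `HRIListener` class and its `faces` accessor follow the pyhri API as documented, but attribute names may differ between versions; the node name is arbitrary.

```python
# Sketch: enumerate tracked faces with pyhri (assumed API, see lead-in).

def frame_for_face(face_id):
    # REP-155 names the TF frame of a tracked face 'face_<id>'.
    return "face_%s" % face_id

try:
    import rospy
    from hri import HRIListener  # provided by the pyhri package

    rospy.init_node("face_lister")
    listener = HRIListener()
    rospy.sleep(1.0)  # give the listener time to discover faces

    for face_id, face in listener.faces.items():
        rospy.loginfo("tracking face %s (TF frame: %s)"
                      % (face_id, frame_for_face(face_id)))
except ImportError:
    pass  # rospy/pyhri are only available inside a ROS environment
```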
Specialized ROS packages
Feel free to add your own packages to this list, as long as they implement the REP-155.
Face detection, recognition, analysis
hri_face_detect: a Google MediaPipe-based multi-person face detector.
- facial landmarks
- 3D head pose estimation
- 30+ FPS on CPU only
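The detector's output can also be consumed directly, without pyhri. The sketch below assumes the list of tracked face ids is published on `/humans/faces/tracked` as an `hri_msgs/IdsList` message with an `ids` field, per REP-155; check the message definition shipped with your hri_msgs version.

```python
# Sketch: watch faces appearing and disappearing (assumed topic/message
# names, see lead-in).

def diff_ids(previous, current):
    """Return (appeared, disappeared) id sets between two trackings."""
    previous, current = set(previous), set(current)
    return current - previous, previous - current

try:
    import rospy
    from hri_msgs.msg import IdsList

    seen = []

    def on_tracked(msg):
        global seen
        appeared, disappeared = diff_ids(seen, msg.ids)
        for id in appeared:
            rospy.loginfo("new face: %s" % id)
        for id in disappeared:
            rospy.loginfo("lost face: %s" % id)
        seen = list(msg.ids)

    rospy.init_node("face_watcher")
    rospy.Subscriber("/humans/faces/tracked", IdsList, on_tracked)
    rospy.spin()
except ImportError:
    pass  # rospy/hri_msgs are only available inside a ROS environment
```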
Body tracking, gesture recognition
hri_fullbody: a Google MediaPipe-based 3D full-body pose estimator
- 2D and 3D pose estimation of a single person (multi-person pose estimation is possible with an external body detector)
- facial landmarks
- optionally, can use registered depth information to improve 3D pose estimation
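Since tracked bodies are exposed through TF, their position can be queried like any other frame. This sketch assumes the REP-155 convention of one `body_<id>` frame per tracked body; the reference frame `base_link` and the body id `b0d1` are placeholders for illustration.

```python
# Sketch: locate a tracked body relative to the robot via TF (assumed
# frame names, see lead-in).

def body_frame(body_id):
    # REP-155 names the TF frame of a tracked body 'body_<id>'.
    return "body_%s" % body_id

try:
    import rospy
    import tf2_ros

    rospy.init_node("body_locator")
    buf = tf2_ros.Buffer()
    tf2_ros.TransformListener(buf)

    # 'base_link' and 'b0d1' are example names, not part of REP-155
    transform = buf.lookup_transform("base_link", body_frame("b0d1"),
                                     rospy.Time(0), rospy.Duration(1.0))
    rospy.loginfo("body at: %s" % transform.transform.translation)
except ImportError:
    pass  # requires a ROS environment with tf2_ros
```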
Voice processing, speech, dialogue understanding
Whole person analysis
hri_person_manager: probabilistic fusion of faces, bodies, voices into unified persons.
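A sketch of querying hri_person_manager's output through pyhri follows. The `persons` accessor and the `face_id` attribute follow the pyhri API as documented and may vary between versions; the `anonymous_person_` id prefix for not-yet-identified persons is an assumption worth checking against your installed version.

```python
# Sketch: list fused persons and their associated face (assumed API and
# id prefix, see lead-in).

def is_anonymous(person_id):
    # assumption: hri_person_manager prefixes unidentified persons
    return person_id.startswith("anonymous_person_")

try:
    import rospy
    from hri import HRIListener

    rospy.init_node("person_lister")
    listener = HRIListener()
    rospy.sleep(1.0)

    for person_id, person in listener.persons.items():
        status = "anonymous" if is_anonymous(person_id) else "known"
        # a person aggregates at most one face, body and voice at a time
        rospy.loginfo("%s person %s (face: %s)"
                      % (status, person_id, person.face_id))
except ImportError:
    pass  # rospy/pyhri are only available inside a ROS environment
```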
Group interactions, gaze behaviour
You can access the ROS4HRI tutorials here.