Using Hardware Acceleration with Docker
Description: This tutorial walks you through using Hardware Acceleration with Docker for various ROS tools.
Keywords: ROS, Docker, Hardware Acceleration, Tooling
Tutorial Level: INTERMEDIATE
In this tutorial, we go over some of the recent methods for enabling hardware acceleration within Docker containers. If you've tried using graphical interfaces or processes requiring CUDA or OpenGL inside containers, you've most likely encountered the need for hardware acceleration. Beyond installing the dependencies, using and mounting devices for hardware acceleration is relatively simple. As a best practice, try to keep most of your images hardware agnostic, corralling any driver-specific setup into the last layer when building the Docker image. For example, leave the driver install steps towards the bottom of the Dockerfile, or in the last tag added in the hierarchy. That way, any rebuilding or modification due to changes in hardware or drivers when sharing with others or swapping deployed targets can be minimized. The methods listed are not exhaustive, as this is all still quite new and continually evolving. Please feel free to contribute by keeping this wiki up to date and adding additional resources.
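For instance, a minimal sketch of that layering might look like the following (the base image and package names are only illustrative):
# Hardware-agnostic layers first: base image, ROS packages, your own workspace
FROM osrf/ros:melodic-desktop-full
RUN apt-get update && \
    apt-get -y install ros-melodic-navigation && \
    rm -rf /var/lib/apt/lists/*

# Driver-specific setup last, so a change of hardware only invalidates this layer
RUN apt-get update && \
    apt-get -y install libgl1-mesa-glx libgl1-mesa-dri && \
    rm -rf /var/lib/apt/lists/*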
Accelerated Graphics
Nvidia
There are two ways of getting hardware-accelerated graphics with Nvidia cards: nvidia-docker1 and nvidia-docker2. The official osrf images ship with support for nvidia-docker1. If you would like to use nvidia-docker2, you must create your own Dockerfile.
Rocker
rocker is a tool that helps you run Docker containers with hardware acceleration. If you have an Nvidia driver and need graphics acceleration, you can pass --nvidia --x11 as options to enable the Nvidia drivers and the X server in the container.
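For example, to open RViz from a desktop-full image with Nvidia acceleration (the image name here is only an illustration; use whichever image you need):
rocker --nvidia --x11 osrf/ros:melodic-desktop-full rviz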
nvidia-docker1
https://github.com/NVIDIA/nvidia-docker/wiki/Installation-(version-1.0)
Support for Nvidia and Docker is probably the most widely documented and discussed topic on the internet, thanks to many similar efforts in supporting GPU computation in cloud-based environments. NVIDIA now has a tool for running accelerated containers:
https://github.com/NVIDIA/nvidia-docker
nvidia-docker will volume mount the driver files for the container at /usr/local/nvidia, so you'll need a few lightweight changes in the Dockerfile:
FROM osrf/ros:indigo-desktop-full

# nvidia-docker hooks
LABEL com.nvidia.volumes.needed="nvidia_driver"
ENV PATH /usr/local/nvidia/bin:${PATH}
ENV LD_LIBRARY_PATH /usr/local/nvidia/lib:/usr/local/nvidia/lib64:${LD_LIBRARY_PATH}
Then build the Dockerfile; here we'll tag the image built from the snippet above as ros:nvidia.
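Assuming the Dockerfile is in the current directory, the build might look like:
docker build -t ros:nvidia .
You can then start the container using nvidia-docker: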
nvidia-docker run -it \
    --env="DISPLAY=$DISPLAY" \
    --env="QT_X11_NO_MITSHM=1" \
    --volume="/tmp/.X11-unix:/tmp/.X11-unix:rw" \
    ros:nvidia \
    bash -c "roscore & rosrun rviz rviz"
nvidia-docker2
Follow this link for install instructions for nvidia-docker2: https://github.com/nvidia/nvidia-docker/wiki/Installation-(version-2.0)
Some ROS packages like RViz and Gazebo need OpenGL. nvidia-docker2 requires libglvnd (the GL Vendor-Neutral Dispatch library) to be installed inside the image for OpenGL calls to work correctly. Luckily, with ROS Melodic and up, this package should already be installed when building from ros-base images. However, for older ROS releases before Melodic targeting older distros, an alternate way to gain libglvnd support is to base your image off ("FROM") these Docker images: https://hub.docker.com/r/nvidia/opengl/ .
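As a sketch, a pre-Melodic image could start from one of those libglvnd-enabled bases instead (the exact tag below is an assumption; check the Docker Hub page for the currently available tags) and then install ROS on top as usual:
FROM nvidia/opengl:1.0-glvnd-runtime-ubuntu16.04
# ...install ROS Kinetic (or another pre-Melodic release) here as you normally would...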
For the Melodic example with nvidia-docker2, first create a directory with a Dockerfile inside.
$ mkdir my_melodic_image && cd my_melodic_image
$ touch Dockerfile
Paste the following content into the Dockerfile.
FROM osrf/ros:melodic-desktop-full

# nvidia-container-runtime
ENV NVIDIA_VISIBLE_DEVICES \
    ${NVIDIA_VISIBLE_DEVICES:-all}
ENV NVIDIA_DRIVER_CAPABILITIES \
    ${NVIDIA_DRIVER_CAPABILITIES:+$NVIDIA_DRIVER_CAPABILITIES,}graphics
Build the image. Don't forget the period at the end of that command.
$ cd my_melodic_image/
$ docker build -t my_melodic_image .
Now create a script called run_my_image.bash to run the image:
#!/bin/bash
XAUTH=/tmp/.docker.xauth
if [ ! -f $XAUTH ]
then
    xauth_list=$(xauth nlist :0 | sed -e 's/^..../ffff/')
    if [ ! -z "$xauth_list" ]
    then
        echo $xauth_list | xauth -f $XAUTH nmerge -
    else
        touch $XAUTH
    fi
    chmod a+r $XAUTH
fi

docker run -it \
    --env="DISPLAY=$DISPLAY" \
    --env="QT_X11_NO_MITSHM=1" \
    --volume="/tmp/.X11-unix:/tmp/.X11-unix:rw" \
    --env="XAUTHORITY=$XAUTH" \
    --volume="$XAUTH:$XAUTH" \
    --runtime=nvidia \
    my_melodic_image \
    bash
Make the script executable
$ chmod a+x run_my_image.bash
Execute the script
$ ./run_my_image.bash
Then, inside the container, launch RViz:
$ roscore > /dev/null & rosrun rviz rviz
ATI/AMD
The following assumes you are using the FOSS driver.
You must install Mesa libraries in the image:
RUN \
    apt-get update && \
    apt-get -y install libgl1-mesa-glx libgl1-mesa-dri && \
    rm -rf /var/lib/apt/lists/*
Now run your container with the necessary Xorg and DRI mounts:
xhost +
docker run \
    --device=/dev/dri \
    --group-add video \
    --volume=/tmp/.X11-unix:/tmp/.X11-unix \
    --env="DISPLAY=$DISPLAY" \
    your_image
Mesa libraries are preinstalled in newer versions of the ROS full-desktop Docker image, so the following should just work:
xhost +
sudo docker run -it \
    --device=/dev/dri \
    --group-add video \
    --volume=/tmp/.X11-unix:/tmp/.X11-unix \
    --env="DISPLAY=$DISPLAY" \
    osrf/ros:melodic-desktop-full \
    rviz
See AMD's ROCm TensorFlow Docker image if you need GPU compute acceleration, e.g. for TensorFlow.
Intel
You must install Mesa libraries in the image:
RUN \
    apt-get update && \
    apt-get -y install libgl1-mesa-glx libgl1-mesa-dri && \
    rm -rf /var/lib/apt/lists/*
Now run your container with the necessary Xorg and DRI mounts:
xhost +
docker run \
    --volume=/tmp/.X11-unix:/tmp/.X11-unix \
    --device=/dev/dri:/dev/dri \
    --env="DISPLAY=$DISPLAY" \
    your_image
Also, note that users in the container (other than root) need access to the 'video' group for Mesa DRI devices.
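For example, if your image adds a non-root user, a minimal sketch (the username rosuser is just a placeholder) could add that user to the video group in the Dockerfile:
# add a (hypothetical) rosuser account and put it in the video group so it can open /dev/dri
RUN useradd -m rosuser && \
    usermod -aG video rosuser
USER rosuser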
Troubleshooting
glxgears is handy for troubleshooting GPU acceleration. It should run with no errors if everything is working properly.
apt-get install mesa-utils
glxgears
Use this first outside of the container to verify that the host Xorg is accelerated, then run it inside the container.
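For example, using the Intel/AMD style mounts from above (your_image is a placeholder as before, and mesa-utils is assumed to be installed in the image):
docker run -it \
    --device=/dev/dri \
    --volume=/tmp/.X11-unix:/tmp/.X11-unix \
    --env="DISPLAY=$DISPLAY" \
    your_image \
    glxgears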
References
Gernot Klingler and his detailed post: How docker replaced my virtual machines and chroots, a guide on how to enable a container to connect to an X server and use graphical hardware acceleration.
Docker on AWS GPU Ubuntu 14.04 / CUDA 6.5 by Traun Leyden, providing details on using Nvidia devices with Docker.
Miscellaneous Dockerfile examples using CUDA.
elastic-thought, a large project leveraging Docker with CUDA for deep convolutional neural networks in Caffe.