## For instruction on writing tutorials
## http://www.ros.org/wiki/WritingTutorials
####################################
##FILL ME IN
####################################
## for a custom note with links:
## note =
## for the canned note of "This tutorial assumes that you have completed the previous tutorials:" just add the links
## note.0=
## descriptive title for the tutorial
## title = Using Hardware Acceleration with Docker
## multi-line description to be displayed in search
## description = This tutorial walks you through using Hardware Acceleration with Docker for various ROS tools.
## the next tutorial description (optional)
## next =
## links to next tutorial (optional)
## next.0.link=
## next.1.link=
## what level user is this tutorial for
## level= IntermediateCategory
## keywords = ROS, Docker, Hardware Acceleration, Tooling
####################################
<>

In this tutorial, we go over some of the recent methods for enabling hardware acceleration within Docker containers. If you've tried using graphical interfaces or processes requiring CUDA or OpenGL inside containers, you've most likely encountered the need for hardware acceleration. Aside from installing the dependencies, mounting and using the devices needed for hardware acceleration is relatively simple.

As a best practice, try to keep most of your images hardware agnostic, corralling any driver-specific setup into the last layers when building the docker image. For example, leave the driver install steps towards the bottom of the Dockerfile, or in the last tag added to the hierarchy. This minimizes the rebuilding and modification needed when hardware or drivers change, such as when sharing images with others or swapping deployment targets.

The methods listed here are not exhaustive, as this is all still quite new and continually evolving. Please feel free to contribute by keeping this wiki updated and adding additional resources.
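As a minimal sketch of that layering advice (the image and package choices here are hypothetical, not a prescribed setup), a Dockerfile might keep everything hardware agnostic until the final lines, reusing the nvidia-docker hook variables shown later in this tutorial:

```dockerfile
# Hardware-agnostic layers: base image, tools, and workspace dependencies
FROM osrf/ros:melodic-desktop-full
RUN apt-get update && \
    apt-get -y install build-essential && \
    rm -rf /var/lib/apt/lists/*

# Driver-specific setup kept in the last layers, so swapping the target
# hardware or driver only invalidates and rebuilds this tail of the image
ENV PATH /usr/local/nvidia/bin:${PATH}
ENV LD_LIBRARY_PATH /usr/local/nvidia/lib:/usr/local/nvidia/lib64:${LD_LIBRARY_PATH}
```

Because Docker caches layers top-down, everything above the driver-specific tail remains reusable across hardware targets.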
<>

= Accelerated Graphics =

== Nvidia ==

There are two ways of getting hardware accelerated graphics with nvidia cards: nvidia-docker1 and nvidia-docker2. The official osrf images ship with support for nvidia-docker1. If you would like to use nvidia-docker2, you must create your own Dockerfile.

=== Rocker ===

[[https://github.com/osrf/rocker|rocker]] is a tool that helps you run docker containers with hardware acceleration. If you have an nvidia driver and need graphics acceleration, you can pass `--nvidia --x11` as options to enable the nvidia drivers and the X server in the container.

=== nvidia-docker1 ===

https://github.com/NVIDIA/nvidia-docker/wiki/Installation-(version-1.0)

Support for Nvidia and Docker is probably the most widely documented and discussed on the internet, thanks to many similar efforts to support GPU computation in cloud based environments. NVIDIA now has a tool for running accelerated containers: https://github.com/NVIDIA/nvidia-docker

nvidia-docker will volume mount the driver files for the container at /usr/local/nvidia, so you'll need a few lightweight changes in the Dockerfile:

{{{
FROM osrf/ros:indigo-desktop-full

# nvidia-docker hooks
LABEL com.nvidia.volumes.needed="nvidia_driver"
ENV PATH /usr/local/nvidia/bin:${PATH}
ENV LD_LIBRARY_PATH /usr/local/nvidia/lib:/usr/local/nvidia/lib64:${LD_LIBRARY_PATH}
}}}

Then build the Dockerfile; here we'll tag the one above as ros:nvidia. You can then start the container using nvidia-docker:

{{{
nvidia-docker run -it \
    --env="DISPLAY=$DISPLAY" \
    --env="QT_X11_NO_MITSHM=1" \
    --volume="/tmp/.X11-unix:/tmp/.X11-unix:rw" \
    ros:nvidia \
    bash -c "roscore & rosrun rviz rviz"
}}}

=== nvidia-docker2 ===

Follow this link for install instructions for `nvidia-docker2`: https://github.com/nvidia/nvidia-docker/wiki/Installation-(version-2.0)

Some ROS packages like RViz and Gazebo need OpenGL.
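The nvidia-docker2 Dockerfile shown below sets its ENV values with shell-style parameter expansion, so that defaults are supplied without clobbering any values the runtime or user already set. As a quick sketch of how those two expansion forms behave (runnable in any POSIX shell):

```shell
# ${VAR:-word} falls back to "word" when VAR is unset or empty:
unset NVIDIA_VISIBLE_DEVICES
echo "${NVIDIA_VISIBLE_DEVICES:-all}"
# -> all

# ${VAR:+$VAR,} expands to "$VAR," only when VAR is set, so "graphics"
# is appended to existing capabilities rather than replacing them:
NVIDIA_DRIVER_CAPABILITIES=compute,utility
echo "${NVIDIA_DRIVER_CAPABILITIES:+$NVIDIA_DRIVER_CAPABILITIES,}graphics"
# -> compute,utility,graphics

unset NVIDIA_DRIVER_CAPABILITIES
echo "${NVIDIA_DRIVER_CAPABILITIES:+$NVIDIA_DRIVER_CAPABILITIES,}graphics"
# -> graphics
```

This is why the image still requests the `graphics` capability even if you have already exported your own `NVIDIA_DRIVER_CAPABILITIES` list.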
nvidia-docker2 requires `libglvnd` (the GL Vendor-Neutral Dispatch library) to be installed inside the image to get OpenGL calls working correctly. Luckily, with ROS '''melodic''' and up, this package should already be installed when building from `ros-base` images. However, for older ROS releases before melodic targeting older distros, an alternate way to gain `libglvnd` support is to base off ("FROM") these Docker images: https://hub.docker.com/r/nvidia/opengl/ .

First create a directory with a Dockerfile and entrypoint script inside.

{{{
$ mkdir my_melodic_image && cd my_melodic_image
$ touch Dockerfile
}}}

Paste the following content into the Dockerfile.

{{{
FROM osrf/ros:melodic-desktop-full

# nvidia-container-runtime
ENV NVIDIA_VISIBLE_DEVICES \
    ${NVIDIA_VISIBLE_DEVICES:-all}
ENV NVIDIA_DRIVER_CAPABILITIES \
    ${NVIDIA_DRIVER_CAPABILITIES:+$NVIDIA_DRIVER_CAPABILITIES,}graphics
}}}

Build the image. Don't forget the period at the end of the command.

{{{
$ cd my_melodic_image/
$ docker build -t my_melodic_image .
}}}

Now create a script to run the image called run_my_image.bash:

{{{
#!/bin/bash

XAUTH=/tmp/.docker.xauth
if [ ! -f $XAUTH ]
then
    xauth_list=$(xauth nlist :0 | sed -e 's/^..../ffff/')
    if [ ! -z "$xauth_list" ]
    then
        echo $xauth_list | xauth -f $XAUTH nmerge -
    else
        touch $XAUTH
    fi
    chmod a+r $XAUTH
fi

docker run -it \
    --env="DISPLAY=$DISPLAY" \
    --env="QT_X11_NO_MITSHM=1" \
    --volume="/tmp/.X11-unix:/tmp/.X11-unix:rw" \
    --env="XAUTHORITY=$XAUTH" \
    --volume="$XAUTH:$XAUTH" \
    --runtime=nvidia \
    my_melodic_image \
    bash
}}}

Make the script executable:

{{{
$ chmod a+x run_my_image.bash
}}}

Execute the script:

{{{
$ ./run_my_image.bash
}}}

Then inside the container launch RViz:

{{{
$ roscore > /dev/null & rosrun rviz rviz
}}}

== ATI/AMD ==

''The following assumes you are using the FOSS driver.''

You must install the Mesa libraries in the image:

{{{
RUN apt-get update && \
    apt-get -y install libgl1-mesa-glx libgl1-mesa-dri && \
    rm -rf /var/lib/apt/lists/*
}}}

Now run your container with the necessary Xorg and DRI mounts:

{{{
xhost +
docker run \
    --device=/dev/dri \
    --group-add video \
    --volume=/tmp/.X11-unix:/tmp/.X11-unix \
    --env="DISPLAY=$DISPLAY" \
    your_image
}}}

The Mesa libraries are preinstalled in newer versions of ROS's full desktop docker image, so the following should just work:

{{{
xhost +
sudo docker run -it \
    --device=/dev/dri \
    --group-add video \
    --volume=/tmp/.X11-unix:/tmp/.X11-unix \
    --env="DISPLAY=$DISPLAY" \
    osrf/ros:melodic-desktop-full \
    rviz
}}}

See [[https://hub.docker.com/r/rocm/tensorflow/|AMD's ROCm TensorFlow Docker Image]] if you need GPU compute acceleration, such as for TensorFlow.

== Intel ==

You must install the Mesa libraries in the image:

{{{
RUN apt-get update && \
    apt-get -y install libgl1-mesa-glx libgl1-mesa-dri && \
    rm -rf /var/lib/apt/lists/*
}}}

Now run your container with the necessary Xorg and DRI mounts:

{{{
xhost +
docker run \
    --volume=/tmp/.X11-unix:/tmp/.X11-unix \
    --device=/dev/dri:/dev/dri \
    --env="DISPLAY=$DISPLAY" \
    your_image
}}}

Also note that users in the container (other than root) need to be in the 'video' group to access the Mesa DRI devices.

= Troubleshooting =

glxgears is handy for troubleshooting GPU acceleration.
It should run with no errors if everything is working properly.

{{{
apt-get install mesa-utils
glxgears
}}}

Run this first outside of the container to verify that the host Xorg is accelerated, then run it inside the container.

= References =

 * [[http://gernotklingler.com/blog/|Gernot Klingler]] and his detailed post: [[http://gernotklingler.com/blog/docker-replaced-virtual-machines-chroots/|How docker replaced my virtual machines and chroots]], a guide on how to enable a container to connect to an X server and use graphical hardware acceleration.
 * [[http://tleyden.github.io/blog/2014/10/25/docker-on-aws-gpu-ubuntu-14-dot-04-slash-cuda-6-dot-5/|Docker on AWS GPU Ubuntu 14.04 / CUDA 6.5]] by Traun Leyden, providing details on using nvidia devices with Docker.
 * [[https://github.com/Kaixhin/dockerfiles|miscellaneous Dockerfile]] examples using CUDA.
 * [[https://github.com/tleyden/elastic-thought|elastic-thought]], a large project leveraging docker with CUDA for deep convolutional neural networks in Caffe.

## AUTOGENERATED DO NOT DELETE
## TutorialCategory
## FILL IN THE STACK TUTORIAL CATEGORY HERE
## ToolingCategory