I'd like to do some machine learning on my AMD 6800 XT GPU inside a Python image based on python:3.9.10. I can confirm that the GPU is available outside of the image (in a WSL2 instance). However, if I do docker run -it python:3.9.10 /bin/bash
and then complete the same tutorial (https://docs.microsoft.com/en-us/windows/ai/directml/gpu-tensorflow-wsl#install-the-tensorflow-with-directml-package), it doesn't work:
[error output not shown; the code block failed to render]
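For context, the step from that tutorial that fails for me is the GPU verification snippet, which looks roughly like this (it assumes the tensorflow-directml package from the tutorial is installed; on a working setup the add runs on the DirectML device):

```python
# Verification step, paraphrased from the DirectML tutorial.
# Requires: pip install tensorflow-directml (not plain tensorflow).
import tensorflow.compat.v1 as tf

# Eager mode with device placement logging, so the console shows
# whether ops land on the DML (GPU) device or fall back to CPU.
tf.enable_eager_execution(tf.ConfigProto(log_device_placement=True))

print(tf.add([1.0, 2.0], [3.0, 4.0]))
```

Inside the plain python:3.9.10 container this fails, whereas in the WSL2 instance itself it succeeds.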
This article has led me to think that perhaps Docker doesn't support AMD GPUs at all: https://docs.docker.com/desktop/windows/wsl/
Can anyone suggest what I might be able to do to get this to work?
Note that the reason I have picked this image is that my environment is based on a rather lengthy Dockerfile inheriting from python:3.9.10, and I'd like to keep using that image on the PC with the GPU as well as in other (NVIDIA) environments. So I'm after a solution that keeps the image portable, although at this point I'd be grateful for any solution.
Basically you just pass the GPU in with the --device
flag, but you also need the drivers and supporting libraries in place on the host. Check out this project: https://github.com/RadeonOpenCompute/ROCm-docker; the Details section is really helpful.
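As a sketch of what that looks like, the ROCm-docker documentation has you pass the ROCm kernel interface and the GPU device nodes into the container. Note this assumes a native Linux host with the amdgpu/ROCm driver installed, so /dev/kfd exists; the image name below is just illustrative and isn't ROCm-enabled by itself:

```shell
# Pass the ROCm compute interface (/dev/kfd) and the GPU render nodes
# (/dev/dri) through to the container, and add the groups that own
# those device files so the container user can open them.
docker run -it \
  --device=/dev/kfd \
  --device=/dev/dri \
  --group-add video \
  python:3.9.10 /bin/bash
```

The catch for your setup is that this route relies on the host's ROCm stack rather than DirectML, and (as the Docker Desktop page you linked suggests) GPU passthrough under WSL2 is wired up for NVIDIA; so the --device approach is more likely to help on a native Linux box than inside WSL2.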