• notfromhere

    Does that setup allow access to PCIe GPUs for CUDA inference from containers or VMs?

    • Anarch157a@lemmy.world

      Can’t say anything about CUDA because I don’t have Nvidia cards and don’t work with AI stuff, but I was able to pass the built-in GPU on my Ryzen 2600G to the Jellyfin container so it could do hardware transcoding of videos.

      You need the drivers for the GPU installed on the host OS, then link the devices under /dev into the container. For AMD this is easy, because the drivers are open source and included in the distro (Proxmox is Debian based); for Nvidia you’d have to deal with the proprietary stuff both on the host and in the containers.
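
      For reference, a rough sketch of what that looks like on Proxmox, assuming an LXC container with ID 101 (hypothetical) and an AMD iGPU that the host exposes under /dev/dri — the container ID and the 226 device major number are just examples, check yours with ls -l /dev/dri:

          # /etc/pve/lxc/101.conf  (hypothetical container ID)
          # Let the container open the DRM card/render devices (major number 226)
          lxc.cgroup2.devices.allow: c 226:* rwm
          # Bind-mount the host's /dev/dri into the container so Jellyfin can use it
          lxc.mount.entry: /dev/dri dev/dri none bind,optional,create=dir

      Inside the container the jellyfin user usually also has to be in whatever group owns /dev/dri/renderD128 (typically render or video) before hardware transcoding works.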

    • oken735@yukistorm.com

      Yes, you can pass a GPU through to containers pretty easily, and passthrough to a freshly created VM is also straightforward; it’s retrofitting an existing VM where you can run into problems.
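
      As a rough sketch of the VM route on Proxmox (the PCI address 0000:01:00.0 and VM ID 100 below are placeholders — find the real address with lspci, and note that PCIe passthrough also needs IOMMU enabled on the host):

          # On the Proxmox host: find the GPU's PCI address
          lspci -nn | grep -iE 'vga|3d'
          # Hand that whole PCIe device to VM 100
          qm set 100 -hostpci0 0000:01:00.0,pcie=1

      The pcie=1 flag expects the VM to use the q35 machine type, which is one of the things a freshly created VM can be set up with from the start and an existing one may not have.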