cross-posted from: https://lemmy.world/post/27088416

This is an update to a previous post found at https://lemmy.world/post/27013201


Ollama uses the AMD ROCm library, which can work well with many AMD GPUs not listed as compatible if you force an LLVM target.
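If you are not sure which version to force, the LLVM gfx target ROCm reports for your card is a good starting point. A minimal check, assuming ROCm's rocminfo utility is installed (output format can vary between ROCm versions):

rocminfo | grep -i gfx

For example, a card reported as gfx1031 is commonly overridden with HSA_OVERRIDE_GFX_VERSION=10.3.0, the nearest officially supported gfx103x target.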

The original Ollama documentation is wrong: the following override cannot be set for individual GPUs, only for all or none, as shown at github.com/ollama/ollama/issues/8473

AMD GPU issue fix

  1. Check that your GPU is not already listed as compatible at github.com/ollama/ollama/blob/main/docs/gpu.md#linux-support
  2. Edit the Ollama service (this creates a systemd drop-in override). The command opens the text editor set in the $SYSTEMD_EDITOR environment variable.
sudo systemctl edit ollama.service
  3. Add the following, then save and exit. You can try different versions as shown at github.com/ollama/ollama/blob/main/docs/gpu.md#overrides-on-linux
[Service]
Environment="HSA_OVERRIDE_GFX_VERSION=10.3.0"
  4. Restart the Ollama service.
sudo systemctl restart ollama
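
To confirm the override took effect, you can check the environment systemd now passes to the service and watch the logs while a model loads. A rough sanity check, assuming a systemd-based install; the exact log wording varies between Ollama versions:

# Verify the drop-in environment is applied
systemctl show ollama --property=Environment

# Follow the service logs and load any model you have pulled;
# look for lines showing the GPU being detected and used
journalctl -u ollama -f

If ROCm's rocm-smi utility is installed, GPU utilization should rise while a model is generating.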
    • Lucy :3@feddit.org · 1 day ago

      Why not throw that into a VM with VFIO passthrough, plug the GPU in via an external dock, and if we're already abstracting shit away for unnecessary complexity and non-compatibility, do all that on Windows?

        • Lucy :3@feddit.org · 17 hours ago

          Really easy to start running it.

          Then everything goes wrong, from configuration to logs to CUDA. And the worst fucking debugging ever.