rocm: doc driver constraints (#14833)

This commit is contained in:
Daniel Hiltgen
2026-03-13 15:53:35 -07:00
committed by GitHub
parent 3980c0217d
commit 2f9a68f9e9
2 changed files with 23 additions and 0 deletions


@@ -61,6 +61,10 @@ Ollama supports the following AMD GPUs via the ROCm library:
### Linux Support
Ollama requires the AMD ROCm v7 driver on Linux. You can install or upgrade
the driver using the `amdgpu-install` utility from
[AMD's ROCm documentation](https://rocm.docs.amd.com/projects/install-on-linux/en/latest/).
| Family | Cards and accelerators |
| -------------- | -------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| AMD Radeon RX | `9070 XT` `9070 GRE` `9070` `9060 XT` `9060 XT LP` `9060` `7900 XTX` `7900 XT` `7900 GRE` `7800 XT` `7700 XT` `7700` `7600 XT` `7600` `6950 XT` `6900 XTX` `6900 XT` `6800 XT` `6800` `5700 XT` `5700` `5600 XT` `5500 XT` |


@@ -114,6 +114,25 @@ If you are experiencing problems getting Ollama to correctly discover or use you
- Set `OLLAMA_DEBUG=1` when starting the server to report additional information during GPU discovery
- Check dmesg for any errors from the amdgpu or kfd drivers: `sudo dmesg | grep -i amdgpu` and `sudo dmesg | grep -i kfd`
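The dmesg checks above can also be scripted. A minimal sketch that filters captured kernel log output for amdgpu/kfd lines; the sample log lines below are invented for illustration, and on a real system you would feed in the output of `sudo dmesg`:

```python
import re

def driver_messages(dmesg_output: str) -> list[str]:
    """Return kernel log lines mentioning the amdgpu or kfd drivers."""
    pattern = re.compile(r"amdgpu|kfd", re.IGNORECASE)
    return [line for line in dmesg_output.splitlines() if pattern.search(line)]

# Invented sample dmesg output, for illustration only
sample = """\
[    1.234] usb 1-1: new high-speed USB device
[    2.345] amdgpu 0000:0a:00.0: amdgpu: Fatal error during GPU init
[    2.456] kfd kfd: amdgpu: device queue manager failed to start
"""

for line in driver_messages(sample):
    print(line)
```

Any lines this prints are candidates for the driver problems described below.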
### AMD Driver Version Mismatch
If your AMD GPU is not detected on Linux and the server logs contain messages like:
```
msg="failure during GPU discovery" ... error="failed to finish discovery before timeout"
msg="bootstrap discovery took" duration=30s ...
```
This typically means the system's AMD GPU driver is too old. Ollama bundles
ROCm 7 Linux libraries, which require a matching ROCm 7 kernel driver. If the
system is running an older driver (ROCm 6.x or earlier), GPU initialization
hangs during device discovery and eventually times out, causing Ollama to
fall back to the CPU.
To resolve this, upgrade to the ROCm v7 driver using the `amdgpu-install`
utility from [AMD's ROCm documentation](https://rocm.docs.amd.com/projects/install-on-linux/en/latest/).
After upgrading, reboot and restart Ollama.
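The symptom above can be checked for mechanically when triaging server logs. A hypothetical helper (the function name and structure are assumptions for illustration, not part of Ollama) that scans log text for the discovery-timeout signature quoted earlier:

```python
def suspect_driver_mismatch(server_log: str) -> bool:
    """Heuristically detect the ROCm driver-mismatch symptom:
    GPU discovery fails to finish before its timeout."""
    # Signatures taken from the example log messages above
    signatures = (
        "failed to finish discovery before timeout",
        "bootstrap discovery took",
    )
    return any(sig in server_log for sig in signatures)

# Log line matching the documented failure mode
log = 'msg="failure during GPU discovery" error="failed to finish discovery before timeout"'
print(suspect_driver_mismatch(log))
```

A `True` result suggests checking the installed driver version and upgrading as described above.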
## Multiple AMD GPUs
If you experience gibberish responses when models load across multiple AMD GPUs on Linux, see the following guide.