cm0002@piefed.world to LocalLLaMA@sh.itjust.works · English · 7 months ago
ollama 0.11.9 Introducing A Nice CPU/GPU Performance Optimization (www.phoronix.com)
hendrik@palaver.p3x.de · 7 months ago
I think llama.cpp merged ROCm support back in 2023 already. It's called HIP in their README, but I'm not super educated on all the acronyms, compute frameworks, and instruction sets.
afk_strats@lemmy.world · 7 months ago
ROCm is a software stack which includes a bunch of SDKs and APIs. HIP is the subset of ROCm that lets you program AMD GPUs, with a focus on portability from Nvidia's CUDA.
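The HIP/CUDA relationship is easiest to see in code. Below is a minimal vector-add sketch (not from the thread or from llama.cpp, just a generic illustration): each HIP runtime call mirrors a CUDA counterpart (hipMalloc ~ cudaMalloc, hipMemcpy ~ cudaMemcpy, and so on), which is what makes porting CUDA code to AMD GPUs largely mechanical. Error checking is omitted for brevity.

```cpp
// Minimal HIP vector-add sketch. The runtime API deliberately mirrors CUDA:
// hipMalloc/hipMemcpy/hipFree correspond to cudaMalloc/cudaMemcpy/cudaFree,
// and __global__ kernels use the same blockIdx/threadIdx indexing model.
#include <hip/hip_runtime.h>
#include <cstdio>
#include <vector>

__global__ void vector_add(const float* a, const float* b, float* c, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) c[i] = a[i] + b[i];
}

int main() {
    const int n = 1 << 20;
    std::vector<float> ha(n, 1.0f), hb(n, 2.0f), hc(n, 0.0f);

    // Allocate device buffers (cudaMalloc -> hipMalloc).
    float *da, *db, *dc;
    hipMalloc((void**)&da, n * sizeof(float));
    hipMalloc((void**)&db, n * sizeof(float));
    hipMalloc((void**)&dc, n * sizeof(float));

    // Copy inputs to the GPU (cudaMemcpy -> hipMemcpy).
    hipMemcpy(da, ha.data(), n * sizeof(float), hipMemcpyHostToDevice);
    hipMemcpy(db, hb.data(), n * sizeof(float), hipMemcpyHostToDevice);

    // Launch the kernel; hipLaunchKernelGGL is the portable spelling of <<<...>>>.
    const int threads = 256;
    const int blocks = (n + threads - 1) / threads;
    hipLaunchKernelGGL(vector_add, dim3(blocks), dim3(threads), 0, 0, da, db, dc, n);

    // Copy the result back and check one element (1.0 + 2.0 = 3.0).
    hipMemcpy(hc.data(), dc, n * sizeof(float), hipMemcpyDeviceToHost);
    printf("c[0] = %f\n", hc[0]);

    hipFree(da);
    hipFree(db);
    hipFree(dc);
    return 0;
}
```

With ROCm installed this compiles with hipcc (e.g. hipcc vector_add.cpp -o vector_add); HIP also has an Nvidia backend, so the same source can be built for CUDA hardware, which is the portability point afk_strats is making.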