cm0002@piefed.world to LocalLLaMA@sh.itjust.works · English · 5 months ago
ollama 0.11.9 Introducing A Nice CPU/GPU Performance Optimization (www.phoronix.com)
afaix@lemmy.world · English · 5 months ago
Doesn’t llama.cpp have a -hf flag to download models from huggingface instead of doing it manually?
panda_abyss@lemmy.ca · English · 5 months ago
It does, but I’ve never tried it; I just use the hf CLI.
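For reference, here is a minimal sketch of the programmatic route the hf CLI wraps, using the huggingface_hub Python library. The repo_id and filename below are hypothetical placeholders, not a specific recommendation; the resulting local path can then be handed to llama.cpp (e.g. via its -m/--model option), while the -hf flag mentioned above lets recent llama.cpp builds fetch a GGUF repo directly.

```python
# Sketch: download a single GGUF file from Hugging Face with huggingface_hub,
# the library that the `hf` / `huggingface-cli` tool is built on.
from huggingface_hub import hf_hub_download

model_path = hf_hub_download(
    repo_id="TheOrg/SomeModel-GGUF",    # hypothetical repository name
    filename="somemodel-q4_k_m.gguf",   # hypothetical quantized file
)
print(model_path)  # local cache path; pass this to llama.cpp with -m/--model
```

The command-line equivalent would presumably be something like `hf download TheOrg/SomeModel-GGUF somemodel-q4_k_m.gguf` (again with placeholder names), which stores the file in the same local cache.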