cm0002@piefed.world to LocalLLaMA@sh.itjust.works · English · 2 months ago
ollama 0.11.9 Introducing A Nice CPU/GPU Performance Optimization (www.phoronix.com)
afaix@lemmy.world · 2 months ago
Doesn’t llama.cpp have a -hf flag to download models from Hugging Face instead of doing it manually?
panda_abyss@lemmy.ca · 2 months ago
It does, but I’ve never tried it; I just use the hf CLI.
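For reference, a rough sketch of both routes. The repo and file names below are placeholders, so substitute the actual GGUF repo and quant you want:

```
# llama.cpp route: -hf (alias of --hf-repo) tells llama-cli / llama-server to pull
# the GGUF straight from Hugging Face; --hf-file picks a specific quant in the repo.
llama-cli -hf someuser/SomeModel-GGUF --hf-file SomeModel-Q4_K_M.gguf

# hf CLI route: download manually, then point llama.cpp at the local file.
huggingface-cli download someuser/SomeModel-GGUF SomeModel-Q4_K_M.gguf --local-dir ./models
llama-cli -m ./models/SomeModel-Q4_K_M.gguf
```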