Lee Duna@lemmy.nz to Technology@lemmy.world · English · 11 months ago
Want a more private ChatGPT alternative that runs offline? Check out Jan (bgr.com)
Infiltrated_ad8271@kbin.social · 11 months ago
The question is quickly answered, as none is currently that good, open or not. Anyway, it seems that this is just a manager. I see some competitors available that I've heard good things about, like Mistral.
🇸🇵🇪🇨🇺🇱🇦🇹🇪🇷@lemmy.world · English · 11 months ago
I think a good 13B model running on 12GB of VRAM can do pretty well. But I'd be hard-pressed to believe anything under 33B would beat 3.5.
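[The rough arithmetic behind "13B on 12GB of VRAM" can be sketched as follows. This is a back-of-the-envelope estimate, not from the thread: it assumes the weights dominate memory and uses a hypothetical ~1.2x overhead factor for KV cache and activations.]

```python
def vram_gb(params_billion, bits_per_weight, overhead=1.2):
    """Approximate VRAM (in GB) needed to run a model locally.

    overhead is a rough fudge factor for KV cache and activations;
    real usage varies with context length and runtime.
    """
    weight_bytes = params_billion * 1e9 * bits_per_weight / 8
    return weight_bytes * overhead / 1e9

# A 13B model at 4-bit quantization needs roughly 7.8 GB,
# which fits in 12 GB of VRAM; at full 16-bit it needs ~31 GB.
print(round(vram_gb(13, 4), 1))
print(round(vram_gb(13, 16), 1))
```

[So 12GB cards can run 13B models only because of quantization; a 33B model at 4-bit lands around 20 GB, out of reach for that card.]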
miss_brainfart@lemmy.ml · edited · 11 months ago
Asking as someone who doesn't know anything about any of this: does more B mean better?
alphafalcon@feddit.de · 11 months ago
B stands for billion (parameters), IIRC.
june@lemmy.world · 11 months ago
3.5 fuckin sucks though. That's a pretty low bar to set, imo.
Local LLMs can beat GPT 3.5 now.