☆ Yσɠƚԋσʂ ☆@lemmy.ml to Programmer Humor@lemmy.ml · 2 years ago — "Deploying LLMs to production be like" (image post)
𝒍𝒆𝒎𝒂𝒏𝒏 · 2 years ago: Yeah, that's a no from me 😂 What causes this, anyway? A badly thought-out fine-tuning dataset? I haven't had a response sound that out of touch from the few LLaMA variants I've messed around with in chat mode.
☆ Yσɠƚԋσʂ ☆@lemmy.ml (OP) · 2 years ago: Probably just poor tuning, but in general it's pretty hard to guarantee that a model won't do something unexpected. Hence why it's a terrible idea to use LLMs for something like this.
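That unpredictability is why production deployments usually wrap model output in guardrails rather than forwarding it verbatim. A minimal sketch of the idea, assuming a simple keyword/topic filter (the function name, risky-phrase list, and fallback text are all hypothetical, not any particular vendor's API):

```python
import re

def moderate_reply(reply: str, allowed_topics: list[str]) -> str:
    """Return the model's reply only if it passes simple guardrails,
    otherwise return a canned fallback. Purely illustrative."""
    fallback = "Sorry, I can't help with that. Let me connect you to a human."
    # Reject replies that commit the business to anything — a common
    # production failure mode is the bot "agreeing" to refunds or deals.
    risky = re.compile(r"\b(refund|guarantee|free of charge|legally binding)\b", re.I)
    if risky.search(reply):
        return fallback
    # Reject replies that drift off the whitelisted topics entirely.
    if allowed_topics and not any(t.lower() in reply.lower() for t in allowed_topics):
        return fallback
    return reply

print(moderate_reply("Your order ships Tuesday.", ["order", "shipping"]))
print(moderate_reply("We guarantee a full refund, legally binding!", ["order"]))
```

Real systems layer much more on top (moderation models, human review, constrained decoding), but even a crude filter like this reflects the point above: the raw model output is treated as untrusted.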