Yeah that’s a no from me 😂 what causes this anyway? Badly thought out fine-tuning dataset?
Haven’t had a response sounding that out of touch from the few LLaMA variants I’ve messed around with in chat mode
Probably just poor tuning, but in general it’s pretty hard to guarantee that the model won’t do something unexpected. Hence why it’s a terrible idea to use LLMs for something like this.