- cross-posted to:
- technology@lemmit.online
This is peak laziness. It seems that the reading list’s author used autoplag to extrude the entire 60-page supplemental insert. The author also super-promises this has never happened before.
No, they are hallucinations or bullshit. I won’t accept any other terms.
If it makes you feel better, I’ve heard good folks like Emily Bender of Stochastic Parrots fame suggest confabulation is a better term. “Hallucination” implies that LLMs have qualia and are accidentally sprinkling falsehoods over a true story. Confabulation better illustrates that it’s producing a bullshit milkshake from its training data that can only be correct accidentally.
You’ve swayed me. I’m now down with all three. Thanks for the explanation.