- cross-posted to:
- technology@hexbear.net
- technology@midwest.social
- technology@lemmy.ml
cross-posted from: https://lemmy.ml/post/36807834
LLMs do not generally democratize creativity; they undermine it and prevent people from developing the skills to exercise it.
When it comes to creative fiction:
IP be damned, generative LLMs (I refuse to call these "AIs" because not only are they not intelligent, they're a dead end that will never lead to actual AI) produce shit results, and that's my biggest problem with them. They do not innovate; they merely regurgitate and repackage the most statistically common things in their training set. I've read short stories by LLMs, and their style and tone are terrible. There's nothing of substance to them. Reading their stories is like eating a box of Krispy Kreme doughnuts: simultaneously light, fluffy, and full of poison. The first one is okay. The second is just more of the same. The third is nauseating.
Ask an LLM to write a story about the pope, then ask it to write a story fictionalizing a chess game, and the two will feel exactly the same: extremely superficial, with nothing interesting to say.
When it comes to visual art: ask a generative model for an image of something and it will usually look wrong, rendered in an overused style that has been ruined by so much low-quality slop. I get disgusted seeing anything that looks LLM-generated, the way eating too much of the same food can keep you from stomaching it again for months. And again, there's no stylistic innovation in regurgitation.
Both of these criticisms can be applied to human-made writing and human-made visuals, and I do apply them there as well! People write and draw plenty of slop that is boring or off-putting, and the problem is that LLMs are trained on it, so they reproduce it!
When it comes to educational text, programming, and factual answers: LLMs are bullshit machines, meaning they're optimized to sound plausible, not to be correct. And it is mathematically impossible to solve the bullshitting problem (these failures get called "hallucinations," but that makes them sound like a mistake or an aberration rather than what they actually are: an unavoidable artifact of how these things work). Since there are no tells to warn you when they're wrong, none of their output can be trusted.

I have used LLMs to write disposable or simple code I was too lazy to write myself, but only because I have the expertise to vet it and confirm by reading it that it won't do something horrible. Even then, when I've wanted to keep the code or incorporate it into something I was doing, I've often had to clean it up myself: the comments are just wrong, the style is inconsistent, and at that point it's easier to rewrite it from scratch. And when I've tried to quality-check ChatGPT by asking it technical questions I know the answers to, it gives me a mix of truth and lies that sound equally plausible unless you know better. That is the most dangerous kind of lie.
Pushing people to rely on unreliable tools is a bad idea. Why should "a factory worker draft a union newsletter" with an LLM when the newsletter will inherently be worse for it? LLMs produce flat-toned bullshit; you lose the human, personal voice, the fire and zeal that organizing needs, by passing it through one. "Rent strike scenario simulations" run by an LLM are completely unreliable and therefore worthless. And "enabling a nurse to visualize a protest poster" with an LLM, in order to emphasize the importance of nurses' skilled human labor? An absurd, self-undermining tactic.
I don't know or care whether it's inherently immoral for leftists to use LLMs, but I do care that it's just a bad idea. I wouldn't make a Marxist case for investing in NFTs!
I've used DeepSeek, by the way, which is perhaps a more party-line-approved LLM, and it was still awful in all the ways ChatGPT is.