My understanding is most of you are anti AI? My only question is…why? It is the coolest invention since the Internet and it is remarkable how closely it can resemble actual consciousness. No joke, the AI in sci-fi movies is worse than what we actually have in many ways!
Don’t get me wrong, I am absolutely anti “AI baked into my operating system and cell phone so that it can monitor me and sell me crap”. If that’s what being Anti AI is…to that I say Amen.
But simply disliking any use of AI at all, even a privacy-conscious one? I'm not getting it.
It’s a glorified parakeet
“All of your issues with ai go away if capitalism goes away”
Word. Clearly, capitalism drives the world economy, so…
What you’re calling AI is a mass-marketing Ponzi scheme. LLMs are not even actual AI. Beyond that issue, its development is in the hands of capital exclusively, and it will only exist to serve capital interests, which come at the expense of the lower and working classes by necessity, given what corporations (which are essentially unregulated in the current climate) are designed to do. What you’re calling AI will only be used to hurt human lives and worsen living conditions for all of us (before you nitpick, I think enabling the 0.1% and their hoarding pathology hurts them too). I personally believe you’re already aware of that and are cynically trolling, and despite that I’m giving you the honest truth and factual reality of this subject, because there is nothing good about being a techno-fetishist sociopath who thinks the answer to humanity’s problems is to make humanity itself obsolete, even if it’s ‘cool’. You clearly got the wrong fucking message from Terminator.
This is why when actual AI emerges I can only hope it’ll be in the hands of a public or collective development process and designed with an intent of progression and cooperation in mind.
I will preface this with my usual disclaimer on such topics: I do not believe in intellectual property (that is, the likening of thought to physical possessions). I do not think remixing is a sin and I largely agree with the Electronic Frontier Foundation’s take that “AI training” may largely be fair use. So, I don’t think so-called “generative AI” is inherently evil, however in practice I think it is very often used for evil today.
The most obvious example is, of course, the threat to the work force. “AI” is pitched as a tool that can replace human workers and “wipe out entire categories of human jobs.” Ethical issues aside, “AI” as it exists today is not capable of doing what its evangelists sell it as. “AI chat bots” do not know, but they can give off a very convincing impression of knowledge.
“AI” is also used as a tool to pollute the web with worse-than-worthless garbage. At best it is meaningless and at worst it is actively harmful. I would actually say machine generated text is worse than imagery here, because it feels almost impossible to do a web search without running into some LLM generated blog spam.
Creators of “AI” systems use scraper bots to collect data for training. I do not necessarily believe this is evil per se, but again - these bots are not well behaved. They cause real problems for real human users, far beyond “stealing jpegs.” There is a sense of Silicon Valley entitlement here - we can do whatever we want and deal with the consequences later, or never.
I have long held that a tool, like any human creation, is imbued with the values and will of its creators, and thus must serve both the creator and the user as its masters (The software freedom movement is largely an attempt at reconciling these interests, by empowering users with the ability to change their tools to do their bidding). In the case of “Generative AI” it is very often the case that both the creators and users of these tools intend them for evil. We often make the mistake of attributing agency to these computer programs, so as to minimize the human element (perhaps, in order to create a “man vs machine” narrative). We speak of “AI” as if it just woke up one day, a la Skynet, in order to steal our jpegs and put us out of work and generate mountains of webslurry. Make no mistake, however - the problems with “AI” are human problems. Humans created these systems in order for other humans to use, in order to inflict harm to other humans. “AI slop” was created specifically for an environment in which human-generated slop already ran amok, because the web as it existed then (as it exists today) rewards the generation of slop.
There was a lawyer recently who used a chatbot to lodge a motion in court. He got all sorts of case-law citations from it. The problem? None of the cases were real.
current AI is absolutely not better than sci-fi AI, not by a long shot.
I do think LLMs are interesting and have neat potential for highly specific applications, though I also have many ethical concerns regarding the data they are being trained on. AI is a corporate buzzword designed to attract investment, and nothing more.
Valid question on a community for questions. Tons of legitimate responses from people mostly hyped for the opportunity to shed light on why they think AI is bad. Which seems to be what OP wanted to figure out. Currently negative 25 for the votes on the post. Seems off.
also because it’s just a way for big tech to harvest your data while stealing content from creators and destroying the planet
also because instead of actually innovating anymore, tech companies just jam ai slop into everything
It’s not smart. It’s a theft engine that averages information and confidently speaks hallucinations, insisting it’s fact. AI sucks. It won’t ever be AGI because it doesn’t “think”; it runs models and averages. It’s autocomplete at huge scale. It burns the earth and produces absolute garbage.
The only models doing anything good are ones where averaging over large datasets suits a specific case, like scanning millions of cancer images for common patterns.
This does not work for deterministic answers. The “AI” we have now is corporate bullshit they are desperate to have make money and is a massive investor hype machine.
Stop believing the shitty CEOs.
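The “autocomplete at huge scale” point above can be illustrated with a toy sketch. This is a deliberately tiny bigram model, nothing like a real transformer, but it shows the core idea: generate text by sampling the next word from the frequencies seen in training, with no understanding involved.

```python
import random
from collections import defaultdict, Counter

# Toy bigram "autocomplete": count which word follows which in a tiny corpus,
# then generate text by sampling the next word from those counts.
corpus = "the cat sat on the mat the cat ate the fish".split()
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def next_word(prev):
    # Sample proportionally to how often each word followed `prev` in training.
    counter = follows[prev]
    return random.choices(list(counter), weights=list(counter.values()))[0]

random.seed(1)
out = ["the"]
for _ in range(8):
    if not follows[out[-1]]:  # dead end: this word never appeared mid-corpus
        break
    out.append(next_word(out[-1]))
print(" ".join(out))
```

The output is always locally plausible (every word pair occurred in the training data) while having no relation to truth or intent — which is the commenter’s point, just at a vastly smaller scale.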
ignoring the hate-brigade, lemmy users are probably a bit more tech savvy on average.
and i think many people who know how “AI” works under the hood are frustrated because, unlike most of its loud proponents, they have a real-world understanding of what it actually is.
and they’re tired of being told they “don’t get it”, by people who actually don’t get it. but instead they’re the ones being drowned out by the hype train.
and the thing fueling the hype train are dishonest greedy people, eager to over-extend the grift at the expense of responsible and well engineered “AI”.
but, and this is the real crux of it, they’re keeping the amazing true potential of “AI” technology in the hands of the rich & powerful, rather than using it to liberate society.
lemmy users are probably a bit more tech savvy on average.
Second this.
but, and this is the real crux of it, they’re keeping the amazing true potential of “AI” technology in the hands of the rich & powerful, rather than using it to liberate society.
Leaving public interests (data and everything around data) in the hands of the top 1% is a recipe for disaster.
OK remember like 70 years ago when they started saying we were burning the planet? And then like 50 years ago they were like “no guys we’re really burning the planet”? And then 30 years ago they were like “seriously we’re close to our last chance to not burn the planet”? And then in the past few years they’ve been like “the planet is currently burning, species are going extinct, and we are beginning to experience permanent effects that might not snowball into an extinction event if we act right now”?
But sure, AI is really cool and can trick you, personally, into thinking it’s conscious. It’s just using nearly as much power as the whole of Japan, but you’re giggling and clapping along, so how bad can it really be? It’s just poisoning the air and water to serve you nearly accurate information, when you could have had accurate information by googling it for a fraction of the energy cost. I hate AI because I’m a responsible adult.
Lemmy loves artists, who have their income threatened by AI because AI can make what they make at a substantially lower cost, with acceptable quality, in a fraction of the time.
AI depends on being trained on the artistic works of others, essentially intellectual and artistic property theft, so that you can make an image of a fat anime JD Vance. Calling it plagiarism is a bit far, but it edges so hard that it leaks onto the balls and could cum with a soft breeze.
AI consumes massive amounts of energy, which is supplied through climate hostile means.
AI threatens to take countless office jobs, which are some of the better paying jobs in metropolitan areas where most people can’t afford to live.
AI is a party trick; it is not comparable to a human or to an advanced AI. It is unimaginative, not creative like an actual intelligence would be. Calling the current state of AI anything like an advanced AI is like calling paint-by-numbers the result of artistry. It can rhyme, it can imitate, but it can never be original.
I think that about sums it up.
The less tech-savvy of lemmy
Acceptable quality is a bit of a stretch in many cases… Especially with the hallucinations everywhere in generated text.
Most of that gets solved with an altered prompt or trying again.
That is less of an issue as time goes on. Just a couple of years ago, the number of fingers and limbs was a roll of the dice; now the only tell left is the alien-looking random words in the background.
AI is getting so much money dumped into it that it is progressing at a very rapid pace. An all-AI movie is just around the corner, and while it will have a style that says AI, it could easily be mistaken for a conventional film production with a particular style.
Once AI porn gets there, AI has won media.
Eh, I at least partially disagree. I’ve noticed some of the modern models (such as Claude 4.0) have started to hallucinate more than previous models. I know you’re talking about image generation, but still. I can’t quite put my finger on it, but maybe it’s cause the models are beginning to consume their own slop.
https://cdn.openai.com/pdf/2221c875-02dc-4789-800b-e7758f3722c1/o3-and-o4-mini-system-card.pdf
OpenAI, May 2025: in their internal tests, the newer the model, the higher the hallucination rate.
maybe it’s cause the models are beginning to consume their own slop.
That’s going to be a huge issue indeed, because synthetic data contains bias and has been shown to produce biased models.
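The models-consuming-their-own-slop worry can be demonstrated with a toy sketch (this is a statistical caricature, not a real training pipeline): each “generation” fits a Gaussian to samples drawn from the previous generation’s fit, so estimation error compounds and the distribution’s spread tends to collapse over time.

```python
import random
import statistics

# Toy "model collapse": generation 0 is real data; every later generation
# is trained only on samples produced by the previous generation's model.
random.seed(0)
data = [random.gauss(0.0, 1.0) for _ in range(50)]  # generation 0: "real" data
original_spread = statistics.pstdev(data)

mu, sigma = statistics.fmean(data), original_spread
for generation in range(2000):
    # Draw synthetic data from the previous fit, then refit to it.
    data = [random.gauss(mu, sigma) for _ in range(50)]
    mu, sigma = statistics.fmean(data), statistics.pstdev(data)

print(f"spread: {original_spread:.3f} -> {sigma:.3f}")
```

The tails and variety of the original distribution get progressively lost, which is the same mechanism behind the bias amplification mentioned above: whatever the model under-represents in one generation is even scarcer in the next.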
Fairly stated
Lemmy loves artists, who have their income threatened by AI because AI can make what they make at a substantially lower cost, with acceptable quality, in a fraction of the time.
AI depends on being trained on the artistic works of others, essentially intellectual and artistic property theft, so that you can make an image of a fat anime JD Vance. Calling it plagiarism is a bit far, but it edges so hard that it leaks onto the balls and could cum with a soft breeze.
While I mostly agree with all your arguments, I think the ‘intellectual property’ part - from my perspective - is a bit ambivalent on Lemmy. When someone uses an AI that is trained on pirated art to create a meme, that’s seen as a sin. Meanwhile, using regular artists’ or photographers’ work in memes without paying the author is really common. More or less every news article comes with a link to Archive.is to bypass paywalls, and there are also communities dedicated to (digital) piracy which are far more popular than AI content.
I’m not saying that you are wrong or that piracy is great, but when pirating media or creating memes, you can pinpoint the specific artist who created the original piece, so it also acts as a bit of an ad for the creator (not necessarily a good one). But with AI it’s for the most part not possible to say exactly who it took “inspiration” from, which in my opinion makes it worse. Said in other words: a viral meme can benefit the artist, while AI slop does not.
It is unimaginative
Can you make an example of something 100% original that was not inspired by anything that came before?
That’s not what imaginative means.
If you’d like an example of AI being exceptionally boring to look at, though, peruse any rule 34 site that has had its catalogue overrun with AI spam: an endless sea of images that all have the same artstyle, the same color choices, the same perspective, the same poses, the same personality; a flipbook of allegedly different characters that all. look. fucking. identical.
I’m not joking: I was once so bored by the AI garbage presented to me, I actually just stopped jerking off.
If you people would do something interesting with your novelty toy, I would be like 10% less mad about it.
Ironically you just said that artists are wrong to be concerned.
The threat of AI is not that it will be more human than human. It is that it will become so ubiquitous that real people are hard to find.
I couldn’t find many real people.
Are you sure that I’m real?
Lots of good points in the replies here, but I want to make the perhaps secondary point that the automation of thought is generally just bad for you. Don’t get me wrong, AI (even LLMs) has its uses, but we’re already seeing the atrophying effects on some people, and in my experience as a teacher I have seen a number of people for whom chat bot dependency has become a serious problem on a par with drug addiction. I dread to think what’s going to happen to these people when we enter the ‘jack up the prices’ phase of the grift, let alone the ‘have you considered product/voting X may solve your problems’ phase, which is currently only being held back by engineering difficulties.
I myself despise capitalism, and would not like to see the current global ecological disaster worsen because of some stupid-ass techbros forcing their shit on everyone.
It is the coolest invention since the Internet and it is remarkable how close it can resemble actual consciousness.
No. It isn’t. First and foremost, it produces a randomised output that it has learned to make look like other stuff on the Internet. It has as much to do with consciousness as a set of dice and the fact that you think it’s more than that already shows how you don’t understand what it is and what it does.
AI doesn’t produce anything new. It doesn’t reason, it isn’t creative. As it has no understanding or experience, it doesn’t develop or change. Using it to produce art shows a lack of understanding of what art is supposed to be or accomplish. AI only chews up what’s being thrown at it to vomit it onto the Web, without any hint of something new. It also lacks understanding about the world, so asking it about decisions to be made is like asking an encyclopedia that comes up with answers on the fly based on whether they sound nice, regardless of the answers being correct, applicable or even possible.
And on top of all of this, on top of people using a bunch of statistical dice rolls to rob themselves of experiences and progress that they’d have made had they made their own decisions or learned painting themselves, it’s an example of the “rules for thee, not for me”. An industry that has lobbied against the free information exchange for decades, that sent lawyers after people who downloaded decades old books or movies for a few hours of private enjoyment suddenly thinks that there might be the possibility of profits around the corner, so they break all the laws they helped create without even the slightest bit of self-awareness. Their technology is just a hollow shell that makes the Internet unusable for all the shit it produces, but at least it isn’t anything else. Their business model, however, openly declares that people are only second class citizens.
There you are. That’s why I hate it. What’s not to hate?
AI doesn’t produce anything new.
Many humans don’t, either.
False equivalencies, or ‘whatabouts’, are not a form of argument; they’re a deflection tactic.
Oh really? Man, you don’t say!
What’s your point?
I am well aware it is not conscious 🙄. Hence the word RESEMBLES.
But here’s the scary thing. Even after all the song and dance you just typed, when we are interacting with AI our brains literally cannot tell the difference between human interaction and AI interaction. And that to me…is WILD and so trippy.
Yeah, that mostly just proves that humans are idiots, something we’ve known for a while.
when we are interacting with AI our brains literally cannot tell the difference between human interaction and AI interaction
I can certainly tell, at least a lot of the time. I won’t say all of the time, but LLMs are squarely in uncanny valley territory for me. Most of what they generate seems slightly off, in one way or another.
I’ve never knowingly engaged with a proper chat bot beyond the ‘virtual help desk’ things some sites use. By proper I mean some sizable system beyond what can be typically run at home.
Locally run ones are bizarre though; so far, whatever I try, they get stuck on go-to phrases and tend to return to specific formats of response over and over. Very much not passing the Turing test.
It doesn’t even resemble a consciousness. It’s not even close.
Also, why are you asking your question to begin with if your answer is then just a condescending “but sometimes we can’t tell AI and humans apart”? Yeah, no shit. It’s been like that at least since the 60s. That’s not the point. If that’s all you have, then go ahead, be happy you found something “wild and so trippy”. But don’t ask if there are legitimate reasons to reject AI if all you want to do is indulge yourself.
If you think that AI closely resembles a conscious flesh-and-blood human being, you need to go outside more. That is a dangerous line of thinking, and people are forming relationships with a shoddy simulacrum of humanity because of it. AI is still in its infancy, and it’s only a matter of time before someone’s grok waifu convinces them to shoot up a school.