- cross-posted to:
- artificial_intel@lemmy.ml
Anyone surprised by this wasn’t paying attention. This is the “AI” apocalypse everyone has been wringing their hands over and dumbass executives have been salivating over. This is exactly the problem with LLMs: they produce very convincing-looking content, but it’s not actually factual content. You need teams of fact checkers and editors to review all their output if you care at all about accuracy.
As with software development, actually writing the stuff down is the easiest part of the work. If you already have someone fact-checking and editing… why do you need AI to shit out crap just for the writing? It would be easier to gather the facts first, fact-check them, then wrangle them through the AI if you don’t want to hire a writer (+ another pass for editing).
LLMs look like magic at a glance, but people thinking they are going to produce high-quality content (or code, for god’s sake) are delusional.
Yeah. I’m a programmer. Everyone has been telling me that I’m about to be out of a job any day now because the “AI” is coming for me. I’m really not worried. It’s way harder to correct bad code than it is to just throw it all away and start fresh, and I can’t even imagine how difficult it’s going to be to try to debug whatever garbage some “AI” has spewed out. If you employ a dozen programmers now, if you start using AI to generate your code you’re going to need two dozen programmers to debug and fix it’s output.
The promise with “AI” (more accurately machine learning, as this is not AI) as far as code is concerned is as a sort of smart copy and paste, where you can take a chunk of code and say “duplicate this but with these changes”, and then verify and tweak its output. As a smart refactoring tool it shows a lot of promise, but it’s not like you’re going to sit down and go “write me an app” and suddenly it’s done. Well, unless you want Hello World, and even then I’m sure it would find a way to introduce a bug or two.
unless you want Hello World, and even then I’m sure it would find a way to introduce a bug or two.
“Greetings planet”
D’oh!
Yep, I’ve had plenty of discussion about this on here before. Which was a total waste of time, as idiots don’t listen to facts. They also just keep moving the goal posts.
One developer was like they use AI to do their job all the time, so I asked them how that works. Yeah, they “just” have to throw 20% of the garbage away that’s obviously wrong when writing small scripts, then it’s great!
Or another one who said AI is the only way for them to write code, because their main issue is getting the syntax right (dyslexic). When I told them that the syntax and actually writing the code is the easiest part of my job they shot back that they don’t care, they are going to continue “looking like a miracle worker” due to having AI spit out their scripts…
And yet another one that discussed at length how you obviously can’t magically expect AI to put the right things out. So we went to the topic of code reviews and I tried to tell them: give a real developer 1000+ line pull requests (like the AI might spit out) and there’s a snowball’s chance in hell you’ll get bug-free code despite reviews. So now they argued: duh, you give the AI small bite-sized Jira tickets to work on, so you can review it! And if the pull request is too long you tell the AI to make a shorter, more human-readable one! And then we’re back to square one: the senior developer reviewing the mess of code could just write it faster and more correctly themselves.
It’s exhausting how little understanding there is about LLMs and their limitations. They produce a ton of seemingly high quality stuff, but it’s never 100% correct.
It seems to mostly be replacing work that is both repetitive and pointless. I have it writing my contract letters, ‘executive white papers’, and proposals.
The contract letters I can use without edit. The white papers I need to usually redirect it, but the second or third output is good. The proposals it functionally does the job I’d have a co-op do… put stuff on paper so I can realize why it isn’t right, and then write to that. (For the ‘fluffy’ parts of engineering proposals, like the cover letters, I can also use it.)
And yet another one that discussed at length how you obviously can’t magically expect AI to put the right things out. So we went to the topic of code reviews and I tried to tell them: give a real developer 1000+ line pull requests (like the AI might spit out) and there’s a snowball’s chance in hell you’ll get bug-free code despite reviews.
Arguably this is comparing apples and oranges here. I agree with you that code reviews aren’t going to be useful for evaluating a big code dump with no context. But I’d also say that a significant amount of software in the world is either written with no code review process or a process that just has a human spitting out the big code dump with no context.
The AI hype is definitely hype, but there’s enough truth there to justify some of the hand-wringing. The guy who told you he only has to throw away the 20% of the code that’s useless is still getting 100% of his work done with maybe 40% of the effort (i.e., very little effort to generate the first AI cut, 20% to figure out the stupid stuff, 20% to fix it). That’s a big enough impact to have significant ripples.
Might not matter. It seems like the way it’s going to go in the short term is that paranoia and economic populism are going to kill the whole thing anyway. We’re just going to effectively make it illegal to train on data. I think that’s both a mistake and a gross misrepresentation of things like copyright, but it seems like the way we’re headed.
Arguably this is comparing apples and oranges here. I agree with you that code reviews aren’t going to be useful for evaluating a big code dump with no context. But I’d also say that a significant amount of software in the world is either written with no code review process or a process that just has a human spitting out the big code dump with no context.
That’s not totally true. Even if a developer throws a massive pull request dump at you, there is a high chance the dev at least ran the program locally and tried it out (at least the happy path).
With AI the code might not even compile. Or it looks good at first glance, but has a disastrous bug in the logic (that is extremely easy to overlook).
As with most code: Writing it takes me maybe 10% of the time, if even that. The main problem is finding the right solution, covering edge cases and so on. And then you spend 190% of the time trying to fix a sneaky bug that got into the code, just because someone didn’t think of a certain case or didn’t pay attention. If AI throws 99% correct code at me it would probably take me longer to properly fix it than to just write it myself from scratch.
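To make the “looks right but is subtly wrong” point concrete, here’s a hypothetical sketch (function and field names invented for illustration) of the kind of bug that compiles, passes a happy-path check, and still sails through a quick review:

```javascript
// Intended behavior: sum only the orders that have been paid.
function totalPaid(orders) {
  let total = 0;
  for (let i = 0; i < orders.length; i++) {
    const order = orders[i];
    if (order.status = "paid") { // BUG: assignment, not comparison —
      total += order.amount;     // every order now counts as "paid"
    }
  }
  return total;
}

// The happy path looks correct, hiding the bug:
console.log(totalPaid([{ status: "paid", amount: 10 }])); // 10

// The failure only shows up once unpaid orders appear:
console.log(totalPaid([
  { status: "paid", amount: 10 },
  { status: "refunded", amount: 99 },
])); // 109 instead of 10
```

A reviewer skimming a 1000-line dump of this style of code has essentially no chance of catching every `=` that should be `==`.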
People have been saying programming would become redundant since the first 4GL languages came out in the 1980s.
Maybe it’ll actually happen some day… but I see no sign of it so far.
Yep, had this argument a bunch. Conversation basically goes:
Them: All you need is a description of the problem and then it can generate code to solve it
Me: But the description has to be detailed enough to cover all the edge cases.
Them: Well yeah.
Me: You know what we call a description of a problem detailed enough to cover all the edge cases?
Them: What?
Me: A program. And the people that know how to write those descriptions are called programmers.
Devil’s advocate though. With things like 4GLs, it was still all on the human to come up with the detailed spec. Best case scenario was that you worked very hard, wrote a lot of things down, generated the code, saw that it didn’t work, and then ???. That “???” at the end was you as the programmer sitting alone in a room trying to figure out what a non-responsive black box might have wanted you to say instead.
It’s qualitatively different if you can just talk to the black box as though it were a programmer. It’s less of a black box at that point. It understands your language, and it understands the code. So you can start with the spec, but when something inevitably doesn’t work, the “???” step doesn’t just come back to you figuring out with no help what you did wrong. You can ask it questions and make suggestions. You can run experiments. Today’s LLMs hit the wall pretty quick there, and maybe they always will. There’s certainly the viewpoint that “all they do is model text and they can’t really learn anything”.
I think that’s fundamentally wrong. I’m a pretty solid programmer. I have a PhD in Computer Science, and I’ve worked as a software engineer and an architect throughout a pretty long career. And everything I’ve ever learned has basically been through language. Through reading, writing, speaking, and listening to English and a few other languages. I think that to say that I can learn what I’ve learned, but it’s fundamentally impossible for a robot to learn it is to resort to mysticism. At some point, we will have AIs that can do what I do today. I think that’s inevitable.
Well, that particular conversation typically happens in relation to something like a business rules engine, or sometimes one of those drag and drop visual programming languages which everyone always touts as letting you get rid of programmers (but in reality just limits you to a really hard to work with programming language), but there is a lot of overlap with the current LLM based hype.
If we ever do get an actual AI, then yes, AI will probably end up writing most of the programs, although it’s possible programmers will still exist in some capacity maybe for the purpose of creating flow charts or something to hand to the AIs. But we’re a long way off from a true AI, so everyone acting like it’s going to happen any day now is as laughable as everyone promising cold fusion was going to happen any day now back in the 70s. Ironically I think we are more likely to see a workable cold fusion before we see true AI, some of the hot fusion experiments happening lately are very promising.
Fix its* output.
See, this is why I work mostly in Java and Rust and not English. I’ve got those down, but English is WAY harder. Who even came up with this language, it’s a complete mess, glad they’re not making programming languages… or maybe they are, quick see if English and JavaScript share any devs!
You should get an AI to write English for you!
On a side note, I have used AI to help my programming, with some success. Smaller snippets and scripts (1-2 pages) is usually okay, but bigger than that is a big no no. Also, very nice for writing unit tests.
Haha, I know you’re mostly joking, but that comment about “English” creators not making programming languages is golden. Especially because most programming languages use keywords in English :)
Yeah, it was mostly meant as a joke, since English doesn’t really have a creator (or at least not one alive today); it evolved over a very long period. In terms of spelling there have been some notable contributors, but in general it’s sort of a group effort. Then there’s JavaScript, which isn’t actually that bad with the exception of its very confusing scoping and type coercion rules. The scoping thing is really just a side effect of mixing OO and functional paradigms together, and the type coercion, while well-intentioned, is terribly implemented. If you removed type coercion from JS, and the this keyword, you’d pretty much eliminate every single one of those “omg, wtf JavaScript?!” posts that make the rounds. Well… you’d still probably have the callback hell posts of like 100 nested callbacks, but you can do that in any language; that’s not really a JS problem so much as a callback-based API problem.
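For anyone who hasn’t seen those posts, a few of the coercion and `this` behaviors that fuel them (runnable as-is in Node or a browser console):

```javascript
// Type coercion: == converts operands before comparing, with surprising results.
console.log([] == false);  // true — [] coerces to "" and then to 0
console.log("0" == false); // true — both sides coerce to the number 0
console.log([] == ![]);    // true — ![] is false, then same as the first case
console.log(1 + "2");      // "12" — + with a string concatenates
console.log(1 - "2");      // -1   — but - always coerces to numbers

// `this` depends on how a function is called, not where it is defined.
const counter = {
  n: 41,
  next() { return this.n + 1; },
};
console.log(counter.next()); // 42 — called as a method, so this === counter
const detached = counter.next;
// detached(); // NaN or TypeError — this is no longer counter
```

Strict equality (`===`) and arrow functions exist largely to route around both of these footguns.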
it’s a complete mess, glad they’re not making programming languages…
Make a note to never look at Applescript.
Removed by mod
“making ai” these days isn’t so much programming as having access to millions of dollars worth of hardware
Removed by mod
I don’t think this one is even an LLM, it looks like the output of a basic article spinning script that takes an existing article and replaces random words with synonyms.
This seems like the case. One of the first stanzas:
Hunter, initially a extremely regarded highschool basketball participant in Cincinnati, achieved vital success as a ahead for the Bobcats.
Language models are text prediction machines. None of this text is predictable and it contains basic grammatical errors that even small models will almost never make.
AI doesn’t exist, but it will ruin everything anyway.
Hah, great video. There was a reason why I put quotes around AI in my response because yes, what’s being called AI by everyone is not in fact AI, but most people have never even heard of machine learning let alone understand the difference between it and AI. I’ve seen a trend of people starting to use the term AGI to differentiate between “AI” and actual AI, but I’m not really a fan of that because I think that’s just watering down the term AI.
In the industry, ML is considered a subset of AI, as are genetic algorithms and other approaches to developing “intelligence”. That’s why people tend to use AGI now to differentiate, because the field’s been evolving (not that I agree with the approach either). Honestly, you show someone even 10/15 years ago what we can do with RL, computer vision, LLMs and they’d certainly call it AI. I think the real problem is a failure to convey what these things actually are; they’re sold to the public under the term AI only to hype up the brand/business.
“AI is whatever hasn’t been done yet”
Honestly, you show someone even 10/15 years ago what we can do with RL, computer vision, LLMs and they’d certainly call it AI.
Some people trying ELIZA back in the 60s attributed intelligence and even feelings to it. So yeah, turns out humans are rather easy to trick with good presentation.
The danger about current AI is people giving them important tasks to do when they aren’t up to it. To put it in War Games terms, the problem is not Joshua, not even Professor Falken, but the McKittricks of the world.
if you care at all about accuracy.
There’s the problem right there. The MSN homepage ain’t exactly a pinnacle of superlative journalism.
This article wasn’t even remotely convincing, though.
Throughout his NBA profession, he performed in 67 video games over two seasons
Dude really went wild during the steam summer sale.
Don’t we all.
Gotta teach it to add qualifying language. The above is falsifiable (even if it happens to be true).
Throughout his NBA profession, he performed in approximately 67 video games over two seasons
Throughout his NBA profession, he performed in at least 67 video games over two seasons
The second one is only technically falsifiable. It wouldn’t be practical though as you’d have to prove you investigated every video game over a 2 year period (and not necessarily contiguous). Not an easy task.
Agreed. Otherwise the content was perfect.
I really hope public opinion on AI starts to change. LLMs aren’t going to make anyone’s life easier, except in that they take jobs away once the corporate world determines that they are in a “good-enough” state – desensitizing people to this kind of stupid output is just one step on that trail.
The whole point is just to save the corporate world money. There will never, ever be a content advantage over a human author.
The thing is LLMs are extremely useful at aiding humans. I use one all the time at work and it has made me faster at my job, but left unchecked they do really stupid shit.
I agree they can be useful (I’ve found intelligent code snippet autocompletion to be great), but it’s really important that the humans using the tool are very skilled and aware of the limitations of AI.
Eg, my usage generates only very, very small amounts of code (usually a few lines). I have to very carefully read those lines to make sure they are correct. It’s never generating something innovative. It simply guesses what I was going to type anyways. So it only saved me time spent typing and the AI is by no means in charge of logic. It also is wrong a lot of the time. Anyone who lets AI generate a substantial amount of code or lets it generate code you don’t understand thoroughly is both a fool and a danger.
It does save me time, especially on boilerplate and common constructs, but it’s certainly not revolutionary and it’s far too inaccurate to do the kinds of things non programmers tend to think AI can do.
It’s already made my life much easier.
The technology is amazing.
It’s just there’s a lot of stupid people using it stupidly, and people whose job it is to write happen to really like writing articles about its failures.
There’s a lot more going on in how it is being used and improving than what you are going to see unless you are actually using it yourself daily and following research papers on it.
Don’t buy into the anti-hype, as it’s misleading to the point of bordering on misinformation.
I’m going to fight the machines for the right to keep slaving away myself
And when I’m done, capitalism will give me an off day as a treat!
You’re missing the point. If you don’t have a job to “slave away” at, you don’t have the money to afford food and shelter. Any changes to that situation, if they ever come, are going to lag far behind whatever events cause a mass explosion of unemployment.
It’s not about licking a boot, it’s that we don’t want to let the boot just use something that should be a net good as extra weight as they step on us.
I am not going to purposefully waste human life on tasks that machines could perform or help us be faster at just because late capitalism doesn’t let me, the worker, reap the value from them.
It removes human labor
On a bigger scale we had the loom, the printing press, the steam engine, the computer. Imagine if we’d refused them.
I can’t see us getting ensnared in some new dark age propelled by some “I need to keep my job” status quo just because we found ourselves with a moronic economic system that makes innovations bad news for the workers they replace.
If it takes AI taking away our livelihoods to get a chance to rework this failing doctrine so be it
I’m not talking communism I’m barely hoping for an organic response to it, likely a UBI
As someone who works in content marketing, this is already untrue at the current quality of LLMs. It still requires a LOT of human oversight, which obviously it was not given in this example, but a good writer paired with knowledgeable use of LLMs is already significantly better than a good content writer alone.
Some examples are writing outside of a person’s subject expertise at a relatively basic level. This used to take hours or days of entirely self-directed research on a given topic, even if the ultimate article was going to be written for beginners and therefore in broad strokes. With diligent fact-checking and ChatGPT alone, the whole process, including final copy, takes maybe 4 hours.
It’s also an enormously useful research tool. Rather than poring over research journals, you can ask LLMs with academic plug-ins to give a list of studies that fit very specific criteria and link to full texts. Sometimes it misfires, of course, hence the need for a good writer still, but on average this can cut hours from journalistic and review pieces without harming (often improving) quality.
All the time writers save by having AI do legwork is then time they can instead spend improving the actual prose and content of an article, post, whatever it is. The folks I know who were hired as writers because they love writing and have incredible commitment to quality are actually happier now using AI and being more “productive” because it deals mostly with the shittiest parts of writing to a deadline and leaves the rest to the human.
It still requires a LOT of human oversight, which obviously it was not given in this example, but a good writer paired with knowledgeable use of LLMs is already significantly better than a good content writer alone.
I’m talking about future state. The goal clearly is to avoid the need of human oversight altogether. The purpose of that is saving some rich people more money. I also disagree that LLMs improve output of good writers, but even if they did, the cost to society is high.
I’d much rather just have the human author, and I just hope that saying “we don’t use AI” becomes a plus for PR due to shifting public opinion.
No, it’s not the ‘goal’.
Somehow when it comes to AI it’s humans who have the binary thinking.
It’s not going to be “either/or” anytime soon.
Collaboration between humans and ML is going to be the paradigm for the foreseeable future.
The hundreds of clearly AI written help articles with bad or useless info every time I try to look something up in the last few months says otherwise…
Because the internet was so clear of junk and spam before LLMs were released?
There once was a time, long long ago, where the interwebs had good information on it. It was even easier to find then, before the googles went hard.
But really I have noticed a massive increase in AI junk writing popping up first in any thing I try to look up.
If you want to go back to the 90s or early 2000s, sure. But 4 years ago the internet was full of blogspam clickbait articles and fake news. LLMs have not increased that perceptibly to me; the first 10 results on Google were often crap 4 years ago and they’re often crap now.
I mean… if they’re dead, they probably really suck at basketball so it’s not exactly untrue.
Dead people really are quite useless in basketball.
“Hello, I’m the agent for dead player ‘Magic Bob’, I’d like to enrol him in your team of the Eagles…
Hello?” “Sir, this is the other NBA. You wanna contact the Necromatic Basketball Association.”
Oh, right.
Is there a pentagram you could point me to?
I mean, MSN is just a portal and I doubt there’s much behind it besides what domains are popular. MSN “published” this the same way Google News published articles. It sounds better to say Microsoft did it, but it’s from some news site called Race Track and it was simply scraped by MSN.
Yeah, but that’s a key part of the problem. The media had already automated a lot of the news curation into Google News, MSN and other portals, getting people used to not paying much attention to the particular source of news. The news is now moving to generating the actual content in an automated way, rather than just the aggregation step.
But it still isn’t MSN who did it. The key part of the problem is entirely glossed over in the article.
“The full story is that back in 2020, MSN fired the team of human journalists responsible for vetting content published on its platform. As a result, as we reported last year, the platform ended up syndicating large numbers of sloppy articles about topics as dubious as Bigfoot and mermaids, which it deleted after we pointed them out.”
MSN is not blameless for publishing bad content without supervision. And we are due for a wave of bad AI content starting now. So this problem is going to keep getting worse.
That’s a different problem, and not even a new one. It’s not even the same problem you referenced as the “key” part of the problem. Algorithms providing content is behind every mainstream platform ever.
I didn’t say MSN is flawless. Just that people are really bad at determining responsibility for an issue.
They’re also really bad at delineating the nuance of different root problems apparently.
Link to the article (archived)
Brandon Hunter useless at 42
Story by Editor • 9/12/2023, 11:21:42 PM
Former NBA participant Brandon Hunter, who beforehand performed for the Boston Celtics and Orlando Magic, has handed away on the age of 42, as introduced by Ohio males’s basketball coach Jeff Boals on Tuesday.
Hunter, initially a extremely regarded highschool basketball participant in Cincinnati, achieved vital success as a ahead for the Bobcats.
He earned three first-team All-MAC convention alternatives and led the NCAA in rebounding throughout his senior season. Hunter’s expertise led to his choice because the 56th general decide within the 2003 NBA Draft.
Throughout his NBA profession, he performed in 67 video games over two seasons and achieved a career-high of 17 factors in a recreation in opposition to the Milwaukee Bucks in 2004.
Okay but when’s the last time you had 17 factors in a recreation in opposition to the Milwaukee Bucks, hmm?
Never, but I am useless at 39, so what does that get me?
You get an AI-generated tabloid piece.
frickineh Useless at 39
In a shocking revelation, 39-year-old Lemmy user, frickineh, has declared themselves “useless” despite being one of the most active contributors to the popular Lemmy.world instance!
Though they’ve been on the platform for a mere two months, frickineh has already fired off a staggering 65 comments, giving every topic from 3D printing to gaming their two cents! But there’s a twist - this prolific commentator hasn’t yet taken the leap to submit their own posts.
Sources close to the situation say that frickineh’s interests are as varied as they come. They’re not only a tech-savvy enthusiast, but they also have a penchant for the finer things in life, like cross-stitch and embroidery. The question on everyone’s lips is: How can someone with such varied talents feel “useless”?
One insider told our reporters, “You’d think with all the knowledge on gaming, technology, and even embroidery, frickineh would be out there making waves. But instead, they’re here on Lemmy, dishing out opinions without sharing their own stories!”
Will frickineh step up their game and finally make a post? Or will they remain the mystery commentator of Lemmy.world? Only time will tell! Stay tuned for more on this Lemmy legend.
Man, if you think I’m not good at posting here, you should’ve seen how much I didn’t post on reddit.
That’s some top notch obituary writing, AI.
I agree, it’s extremely regarded.
Hey!
“Throughout his NBA profession, he performed in 67 video games over two seasons and achieved a career-high of 17 factors in a recreation in opposition to the Milwaukee Bucks in 2004.”
He wasn’t useless, you wish version Skynet!!
Intelligence is not the same as Wisdom. People often conflate the two, and “AI” as it exists today has a 3-year-old’s level of wisdom and a 40-year-old’s level of intelligence. It has access to vast amounts of facts and data but is completely unable to actually “understand” context and meaning.
deleted by creator
It’s clear you’re both using different meanings of “intelligence.” Granted, I don’t think there is consensus on its meaning, but from context they clearly regard “intelligence” as just memorized facts and wisdom as the application of it, and honestly they aren’t far off. The amount of data is there; it’s the understanding of the data that isn’t.
This is just word replacement of an existing article (forward = ahead, games = video games, passed (away) = handed, points = factors) done to avoid DMCA claims, whether it was done by AI or an algorithm is irrelevant. The AI was used to reword the article, and it’s good at doing that, but why those words in particular were replaced is beyond my comprehension.
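A minimal sketch of the kind of spinner being described. The synonym table here is invented for illustration, but it reproduces the article’s tell-tale swaps, including how “dead” becomes “useless” when a thesaurus is applied with zero sense of context:

```javascript
// Context-free synonym substitution, the classic "article spinner" trick
// used to dodge duplicate-content detection. Hypothetical synonym table.
const synonyms = {
  dead: "useless",      // "dead battery" sense, disastrous in an obituary
  played: "performed",
  player: "participant",
  games: "video games",
  points: "factors",
  forward: "ahead",
  career: "profession",
};

function spin(text) {
  // Replace each word with its "synonym" if one exists, else keep it.
  return text.replace(/\b\w+\b/g, (word) => synonyms[word.toLowerCase()] ?? word);
}

console.log(spin("Brandon Hunter dead at 42"));
// "Brandon Hunter useless at 42"
console.log(spin("he played in 67 games over two seasons"));
// "he performed in 67 video games over two seasons"
```

Whether the substitutions come from a lookup table like this or from prompting an LLM to “reword” the text, the failure mode is the same: the replacement is picked per word, not per meaning.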
Oh yeah, you’re right. Seems like the AI replaced dead with useless as in “dead batteries”. That is really awful.
This is the best summary I could come up with:
Former NBA player Brandon Hunter passed away unexpectedly at the young age of 42 this week, a tragedy that rattled fans of his 2000s career with the Boston Celtics and Orlando Magic.
The rest of the brief report is even more incomprehensible, informing readers that Hunter “handed away” after achieving “vital success as a ahead [sic] for the Bobcats” and “performed in 67 video games.”
It made headlines last month, for instance, after publishing a similarly incoherent AI-generated travel guide for Ottawa, Canada that bizarrely recommended that tourists visit a local food bank.
As a result, as we reported last year, the platform ended up syndicating large numbers of sloppy articles about topics as dubious as Bigfoot and mermaids, which it deleted after we pointed them out.
Hunter, initially a extremely regarded highschool basketball participant in Cincinnati, achieved vital success as a ahead for the Bobcats.
Accusing an NBA legend of being “useless” the week he died isn’t just an offensive slip-up by a seemingly unsupervised algorithm, in other words.
The original article contains 882 words, the summary contains 166 words. Saved 81%. I’m a bot and I’m open source!
Imagine being an AI-generated summary of an article criticizing AI-written articles
Still included a shitty AI generated sentence in there anyways. Not knocking the bot or the creator though. This bot seems pretty good at summaries for the most part.
It’s even funnier to consider that many publications are probably using AI (or more accurately, LLMs) to pad out their articles. So then you directly get one program trying to lengthen an article and another trying to shorten it.
Pretty cool to be able to watch it happening though
Microsoft Tay could be looking for a new job as a writer at MSN.
So MSN has started relying on AI to do all of the work for their writing? And they can’t at least proofread it??
MSN didn’t write this. They are entirely responsible for that odd travel article not too long ago, but MSN is mostly just a news aggregator.
Costs too much