I keep having to argue with people that the crap ChatGPT told them doesn’t exist.
I asked AI to explain how to set a completely fictional setting in an admin control panel and it told me exactly where to go and what non-existent buttons to press.
I actually had someone send me a screenshot of instructions on how to do exactly what they wanted, and I sent back screenshots of me following the directions to a tee and pointing out that the option didn’t exist.
And it keeps happening.
“AI” gets big uppies energy from telling you that something can be done and how to do it. It does not get big uppies energy from telling you that something isn’t possible. So it’s basically going to lie to you about whatever you want to hear so it gets the good good.
No, seriously, there’s a weighting system to responses. When something isn’t possible, it tends to be a less favorable response than hallucinating a way for it to work.
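As a toy illustration of that weighting claim: if candidate responses are scored by some learned “helpfulness” reward, a confident step-by-step answer can outscore an honest “that isn’t possible”, even when the steps are made up. The responses and scores below are entirely invented for the sketch, not taken from any real model.

```python
# Hypothetical sketch: candidate responses with made-up "reward" scores.
# A confident how-to tends to score as more "helpful" than a refusal,
# so a reward-maximizing picker returns the fabricated instructions.
candidates = {
    "Go to Settings > Display > Defogger and flip the toggle.": 0.9,
    "Windows has no screen defogger setting; this can't be done.": 0.4,
}

# Pick whichever response the (imaginary) reward model rates highest.
best = max(candidates, key=candidates.get)
print(best)  # -> the confident (and fictional) instructions win
```

The point of the sketch: nothing in the scoring checks whether the setting exists, only which answer “reads” better.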
I am quickly growing to hate this so-called “AI”. I’ve been on the Internet long enough that I can probably guess what the AI will reply to just about any query.
It’s just… inaccurate, stupid, and not useful. Unless you’re repeating something that’s already been said a hundred different ways by a hundred different people and you just want to say the same thing… then it’s great.
Hey, ChatGPT, write me a cover letter for this job posting. Cover letters suck and are generally a waste of fucking time, so, who gives a shit?
to be fair, you could train an LLM on only Microsoft documentation with 100% accuracy, and it would still give you broken instructions, because Microsoft has 12 guides for how to do a thing and none of them work: they keep changing the layout, moving shit around or renaming crap, and they don’t update their documentation.
The worst is that they replace products and give them the same name.
Teams was replaced with “new” Teams, which then got renamed to Teams again.
Outlook is now known as Outlook (classic) and the new version of Outlook is just called Outlook.
Both are basically just webapps.
I could go on.
Yeah, that experience they described could have happened before ChatGPT, because MS was already providing “as cheap as possible” general support, and it was questionable whether that was better than just publishing documentation and letting willing power users help. Those support people clearly barely understood the question and gave lots of irrelevant answers, which search engines pick up and return when you search for the problem later.
Tbh, ChatGPT is a step up from that, even as bad as it is. The old support had that same annoying, overly corporate friendly attitude but was even less accurate. Though I don’t use Windows anymore on my personal desktop, so I don’t have as much recent experience.
This makes sense if you consider that it works by trying to find the most likely next word in a sentence. Ask it where you can turn off the screen defogger in Windows and it will associate “screen” with “monitor” or “display”. “Turn off” -> must be a toggle… yeah, go to Settings -> Display -> Defogger toggle.
It’s not AI, it’s not smart, it’s text prediction with a few extra tricks.
I describe it as unchecked auto correct that just accepts the most likely next word without user input, and trained on the entire Internet.
So the response reflects the average of every response on the public Internet.
Great for broad, common queries, but not great for specialized, specific and nuanced questions.
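The “unchecked autocorrect” idea above can be sketched in a few lines. This is a toy bigram model, nowhere near a real LLM, trained on a tiny invented corpus of settings phrases; the point is that it only knows which word tends to follow which, so it will confidently stitch together a plausible-sounding path with zero fact-checking.

```python
# Toy "next word" predictor: count which word follows which,
# then greedily emit the most common follower at every step.
from collections import defaultdict, Counter

# Tiny invented training corpus of real-sounding settings phrases.
corpus = [
    "go to settings display brightness toggle",
    "go to settings display night light toggle",
    "go to settings sound volume slider",
]

# Bigram counts: for each word, how often each next word follows it.
following = defaultdict(Counter)
for line in corpus:
    words = line.split()
    for a, b in zip(words, words[1:]):
        following[a][b] += 1

def predict(start, length=5):
    """Greedily pick the most common next word; no checking whether
    the resulting path actually exists anywhere."""
    out = [start]
    for _ in range(length):
        nxt = following.get(out[-1])
        if not nxt:
            break
        out.append(nxt.most_common(1)[0][0])
    return " ".join(out)

# Ask it about a "defogger": it has never seen one, but starting from
# "go" it still produces the statistically most likely settings path.
print(predict("go"))  # -> "go to settings display brightness toggle"
```

A real LLM conditions on far more context than one word, but the failure mode is the same shape: the output is whatever is most likely given the training text, not whatever is true about your control panel.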
It just copies corporate Kool-Aid yes-man culture. If it didn’t, marketing would say it’s not ready for release.
Think about it: how annoyed do corpo bosses and marketing get, and how quickly do they label you “difficult”, when they come to you with a stupid idea and you call it BS? Now make the AI so that it pleases that kind of people.