They should create a model that’s only trained on the content of .tex files.
The most popular use cases seemed to be:
General questions
Trip planning
Buying stuff
What’s Google doing on the list, and why is it so high up? Maybe Maps? But there are already 3 other map providers there, and also Yelp and TripAdvisor for ratings.
I keep having to argue with people that the crap ChatGPT told them doesn’t exist.
I asked AI to explain how to set a completely fictional setting in an admin control panel and it told me exactly where to go and what non-existent buttons to press.
I actually had someone send me a screenshot of instructions on how to do exactly what they wanted, and I sent back screenshots of me following the directions to a tee, pointing out that the option didn’t exist.
And it keeps happening.
“AI” gets big uppies energy from telling you that something can be done and how to do it. It does not get big uppies energy from telling you that something isn’t possible. So it’s basically going to lie to you about whatever you want to hear so it gets the good good.
No, seriously, there’s a weighting system to responses. When something isn’t possible, it tends to be a less favorable response than hallucinating a way for it to work.
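To caricature that weighting in code: this is a deliberately crude, made-up reward function (no real model scores responses this way), but it sketches why a confident “here’s how” can outrank an honest “that’s not possible”:

```python
# Toy sketch, purely illustrative: a preference-tuned model emits whichever
# candidate response a reward function scores highest. The phrase lists below
# are invented, not anything a real reward model uses.

def toy_reward(response: str) -> float:
    """Hypothetical reward: a crude proxy for 'sounds helpful and agreeable'."""
    helpful = ["here's how", "you can", "go to", "click"]
    unhelpful = ["not possible", "doesn't exist", "can't"]
    lowered = response.lower()
    score = sum(1.0 for p in helpful if p in lowered)
    score -= sum(1.0 for p in unhelpful if p in lowered)
    return score

candidates = [
    "That setting doesn't exist, so it's not possible.",
    "Here's how: go to Settings, then click the Defogger toggle.",
]

# The highest-reward candidate wins, even if it's a hallucination.
best = max(candidates, key=toy_reward)
print(best)
```

Real preference tuning learns the reward from human ratings instead of keyword lists, but the failure mode is the same: agreeable-sounding answers get reinforced.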
I am quickly growing to hate this so-called “AI”. I’ve been on the Internet long enough that I can probably guess what the AI will reply to just about any query.
It’s just… Inaccurate, stupid, and not useful. Unless you’re repeating something that’s already been said a hundred different ways by a hundred different people and you just want to say the same thing… Then it’s great.
Hey, ChatGPT, write me a cover letter for this job posting. Cover letters suck and are generally a waste of fucking time, so who gives a shit?
To be fair, you could train an LLM on nothing but Microsoft documentation, with 100% accuracy, and it would still give you broken instructions, because Microsoft has 12 guides for how to do a thing and none of them work: they keep changing the layout, moving shit around, or renaming crap, and they don’t update their documentation.
The worst is that they replace products and give them the same name.
Teams was replaced with “new” Teams, which then got renamed to Teams again.
Outlook is now known as Outlook (classic) and the new version of Outlook is just called Outlook.
Both are basically just webapps.
I could go on.
Yeah, the experience they described could have happened before ChatGPT, because MS was already providing “as cheap as possible” general support, and it was questionable whether that was better than just publishing documentation and letting willing power users help. Those support people clearly barely understood the questions and gave many irrelevant answers, which search engines pick up and return when you search for the problem later.
Tbh, ChatGPT is a step up from that, even as bad as it is. The old support had that same annoying, overly corporate friendly attitude but was even less accurate. Though I don’t use Windows anymore on my personal desktop, so I don’t have as much recent experience.
I asked AI to explain how to set a completely fictional setting in an admin control panel and it told me exactly where to go and what non-existent buttons to press.
This makes sense if you consider that it works by trying to find the most likely next word in a sentence. Ask it where you can turn off the screen defogger in Windows and it will associate “screen” with “monitor” or “display”, and “turn off” must mean a toggle… yeah, go to Settings -> Display -> Defogger toggle.
It’s not AI, it’s not smart, it’s text prediction with a few extra tricks.
I describe it as unchecked auto correct that just accepts the most likely next word without user input, and trained on the entire Internet.
So the response reflects the average of every response on the public Internet.
Great for broad, common queries, but not great for specialized, specific and nuanced questions.
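The “unchecked autocorrect” description above can be sketched as a toy bigram model. Real LLMs are vastly more sophisticated than this, but the core point survives: pick the most frequent next word, with no check that the result is true.

```python
# Toy bigram "autocomplete": count which word follows which in a tiny
# made-up corpus, then always emit the most common follower.
from collections import Counter, defaultdict

corpus = (
    "go to settings display toggle . "
    "go to settings sound toggle . "
    "go to settings display brightness ."
).split()

# Count word -> next-word frequencies.
next_words = defaultdict(Counter)
for a, b in zip(corpus, corpus[1:]):
    next_words[a][b] += 1

def predict(word: str) -> str:
    """Return the single most common follower -- frequency, not truth."""
    return next_words[word].most_common(1)[0][0]

# "settings" is most often followed by "display", so that's the answer,
# whether or not a "display defogger" actually exists.
print(predict("settings"))  # -> display
```

That is exactly the “screen defogger” failure: the association is statistically plausible, so it gets emitted.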
It just parrots corporate Kool-Aid yes-man culture. If it didn’t, marketing would say it’s not ready for release.
Think about it: how annoyed do corpo bosses and marketing get, labeling you as “difficult”, when they come to you with a stupid idea and you call it BS? Now make the AI so that it pleases that kind of people.
Ouch.
So basically it’s just a Reddit search engine. Where most of the facts are based on “trust me bro”.
Personally, I’m disappointed Truth Social isn’t on the list
“Cited”. This does not represent where the training data comes from; it represents the most common result when the LLM calls a tool like web_search.
Exactly. The article just discovered that high-traffic sites rank high in search. That list is basically: https://en.wikipedia.org/wiki/List_of_most-visited_websites
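A sketch of that point, with invented URLs: if “citations” are just whatever the search tool returned, then counting cited domains mostly reproduces search ranking and site traffic, not training-data provenance.

```python
# Hypothetical data: pretend these are URLs a web_search-style tool
# returned across many queries. The URLs are made up for illustration.
from collections import Counter
from urllib.parse import urlparse

search_results = [
    "https://en.wikipedia.org/wiki/Example",
    "https://www.reddit.com/r/example",
    "https://en.wikipedia.org/wiki/Other",
    "https://www.youtube.com/watch?v=x",
    "https://www.reddit.com/r/other",
]

# Tallying cited domains just recovers which sites rank highest in search.
domains = Counter(urlparse(u).netloc for u in search_results)
for domain, count in domains.most_common():
    print(domain, count)
```

So a “most cited sites” chart built this way is closer to a search-ranking survey than a map of training data.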
There’s text on Pinterest?
So according to AI spez is a greedy little pig boy
Is this edited? Holy shit what’s going on with his eyes
Eyes swell up like that when you pump your own ego up your ass.
Was this guide AI generated as well? Looks like it credits over 100% of its information gathering to the first four sites on the list.
Another comment explains that some responses can contain multiple sources, hence >100%.
Ah, so what you’re saying is it doesn’t get 40% of its facts from reddit, but rather 40% of its replies contain a fact cited from reddit? That would explain totals over 100%, but I’m still not sure why they wouldn’t just say that of the x thousand facts AI cited, y percent came from this site. To me, that would have been more representative of what their graph title purports to offer.
im literally just regurgitating something i saw another person comment. but yeah, if that was the case, why wouldn’t they elucidate that lol
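If that reading is right, the arithmetic is easy to reproduce with made-up numbers: count, for each site, the share of replies that cite it, and the shares happily total more than 100% because one reply can cite several sites.

```python
# Made-up data: each reply is the set of sites it cited.
replies = [
    {"reddit.com", "wikipedia.org"},
    {"reddit.com"},
    {"wikipedia.org", "youtube.com"},
    {"youtube.com"},
    {"reddit.com", "youtube.com"},
]

# Per-site share = percentage of replies citing that site.
sites = {s for r in replies for s in r}
share = {s: 100 * sum(s in r for r in replies) / len(replies) for s in sites}
print(share)                # each value is % of replies citing that site
print(sum(share.values()))  # -> 160.0, comfortably over 100%
```

Measuring “of all cited facts, what fraction came from each site” would instead be guaranteed to sum to 100%, which is the denominator the commenter above is asking for.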
Holy recursive lookups batman
It’s far worse than that. AI can cite something AI generated as a source which itself is using something generated by AI as a source. So you can get an AI summary that uses an AI generated video as a source which itself used an AI generated article as a source and that article itself was an AI hallucination. We’re essentially polluting the internet making it an unreliable source of information.
“It’s AI all the way down!”
“What about stuff before AI?”
“That was analog intelligence which is still AI!”
Presenting the new Ouroboros AI TM model.
I’m not a Luddite in general, but as for AI I will probably only use it as necessary in the workplace. So far the main LLM I have gotten any use out of is Google’s Gemini. It lists citations for its facts when I ask it physics questions, and it seems like there is some kind of filter on the quality of the sources that can be cited. Mostly it cites professional publications, Wikipedia, etc.
I don’t think Google is currently winning the AI arms race (nor do I think they have stood by their initial mantra of ‘Don’t be evil’), but it seems like that should be the gold standard. And Google/Alphabet was also the company responsible for AlphaFold, IMO the most impressive application of learning algorithms to date.
Garbage in, garbage out…
Regardless, in all my years on Reddit and now on Lemmy, my posting approach might’ve helped deep-fry those LLM results and you can thank me later.
Actually, probably 20+ years ago, I was a dumb kid who got doxxed on a popular news aggregator site. Ever since that experience, I’ve obfuscated facts in pretty much any personal anecdote I share. I also tend to make whimsical & nonsensical statements all the time, things which sound perfectly reasonable at first glance but which, in retrospect, would really put a damper on any LLM-style learning tool. Plus, I can’t help but pretend to be some 80 year old tech illiterate grampa posting on the Facebooks from time to time, so that probably really makes my shit online LLM poison.
Granted, all those years of these techniques weren’t meant to deter or detract from LLMs; in the end, that’s just another positive side effect of trying to stay a step ahead of crazy ass online stalkers, Jeremy.
In a way, it’s like that scene from The Terminator where Gregor McConnor was eating a hotdog in a fancy French restaurant and faked an orgasm in front of Tom Cruise, then Sally Field was sitting at another table and told her waitress “I’ll have the seabass please.”
“Everythere” is a radical new word.
Perfectly cromulent
Embiggens the best of us.
Canoodling in the threads.