I have yet to have anyone tell me exactly what value AI brings to the table, other than as customer-facing chatbots companies can use while firing their actual CS departments.
Personally I have enjoyed using AI for working through logic, weighing decisions, and comparing items. It’s been useful, but not trillion-dollar-industry useful, and it doesn’t provide enough value to justify what’s expected of it.
I could have it taken away tomorrow and not care too much, but I have used it quite heavily lately.
As for running a business department or making decisions, heck no. And as we’ve said all along, computers cannot make major decisions because they cannot be held accountable, so unless people want a scapegoat for the horrible things they plan to do, AI should never be used as the sole decision maker.
AI has got to be a bubble and it will surely burst.
That’s ridiculous. It’s possible to recognize that the technology has many good uses while still complaining that it has been hyped too much.
For example: AI-driven materials discovery uses machine learning to analyze chemicals and materials, allowing scientists to find and design new compounds much faster than humans could through traditional trial and error.
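(A sketch of the loop being described, with made-up composition descriptors and a scikit-learn regressor standing in for whatever model a real lab would use: train on compounds you’ve measured, then screen thousands you haven’t, and only synthesize the top few. Everything here is synthetic and illustrative.)

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(42)

# Pretend features: composition descriptors for 200 known compounds,
# plus a pretend lab measurement of the property we care about.
X_known = rng.random((200, 8))
y_known = X_known @ rng.random(8) + 0.1 * rng.standard_normal(200)

# Fit a cheap surrogate model on the measured compounds.
model = RandomForestRegressor(n_estimators=100, random_state=0)
model.fit(X_known, y_known)

# 10,000 candidate compounds nobody has synthesized: predict instead.
X_candidates = rng.random((10_000, 8))
scores = model.predict(X_candidates)
top = np.argsort(scores)[::-1][:5]
print("Best candidates to actually test in the lab:", top)
```

The speedup comes from replacing thousands of lab experiments with predictions, then spending bench time only on the most promising candidates.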
That’s ML involving tightly constrained iterative processes and neural networks, and it was happening long before LLMs. This whole bubble is LLM bullshit, and assuming a word-probability machine is capable of “discovering” anything is silly.
LLMs have plenty of good uses. The problem, however, is that they are only as skilled as the person using them.
But it seems that someone who is proficient at writing ultimately doesn’t benefit in time or effort, since both are still spent correcting the output. And if you’re so curious, there have been a number of studies on this exact phenomenon already.
The real problem is cognitive debt and the growing number of people surrendering to the Dunning-Kruger effect by trusting, and living vicariously through, their AI model of choice.
My last employer was pushing hard for LLMs in a field they don’t do shit for. One of the project managers was convinced by his AI of choice (Gemini) to actually propose replacing himself with another AI tool. IT wasn’t having it, because the tool would screen-read potentially sensitive info. He was laid off with a sheriff escort not two months later. Now he’s on LinkedIn posting some truly schizophrenic shit, having seemed normal-ish before.
There have also been a number of studies saying that if a person knows how to use an LLM and provides it with a good prompt, it can give them something they can use.
The biggest issue I’ve seen with LLMs is that nobody knows how to write a prompt; nobody knows how to really use them to their benefit. There is absolutely a benefit for someone who is proficient at writing, just as there is for someone who is proficient at writing code.
I’m guessing you belong in the category that cannot write a good prompt?
No. I’ve done my actual work while people convinced they had “good prompts” weighed my whole team down (and promptly got laid off). We’ve burned enough OpenAI tokens, and probed enough models on our own hardware, to ascertain their utility in my field. Manual automation with simple systems and hard logic is what the industry has run on, and it will certainly continue to.
Explain to me what makes a prompt good. As long as you’re using a hosted model and not a sandbox, you’re stuck with the provider’s system prompt. Change that, and you’re still bound by their parameters. Run an open-source model with your own parameter tunings, and you’re still limited by your temperature. What is a good temperature for rigid logic, one that doesn’t produce unexpected behavior but can still adapt well enough to user input? These are questions every AI corp is dealing with.
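(For readers unfamiliar with the term: temperature rescales the model’s token probabilities before sampling. A minimal sketch of the mechanic, using NumPy with made-up logits rather than any real model’s output:)

```python
import numpy as np

def sample_with_temperature(logits, temperature, rng):
    """Temperature-scaled softmax sampling over next-token logits."""
    # Low temperature sharpens the distribution (rigid, repeatable);
    # high temperature flattens it (adaptive, but less predictable).
    scaled = np.asarray(logits, dtype=float) / temperature
    scaled -= scaled.max()  # numerical stability
    probs = np.exp(scaled) / np.exp(scaled).sum()
    return rng.choice(len(probs), p=probs)

rng = np.random.default_rng(0)
logits = [2.0, 1.0, 0.2]  # invented scores for three candidate tokens
for t in (0.2, 1.0, 2.0):
    picks = [sample_with_temperature(logits, t, rng) for _ in range(1000)]
    print(t, np.bincount(picks, minlength=3) / 1000)
```

The tension described above is visible in the output: as the temperature approaches zero the sampler becomes rigid and repeatable, while any temperature high enough to adapt also reintroduces randomness.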
For context, all we were trying to do was bolt some Copilot/GPT shit onto our PMS to handle customer queries, data entry, scheduling information and notifications, and some other open-ended suggestions. The C-suite was giddy, IT not so much, but my team was told to keep an open mind and see what we could achieve… so we evaluated it. As of about six months ago the C-suite finally stopped bugging us, since they had bigger fires to put out, and we had worked out a Power Automate routine (without the help of Copilot… it’s unfunnily useless even though it’s built right into PA), making essentially all the effort put into pushing the AI from an LLM to an “agentic model” completely moot, despite the tools the company bought into and everything.
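(To make “simple systems and hard logic” concrete: the deterministic alternative to an LLM router is explicit rules. A hypothetical Python sketch; the real thing described above was a Power Automate flow, not code, and every queue name and keyword here is invented:)

```python
# Hypothetical rule-based router: the deterministic "hard logic"
# equivalent of what an LLM classifier was being pitched to do.
ROUTES = {
    "scheduling": ("schedule", "appointment", "reschedule", "cancel"),
    "billing":    ("invoice", "payment", "charge", "refund"),
    "data_entry": ("update", "correct", "change my", "wrong address"),
}

def route_query(text: str) -> str:
    """Return the first matching queue, or a human fallback."""
    lowered = text.lower()
    for queue, keywords in ROUTES.items():
        if any(k in lowered for k in keywords):
            return queue
    return "human_review"  # unknown cases go to a person, not a guess

print(route_query("I need to reschedule my appointment"))  # scheduling
print(route_query("Why was I charged twice?"))             # billing
```

Boring and limited, but auditable, and unknown cases fail over to a human instead of a guess, which is presumably why the deterministic flow won out.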
I’m guessing you belong in the category of people who haven’t actually worked somewhere where part of the job is deploying things like AI, but who like to take a firm stance anyway.
Yawn. Let’s do this, it’s even better: You tell me a task that you need to accomplish. Then you tell me the prompt you would give an LLM to accomplish that task.
Clearly heavy LLM usage inhibits reading comprehension; I already stated the use case my employer wanted to implement. Sorry normal people aren’t as dogmatic as your AI friends lmao
I agree with you. There is a knee-jerk “AI bad” reaction when the real answer is that it’s overhyped. There are good uses for it. I use it nearly daily for work, and, like any other tech, it’s just a tool. It doesn’t replace me; it just makes me faster. Anyone who claims it can replace people might as well say that a hammer can replace a person.
Right: hammer, forklift. Tools that people use.
But right now we have a bunch of idiots spinning in circles on forklifts throwing hammers. Yes, you can do that… Yes, it’s probably fun, but ultimately that is not what the forklift is really good for.
Machine learning is not why companies are dumping hundreds of billions of dollars into building data centers so they can earn tens of billions of dollars; it’s specifically large language models. Machine learning existed before the LLM boom and had real benefits, but it has seen barely a fraction of the investment that LLMs have, because it didn’t come with a bunch of tech bros speculating that artificial superintelligence would make them trillionaires.
That is one particular side of the conversation.
The current hype and the massive investments are about generative AI, not the actually-useful-for-humanity applications.
Image recognition, medical research, etc. are not what drives the current market. It’s about offering a service that the broad masses use continuously; otherwise these investments don’t make sense.
But I guess all they could think of was chatbots. It just shows that very few people actually know what AI is doing beyond what they see in the news. I’m not thrilled with how the rollout is going, of course, but I recognize that there are good aspects.