Retool, a development platform for business software, recently published the results of its State of AI survey. Over 1,500 people took part, all from the tech industry:
Over half of all tech industry workers view AI as overrated.
Largely because we understand that what they’re calling “AI” isn’t AI.
This is a growing pet peeve of mine. If and when actual AI becomes a thing, it’ll be a major turning point for humanity comparable to things like harnessing fire or electricity.
…and most people will be confused as fuck. “We’ve had this for years, what’s the big deal?” -_-
I also believe that will happen! We will not be prepared since many don’t understand the differences between what current models do and what an actual general AI could potentially do.
It also saddens me that many people either don’t know, or simply ignore, how fundamental abstract reasoning is to our understanding of how human intelligence works. And that LLMs simply aren’t intelligent in that sense (or at all, if you take a strict definition of intelligence).
I don’t get how recognizing a pattern is not AI. It recognizes patterns in data, and patterns inside of patterns, and does so at a massive scale. Humans are no different: we find patterns and make predictions about what to do next.
The human brain does not simply recognise patterns, though. Abstract reasoning means that humans are able to find solutions for problems they did not encounter before. That’s what makes a thing intelligent. It is not fully understood yet what exactly gives the brain these capabilities, btw. Like, we also do not understand yet how it is possible that we can recognize our own thinking processes.
The most capable current AI models mimic one aspect of the brain: neural pathways. In the brain it’s an activity threshold, and in a neural network AI it’s learned weights and statistics, that decide whether a certain path is active; those paths then cross with other paths, and so on. Like a very complex decision tree (a toy version of both kinds of unit is sketched just after this comment).
So that much is quite similar between AI and brains. But humans also get something like an understanding of concepts, which goes beyond the decision tree and, as described above, isn’t fully understood yet.
For an AI to be actually intelligent, it would probably need to at least gain this ability: to trace back its own way through the decision tree. Maybe it even turns out that you do in fact need consciousness to be able to reason.
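To make the “activity threshold” versus “statistics” contrast above concrete, here is a toy sketch of the two kinds of unit. Everything in it is illustrative (the inputs, weights, and threshold are made up, and real networks stack millions of these units and learn the weights from data):

```python
import math

def step_neuron(inputs, weights, threshold):
    """Hard-threshold unit: 'fires' (returns 1) only if the weighted sum of
    its inputs crosses the threshold, loosely like a biological activity
    threshold."""
    activation = sum(x * w for x, w in zip(inputs, weights))
    return 1 if activation >= threshold else 0

def sigmoid_neuron(inputs, weights, bias):
    """Soft unit as used in artificial neural networks: instead of a hard
    cutoff it outputs a value between 0 and 1, so 'how active' a path is
    becomes a matter of degree determined by learned weights."""
    activation = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1.0 / (1.0 + math.exp(-activation))

# Illustrative numbers only.
inputs = [0.2, 0.9, 0.4]
weights = [0.5, 0.8, -0.3]

print(step_neuron(inputs, weights, threshold=0.6))   # 1 -> the path "fires"
print(sigmoid_neuron(inputs, weights, bias=-0.6))    # ~0.52 -> partially active
```

Stacking many of the soft units and fitting their weights to data is, roughly, the “statistics deciding whether a path is active” described in the comment above.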
This abstract thinking… is pattern recognition. Patterns of behavior, patterns of series of actions, patterns of photons, patterns of patterns.
And there is one concept of consciousness, and I think only one: that it’s another layer of pattern recognition. A pattern recognizer that looks at the patterns of your own mind.
I’m unfortunately unsure how else to convey this because it seems so obvious to me. I’d need to take quite some time to figure out how to explain it any better.
Please do, but I don’t understand why you believe that it changes things. Pattern recognition is the modus operandi of the brain, or rather of the connection between your senses and your brain. So perhaps it could be seen more as the way “brain data” is stored, its data type. The peculiarity is in how that data type is used.
This may turn philosophical, but imagine you had the perfect pattern recognition apparatus. It would see one pattern, the ultimate pattern of how everything is exactly connected. Would that make it intelligent?
To be called intelligent, the apparatus would have to let you ask it about specific problems (much smaller chunks of the whole thing). While it may still be confined to that data type throughout the whole process, the scope of its intelligence would be defined by the way it uses the data.
See, I like this question, “what is intelligence?”
I feel way too many people are so happy to make claims about what is or isn’t intelligent without ever attempting to define intelligence.
Honestly, I’m not sure what constitutes “intelligence”; the best example I can come up with is the human brain. But when I try to differentiate the brain from a computer, I just keep seeing similarities. The differences that do exist seem like things a computer could reasonably be expected to replicate… eventually.
Anyway, I’ve been working off the idea that anything that reacts to stimuli is intelligent. It’s all a matter of degree and type. I’m talking bacteria, bugs, humans, plants, maybe even planets.
I’ve had exactly this discussion with a friend recently. I share your opinion, he shared what seems to be the view of the majority here. I just don’t see what the qualitative difference between the brain and a data-based AI would be. It almost seems to me like people have problems accepting the fact that they’re not more than biological machines. Like there must be something that makes them special, that gives them some sort of “soul” even when it’s in a non-religious and non-spiritual way. Some qualitative difference between them and the computer. I don’t think there necessarily is one. Look at how many things people get wrong. Look at how bad we are at simple logic sometimes. We have a better sense of some things like plausibility because we have a different set of experiences that is rooted in our physical life. I think it’s entirely possible that we will be able to create robots that are more similar to human beings than we’d like them to be. I even think it’s possible that they would have qualia. I just don’t see why not.
I know that there is a debate about machine learning AI versus symbolic AI. I’m not an expert, to be fair, but I have not seen any plausible explanation as to why only symbolic AI would be “true” AI, even though many people seem to believe that.
As in AGI?
I’ve seen it referred to as AGI, but I think that’s wrong. ChatGPT isn’t intelligent in the slightest; it only makes guesses about which word is statistically more likely to come up next. There is no thinking or problem solving involved.
A while ago I saw an article with a title along the lines of “spark of AGI in ChatGPT 4,” because the model chose to use a calculator tool when facing a problem that required one. That would be AI (and not AGI): it has a problem, and it learns and uses available tools to solve it.
AGI would be on a whole other level.
Edit: Grammar
The argument “it just predicts the most likely next word,” while true, massively undervalues what it even means to predict the next word or token. Largely these predictions are based on sentences and ideas the model has trained on from its data sets. It’s pretty intelligent if you think about it: you read a textbook, then when you apply the knowledge or take a test, you use what you read to form a new sentence in relation to the context of the question or problem. For the model’s “text prediction” to be correct, it has to understand certain relationships between complex ideas and objects to some capacity. Yes, it absolutely is not as good as human intelligence, but what it’s doing is much more advanced than the predictive text on your phone keyboard. It’s a step in the right direction: overhyped right now, but the hype is funneling cash into research, and the models are already getting more advanced. Right now half of what it says is hot garbage, but it can be pretty accurate.
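To make the “predicts the most likely next word” mechanism concrete, here is a minimal sketch using the Hugging Face transformers library with the small GPT-2 model (assuming `transformers` and `torch` are installed; this illustrates the mechanism under discussion, not how any particular chat product is built):

```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

prompt = "The capital of France is"
input_ids = tokenizer(prompt, return_tensors="pt").input_ids

with torch.no_grad():
    logits = model(input_ids).logits      # a score for every token in the vocabulary

next_token_logits = logits[0, -1]         # scores for the position after the prompt
top = torch.topk(next_token_logits, k=5)  # the 5 most likely continuations

for score, token_id in zip(top.values, top.indices):
    print(repr(tokenizer.decode(token_id)), float(score))
```

A chat model produces an entire answer by repeating this single step, appending the chosen token and predicting again; the disagreement in this thread is over how much “understanding” those scores have to encode for the result to be coherent.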
Right? Like, I, too, predict the next word in my sentence to properly respond to inputs with desired output. Sure I have personality (usually) and interests, but that’s an emergent behavior of my intelligence, not a prerequisite.
It might not formulate thoughts the way we do, but it absolutely emulates some level of intelligence, artificially.
I think so many people overrate human intelligence, thus causing them to underrate AI. Don’t get me wrong, our brains are amazing, but they’re also so amazing that they can make crazy cool AI that is also really amazing.
People just hate the idea of being meat robots, I don’t blame em.
No, INT.
What is that?
A dump stat for STR characters.
AI doesn’t necessarily mean human-level intelligence, if that’s what you mean. The AI field has wrestled with this for decades. There can be “strong AI”, which is aiming for that human-level intelligence, but that’s probably a far off goal. The “weak AI” is about pushing the boundaries of what computers can do, and that stuff has been massively useful even before we talk about the more modern stuff.
Sounds like people here are expecting to see general-purpose AI (GPAI) and singularity stuff, but all they see is a pitiful LLM or other, even more narrow AI applications. Remember, even optical character recognition (OCR) used to be called AI until it became so common that it wasn’t exciting any more. What AI developers call AI today will just be considered basic automation a few decades later.
It absolutely is AI. A lot of stuff is AI.
It’s just not that useful.
The decision tree my company uses to deny customer claims is not AI despite the business constantly referring to it as such.
There’s definitely a ton of “AI” that is nothing more than an If/Else statement.
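A completely hypothetical example of the kind of if/else “AI” being described; the field names and thresholds below are invented for illustration and don’t come from any real claims system:

```python
def claim_decision(claim):
    """Hypothetical claim-triage 'AI': just hand-written if/else rules,
    with no learning anywhere. All fields and cutoffs are made up."""
    if claim["amount"] > 10_000:
        return "deny"              # large claims always get denied
    elif claim["prior_claims"] >= 3:
        return "manual_review"     # frequent filers get a human look
    elif not claim["policy_active"]:
        return "deny"
    else:
        return "approve"

print(claim_decision({"amount": 500, "prior_claims": 0, "policy_active": True}))
# -> "approve"
```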
For many years, AI referred to that type of technology. It is in fact not AGI, but “AI” historically, in the technical field, refers more to decision trees and classification / linear regression models.
That’s basically what video game AI is, and we’re happy enough to call it that
Well… it’s a video game. We also call them “CPU” which is also entirely inaccurate.
That’s called an expert system, and has been commonly called a form of AI for decades.
That is indeed what most of it is. My company was doing “sentiment analysis,” and it was literally just checking the text against a list of good and bad words.
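For a sense of how thin that kind of “sentiment analysis” can be, here is a toy version; the word lists are invented and not taken from any real product:

```python
# Toy "sentiment analysis": nothing but a lookup against hand-picked
# good/bad word lists. Words are illustrative only.
GOOD_WORDS = {"great", "love", "excellent", "happy", "fast"}
BAD_WORDS = {"terrible", "hate", "broken", "slow", "refund"}

def sentiment(text: str) -> str:
    words = text.lower().split()
    score = sum(w in GOOD_WORDS for w in words) - sum(w in BAD_WORDS for w in words)
    if score > 0:
        return "positive"
    if score < 0:
        return "negative"
    return "neutral"

print(sentiment("I love how fast the support was"))   # positive
print(sentiment("The update is broken and slow"))     # negative
```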
When someone corporate says “AI” you should hear “extremely rudimentary machine learning” until given more details
It’s useful at sucking down all the compute we complained crypto used
Yeah, it’s funny how that little tidbit just went quietly into the bin, never to be talked about again.
The main difference is that crypto was/is burning huge amounts of energy to run a distributed ponzi scheme. LLMs are at least using energy to create a useful tool (even if there is discussion over how useful they are).
I’d also argue AI is much easier to pull a profit from than a currency exchange 🙂
You really should listen rather than talk. This is not AI; it’s just a word prediction model. The media calls it AI because it sells, and the companies call it AI because it brings the stock value up.
Yes, what you’re describing is also AI.
Then we may as well call the field of statistics AI now, but sure, it’s a crazy world. :)
There are significant differences between statistical models and AI.
I work for an analytics department at a Fortune 100 company. We have a very clear delineation between what constitutes a model and what constitutes an AI.
That’s true. Statistical models are very carefully engineered and tested, while current machine learning models are created by throwing a lot of training data at the software and hoping for the best, i.e. that what the model learns is not complete bullshit.
Yeah, an AI is a model you can’t explain.
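A small illustration of that delineation on synthetic data (assuming scikit-learn and NumPy are available; the numbers are made up): a hand-specified linear model exposes coefficients you can read and check against the relationship you engineered, while a trained ensemble may be just as accurate here but is much harder to explain or audit.

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))                 # two made-up input features
y = 3.0 * X[:, 0] - 1.5 * X[:, 1] + rng.normal(scale=0.1, size=200)

# "Statistical model": the fitted coefficients are directly interpretable
# and recover the relationship we deliberately built into the data.
lin = LinearRegression().fit(X, y)
print(lin.coef_)                              # roughly [3.0, -1.5]

# "Machine learning model": its logic is spread across hundreds of trees,
# so there is no small set of numbers to inspect or sign off on.
forest = RandomForestRegressor(n_estimators=100, random_state=0).fit(X, y)
print(forest.predict(X[:3]), y[:3])
```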
Optimizing compilers came directly out of AI research. The entirety of modern computing is built on things the field produced.
Given that AI isn’t purported to be AGI, how do you define AI such that multimodal transformers aren’t “artificial intelligence”? These are models trained on unthinkable amounts of human content mirroring a wide array of capabilities; they develop abstract world models as linear representations, and they can do things thought to be impossible as recently as three years ago, such as explaining jokes or solving riddles that aren’t in the training set.
Yup. LLM RAG is just search 2.0 with a GPU.
For certain use cases it’s incredible, but those use cases shouldn’t be your first idea for a pipeline
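To unpack the “search 2.0 with a GPU” quip, here is a stripped-down retrieval-augmented generation (RAG) loop. The documents are invented, the bag-of-words “embedding” is a crude stand-in for the neural embedding model a real pipeline would use (that’s the GPU part), and the final LLM call is left as a placeholder so the sketch stays self-contained:

```python
import math
from collections import Counter

# Invented example documents; a real system would index its own corpus.
DOCS = [
    "Retool published its State of AI survey of over 1,500 tech workers.",
    "Over half of respondents said AI is overrated.",
    "RAG pipelines retrieve documents and feed them to an LLM as context.",
]

def embed(text):
    """Crude bag-of-words 'embedding'; a real pipeline would use a neural
    embedding model here instead."""
    return Counter(text.lower().split())

def cosine(a, b):
    dot = sum(a[w] * b[w] for w in a)
    norm_a = math.sqrt(sum(v * v for v in a.values()))
    norm_b = math.sqrt(sum(v * v for v in b.values()))
    return dot / (norm_a * norm_b) if norm_a and norm_b else 0.0

def retrieve(query, k=2):
    """Rank documents by similarity to the query; this part is plain search."""
    q = embed(query)
    return sorted(DOCS, key=lambda d: cosine(q, embed(d)), reverse=True)[:k]

def answer(query):
    context = "\n".join(retrieve(query))
    prompt = f"Answer using only this context:\n{context}\n\nQuestion: {query}"
    # Placeholder: a real system would send `prompt` to an LLM here.
    return prompt

print(answer("What did the survey say about AI being overrated?"))
```

The retrieval half really is just ranked search; the generation half rewrites whatever the search step hands it, which is why it works so well for some use cases and falls flat for others.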
THANK YOU! I’ve been saying this a long time, but have just kind of accepted that the definition of AI is no longer what it was.