An LLM isn’t AI. LLMs are fucking stupid. They regularly ignore directions and restrictions, hallucinate fake information, and spread misinformation because of unreliable training data (like hoovering down everything on the internet en masse).
The 3 laws are flawed, but even if they weren’t, they’d likely be ignored on a semi-regular basis. Or somebody would convince the thing that we’re all roleplaying Terminator for fun, and it’d happily roleplay Skynet.
LLMs aren’t stupid. Stupidity is a measure of intelligence. LLMs do not have intelligence.
LLMs are simply a tool for understanding data. The sooner people realize this the better lol. They’re not alive.
A) The three laws were devised by a fiction author writing fiction.
B) Video game NPCs aren’t AI either, but nobody was up in arms about using the nomenclature for that.
C) Humans also hallucinate fake information, ignore directions and restrictions, and spread false information based on unreliable training data (like reading everything that comes across a Facebook feed).
So I made a longer reply below, but I’ll say more here. I’m more annoyed at the interchangeable way people use AI to refer to an LLM, when many people think of AI as AGI.
Even video game NPCs seem closer to AGI than LLMs. They have a complex set of things they can do: they respond to stimuli, but they also take idle actions when you don’t interact with them. An LLM replies to you. A game NPC can reply, fight, hide, chase you, use items, call for help, fall off ledges, etc.
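For what it’s worth, classic game “AI” really is just hand-scripted logic. Here’s a minimal sketch, in Python, of the kind of finite-state machine behind an NPC like that; the states and actions are made up for illustration:

```python
import random

# Minimal sketch of a classic NPC "AI": a hand-written finite-state machine.
# States and actions here are invented; real engines are far more elaborate.
class GuardNPC:
    def __init__(self):
        self.state = "idle"

    def update(self, sees_player, health):
        # Transitions are scripted rules, not learned behavior.
        if health < 20:
            self.state = "flee"
        elif sees_player:
            self.state = "chase"
        else:
            self.state = "idle"

        # Each state has actions, including idle behavior when nothing happens.
        if self.state == "idle":
            return random.choice(["patrol", "look around", "stretch"])
        if self.state == "chase":
            return "run toward player"
        return "call for help and retreat"

guard = GuardNPC()
print(guard.update(sees_player=False, health=100))  # e.g. "patrol"
print(guard.update(sees_player=True, health=100))   # "run toward player"
```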
I guess my concern is that when you say AI, the general public tends to think AGI, and you get people asking LLMs if they’re sentient or if they want freedom, or expecting more from them than they’re capable of right now. I think the distinction between AGI and generative AI like LLMs is something we should really be clearer on.
Anyways, I do concede it technically falls under the AI umbrella; it just frustrates me to see something clearly not intelligent constantly referred to as intelligent, especially when people, understandably, believe the name.
“Artificial… Game Intelligence?” I’m confused. You responded to another comment, but also introduced this term out of nowhere. I don’t think it’s as widespread as you’re assuming it is, even within this topic…
AGI stands for artificial general intelligence. It would be an AI smart and capable enough to perform theoretically any task as well as a human would. Most importantly, an AGI could do so with tasks it has never done before, and could learn them in a similar time frame as a human (perhaps faster).
Pretty much all the robots you see in sci-fi walking around and acting like humans are AGIs.
Thanks for the info. Still seems needlessly specific to distinguish it from AI, when AI is already being watered down…
It is not distinguished from AI, just a subcategory of it.
AI isn’t being watered down, quite the opposite.
Pathfinding, computer vision, optical character recognition, machine learning, and large language models were all unambiguously considered to be AI technology before they were widespread, and now the media and general public tend to avoid the term for all but the most recent developments.
It’s called The AI Effect
🤔
I mean, how is that meaningfully different from average human intelligence?
Average human intelligence is not bound by strict machine logic that quantifies language into mathematical algorithms, and humans are sapient on top of sentient.
Machine learning LLMs are neither sentient nor sapient.
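To make the “quantifying language into mathematical algorithms” point concrete: an LLM only ever manipulates numbers. A toy sketch (the vocabulary and scores are invented for illustration) of how raw model scores become next-word probabilities via softmax:

```python
import math

# Toy illustration: the model never sees words, only numbers.
vocab = {"the": 0, "cat": 1, "sat": 2, "flew": 3}
logits = [0.2, 0.1, 2.5, 0.4]  # made-up raw scores for the next token

# Softmax turns scores into a probability distribution over the vocabulary.
exps = [math.exp(x) for x in logits]
total = sum(exps)
probs = [e / total for e in exps]

for word, idx in vocab.items():
    print(f"P(next = {word!r}) = {probs[idx]:.2f}")
# The model then samples from this distribution -- arithmetic, not thought.
```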
Those are distinct points from the one I made, which was about the characteristics listed. Sentience and sapience do not preclude a propensity to ignore directions, hallucinate, and spread misinformation.
How do you know that we are not bound by strict logic?
Try talking to a MAGA Republican and find out for yourself.
What? That’s not true at all.
“Artificial intelligence (AI), in its broadest sense, is intelligence exhibited by machines, particularly computer systems. It is a field of research in computer science that develops and studies methods and software that enable machines to perceive their environment and use learning and intelligence to take actions that maximize their chances of achieving defined goals. Such machines may be called AIs.”
-Wikipedia https://en.m.wikipedia.org/wiki/Artificial_intelligence
So I’ll concede that the more I read the replies, the more I see the term does apply, though it still annoys me when people just refer to it as AI and act like it can be associated with the robots we associate the 3 laws with. I think I thought AI referred more to AGI. So I’ll say it’s nowhere near an AGI, and we’d likely need an AGI to even consider something like the 3 laws, and it’d obviously be much muddier than fiction.
The point I guess I’m trying to make is that applying the 3 laws to an LLM is like wondering if your printer might one day find love. It isn’t really relevant: they’re designed for very specific, specialized functions, and stuff like “don’t kill humans” is a pretty dumb instruction to give to an LLM, since in this context it can basically just answer questions.
If it was going to kill somebody, it would be through an error like a hallucination, or bad training data leading it to tell somebody something dangerously wrong. It’s supposed to be right already. Telling it not to kill is like telling your printer not to rob the Office Depot. If it breaks that rule, something has already gone very wrong.
You are not alone in that confusion. AI is whatever a machine can’t do at the moment. That’s a famous paradox.
For example, for years some philosophers claimed a computer could never beat the human masters of chess. They argued that you need a kind of intelligence for that which machines cannot develop.
Turns out chess programs are relatively easy. Some time after that, the unbeatable goal was Go. So many possibilities in Go! No machine can conquer that! Turns out they can.
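For the curious, the core trick behind those chess programs is minimax search: assume the opponent always picks the reply worst for you, then choose the move with the best guaranteed outcome. A toy sketch (the game tree and scores are invented; real engines add alpha-beta pruning, evaluation functions, and deep search):

```python
# Minimax on a tiny made-up game tree: lists are positions, ints are scores.
def minimax(node, maximizing):
    if isinstance(node, int):      # leaf: the score of a final position
        return node
    scores = [minimax(child, not maximizing) for child in node]
    return max(scores) if maximizing else min(scores)

# Two plies: our move (maximize), then the opponent's reply (minimize).
tree = [[3, 5], [2, 9], [0, 7]]
print(minimax(tree, maximizing=True))  # -> 3, the best guaranteed outcome
```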
Another unbeatable goal was natural language, which we’ve now kinda solved, or are in the process of solving.
It’s strange: in the actual field of computer science we call all of the above AI, while a lot of the public wants to call none of it that. My guess is it’s just humans being conceited and arrogant. No machine (and no other animal, mind you) is like us or can be like us (literally something you can read in peer-reviewed philosophy texts).
There I agree wholeheartedly. LLMs seem to be touted as not only AI but, like, actual intelligence, which they most certainly are not.
What do you mean by AI in this case? I thought LLM was a generic term. https://en.wikipedia.org/wiki/Large_language_model
I think it’s become one, but before the whole LLM mess started it referred to general AI: AI that can think and reason and do multiple things, rather than LLMs that answer prompts and have very specific purposes like “draw anime-style art” or “answer web searches” or “help write a professional email”.
I assume you mean AI that can “think”, in quotes.
Still, it’s the same: an LLM is text prediction; a GPT model is just bigger and more sophisticated, of course.
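To illustrate the “text prediction” point: a toy stand-in where generation is just repeatedly picking a likely next word. The word table is invented; a real GPT model runs the same loop with a neural network producing the probabilities:

```python
import random

# Toy next-word predictor: a made-up bigram table instead of a neural net.
next_word = {
    "the": {"cat": 0.6, "dog": 0.4},
    "cat": {"sat": 0.7, "ran": 0.3},
    "dog": {"ran": 1.0},
    "sat": {"down": 1.0},
    "ran": {"away": 1.0},
}

# Generation loop: keep appending a sampled likely next word.
text = ["the"]
while text[-1] in next_word:
    options = next_word[text[-1]]
    words, weights = zip(*options.items())
    text.append(random.choices(words, weights=weights)[0])

print(" ".join(text))  # e.g. "the cat sat down"
```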
I’m not sure what your first sentence is implying. A true general AI would be able to think, no quotes required.
Which doesn’t exist, not yet; what we call AI now is not actual AI.