Look, I don’t believe that an AGI is possible, or at least not within the next few decades. But I was thinking: if one came to be, how could we differentiate it from a Large Language Model (LLM) that has read every book ever written by humans?

Such an LLM would have the “knowledge” of almost every human emotion and moral framework, and could even draw inferences from the past when a situation is slightly changed. It would also be backed by pretty powerful infrastructure, so hallucinations might be largely eliminated and it could handle several different contexts at once.

One might say it would also have to have emotions to be considered an AGI, and that’s a valid point. But an LLM is capable of putting on a facade, at least within a conversation. So we might have a hard time telling whether the emotions are genuine or just text churned out by rules and algorithms.

In a purely TEXTUAL context, I feel it would be hard to tell them apart. What are your thoughts on this? BTW this is a shower thought, so I might be wrong.

  • SirEDCaLot@lemmy.today · 7 points · 4 hours ago

    There was actually a paper recently that tested exactly this.

    They made up a new type of problem that had never before been published. They wrote a paper explaining the problem and how to solve it.

    They fed this to an AI, not as training material but as part of the query, and then fed it the same problem but with different inputs and asked it to solve it.
    It could not.

    An AGI would be able to learn from the queries given to it, not just from its base training data.

    • techpeakedin1991@lemmy.ml · 2 points · 1 hour ago

      It’s really easy to show this even with a known problem. Ask an LLM to play a game of chess and give it 1. h3 as the first move. They always screw up immediately by making an illegal move. This happens because 1. h3 is hardly ever played, so it isn’t well represented in the model. In fact, it’ll usually play a move that ‘normally’ responds to h3 in other positions, like …Bh5, a move which only makes sense if a bishop were already sitting on g4. A sketch of how to automate the legality check follows below.
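
      A minimal sketch of how that check could be automated, assuming you already have the model’s reply as a move in standard algebraic notation; the legality test uses the python-chess library and the example replies are invented:

      ```python
      import chess  # pip install python-chess

      def is_legal_reply(reply_san: str) -> bool:
          """Check whether a reply to 1. h3 is a legal move for Black."""
          board = chess.Board()
          board.push_san("h3")  # White's unusual first move
          try:
              board.parse_san(reply_san)  # raises ValueError if illegal in this position
              return True
          except ValueError:
              return False

      print(is_legal_reply("Nf6"))  # True: a normal developing move
      print(is_legal_reply("Bh5"))  # False: no Black bishop can reach h5 yet
      ```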

  • UpperBroccoli@lemmy.blahaj.zone · 2 points · 4 hours ago

    An LLM trained on all books ever written would probably take romance novels, books by flat earthers, or even “Atlas Shrugged” as truth, much the same way current AIs treat every Stack Overflow comment as useful and accurate information.

    Thinking about it, your question comes back to the very first and original instance of a computer and the question interested people asked about it:

    If you put into the machine wrong figures, will the right answer come out?

    Now if we allow ourselves the illusion of assuming that an AGI could exist, and that it could actually learn by itself in a way similar to how humans do, then just that quote above leads us to these two truths:

    • LLMs cannot help being stupid, they just do not know any better.
    • AGIs will probably be idiots, just like the humans asking the above question, but there is at least a chance that they will not.

  • Dionysus@leminal.space · 7 points · 7 hours ago

    An LLM will only know what it knows.

    AGI will be able to come up with novel information or work through things it’s never been trained on.

    • Andy@slrpnk.net · 5 points · 7 hours ago

      This is what I was going to say.

      Also, long-form narrative. Right now LLMs seem to work best for short conversations, but they get increasingly unhinged over very long ones. And if they generate a novel, it isn’t consistent or structured, from what I understand.

      • Dionysus@leminal.space · 3 points · 5 hours ago

        You’re spot on about all of that. Context windows have a lot to do with the unhinged behavior right now… but it’s a fundamental trait of how LLMs work.

        For example, you can tell it to refer to you by a specific name, and once it stops doing that you know the context window has been overrun and it’ll go off the rails soon… The newer chatbots have mitigations in place, but it still happens a lot. A toy illustration of the window overrunning follows below.
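
        A toy illustration of that overrun, with a made-up eight-token budget and simple word counts standing in for real subword tokens:

        ```python
        MAX_TOKENS = 8  # invented budget; real context windows are far larger

        def build_prompt(messages):
            """Keep only the most recent messages that still fit the budget."""
            kept, used = [], 0
            for msg in reversed(messages):
                cost = len(msg.split())  # crude stand-in for token counting
                if used + cost > MAX_TOKENS:
                    break  # older messages silently fall out of view
                kept.append(msg)
                used += cost
            return list(reversed(kept))

        history = ["Always call me Captain.",
                   "What's the weather like?",
                   "Tell me a long story about ships."]
        print(build_prompt(history))
        # The earliest instruction no longer fits, so the model cannot honor it.
        ```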

        These are non-deterministic predictive text generators.

        Any semblance of novel thought in a modern LLM comes down to two things:

        • Model “temperature”: a setting that controls how much randomness goes into picking each token. At a temperature of 0 it always picks the single most likely continuation of what you gave it; note that output often breaks when you try this (see the sampling sketch after this list).

        • It has more information than you: I’ve had interesting interactions at work where it came up with genuinely good ideas. These are all accounted for, though, by MCPs letting it search and piece things together, or by post-training refinements and catalog augmentation.
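
        As an illustration of what the temperature knob actually does, here is a toy sketch of temperature-scaled sampling; the logits and three-token vocabulary are invented:

        ```python
        import numpy as np

        def sample_next_token(logits, temperature=1.0, rng=np.random.default_rng(0)):
            """Pick a token id from raw model scores, with temperature scaling."""
            logits = np.asarray(logits, dtype=float)
            if temperature == 0:
                return int(np.argmax(logits))  # greedy: always the single most likely token
            scaled = logits / temperature      # low T sharpens, high T flattens the distribution
            probs = np.exp(scaled - scaled.max())
            probs /= probs.sum()
            return int(rng.choice(len(probs), p=probs))

        logits = [2.0, 1.0, 0.2]  # invented scores for a three-token vocabulary
        print(sample_next_token(logits, temperature=0))    # deterministic
        print(sample_next_token(logits, temperature=1.5))  # noticeably more random
        ```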

  • Naich@lemmings.world · 27 points · 10 hours ago

    An LLM doesn’t understand the output it gives. It can’t understand what someone wants when they talk to it, and it can’t generate an original thought. It’s as far from actual intelligence as the autocomplete on your phone.

        • Tracaine@lemmy.world · 1 point · 1 hour ago

          We don’t. Period. I could be looking you dead in the eye right now and have no objective way of knowing you are sentient in the same way I am.

          • TheJesusaurus@piefed.ca · 1 point · 57 minutes ago

            Didn’t ask how you know, but how you understand.

            Sure, you can’t know someone else is sapient. But you treat them as if they are.

        • CmdrShepard49@sh.itjust.works · 2 points · 4 hours ago

          Because as far as we currently know, only humans have sentience. So if you’re talking to a human you know it does, and if you’re talking to anything else, you know it doesn’t.

          • TheJesusaurus@piefed.ca · 2 points · 4 hours ago

            How do you know it’s not a dolphin in one of their hidden underwater dolphin tech cities?

            Literally more likely than a “take the average of the internet and put it in a blender” machine gaining a soul

            • CmdrShepard49@sh.itjust.works · 2 points · 4 hours ago

              I’m talking about face to face. When you speak to someone online it becomes a lot blurrier, but I would err on the side of assuming it’s an LLM until proven otherwise.

      • Thorry@feddit.org · 3 points · 9 hours ago

        Think of it this way:

        If I ask you, “Can a car fly?” you might say, “Well, if you put wings on it or a rocket engine or something, maybe?” OK, I say, then point at a car on the street and ask: “Do you think that specific car can fly?” You will probably say no.

        Why? Even though you might not fully understand how a car works and all the parts that go into it, you can easily tell it does not have any of the things it needs to fly.

        It’s the same with an LLM. We know what kinds of things are needed for true intelligence, and we can easily tell the LLM does not have the parts required. So an LLM alone can never lead to AGI; more parts are needed. That holds even though we might not fully understand how the internals of an LLM function in specific cases, and might not know exactly which parts are needed for intelligence or how those work.

        A full understanding of all parts isn’t required to discern large scale capabilities.

  • azimir@lemmy.ml · 5 points · 8 hours ago

    You’re sitting on the Chinese Room problem, and to some extent the basis of the Turing Test (mostly the Chinese Room).

    https://en.wikipedia.org/wiki/Chinese_room

    Knowledge != Intelligence

    Regurgitating things you’ve read is only interpolative. You’re only able to reply with things you’ve seen before, never new things. Intelligence is extrapolative. You’re able to generate new ideas or material beyond what has been done before.

    So far, the LLM world remains interpolative, even if it reads everything created by others before it.

    • ☆ Yσɠƚԋσʂ ☆@lemmy.ml · 1 point · 7 hours ago

      The Chinese room thought experiment is deeply idiotic; it’s frankly incredible to me that people discuss it seriously. Hofstadter does a great teardown of it in I Am a Strange Loop.

  • SkyNTP@lemmy.ml · 3 points · 7 hours ago

    Such an LLM would have the “knowledge” of almost every

    Most human knowledge is not written down. Your premise is flawed to the core.

  • FRYD@sh.itjust.works · 11 points · 10 hours ago

    I mean sure, an imaginary LLM that exceeds the fundamental limitations of the technology could be convincing, but then that’s not an LLM. LLMs are statistical models; they don’t know anything. They use statistics calculated from training data to guess which token should follow another. Hallucinations cannot be eliminated, because that would require the model to be capable of knowing things and then rationally error-checking itself. In other words, it would have to be intelligent. A toy sketch of that next-token guessing follows below.
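
    A toy sketch of the “guess the next token from training statistics” idea, using bigram counts on an invented corpus rather than a neural network; real LLMs are vastly more sophisticated, but the training objective is the same:

    ```python
    from collections import Counter, defaultdict

    corpus = "the cat sat on the mat the cat ate".split()  # invented training data

    # "Training": count which word follows which.
    counts = defaultdict(Counter)
    for prev, nxt in zip(corpus, corpus[1:]):
        counts[prev][nxt] += 1

    def predict_next(word):
        """Return the most frequent follower seen in training, if any."""
        followers = counts.get(word)
        return followers.most_common(1)[0][0] if followers else None

    print(predict_next("the"))   # 'cat': seen twice after 'the', so it wins
    print(predict_next("moon"))  # None: never seen, so there is nothing to "know"
    ```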

  • ☆ Yσɠƚԋσʂ ☆@lemmy.ml · 6 points · 9 hours ago

    That’s like asking what’s the difference between a chef who has memorized every recipe in the world and a chef who can actually cook. One is a database and the other has understanding.

    The LLM you’re describing is just a highly sophisticated autocomplete. It has read every book, so it can perfectly mimic the syntax of human thought including the words, the emotional descriptions, and the moral arguments. It can put on a flawless textual facade. But it has no internal experience. It has never burned its hand on a stove, felt betrayal, or tried to build a chair and had it collapse underneath it.

    AGI implies a world model: an internal, causal understanding of how reality works, which we build through continuous interaction with it. If we get AGI, it’s likely going to come from robotics. A robot learns that gravity is real; it learns that “heavy” isn’t an abstract concept but a physical property that changes how you move. It has to interact with its environment and develop a predictive model that lets it accomplish its tasks effectively.

    This embodiment creates a feedback loop LLMs completely lack: action -> consequence -> learning -> updated model. An LLM can infer from the past, but an AGI could reason about the future because it operates under the same fundamental rules we do. Your super-LLM is just a library of human ghosts; a real AGI would be another entity in the world. A deliberately tiny sketch of that loop follows below.
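
    Here is that action -> consequence -> learning -> updated model loop reduced to a single invented number standing in for the whole world model; nothing here resembles a real robotics stack:

    ```python
    actual_weight = 7.0     # a property of the world, unknown to the agent
    estimated_weight = 1.0  # the agent's current "world model"
    learning_rate = 0.5

    for step in range(8):
        applied_force = estimated_weight           # act on the current model
        error = actual_weight - applied_force      # consequence: the lift falls short
        estimated_weight += learning_rate * error  # learn: update the model
        print(f"step {step}: estimated weight = {estimated_weight:.2f}")
    # The estimate converges toward 7.0 purely through interaction, not text.
    ```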

  • brucethemoose@lemmy.world · 4 points · 9 hours ago

    Uh, simple.

    Clear your chat history, and see if it remembers anything.


    LLMs are, by current definitions, static. They’re like clones you take out of cryostasis every time you hit enter; nothing you say has a lasting impact on them. Meanwhile, the ‘memory’ and thinking of a true AGI are not separable; it has a state that changes with time, and everything it experiences impacts its output.

    …There are a ton of other differences. Transformer models trained with glorified linear regression are about a million miles away from AGI, but this one thing is an easy one to test right now. It’d work as an LLM vs human test too. A sketch of why the history-clearing test works follows below.
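
    A sketch of why the history-clearing test works; fake_complete() is an invented stand-in for a real LLM API call, and the point is that the model only ever sees the text you send it, so any “memory” is just the transcript being re-sent each turn:

    ```python
    def fake_complete(prompt: str) -> str:
        """Stand-in model: 'remembers' a name only if it appears in the prompt."""
        return "Hello Alice!" if "Alice" in prompt else "Hello, who are you?"

    def chat_turn(history, user_message):
        history = history + [f"user: {user_message}"]
        reply = fake_complete("\n".join(history))  # stateless call: nothing persists afterwards
        return history + [f"assistant: {reply}"]

    history = chat_turn([], "My name is Alice.")
    history = chat_turn(history, "Hi again!")
    print(history[-1])                     # assistant: Hello Alice!  (transcript carried over)
    print(chat_turn([], "Hi again!")[-1])  # assistant: Hello, who are you?  (history cleared)
    ```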

  • hansolo@lemmy.today · 5 points · 10 hours ago

    Your premise is a bit flawed here, and I appreciate where you’re coming from with this.

    I would say it’s probably true that no human has read every book written by humans. And while reading about those experiences is insightful, a person of any intelligence can go through a full and rich life, with lots of introspection and cosmic-level thought, without ever reading about how other people experience the same things. If two young kids are abandoned on an island and grow up there into adults, do they need the entirety of human knowledge available to them to experience love or sadness or envy or joy? Of course not. With or without books makes no difference.

    Knowledge is not intelligence in this sense. An LLM is no more able to understand the data it’s trained on than Excel understands the numbers in a spreadsheet. If I ask an LLM to interpret Moby Dick for me, it will pick the statistically most likely words to sit next to each other, based on all the reviews of Moby Dick it was trained on. If you trained an LLM on books but no critiques of books, it would just summarize the book, because it wouldn’t know what a critique looks like well enough to put the words in the right order.

    Also, AGI is not well defined, but emotions are nowhere in the equation. AGI is about human-level or better intelligence at cognitive tasks, like math, writing, etc. It’s basically several narrow AI systems, each specialized in one task, combined into a single system. AGI is not “the singularity” or whatever. It’s a commercially viable system that makes money for wealthy people.

    • Brutticus@midwest.social · 1 point · 9 hours ago

      This is interesting. I’m not supremely well informed on these issues, but I always assumed so-called “AGI” would have emotions, or at least would be “alive.” Is there a term for such a robot? Most fictional robots have intelligence and emotions to the point where we loop back around to it being unethical to exploit them.

  • Grimy@lemmy.world · 1 point · 8 hours ago

    An AGI is just software that can do anything a human can (more or less): something that could use Blender, for example, or do proper research.

    LLMs can already understand images and other data formats. They might be a pathway to AGI, but most think they have too many constraints to reach that far.

  • FriendOfDeSoto@startrek.website · 1 point · 10 hours ago

    It remains to be seen whether reading about all the emotions and morals is the same as feeling them, acting according to them, or just being able to identify them in us meatbags. So even with the sum total of human knowledge at their disposal, this may not matter. We already don’t know how these models actually organize their “knowledge.” We can feed them and we can correct bad outcomes; beyond that it’s a black box, at least right now. So if the spark from statistical spellchecker (current so-called AI) to actual AI (or AGI) happens, we’ll probably not even notice until it writes us a literary classic of Shakespearean proportions or turns us into fuel for its paperclip factory. That’s my assessment anyway.