Look, I don’t believe that an AGI is possible, or at least not within the next few decades. But I was thinking: if one came to be, how could we differentiate it from a Large Language Model (LLM) that has read every book ever written by humans?

Such an LLM would have “knowledge” of almost every human emotion and moral, and could even make inferences from the past when situations are slightly changed. Also, such an LLM would be backed by pretty powerful infrastructure, so hallucinations might be eliminated and it could handle different contexts at the same time.

One might say it also has to have emotions to be considered an AGI, and that’s a valid point. But an LLM is capable of putting on a facade, at least in a conversation. So we might have a hard time telling whether the emotions are genuine or just text churned out by rules and algorithms.

In a pure TEXTUAL context, I feel it would be hard to tell them apart. What are your thoughts on this? BTW this is a shower-thought, so I might be wrong.

  • UpperBroccoli@lemmy.blahaj.zone · 7 hours ago

    An LLM trained on every book ever written would probably take romance novels, books by flat earthers, or even “Atlas Shrugged” as truth, much as current AIs treat every Stack Overflow comment as containing useful and accurate information.

    Thinking about it, your question comes back to the very first computer and the question interested people asked about it:

    If you put into the machine wrong figures, will the right answer come out?

    Now if we allow ourselves the illusion of assuming that an AGI could exist, and that it could actually learn by itself in a similar way to humans, then just that quote above leads us to these two truths:

    • LLMs cannot help being stupid; they just do not know any better.
    • AGIs will probably be idiots, just like the humans asking the above question, but there is at least a chance that they will not.