I think AI is neat.

  • Ragdoll X@lemmy.world · 10 months ago

    Depends on what you mean by general intelligence. I’ve seen a lot of people confuse Artificial General Intelligence and AI more broadly. Even something as simple as the K-nearest neighbor algorithm is artificial intelligence, as this is a much broader topic than AGI.
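
    To make that concrete: a bare-bones k-nearest-neighbor classifier fits in a few lines. This is a minimal sketch on made-up toy data (plain Python, no libraries), just to show how simple an "artificial intelligence" algorithm can be: label a query point by majority vote among the k closest training points.

```python
from collections import Counter
import math

def knn_classify(train, query, k=3):
    """Classify `query` by majority vote among its k nearest neighbors.

    `train` is a list of (point, label) pairs, where each point is a
    tuple of coordinates.
    """
    # Sort training points by Euclidean distance to the query point.
    by_distance = sorted(train, key=lambda pl: math.dist(pl[0], query))
    # Take the labels of the k closest points and vote.
    votes = Counter(label for _, label in by_distance[:k])
    return votes.most_common(1)[0][0]

# Toy data: two clusters, labeled "a" and "b".
train = [((0, 0), "a"), ((0, 1), "a"), ((1, 0), "a"),
         ((5, 5), "b"), ((5, 6), "b"), ((6, 5), "b")]
print(knn_classify(train, (0.5, 0.5)))  # → a
print(knn_classify(train, (5.5, 5.5)))  # → b
```

    No learning in the deep-learning sense happens here at all, which is exactly the point: "AI" covers far more than AGI.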

    Wikipedia gives two definitions of AGI:

    An artificial general intelligence (AGI) is a hypothetical type of intelligent agent which, if realized, could learn to accomplish any intellectual task that human beings or animals can perform. Alternatively, AGI has been defined as an autonomous system that surpasses human capabilities in the majority of economically valuable tasks.

    If some task can be represented through text, an LLM can, in theory, be trained to perform it either through fine-tuning or few-shot learning. The question then is how general do LLMs have to be for one to consider them to be AGIs, and there’s no hard metric for that question.
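
    To make "few-shot learning" concrete: you prepend a handful of solved examples to the prompt and let the model infer the pattern from them. A minimal sketch of how such a prompt is assembled (the toy task and examples are made up for illustration, and no actual model call is shown):

```python
# Few-shot prompting: show the model a few input/output pairs,
# then append the new input and let it complete the pattern.
# (Hypothetical toy task: mapping a word to its plural.)
examples = [
    ("cat", "cats"),
    ("box", "boxes"),
    ("city", "cities"),
]

def build_few_shot_prompt(examples, query):
    """Assemble a plain-text few-shot prompt from solved examples."""
    lines = [f"Input: {x}\nOutput: {y}" for x, y in examples]
    lines.append(f"Input: {query}\nOutput:")  # the model completes this line
    return "\n\n".join(lines)

prompt = build_few_shot_prompt(examples, "berry")
print(prompt)
```

    Fine-tuning instead bakes such examples into the weights; few-shot leaves the weights alone and relies on the model generalizing from the prompt.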

    I can’t pass the bar exam like GPT-4 did, and it also has a lot more general knowledge than me. Sure, it gets stuff wrong, but so do humans. We can interact with physical objects in ways that GPT-4 can’t, but it is catching up. Plus, Stephen Hawking couldn’t move the same way that most people can either, and we certainly wouldn’t say that he didn’t have general intelligence.

    I’m rambling but I think you get the point. There’s no clear threshold or way to calculate how “general” an AI has to be before we consider it an AGI, which is why some people argue that the best LLMs are already examples of general intelligence.

    • Dr. Jenkem@lemmy.blugatch.tube · 10 months ago

      Depends on what you mean by general intelligence. I’ve seen a lot of people confuse Artificial General Intelligence and AI more broadly. Even something as simple as the K-nearest neighbor algorithm is artificial intelligence, as this is a much broader topic than AGI.

      Well, I mean the ability to solve problems we don’t already have the solution to. Can it cure cancer? Can it solve the P vs NP problem?

      And by the way, Wikipedia tags that second definition as dubious, as it’s the one put forth by OpenAI, who, again, has a financial incentive to make us believe LLMs will lead to AGI.

      Not only has it not been proven whether LLMs will lead to AGI, it hasn’t even been proven that AGIs are possible.

      If some task can be represented through text, an LLM can, in theory, be trained to perform it either through fine-tuning or few-shot learning.

      No it can’t. If the task requires the LLM to solve a problem that hasn’t been solved before, it will fail.

      I can’t pass the bar exam like GPT-4 did

      Exams are often bad measures of intelligence. They typically measure your ability to consume, retain, and recall facts. LLMs are very good at that.

      Ask an LLM to solve a problem without a known solution and it will fail.

      We can interact with physical objects in ways that GPT-4 can’t, but it is catching up. Plus Stephen Hawking couldn’t move the same way that most people can either and we certainly wouldn’t say that he didn’t have general intelligence.

      The ability to interact with physical objects is very clearly not a good test for general intelligence and I never claimed otherwise.

      • Ragdoll X@lemmy.world · 10 months ago

        I know the second definition was proposed by OpenAI, who obviously has a vested interest in this topic, but that doesn’t mean it can’t be a useful or informative conceptualization of AGI. After all, we have to set some threshold for how much intelligence an AI needs to display, and in what areas, for it to be considered an AGI. Their proposal of an autonomous system that surpasses humans in economically valuable tasks is fairly reasonable, though it’s still pretty vague and very much debatable, which is why it isn’t the only definition that’s been proposed.

        Your definition is definitely more peculiar, as I’ve never seen anyone else propose something like it, and it also seems to exclude humans, since you’re referring to problems we can’t solve.

        The next question then is what problems specifically AI would need to solve to fit your definition, and with what accuracy. Do you mean solve any problem we can throw at it? At that point we’d be going past AGI and now we’re talking about artificial superintelligence…

        Not only has it not been proven whether LLMs will lead to AGI, it hasn’t even been proven that AGIs are possible.

        By your definition AGI doesn’t really seem possible at all. But of course, your definition isn’t how most data scientists or people in general conceptualize AGI, which is the point of my comment. It’s very difficult to put a clear-cut line on what AGI is or isn’t, which is why there are those like you who believe it will never be possible, but there are also those who argue it’s already here.

        No it can’t. If the task requires the LLM to solve a problem that hasn’t been solved before, it will fail.

        Ask an LLM to solve a problem without a known solution and it will fail.

        That’s simply not true. That’s the whole point of the concept of generalization in AI, and what the few-shot and zero-shot metrics represent: LLMs solving problems represented in text with few or no prior examples, by reasoning beyond what they saw in the training data. You can actually test this yourself by signing up for ChatGPT, since it’s free.

        Exams often are bad measures of intelligence. They typically measure your ability to consume, retain, and recall facts. LLMs are very good at that.

        So are humans. We’re also deterministic machines that output some action depending on the inputs we get through our senses, much like an LLM outputs some text depending on the inputs it receives. Plus, as I mentioned, they can reason beyond what they’ve seen in the training data.

        The ability to interact with physical objects is very clearly not a good test for general intelligence and I never claimed otherwise.

        I wasn’t accusing you of anything, I was just pointing out that there are many things we can argue require some degree of intelligence, even physical tasks. The example in the video requires understanding the instructions, the environment, and how to move the robotic arm in order to complete new instructions.


        I find LLMs and AGI interesting subjects and was hoping to have a conversation on the nuances of these topics, but it’s pretty clear that you just want to turn this into some sort of debate to “debunk” AGI, so I’ll be taking my leave.

        • rambaroo@lemmy.world · 10 months ago

          IME when you prompt an LLM to solve a new problem it usually just makes up a bunch of complete bullshit that sounds good but doesn’t mean anything.

        • Redacted@lemmy.world · 10 months ago

          I agree, there’s no formal definition of AGI, so it’s a bit silly to discuss that really. Funnily enough, I inadvertently wrote the nearest-neighbour algorithm to model swarming behaviour back when I was an undergrad and didn’t even consider it rudimentary AI.
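
          A toy version of that kind of model (my own sketch, not the commenter's actual code): each agent moves a little toward the centroid of its k nearest neighbours each step, which already produces crude cohesion/clustering with nothing you'd normally call intelligence.

```python
import math

def step(positions, k=2, rate=0.1):
    """One update: each agent moves a fraction `rate` toward the
    centroid of its k nearest neighbours (cohesion-only swarming)."""
    new_positions = []
    for i, p in enumerate(positions):
        others = [q for j, q in enumerate(positions) if j != i]
        # Find this agent's k nearest neighbours by Euclidean distance.
        nearest = sorted(others, key=lambda q: math.dist(p, q))[:k]
        cx = sum(q[0] for q in nearest) / k
        cy = sum(q[1] for q in nearest) / k
        # Nudge the agent toward the neighbourhood centroid.
        new_positions.append((p[0] + rate * (cx - p[0]),
                              p[1] + rate * (cy - p[1])))
    return new_positions

# Four agents at the corners of a square; repeated steps pull them together.
agents = [(0.0, 0.0), (4.0, 0.0), (0.0, 4.0), (4.0, 4.0)]
for _ in range(50):
    agents = step(agents)
```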

          Can I ask what your take is on the possibility of neural networks understanding what they’re doing?

        • KeenFlame@feddit.nu · 10 months ago

          Yes, refreshing to see someone a little literate here. Thanks for fighting the misinformation, man.

      • KeenFlame@feddit.nu · 10 months ago

        Can your calculator only solve problems you’ve already solved? I really don’t buy that take.

        LLMs are in fact not at all good at retaining facts; it’s one of the most worked-on problems for them.

        LLMs can solve novel problems. They’re actually much more complex than just a lookup robot, which we already have for such tasks.

        You’re just taking wild guesstimates about how they work, and it feels wrong to me not to point that out.