• Terrasque@infosec.pub · 1 month ago

    I generally agree with your comment, but not on this part:

    parroting the responses to questions that already existed in their input.

    They’re quite capable of following instructions over data where neither the instruction nor the data appeared anywhere in the training data.

    They’re completely incapable of critical thought or even basic reasoning.

    Critical thought, generally no. Basic reasoning, though, they’re somewhat capable of, and chain of thought amplifies what little is there.

    • AliasAKA@lemmy.world · 1 month ago (edited)

      I don’t believe this is quite right. They’re capable of following instructions that aren’t in their training data but that look like things which were; that is, they can probabilistically interpolate between what they saw in training and what you prompted them with, which is why prompting can be so important. Chain of thought is essentially automated prompt engineering: if the model has seen a similar process (e.g. in an online help forum or in study materials), it can emulate that process with different keywords and phrases.

      The models themselves, however, are not able to perform “a is to b, therefore b is to a”, arguably the cornerstone of symbolic reasoning. That’s in part because they have no state model or true grounding, only the probability of observing a token given some context. So even with chain of thought it is not reasoning; it’s very fancy interpolation over the words and phrases in the initial prompt, producing a prompt that will probably give a better answer, not because of reasoning but because of a stochastic process.
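
      A minimal sketch of what I mean (my own illustration, not anything from the thread above; GPT-2 via Hugging Face transformers is just a stand-in for any autoregressive LM): whether the prompt is direct or chain-of-thought style, the model only ever computes a distribution over the next token given the context, and chain of thought merely changes the context string it conditions on.

      ```python
      import torch
      from transformers import AutoModelForCausalLM, AutoTokenizer

      tokenizer = AutoTokenizer.from_pretrained("gpt2")
      model = AutoModelForCausalLM.from_pretrained("gpt2")
      model.eval()

      def next_token_distribution(context: str, top_k: int = 5):
          """Return the top-k next tokens and their probabilities given the context."""
          inputs = tokenizer(context, return_tensors="pt")
          with torch.no_grad():
              logits = model(**inputs).logits            # shape: (1, seq_len, vocab_size)
          probs = torch.softmax(logits[0, -1], dim=-1)   # p(next token | context)
          top = torch.topk(probs, top_k)
          return [(tokenizer.decode([int(i)]), float(p)) for i, p in zip(top.indices, top.values)]

      # Same computation in both cases; only the conditioning text differs.
      direct = "Q: What is 17 + 25? A:"
      chain_of_thought = "Q: What is 17 + 25? Let's think step by step. 17 + 20 = 37, and 37 + 5 ="

      print(next_token_distribution(direct))
      print(next_token_distribution(chain_of_thought))
      ```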