• @sunbeam60 · 2 months ago

    I think I know enough about these concepts to know that there isn’t any conclusive proof, observed in output or system state, to establish consensus that human speech output is generated differently to how LLMs generate output. If you have links to any papers that claim otherwise, I’ll be happy to read them.

      • @sunbeam60 · 2 months ago

        I mean I have an opinion too; what I’m seeking is evidence.

        • @rottingleaf@lemmy.world · 2 months ago

          Evidence for what?

          I’ve just skimmed a Google result where the way humans are described as working with language appears, in rough strokes, very similar to how GPT works. It’s just that the human brain does a lot more than language. Hence the comparisons to the Mechanical Turk.
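
          In those same rough strokes, GPT-style generation is just repeated next-token sampling. A minimal sketch of that loop, where the model and tokenizer objects are hypothetical stand-ins rather than a real library API:

              import random

              def generate(model, tokenizer, prompt, max_tokens=50):
                  # Hypothetical sketch: "model" maps the context so far to a
                  # probability for every token in its vocabulary.
                  tokens = tokenizer.encode(prompt)
                  for _ in range(max_tokens):
                      probs = model.next_token_probs(tokens)
                      # Sample one token from that distribution, append it,
                      # and repeat with the now-longer context.
                      next_token = random.choices(range(len(probs)), weights=probs)[0]
                      tokens.append(next_token)
                  return tokenizer.decode(tokens)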

          Also Russell’s teapot.

          • @sunbeam60 · 2 months ago

            I’m not saying humans and LLMs generate language the same way.

            I’m not saying humans and LLMs don’t generate language the same way.

            I’m saying I don’t know and I haven’t seen clear data/evidence/papers/science to lean one way or the other.

            A lot of people seem to believe humans and LLMs don’t generate language the same way. I’m challenging that belief in the absence of data/evidence/papers/science.

              • JackGreenEarth · 2 months ago

                You’re actually incorrect with regard to Russell’s teapot in this instance. The correct approach is to admit to yourself and others that you don’t know. Not to assume a negative because you can’t prove a positive, if you can’t prove the negative either.

                • @rottingleaf@lemmy.world · 2 months ago

                  I know I don’t know, but this is a continuous system, and the probability of it being in one particular state is infinitely small; the probability of it being within a certain range of that state is, ahem, not. Still, with the number of moving parts in LLMs and in human brains, there are most likely quite a few radical differences between the laws describing them.
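
                  To make the “infinitely small” point concrete: for a continuous random variable, the probability of hitting one exact value is zero, while the probability of landing in a small range around it is not. A minimal sketch using a standard normal distribution (the choice of distribution is only an illustration):

                      from math import erf, sqrt

                      def normal_cdf(x):
                          # CDF of the standard normal distribution.
                          return 0.5 * (1.0 + erf(x / sqrt(2.0)))

                      x, eps = 0.5, 0.01
                      point_prob = normal_cdf(x) - normal_cdf(x)              # exactly 0
                      range_prob = normal_cdf(x + eps) - normal_cdf(x - eps)  # ~0.007, small but positive
                      print(point_prob, range_prob)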

                  Why am I incorrect? You can’t disprove that there’s a teapot flying in some particular orbit either. Or you can for one such claim, but not for all of them.

                  What would be the criterion for saying that yes, the human brain works with language in just the same way as LLMs do? What would “same” even mean? Logic exists inside defined constraints in a continuous world.

                  Unless you define what would disprove a claim, you can’t disprove it, and it’s also not a scientific hypothesis. That’s Popper’s falsifiability criterion.