the-podcast guy recently linked this essay. It's old, but I don't think it's significantly wrong (despite GPT evangelists). Also read Weizenbaum, libs, for the other side of the coin.

  • bumpusoot [any]@hexbear.net
    7 months ago

This essay is ridiculous; it's arguing against a concept that nobody with the minutest understanding of or interest in the brain actually holds. He's arguing that because you cannot find the picture of a dollar bill in any single neuron, the brain must not be storing a "representation" of a dollar bill.

I am the first to argue that the brain is more than just a plain neural network; it's highly diversified and works in ways we don't yet understand, but this is just silly. The brain obviously stores its understanding of a dollar bill in patterns across sets of neurons (much like a neural network). The brain quite obviously has to store a representation of a dollar bill, and we will probably find a way to isolate it in a brain within the next 100 years. It's just that, as in a neural network, information is stored in complex, multi-layered, distributed systems, rather than as in traditional computing, where a specific bit of memory lives at a specific address.
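    To make the distributed-storage point concrete, here's a toy Hopfield-style associative memory. It's purely illustrative (no claim that real neurons work this way): the stored pattern lives smeared across the whole weight matrix, not at any single address, yet it can still be recalled from a corrupted cue.

```python
import numpy as np

def store(pattern):
    """Build a weight matrix from one +/-1 pattern via the outer-product rule."""
    w = np.outer(pattern, pattern).astype(float)
    np.fill_diagonal(w, 0.0)  # no self-connections
    return w

def recall(w, cue, steps=5):
    """Iteratively settle a noisy cue toward the stored pattern."""
    s = cue.copy().astype(float)
    for _ in range(steps):
        s = np.sign(w @ s)
        s[s == 0] = 1  # break ties toward +1
    return s.astype(int)

pattern = np.array([1, -1, 1, 1, -1, -1, 1, -1])
w = store(pattern)

# Corrupt two of the eight bits and recall the original.
cue = pattern.copy()
cue[0] *= -1
cue[3] *= -1
recovered = recall(w, cue)

# No single weight "contains" the pattern: zero out a weight pair and
# recall still succeeds, because the representation is distributed.
w2 = w.copy()
w2[1, 2] = w2[2, 1] = 0.0
recovered_damaged = recall(w2, cue)
```

    There is nowhere in `w` you can point to and say "there's the pattern," which is roughly the situation with the dollar bill and the neurons.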

    The author is half arguing a point absolutely nobody makes, and half arguing that "human brains are super duper special and can never be represented by machinery because magic," which is a very tired philosophical argument. Human brains are amazing and continue to exceed our understanding, but they are just shifting information around in patterns, and that's a simple physical process.

    • Frank [he/him, he/him]@hexbear.net
      7 months ago

      This whole thing is incredibly frustrating. Like, this guy did draw a representation of a dollar bill. It was a shitty representation, but so is a 640x400 image of a Monet. What's the argument being made, even? It's just an empty gotcha. The way that image is stored and retrieved is radically different from how most actual physical computers work, but there is observably an analogous process happening. You point a camera at an object, take a picture, store it to disk, retrieve it, and you get an approximation of the object as perceived by the camera. You show someone the same object, they somehow store a representation of that object somewhere in their meat, and when you ask them to draw it they retrieve that approximation and feed it to their hands to draw the image. I don't get why the guy thinks these things are obviously, axiomatically incomparable.
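      The camera analogy in miniature: any lossy store/retrieve cycle hands back an approximation, not the original, which is exactly the status of both the downsampled Monet and the shitty dollar-bill drawing. (A minimal sketch; 4-bit quantization stands in for whatever lossy storage a camera, disk, or brain uses.)

```python
def store_lossy(samples, levels=16):
    """'Photograph' a signal: quantize 0..1 floats to a few integer levels."""
    return [round(s * (levels - 1)) for s in samples]

def retrieve(stored, levels=16):
    """Reconstruct an approximation of the original from what was stored."""
    return [q / (levels - 1) for q in stored]

original = [0.00, 0.13, 0.52, 0.97]
approx = retrieve(store_lossy(original))

# Close to the original, but not identical: a representation, not the thing.
error = max(abs(a - b) for a, b in zip(original, approx))
```

      The round trip loses detail, yet nobody would say the file on disk holds no representation of the scene.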