the-podcast guy recently linked this essay. It’s old, but I don’t think it’s significantly wrong (despite GPT evangelists). Also read Weizenbaum, libs, for the other side of the coin.

  • Parzivus [any]@hexbear.net
    7 months ago

    We really don’t know enough about the brain to make any sweeping statements about it at all beyond “it’s made of cells” or whatever.
    Also, Dr. Epstein? Unfortunate.

    • Frank [he/him, he/him]@hexbear.net
      7 months ago

We really do, though. Like we really, really do. Not enough to build one from scratch, but my understanding is that we’re starting to be able to read images people are forming in their minds, to locate individual memories within the brain, and to get a grasp on how at least some of the subsystems of the mind function and handle sensory information. Like we are making real progress at a rapid pace.

      • bumpusoot [any]@hexbear.net
        7 months ago

We can’t yet really read images people are thinking of, but we do have a very rough technology that can associate very specific brainwave patterns with specific images, after extensive training on that specific image with that specific individual. Which is still an impressive 1% of the way there.
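For what it’s worth, the kind of per-subject, per-image “training” being described can be caricatured in a few lines of numpy. Everything below is invented for illustration (feature size, noise model, image labels); it’s a toy sketch of pattern-matching decoding, not real neuroscience:

```python
import numpy as np

rng = np.random.default_rng(0)
n_features = 64                      # one made-up "brainwave" feature vector per trial
images = ["face", "house", "lamp"]

# Pretend each image evokes a fixed hidden pattern; each trial is a noisy copy.
true_patterns = {img: rng.normal(size=n_features) for img in images}

def record_trial(img, noise=1.0):
    """Simulate one noisy recorded response to an image."""
    return true_patterns[img] + noise * rng.normal(size=n_features)

# "Extensive training on the individual": average many trials per image.
centroids = {img: np.mean([record_trial(img) for _ in range(50)], axis=0)
             for img in images}

def decode(response):
    """Nearest-centroid decoding: guess whichever *trained* image's average
    pattern is closest. It has no concept of images outside the training set."""
    return min(centroids, key=lambda img: np.linalg.norm(response - centroids[img]))

print(decode(record_trial("house")))
```

The point of the toy: the “decoder” is just template matching against patterns it was trained on for this one subject, which is roughly why it doesn’t generalize to arbitrary mental images.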

      • commiewithoutorgans [he/him, comrade/them]@hexbear.net
        7 months ago

I would love to see some of the studies that you believe show this. I have seen several over the last decade and come to the conclusion that most of them are bunk, or are only able to recognize one brain-signal pattern, and that that pattern is actually indistinguishable from some others (lamp and basket look nothing alike, yet the brain map for lamp also comes back for bus for some reason).

It’s not a useful endeavor in my opinion, and my conclusion is that using computer experience and computer languages as a model of the brain is a pretty shit model. It has more predictive power than psychology, but it’s wildly inaccurate and unable to predict its own inaccuracy. It’s good to push back, because its accuracy is wildly inflated by stembros.

        • dat_math [they/them]@hexbear.net
          7 months ago

          I have seen several over the last decade and come to the conclusion that most of these are bunk or just able to recognize one brain signal pattern

The fMRI ones are probably bunk. That said, if you could manage the heinous act of (cw: body gore) implanting several thousand very small wires throughout someone’s visual cortex, and recording the responses evoked by specific stimuli or by instructions to visualize a given stimulus, you could probably produce low-fidelity reconstructions of their visual perception.

Are you familiar with the crimes of Hubel and Wiesel?
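As a cartoon of why the many-implanted-wires idea could plausibly yield a low-fidelity reconstruction: if each electrode behaved like a noisy linear filter over the stimulus, ridge regression could map responses back to a blurry estimate of the stimulus. All numbers below are simulated and invented; nothing here describes real cortex:

```python
import numpy as np

rng = np.random.default_rng(1)
n_pixels = 16 * 16        # a tiny flattened "stimulus" image
n_electrodes = 2000       # thousands of simulated wires in "visual cortex"
n_calibration = 120       # fewer calibration images than pixels -> blurry decoder

# Pretend each electrode is a noisy random linear filter over the stimulus.
filters = rng.normal(size=(n_electrodes, n_pixels))

def evoked_response(image, noise=0.5):
    return filters @ image + noise * rng.normal(size=n_electrodes)

# Calibration: show random images, record the evoked responses.
train_images = rng.normal(size=(n_calibration, n_pixels))
train_resp = np.stack([evoked_response(img) for img in train_images])

# Dual-form ridge regression mapping responses back to pixels.
lam = 10.0
gram = train_resp @ train_resp.T + lam * np.eye(n_calibration)
W = train_resp.T @ np.linalg.solve(gram, train_images)   # (n_electrodes, n_pixels)

# Reconstruct a held-out stimulus from its evoked response.
target = rng.normal(size=n_pixels)
estimate = evoked_response(target) @ W
corr = np.corrcoef(target, estimate)[0, 1]
print(f"reconstruction correlation: {corr:.2f}")
```

With fewer calibration images than pixels, the decoder can only recover the component of the stimulus lying in the calibrated subspace: correlated with the truth, but nowhere near a faithful copy, which is the “low fidelity” part.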

          • I am not, and I will look it up in a minute.

But my point is that such a low-fidelity reconstruction, when interpreted through the model of modern computing methods, lacks the accuracy for any application AND, crucially, has absolutely no way to account for and understand its limitations in relation to the intended applications. That last part is more a philosophy-of-science argument than one about some percentage accuracy: the model has no way to understand its limitations because we don’t have any idea what those are, and discussion of this is limited, to my knowledge, leaving no ceiling on the interpretations and implications.

I think a big difference in positions in this thread, though, is between those talking about how the best neuroscientists in the world think about this, and those who are more technologists, who never reached that level and want to Frankenstein their way to tech-bro godhood. I’m sure the top neuros get this, and are constantly trying to find new and better models. But their publications don’t make the covers of science journals.