the-podcast guy recently linked this essay. It's old, but I don't think it's significantly wrong (despite GPT evangelists). Also read Weizenbaum, libs, for the other side of the coin.

  • Frank [he/him]@hexbear.net · 7 months ago

    You can build a computer out of anything that can flip a logic gate, up to and including red crabs. It doesn’t matter if you’re using electricity or chemistry or crabs. That’s why it’s a metaphor. This really all reads as someone arguing with a straw man who literally believes that neurons are logic gates or something. “Actually brains have chemistry” sounds like it’s supposed to be a gotcha when people are out there working on building chemical computers, chemical data storage, chemical automata right now. There’s no dichotomy there, nor does it argue against using computer terminology to discuss brain function. It just suggests a lack of creativity, flexibility, and awareness of the current state of the art in chemistry.
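    To make the substrate-independence point concrete, here's a toy Python sketch (my own illustration, not from the article or the thread): treat NAND as the one primitive the physical medium provides (electricity, chemistry, or crabs), and every other gate falls out of wiring NANDs together.

```python
# Substrate independence, sketched: assume the physical medium (voltage,
# chemistry, crabs) gives us exactly one behavior, NAND. Everything else
# is just wiring.

def nand(a: bool, b: bool) -> bool:
    """The only primitive we assume the substrate provides."""
    return not (a and b)

# Every other Boolean gate built purely from NANDs; the medium never matters.
def not_(a):    return nand(a, a)
def and_(a, b): return not_(nand(a, b))
def or_(a, b):  return nand(not_(a), not_(b))

def xor(a, b):
    n = nand(a, b)
    return nand(nand(a, n), nand(b, n))

# A half adder from NANDs alone: the sum and carry bits of a + b.
def half_adder(a, b):
    return xor(a, b), and_(a, b)

print(half_adder(True, True))  # (False, True): 1 + 1 = binary 10
```

Whether the gate flips via electrons or crab swarms changes nothing above the `nand` line; that's the sense in which "computer" names an organization, not a material.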

    It’s also apparently arguing with people who think ChatGPT and neural nets and LLMs are intelligent and sentient? In which case you should specify that loudly in the first line, so people know you’re arguing with ignorant fools and can skip your article.

    Humans rely on intuition, worldviews, thoughts, beliefs, our conscience. Machines rely on algorithms, which are inherently dumb. Here’s David Berlinski’s definition of an algorithm: “An algorithm is a finite procedure, written in a fixed symbolic vocabulary, governed by precise instructions, moving in discrete steps, 1, 2, 3, . . ., whose execution requires no insight, cleverness, intuition, intelligence, or perspicuity, and that sooner or later comes to an end.”

    And what the hell is this? Jumping up and down and screaming “i have a soul! Consciousness is privileged and special! I’m not a meat automaton, i’m a real boy!” is not mature or productive. This isn’t an argument, it’s a tantrum.

    The deeper we get into this, the more it sounds like dumb guys arguing with dumb guys about reductive models of the mind that dumb guys think other dumb guys rigidly adhere to. Ranting about AI research without specifying whether you’re talking about long-standing research trends or the religious fanatics in California proselytizing about their fictive machine gods isn’t helpful.

    • TraumaDumpling@hexbear.net · edited · 7 months ago

      “an algorithm is a finite set of instructions that can be followed mechanically, with no insight required, in order to give some specific output for a specific input.” Now if we assume that the input and output states are arbitrary and not specified, then the time evolution of any system becomes computing its time-evolution function, with the state at every time t becoming the input for the output state at time (t+1), and hence too broad a definition to be useful. If we want to narrow the usage of the word computers to systems like our laptops, desktops, etc., then we are talking about those systems in which the input and output states are arbitrary (you can make Boolean logic work with either physical voltage high or low as Boolean logic zero, as long as you find suitable physical implementations) but are clearly specified (voltage low = Boolean logic zero, generally, in modern-day electronics), as in the intuitive definition of an algorithm… with the most important part being that those physical states (and their relationship to the computational variables) are specified by us!!! All the systems that we refer to as modern-day computers, and want to restrict our usage of the word computers to, are in fact created by us (or our intelligence, to be more specific), in which we decide what the input and output states are. Take your calculator for example. If you wanted to calculate the sum of 3 and 5 on it, it is your interpretation of the pressing of the 3, 5, + and = buttons as inputs, and of the number that pops up on the LED screen as output, that allows you to interpret the time evolution of the system as a computation, and imbues the computational property to the calculator. Physically, nothing about the electron flow through the calculator circuit makes the system’s evolution computational.
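      The quoted argument can be illustrated with a toy sketch (mine, not from the text): the same physical voltage trace counts as different computations under different encoding conventions, and the convention is supplied by us, not by the physics.

```python
# One physical fact: three wires settle at high or low voltage.
physical_trace = ["high", "low", "high"]

# Convention A: high voltage = logic 1 (the usual choice in modern electronics).
encode_a = {"high": 1, "low": 0}
# Convention B: the inverse mapping, equally valid physically.
encode_b = {"high": 0, "low": 1}

bits_a = [encode_a[v] for v in physical_trace]  # [1, 0, 1]
bits_b = [encode_b[v] for v in physical_trace]  # [0, 1, 0]

# Read each bit string as a binary number.
def as_int(bits):
    return int("".join(map(str, bits)), 2)

# Same electrons, same voltages; "5" under one convention, "2" under the other.
print(as_int(bits_a), as_int(bits_b))  # 5 2
```

Nothing in the wires changed between the two readings; only the interpretation did, which is the point the quote is making about calculators.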

      you literally ignore the actual part of the text that addresses your problems.

      you can use the word ‘tantrum’ while you ignore the literal words used and their meanings if you want, but it only makes you seem illiterate and immature.

      ‘intuition worldviews thoughts beliefs our conscience’ are specific words with specific meanings. no computer (information processing machine) has ‘consciousness’, no computer has ‘intuition’, no computer has internal subjective experience, not even an idealized one with ‘infinite processing power’ like a Turing machine. humans do. therefore humans are not computers. we cannot replicate ‘intuition’ with information processing, we cannot replicate ‘internal subjective experience’ with information processing. we cannot bridge the gap between subjective internal experience and objective external physical processes, not even hypothetically; there is not even a theoretical experiment you could design for it, there is not even theoretical language to describe it without metaphor. We could learn and simulate literally every single specific feature of the brain and it would not tell us about internal subjective experiences, because it is simply not the kind of phenomenon that is understood by the field of information processing. If you have a specific take on the ‘hard problem of consciousness’ that’s fine, but to say that ‘anyone who disagrees with me about this is just stupid’ is immature and ignorant, especially in light of your complete failure to understand things like Turing machines.
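      For reference, a Turing machine is nothing more than a finite rule table applied mechanically to a tape, which is exactly the "no insight required" sense of algorithm above. A minimal sketch (my own illustration, assuming only the standard textbook definition):

```python
# A minimal Turing machine: a rule table mechanically applied to a tape.
# No step involves insight; each step is a pure table lookup.

def run_tm(tape, rules, state="start", blank="_"):
    cells = dict(enumerate(tape))  # tape as a sparse dict of cells
    head = 0
    while state != "halt":
        symbol = cells.get(head, blank)
        write, move, state = rules[(state, symbol)]  # look up the rule
        cells[head] = write                          # write the symbol
        head += 1 if move == "R" else -1             # move the head
    return "".join(cells[i] for i in sorted(cells)).strip(blank)

# Rule table: (state, read) -> (write, move, next_state).
# This machine inverts every bit of its input, then halts on blank.
invert = {
    ("start", "0"): ("1", "R", "start"),
    ("start", "1"): ("0", "R", "start"),
    ("start", "_"): ("_", "R", "halt"),
}

print(run_tm("1011", invert))  # 0100
```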

      I usually like your posts and comments, but this thread has revealed a complete ignorance of the philosophical and theoretical concepts under discussion here and an overzealous hysteria regarding anything that is remotely critical of a mechanistic physicalist reductionist worldview. you literally ignored or glossed over the relevant parts of the text i quoted, misunderstood the basic nature of what a Turing machine is, misunderstood the nature of the brain-as-computer discourse, all with the smuggest redditor energy humanly possible. I will not be further engaging after this post and will block you on my profile, have a nice life.

      • Frank [he/him]@hexbear.net · 7 months ago

        Well, Traumadumpling isn’t going to read this, so I’m just amusing myself.

        we cannot bridge the gap between subjective internal experience and objective external physical processes, not even hypothetically, there is not even a theoretical experiment you could design for it, there is not even theoretical language to describe it without metaphor. We could learn and simulate literally every single specific feature of the brain and it would not tell us about internal subjective experiences, because it is simply not the kind of phenomena that is understood by the field of information processing.

        This is all because subjectivity isn’t falsifiable and is not currently something that the scientific method can interact with. As far as the scientific method is concerned it doesn’t exist. idk why people are even interested in it; I don’t see why it’s important. The answer to “P-zombies” is that it doesn’t matter and isn’t interesting. If something performs all the observable functions of an intelligent mind with a subjective experience… well… it performs all the observable functions of an intelligent mind. Why are you interested in subjectivity if you can’t evaluate whether it’s even happening? You can’t test it, you can’t confirm or deny it. So just put it back in the drawer and move on with your life. It’s not even a question of whether it does or doesn’t exist. It’s that the question isn’t important or interesting. It has no practical consequences at all unless people, for cultural reasons, decide that something that performs the functions of an intelligent mind doesn’t deserve recognition as a person because of their ingrained cultural belief in the existence and importance of a soul.

        I do see this as directly tied to atheism. Part of making the leap to atheism and giving up on magic is admitting that you can’t know, but based on what you can observe the gods aren’t there. No one can find them, no one can talk to them, they never do anything. If there are transcendental magic people it’s not relevant to your life.

        Phenomenology is the same way. It just doesn’t matter, and continuing to carry it around with you is an indication of immaturity, a refusal to let go and accept that some things are unknowable and probably always will be. Hammering on and on that we can’t explain how subjectivity arises from physical processes doesn’t change the facts on the ground: we’ve never observed anything but physical processes, and as such it is reasonable to assume that there is a process by which subjectivity emerges from the physical. Because there’s nothing else. There’s nothing else that could be giving rise to subjectivity. And, again, we don’t know. Maybe there is a magic extradimensional puppeteer. But we don’t know in the same sense that we don’t know that the sun will rise tomorrow. It’s one of the not particularly interesting problems with the theory of science: we assume that things that happened in the past predict things that will happen in the future. We do not, and cannot, know if the sun will rise tomorrow. But as a practical matter it isn’t important. With nothing else to explain the phenomena we observe, we can assume, within the limits in which anything at all is predictable, that subjective experience is an emergent property of the crude, physical, boring, terrifyingly mortal meat.

        More and more, philosophy’s dogged adherence to these ideas strikes me as a refusal to let go, to grow up, to embrace the unpredictable violence of a cold, hostile, meaningless universe. Instead of saying we don’t and cannot know, and therefore it’s not worth worrying about, philosophers cling to this security blanket of belief that we are, somehow, special. That we’re unique and our existence has meaning and purpose. That we’re different from the unthinking matter of stars or cosmic dust.

        mechanistic physicalist reductionist worldview

        https://en.wikipedia.org/wiki/Physicalism

        Like, this is just materialism. Physicalism isn’t a belief, it’s a scientific observation. We haven’t found anything except the physical, and as much as philosophers obsess about subjectivity and qualia and what have you, those concepts, while mildly interesting intellectual topics, aren’t relevant to science. You can’t measure them, you cannot prove whether they exist or not. Maybe someday we’ll have a qualia detector and we’ll actually be able to do something with them, but right now they’re not relevant. I’m a reductionist physicalist mechanist because I’m tired of hearing about ghosts and souls and magic. No question is being raised. There’s no investigation that can proceed from these concepts. You can’t do anything with them except yell at people who think, based on evidence, that physics is the only system that we can observe and investigate. And it’s not “these things don’t exist”; it’s that whether they exist or not, we can’t observe or interact with them, so we can’t do anything with them. You can’t test qualia, you can’t measure them. If we can some day, cool. But until then they’re just… not useful.

        AI is everywhere.

        I didn’t read the article, just commented on the excerpts. And when I do read the article this is the first line? Conflating LLMs and neural nets with AI? Accepting the tech bro marketing buzzword at face value?

        Terms like “neural networks” certainly have not helped and, from Musk to Hawking, some of the greatest minds have propagated this myth.

        Neural networks are called that because they’re modeled on the behavior of neurons, not the other way around. Hawking could be a dork about some things but why put him in the same sentence as an ignorant buffoon like Musk?

        Is what we’re arguing here actually that psychologists and philosophers are yelling at tech bros because they think that neuroscientists using computer metaphors actually believe a seventy-year-old theory of cognition, one that originated in psychology back when psychologists were still mostly criminals and butchers?

        Like saying the brain is a biological organ? That’s not a gotcha when biological computers exist and research teams are encoding gigabytes of data, computer-readable data, 1s and 0s, as DNA. Whatever the brain is, we can build computers out of meat; we’ve done it, it works. There is no distinction between biological and machine, artifact and organ, meat and metal. It’s an illusion of scale combined with, frankly, superstition. A living cell operates according to physical law just like everything else. It has a lot of components, some of them are very small, some of them we don’t understand, and I’m sure there are processes and systems we haven’t identified, but all those pieces and processes and systems follow physical laws the same as everything else in creation. There’s no spooky ghosts or quintessence down there.
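        The DNA-storage point can be made concrete with a toy encoding (my own sketch, not any team's actual scheme; real systems add error correction and avoid long runs of the same base):

```python
# Toy DNA data storage: map ordinary binary data to nucleotides, two bits
# per base. Illustrative only; real encodings are more elaborate.

BITS_TO_BASE = {"00": "A", "01": "C", "10": "G", "11": "T"}
BASE_TO_BITS = {base: bits for bits, base in BITS_TO_BASE.items()}

def encode(data: bytes) -> str:
    """Bytes -> nucleotide string, 4 bases per byte."""
    bits = "".join(f"{byte:08b}" for byte in data)
    return "".join(BITS_TO_BASE[bits[i:i + 2]] for i in range(0, len(bits), 2))

def decode(dna: str) -> bytes:
    """Nucleotide string -> the original bytes."""
    bits = "".join(BASE_TO_BITS[base] for base in dna)
    return bytes(int(bits[i:i + 8], 2) for i in range(0, len(bits), 8))

strand = encode(b"Hi")
print(strand)                  # CAGACGGC
assert decode(strand) == b"Hi" # round-trips losslessly
```

The strand is "machine-readable data" in exactly the sense a flash cell is; only the substrate differs.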

        Like, if the message here is to tell completely ignorant laypeople and tech bros who haven’t read a book that wasn’t about webdev that the brain does not literally have circuitry in it, fine, but say that. But right now we’re very literally bridging the perceived gap between mechanical human artifacts and biology. We’re building biological machines, biological computers. These are not completely different categories of things that can never be unified under a single theory to explain their function.

        Let’s take a step back and look at capitalism as a “real god,” as Marx called it, or “holy shit, capitalism is literally Cthulhu,” which is the formulation many people are independently arriving at these days. Capitalism is a gigantic system that emerges from the interactions of billions of humans. It’s not located in any single human, or any subset of humans. It emerges from all of us, as we interact with each other and the world. There’s no quintessence, no “subjectivity” that we could ever evaluate or interrogate or find. We can’t say whether capitalism has a subjective experience or consciousness, whether there is an “I think therefore I am” drifting across the maddening complexity of financial transactions, commodity fetishism, resource extraction, and cooking dinner.

        The brain has ~86 billion neurons (plus glial matter, I know, I know, bear with me). There are about 8 billion humans, and each of us is vastly more complex than a brain cell. So if humans actually are components in an emergent system that is intelligent and maybe self-aware, there’s only about one order of magnitude fewer humans than there are neurons in a human brain, neurons which, given the lack of any other explanation, we must assume give rise to a thinking mind.
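        The scale comparison, spelled out (both figures are rough, commonly cited estimates, not from the thread):

```python
# Rough scale comparison: neurons in one human brain vs. living humans.
neurons_per_brain = 86e9  # ~86 billion neurons, common estimate
humans = 8e9              # ~8 billion people

ratio = neurons_per_brain / humans
print(ratio)  # 10.75 -> humans are only ~one order of magnitude fewer
```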

        Is it impossible for such a system to have a subjective experience? Is it a serious problem? As it stands we can’t assess whether such subjectivity exists in the system, whether the system has something meaningfully resembling a human mind. The difference in experience is likely so vast as to be utterly unbridgeable. A super-organism existing on a global level would, likely, not be able to communicate with us due to lack of any shared referents or experiences at all. A totally alien being unlike us except that it emerges from the interaction of less complex systems, seeks homeostasis, and reacts to its environment.

        But, like, who cares? Whether capitalism is a dumb system or an emergent intelligence there’s nothing we can do about it. We can’t investigate the question and an answer wouldn’t be useful. So move along. Have your moment of existential horror and then get on with your life.

        I think that’s what really bothers me about this whole subjectivity, qualia, consciousness thing. It’s boring. It’s just… boring. Being stuck on it doesn’t increase my knowledge or understanding. It doesn’t open up new avenues of investigation.

        The conclusion I’m coming to is that this whole argument isn’t about computers or brains or minds, but rather about phenomenology having reached a dead end. It’s a reaction to the discipline’s descent into irrelevance. The “Hard Problem of Consciousness” simply is not a problem.