Avram Piltch is the editor in chief of Tom’s Hardware, and he’s written a thoroughly researched article breaking down the promises and failures of LLM AIs.

  • DarkenLM
    12 • 10 months ago

    Machines don’t learn like humans yet.

    Our brains are a giant electrical/chemical system that somehow creates consciousness. We might be able to create that in a computer. And the day it happens, then what will be the difference between a human and a true AI?

    • @CanadaPlus@lemmy.sdf.org
      3 • 10 months ago

      If you read the article, there are “experts” saying that human comprehension is fundamentally computationally intractable, which is basically a religious standpoint. Like, ChatGPT isn’t intelligent yet, partly because it doesn’t really have long-term memory, but yeah, there’s overwhelming evidence the brain is a machine like any other.

      • @barsoap@lemm.ee
        2 • 10 months ago

        fundamentally computationally intractable

        …using current AI architecture, and the insight isn’t new, it’s maths. This is currently the best idea we have about the subject. Trigger warning: cybernetics, and lots of it.

        Meanwhile, yes, of course brains are machines like any other; claiming otherwise is claiming you can compute incomputable functions, which is a physical and logical impossibility. And it’s fucking annoying to talk about this topic with people who don’t understand computability. It usually turns into a shouting match of “you’re claiming the existence of something like a soul, some metaphysical origin of the human mind” vs. “no I’m not” vs. “yes you are but you don’t understand why”.

        • @CanadaPlus@lemmy.sdf.org
          0 • 10 months ago

          …using current AI architecture, and the insight isn’t new, it’s maths.

          That is not what van Rooij et al. said, and they’re the ones cited in here. They published their essay here, which I haven’t really read, but it appears to make an argument about any possible computer. They’re psychologists and I don’t see any LaTeX in there, so they must be missing something.

          Unfortunately I can’t open your link, although it sounds interesting. A feedforward network can approximate any computable function if it gets to be arbitrarily large, but depending on how you want to feed an agent inputs from its environment and read off its actions, a single function might not be enough.
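          To make the approximation point concrete, here’s a toy sketch (my own illustration, not from the article or the thread): a fixed one-hidden-layer ReLU network that computes XOR, a function no single linear layer can represent. The weights are set by hand rather than learned; the point is only the base case of the expressiveness argument.

```python
# Toy illustration (hand-set weights, not trained): a one-hidden-layer
# ReLU network computing XOR, which no single linear layer can represent.

def relu(v):
    return max(0.0, v)

def xor_net(x1, x2):
    # Hidden units: h1 fires when at least one input is on,
    # h2 fires only when both inputs are on.
    h1 = relu(1.0 * x1 + 1.0 * x2)
    h2 = relu(1.0 * x1 + 1.0 * x2 - 1.0)
    # Subtract the "both on" case twice: output is 1 iff exactly one input is 1.
    return h1 - 2.0 * h2

for a in (0, 1):
    for b in (0, 1):
        print((a, b), xor_net(a, b))
```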

          • @barsoap@lemm.ee
            0 • 10 months ago

            They’re psychologists and I don’t see any LaTeX in there,

            Oh no, that’s LaTeX alright. I can tell by everything from the font to the line breaking; some of it is hard to imitate with an office suite, the rest is impossible. But I’ll totally roll with dunking on psychologists :)

            In this paper, we undercut these views and claims by presenting a mathematical proof of inherent intractability (formally, NP-hardness) of the task that these AI engineers set themselves

            Yeah, I don’t buy it. If human cognition were inherently NP-hard we’d have brains the size of suns. OTOH it might be “close to NP” in the same sense as the travelling salesman problem: it’s NP-hard in general, but for metric instances it’s quite feasible to get answers guaranteed to be no more than X% worse (with user choice of X) than the actual shortest path, which is good enough in practice. We do, after all, have to operate largely in real time; there’s no time to be perfect when a sabre-toothed tiger is trying to eat you.
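            As a concrete sketch of that trade-off (my own illustration, not from the cited paper): the classic double-tree heuristic for metric TSP runs in polynomial time and returns a tour guaranteed to be at most 2× the optimum, traded against exactness.

```python
# Double-tree 2-approximation for metric TSP: build a minimum spanning
# tree, then shortcut a preorder walk of it into a tour. For metric
# distances the tour is provably at most 2x the optimal length.

import itertools
import math

def double_tree_tour(points):
    """Return a tour (list of indices) at most 2x optimal for Euclidean points."""
    n = len(points)
    dist = lambda i, j: math.dist(points[i], points[j])

    # Prim's algorithm for a minimum spanning tree rooted at vertex 0.
    in_tree, parent = {0}, {}
    best = {j: (dist(0, j), 0) for j in range(1, n)}
    while len(in_tree) < n:
        j = min(best, key=lambda k: best[k][0])
        parent[j] = best[j][1]
        in_tree.add(j)
        del best[j]
        for k in best:
            d = dist(j, k)
            if d < best[k][0]:
                best[k] = (d, j)

    # Preorder walk of the MST = shortcutting the doubled tree.
    children = {i: [] for i in range(n)}
    for j, p in parent.items():
        children[p].append(j)
    tour, stack = [], [0]
    while stack:
        v = stack.pop()
        tour.append(v)
        stack.extend(reversed(children[v]))
    return tour

def tour_length(points, tour):
    return sum(math.dist(points[tour[i]], points[tour[(i + 1) % len(tour)]])
               for i in range(len(tour)))

points = [(0, 0), (0, 2), (3, 1), (2, 4), (5, 0)]
tour = double_tree_tour(points)
approx = tour_length(points, tour)
# Brute-force optimum for comparison (only feasible for tiny n).
opt = min(tour_length(points, (0,) + p)
          for p in itertools.permutations(range(1, len(points))))
print(f"approx={approx:.2f} opt={opt:.2f} ratio={approx / opt:.2f}")
```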

            Or think about SAT solvers: they can solve large classes of problems ridiculously fast even though the problem is, in its full generality, NP-complete. And the class they’re fast on is so large that people very much do treat solving SAT as tractable: because it usually is. Maybe that’s why we get headaches from hard problems.
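            A minimal illustration of that point (my own sketch, nothing like a production solver): even a naive DPLL procedure with unit propagation dispatches small structured instances instantly, despite SAT being NP-complete in general.

```python
# Minimal DPLL SAT solver: unit propagation plus branching.
# Clauses are lists of nonzero ints; a negative int is a negated literal.

def dpll(clauses, assignment=None):
    """Return a satisfying {var: bool} dict, or None if unsatisfiable."""
    assignment = dict(assignment or {})
    changed = True
    while changed:  # unit propagation to a fixed point
        changed = False
        simplified = []
        for clause in clauses:
            lits, satisfied = [], False
            for lit in clause:
                val = assignment.get(abs(lit))
                if val is None:
                    lits.append(lit)
                elif (lit > 0) == val:
                    satisfied = True
                    break
            if satisfied:
                continue
            if not lits:
                return None          # empty clause: conflict
            if len(lits) == 1:       # unit clause: forced assignment
                assignment[abs(lits[0])] = lits[0] > 0
                changed = True
            simplified.append(lits)
        clauses = simplified
    if not clauses:
        return assignment
    # Branch on the first unassigned variable.
    var = abs(clauses[0][0])
    for guess in (True, False):
        result = dpll(clauses, {**assignment, var: guess})
        if result is not None:
            return result
    return None

# (x1 or x2) and (not x1 or x3) and (not x2 or not x3) and (x1 or not x3)
clauses = [[1, 2], [-1, 3], [-2, -3], [1, -3]]
model = dpll(clauses)
print(model)
```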

            Unfortunately I can’t open your link, although it sounds interesting.

            Then let me throw citations at you. The first is for the underlying theory characterising the necessary cybernetic characteristics of human minds; the second applies it to current approaches to AI. This comes out of German publicly funded basic research (Max Planck / FIAS):

            Nikolić, Danko. “Practopoiesis: Or how life fosters a mind.” Journal of Theoretical Biology 373 (2015): 40-61.
            Nikolić, Danko. “Why deep neural nets cannot ever match biological intelligence and what to do about it?” International Journal of Automation and Computing 14.5 (2017): 532-541.