Tech CEOs want us to believe that generative AI will benefit humanity. They are kidding themselves

  • Spzi@lemm.ee
    2 years ago

The article complains that using the word “hallucinations” is …

    feeding the sector’s most cherished mythology: that by building these large language models, and training them on everything that we humans have written, said and represented visually, they are in the process of birthing an animate intelligence on the cusp of sparking an evolutionary leap for our species.

Whether that is true or not depends on whether we eventually create human-level (or beyond) machine intelligence. No one can read the future. Personally, I think it’s just a matter of time, but there are good arguments on both sides.

I find the term “hallucinations” fitting, because it conveys to uneducated people that a claim by ChatGPT should not be trusted, even if it sounds compelling. The article suggests “algorithmic junk” or “glitches” instead. I believe naive users would refuse to accept an output as junk or a glitch: those terms suggest something is broken, although the output still seems sound. “Hallucinations” is a pretty good term for that job, and it’s also already established.

The article instead suggests that the creators are the ones hallucinating, in their predictions of how useful the tools will be. Again, no one can read the future, so maybe. But mostly: it could be both.


    Reading the rest of the article required a considerable amount of goodwill on my part. It’s a bit too polemical for my liking, but I can mostly agree with the challenges and injustices it sees forthcoming.

I mostly agree with #1, #2 and #3. #4 is particularly interesting and funny, as I think it describes Embrace, Extend, Extinguish.


I believe AI could help us create a better world (in the large scope of the article), but I’m afraid it won’t. The tech is so expensive to develop that the most advanced models will come from people who already sit atop the pyramid; it will foremost multiply their power, which they can use to deepen the moat.

On the other hand, we haven’t found a solution to the alignment and control problems, and we aren’t certain we will. It seems very likely we will continue to empower these tools without a plan for what to do when a model actually shows near-human or even super-human capabilities, yet can already copy, back up, debug and enhance itself.

The challenges to the economy and society along the way are profound, but I’m afraid they pale in comparison to the endgame.