AI Companies And Advocates Are Becoming More Cult-Like: How one writer’s trip to the annual tech conference CES left him with a sinking feeling about the future.

  • Cagi@lemmy.ca · 57 points · 10 months ago

    Tech VCs did the same with blockchain, and the cloud before that. It’s an industry that loves its fads and fashions.

    • SlopppyEngineer@lemmy.world · 29 points · 10 months ago

      Accelerationism is “F U, I’ve got AI” combined with “you’ve got to burn the world down to rebuild it, so let’s start that fire.”

      Singularitarianism is basically the Christian Rapture but with super intelligent AI.

      These ideas have been around for some time in tech circles.

      • bionicjoey@lemmy.ca · 15 points · 10 months ago

        I regularly see people on Lemmy talk about AGI run countries and governments as though it’s only a couple years away. Bruh it still struggles with fingers. You really think that’s where it will be in a couple of years?

        • maynarkh@feddit.nl · 17 points · 10 months ago

          To be fair to the loonies, I’m convinced that ChatGPT, some open source autocorrect, or even a guy with a 24-sided die could run quite a few countries better than the people in charge now.

        • rho50@lemmy.nz · 12 points · 10 months ago (edited)

          Yeah bro but eXpOnEnTiAl ImProVeMeNt bro!

          And haven’t you heard of Roko’s basilisk? Better be careful what you say on the cybernets, lest our AGI/ASI overlords of 2026 take a disliking to your commentary regarding their eventual supremacy!

          Excuse me while I go back to mining Dogecoin until I can buy enough NFTs to make Elon or Sam Altman notice me.

          /s

          • kibiz0r@midwest.social · 3 points · 10 months ago

            Better be careful what you say

            I know it’s not the point, but that always strikes me as so dumb. Wouldn’t a superintelligent being know you were simply hiding your true feelings?

            • afraid_of_zombies@lemmy.world · 2 points · 10 months ago

              What bugs me about it is the same problem Pascal’s wager has. What if there were a later AI that punished you for helping the first one? And a still later one that punished you for not helping the first one? Since the number of invented gods is infinite and their commands are contradictory, no action or inaction promises salvation.

            • rho50@lemmy.nz · 2 points · 10 months ago

              Agreed, and it could definitely make such an assumption. The other aspect that I don’t really get is… if a superintelligent entity were to eventuate, why would it care?

              We’re going to be nothing but bugs to it. It’s not likely to be of any consequence to that entity whether or not I expected or wanted it to exist.

              The anthropomorphising going on with the AI hype is just crazy.

        • afraid_of_zombies@lemmy.world · 6 points · 10 months ago

          According to everything I read, AI is either going to be godlike soon or utterly useless forever. If people could just sit down and stop repeating the endless trope of “the enemy is all-powerful and all-weak at the same time,” I would appreciate it.

          Maybe we can just try to rationally evaluate what is going on and where it is going?

  • AutoTL;DR@lemmings.world (bot) · 5 points · 10 months ago

    This is the best summary I could come up with:
    I was watching a video of a keynote speech at the Consumer Electronics Show for the Rabbit R1, an AI gadget that promises to act as a sort of personal assistant, when a feeling of doom took hold of me.

    Specifically, about a term first defined by psychologist Robert Lifton in his early writing on cult dynamics: “voluntary self-surrender.” This is what happens when people hand over their agency and the power to make decisions about their own lives to a guru.

    At Davos, just days ago, he was much more subdued, saying, “I don’t think anybody agrees anymore what AGI means.” A consummate businessman, Altman is happy to lean into that old-time religion when he wants to gin up buzz in the media, but among his fellow plutocrats, he treats AI like any other profitable technology.

    As I listened to PR people try to sell me on an AI-powered fake vagina, I thought back to Andreessen’s claims that AI will fix car crashes and pandemics and myriad other terrors.

    In an article published by Frontiers in Ecology and Evolution, a research journal, Dr. Andreas Roli and colleagues argue that “AGI is not achievable in the current algorithmic frame of AI research.” One point they make is that intelligent organisms can both want things and improvise, capabilities no model yet extant has generated.

    What we call AI lacks agency, the ability to make dynamic decisions of its own accord, choices that are “not purely reactive, not entirely determined by environmental conditions.” Midjourney can read a prompt and return with art it calculates will fit the criteria.


    The original article contains 3,929 words, the summary contains 266 words. Saved 93%. I’m a bot and I’m open source!

  • deafboy@lemmy.world · 3 points · 10 months ago

    Rabbit could order pizza for you, telling it “the most-ordered option is fine,” leaving his choice of dinner up to the Pizza Hut website.

    I feel like we wouldn’t need a language model as a translation layer between two machines if there were proper APIs everywhere…