James Cameron on AI: “I warned you guys in 1984 and you didn’t listen”

  • stooovie@lemmy.world
    3 upvotes · 1 year ago

    With a crucial difference - the inventors of all those knew how their inventions worked. The creators of current AIs do NOT know the actual mechanism by which they work. Hence, the output is unpredictable.

    • drekly@lemmy.world
      6 upvotes · 1 year ago

      Lol could you provide a source where the people behind these LLMs say they don’t know how it works?

      Did they program it with their eyes closed?

      • stooovie@lemmy.world
        3 upvotes · 1 year ago (edited)

        Yes I can. example

        As opposed to other technology, nobody knows the internal structure. The same input A does not necessarily produce the same output B.

        Whether you like it or not is irrelevant.

        • drekly@lemmy.world
          2 upvotes · 1 year ago

          “Whether you like it or not is irrelevant.”

          That’s a very hostile take.

          I just think it’s wild they wouldn’t know how it works when they’re the ones who created it. How do you program something that you don’t understand?! It’s crazy.

          • stooovie@lemmy.world
            2 upvotes · 1 year ago

            It is, sorry. It was a reaction to the downvotes. But at this point I’m a bit allergic to the “it’s the same as every other invention” argument. It’s not, precisely for this reason. It’s a bit like “the climate is always changing” - yes, but not within decades or centuries. These details are crucial.

          • BURN@lemmy.world
            2 upvotes · 1 year ago

            Basically, with neural networks you program the way the model ingests data and the way it outputs data. Everything in between is a mass of statistical parameters that are constantly updated during training. Developers can inspect those parameters, but it’s extremely hard to map them back into human-readable content.
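
            To make that concrete, here is a toy sketch (made-up code, not taken from any real model): the parts a developer actually writes are the input and output plumbing, while everything in between is just arrays of learned numbers.

            ```python
            # Toy sketch of the point above (hypothetical code, not any real LLM):
            # the ingestion and output steps are ordinary hand-written code, but the
            # "knowledge" lives in weight matrices that are just numbers.
            import numpy as np

            rng = np.random.default_rng(0)
            vocab = list("abcde")

            # Programmed by hand: how data comes in.
            def ingest(text):
                x = np.zeros(len(vocab))
                for ch in text:
                    if ch in vocab:
                        x[vocab.index(ch)] = 1.0
                return x

            # Programmed by hand: how a result goes out.
            def emit(logits):
                probs = np.exp(logits) / np.exp(logits).sum()
                return vocab[int(np.argmax(probs))]

            # NOT hand-written: in a real model these values come from training.
            # Random placeholders here; in an LLM there are billions of them, and
            # no one can read facts or rules directly off the numbers.
            W1 = rng.normal(size=(len(vocab), 8))
            W2 = rng.normal(size=(8, len(vocab)))

            def forward(text):
                h = np.maximum(ingest(text) @ W1, 0.0)  # opaque intermediate state
                return emit(h @ W2)

            print(forward("abc"))
            ```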

      • vrighter@discuss.tchncs.de
        2 upvotes · 1 year ago

        They program it to learn. They can tell you exactly how it learns, but not what it has learned (there are some interpretability techniques that give small insights, but nothing close to the full picture).

        Problem is, how it behaves depends both on how it was programmed and on what it learned during training. Since what it learned is a black box, we cannot explain its behaviour.
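
        To illustrate (a toy sketch, not any real system’s code): the learning rule below is plain, inspectable code, but what the model ends up “knowing” is only the final numbers in `w`.

        ```python
        # Toy sketch: "how it learns" is explicit code (gradient descent),
        # "what it learned" is just the numbers that come out the other end.
        import numpy as np

        rng = np.random.default_rng(1)

        # Made-up training data: 3 input features, one target value per example.
        X = rng.normal(size=(100, 3))
        y = X @ np.array([2.0, -1.0, 0.5]) + 0.1 * rng.normal(size=100)

        w = np.zeros(3)   # parameters start blank
        lr = 0.01         # learning rate: part of the programmed rule

        for step in range(500):
            pred = X @ w
            grad = X.T @ (pred - y) / len(y)  # the update rule we wrote...
            w -= lr * grad                    # ...applied mechanically

        # The "black box" is everything encoded in w. Three numbers are still
        # readable; billions of them in a deep network are not.
        print(w)
        ```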