Over half of all tech industry workers view AI as overrated

  • TrickDacy@lemmy.world · 40 points · 1 year ago

    What always strikes me as weird is how trusting people are of inherently unreliable sources. Like why the fuck does a robot get trust automatically? It’s a fuckin miracle it works in the first place. You double check that robot’s work for years and it’s right every time? Yeah okay maybe then start to trust it. Until then, what reason is there not to be skeptical of everything it says?

    People who Google something and then accept whatever Google pulls out of webpages and puts at the top as fact… confuse me. Like all machines, it fails sometimes. Why would we assume otherwise?

    • frezik@midwest.social · 27 points · edited · 1 year ago

      At least a Google search gets you a reference you can point at. It might be wrong, it might not. Maybe it points to other references that you can verify.

      ChatGPT outright makes shit up and there’s no way to see how it came to a given conclusion.

      • TrickDacy@lemmy.world · 1 point · 1 year ago

        That’s a good point… so long as you follow the links and read more. My girlfriend, for example, often doesn’t.

    • BURN@lemmy.world · 10 points · 1 year ago

      Because the average person hears “AI” and thinks Cortana/Terminator, not a bunch of if statements.

      People are dumb when it comes to things they don’t understand. I’m dumb when it comes to mechanical engineering of any kind, but I’m competent with software. It’s all about where people’s strengths lie, but some people aren’t aware enough to know when they don’t know something.

    • I Cast Fist@programming.dev · 8 points · 1 year ago

      My guess, wholly lacking any scientific rigor, is that humans naturally trust each other. We don’t assume the info someone shares with us is wrong unless there’s “a reason” to doubt it. Chatting with any of these LLM bots feels like talking to a person (most of the time), so there’s usually “no reason” to doubt what it spews.

      If human trust wasn’t so easy to get and abuse, many scams would be much harder to pull.