Avram Piltch is the editor in chief of Tom’s Hardware, and he’s written a thoroughly researched article breaking down the promises and failures of LLM AIs.

  • @nyan@lemmy.cafe · 15 points · 10 months ago

    Let’s be clear on where the responsibility belongs here. LLMs are neither alive nor sapient. They themselves have no more “rights” than a toaster. The question is whether the humans training the AIs have the right to feed them such-and-such data.

    The real problem is the way these systems are being anthropomorphized. Keep your attention firmly on the man behind the curtain.

    • Storksforlegs · 7 points · 10 months ago

      Yes, these are the same people who are charging a fee to use their AI and profiting from it. Placing the blame and the discussion on the AI itself conveniently overlooks a lot here.

      • @nyan@lemmy.cafe · 2 points · 10 months ago

        One could equally claim that the toaster was ahead, because it does something useful in the physical world. Hmm. Is a robot dog more alive than a Tamagotchi?

        • @abhibeckert@beehaw.org · 1 point · 10 months ago (edited)

          There are a lot of subjects where ChatGPT knows more than I do.

          Does it know more than someone who has studied that subject their whole life? Of course not. But those people aren’t available to talk to me on a whim. ChatGPT is available, and it’s really useful. Far more useful than a toaster.

          As long as you only use it for things where a mistake won’t be a problem, it’s a great tool. You can even use it for “risky” decisions, provided you take the information it gives you to an expert for verification before acting.

            • @nyan@lemmy.cafe · 3 points · 10 months ago

            Sorry to break it to you, but it doesn’t “know” anything except what text is most likely to come after the text you just typed. It’s an autocomplete. A very sophisticated one, granted, but it has no notion of “fact” and no real understanding of the content of what it’s saying.
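
            To make the point concrete, here’s a deliberately dumb toy sketch of the same idea (my own illustration, not how any real model is implemented; real LLMs use neural networks over subword tokens, but the training objective is the same “predict what comes next”):

            ```python
            # Toy "autocomplete": a bigram model that picks the most likely next
            # word based only on counts from its training text. No facts, no
            # understanding -- just "what usually follows what".
            from collections import Counter, defaultdict

            training_text = (
                "the toaster is useful the toaster is warm "
                "the dog is alive the dog is loyal"
            )

            # Count how often each word follows each other word.
            counts = defaultdict(Counter)
            words = training_text.split()
            for current, nxt in zip(words, words[1:]):
                counts[current][nxt] += 1

            def autocomplete(prompt: str, length: int = 5) -> str:
                """Repeatedly append the statistically most likely next word."""
                out = prompt.split()
                for _ in range(length):
                    followers = counts.get(out[-1])
                    if not followers:
                        break  # never saw this word during "training"
                    out.append(followers.most_common(1)[0][0])
                return " ".join(out)

            print(autocomplete("the toaster"))  # -> "the toaster is useful the toaster is"
            ```

            It produces fluent-looking continuations without any notion of whether they’re true. Scale that up by a few billion parameters and you get something far more convincing, but the relationship to “fact” is no different.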

            Saying that it knows what it’s spouting back to you is exactly what I warned against up above: anthropomorphization. People did this with ELIZA too, and it’s even more dangerous now than it was then.