• Lvxferre@mander.xyz
    10 points · edited 3 months ago

    You could use them to know what the text is about, and whether it’s worth your reading time. In this situation it’s fine if the AI makes shit up, as you aren’t reading its output for the information itself anyway; the distinction between a summary and a shortened version becomes moot.

    However, here’s the catch: if the text is long enough to warrant the question “should I spend my time reading this?”, it should contain an introduction for that very purpose. In other words, if the text is well-written, you don’t need this sort of “Gemini/ChatGPT, tell me what this text is about” in the first place.

    EDIT: I’m not addressing documents in this. My bad, I know. [In my defence, I’m reading shit on a screen the size of an ant.]

    • queermunist she/her@lemmy.ml
      21 points · edited 3 months ago

      ChatGPT gives you a bad summary full of hallucinations and, as a result, you choose not to read the text based on that summary.

      • Lvxferre@mander.xyz
        2 points · 3 months ago

        (For clarity I’ll re-emphasise that my top comment is the result of overlooking the word “documents”, so I’m speaking on general grounds about AI “summaries”, not just about AI “summaries” of documents.)

        The key here is that the LLM is likely to hallucinate the claims of the text being shortened, but not its topic. So provided that you care about the latter but not the former, it’s good enough to decide whether you’re going to read the whole thing.

        And that is useful in a few situations. For example: you have a metaphorical pile of a hundred or so scientific papers, and you only need the ones about a specific topic (like “Indo-European urheimat” or “Argiope spiders” or “banana bonds”).
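
        To make that triage idea concrete: below is a minimal sketch, assuming the OpenAI Python client and abstracts already extracted to plain text. The helper name, prompt, and model choice are all mine and purely illustrative; any chat-capable LLM would do the same job. Note that it asks only about the topic, and deliberately ignores whatever the model might say about the claims.

            # triage_sketch.py: keep or discard papers by topic, not by claims.
            # Assumes `pip install openai` and OPENAI_API_KEY in the environment;
            # the model name is illustrative, not a recommendation.
            from openai import OpenAI

            client = OpenAI()

            def is_on_topic(abstract: str, topic: str) -> bool:
                # Ask a bare yes/no question about the topic; we never rely on
                # the model's restatement of the paper's actual claims.
                reply = client.chat.completions.create(
                    model="gpt-4o-mini",
                    messages=[{
                        "role": "user",
                        "content": (
                            f"Is this abstract about {topic}? "
                            f"Answer YES or NO only.\n\n{abstract}"
                        ),
                    }],
                )
                answer = reply.choices[0].message.content.strip().upper()
                return answer.startswith("YES")

            # Hypothetical pile of abstracts, e.g. read from files beforehand.
            papers = {"smith2021.txt": "…", "jones2019.txt": "…"}
            to_read = [name for name, text in papers.items()
                       if is_on_topic(text, "the Indo-European urheimat")]
            print(to_read)  # whatever survives the filter still gets read in full

        Even when the model misfiles an abstract or two, the damage is bounded: you skim a paper needlessly, or you catch a missed one later through its citations. It’s only when you trust the model’s version of the claims that the hallucinations start to bite.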

        That brings us back to the OP. The issue with using AI summaries for documents is that you typically already know the topic at hand; what you want is the content. That’s bad, because then the hallucinations won’t be “harmless”.

        • queermunist she/her@lemmy.ml
          14 points · 3 months ago

          But the claims of the text are often why you read it in the first place! If you have a hundred scientific papers, you’re going to read the ones that make claims either supporting or contradicting your research.

          You might as well just skim the titles and guess.

          • Lvxferre@mander.xyz
            2 points · 3 months ago

            But the claims of the text are often why you read it in the first place!

            By “not caring about the former” [the claims], I mean in the LLM output, because you know the LLM will fuck them up. But it’ll still represent the topic of the text somewhat accurately, and you can use that to your advantage.

            You might as well just skim the titles and guess.

            Nirvana fallacy.

            • self@awful.systems
              17 points · 3 months ago

              not reading the fucking sidebar and thinking this is high school debate club fallacy

              • Lvxferre@mander.xyz
                1 point · 3 months ago

                not reading the fucking sidebar

                Yeah, I get that this is a place to vent. And I get why people vent about this. LLMs and other A“I” systems (with quotation marks because this shite is not intelligent!) are being shoved every bloody where, regardless of actual usefulness, safety, or user desire. Telling you to put glue on your pizza, to eat poisonous mushrooms, that “cherish” has five letters, that Latin had no [w], that the Chinese are inferior to Westerners.

                While a crowd of irrationals tells you “it is intelligent, you can’t prove otherwise! CHRUST IT YOU DIRTY SCEPTIC/INFIDEL/LUDDITE REEEE! LALALA I’M PRETENDING TO NOT SEE THE HALLUCINATION LALALA”.

                I also get the privacy nightmare that this shit is. And the whole deal behind “we’re using your content as training data, and then selling the result back to you”. Or that it’s eating electricity like there’s no tomorrow, on a planet where global warming is a present issue.

                I get it. I get it all. That’s why I’m here. And if you (or anyone else) think that I’m here for any other reason, by all means, check my profile - you’ll find plenty of criticism of those stupid corporate AI takes from vulture capital. (And plenty of instances of me calling HN “Redditors LARPing as Hax0rz”.)

                However. Pretending that there’s no use case ever for LLMs is the wrong way to go.

                and thinking this is high school debate club fallacy

                If calling it “nirvana fallacy” rubs you the wrong way, here’s an alternative: “this argument is fucking stupid, in a very specific way: it pretends that either something is perfect or it’s useless, with no middle ground.”

                The other user, however, does not deserve the unnecessary abrasiveness, so I’ll keep simply calling it “nirvana fallacy”.

                • self@awful.systems
                  10 points · 3 months ago

                  holy shit, imagine getting a second chance to not be a fucking debatelord and doubling down this hard

                  off you fuck

                • froztbyte@awful.systems
                  7 points · edited 3 months ago

                  this argument

                  I agree, you’re quite right, and I thank you for taking the time and putting in the effort on such a wonderfully thorough portrayal of why your argument is total horseshit

            • queermunist she/her@lemmy.ml
              13 points · 3 months ago

              Unless it doesn’t accurately represent the topic, which happens, and then a researcher chooses not to read the text based on the chatbot’s summary.

              Nirvana fallacy.

              All these chatbots do is guess. I’m just saying a researcher might as well cut out the hallucinating middleman.

    • David Gerard@awful.systems (OP, mod)
      20 points · 3 months ago

      Both of the use cases here are government documents. I’m baffled at the idea of it being “fine if the AI makes shit up”.

    • V0ldek@awful.systems
      6 points · 3 months ago

      if the text is well-written you don’t need this sort of “Gemini/ChatGPT, tell me what this text is about” in the first place.

      And if it’s badly written then the LLM will shit itself.

      Now let’s ask ourselves: how much of the text in the world is “well-written”?

      Or even better, you could apply this to Copilot. How much code in the world is good code? The answer is fucking none, mate.

      • Lvxferre@mander.xyz
        3 points · 3 months ago

        No, it’s just rambling. My bad.

        I focused too much on using AI to summarise in general, and ended up not talking about it summarising documents, even though the text is about the latter.

        And… well, the latter is such a dumb idea that I don’t feel like telling people “the text is right, don’t do that”; it’s obvious.