[deleted by user]

    • skisnow@lemmy.ca · 3 months ago

      The hash proves which bytes the answer was grounded in, should I ever want to check it. If the model misreads or misinterprets, you can point to the source and say “the mistake is here, not in my memory of what the source was.”
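      A minimal sketch of the idea the quoted comment describes — hashing the exact source bytes alongside the answer so the grounding can be checked later. The function names and record layout here are hypothetical, assuming SHA-256 over the raw bytes:

      ```python
      import hashlib

      def ground(source_bytes: bytes, answer: str) -> dict:
          # Record which exact bytes the answer was grounded in.
          return {
              "answer": answer,
              "source_sha256": hashlib.sha256(source_bytes).hexdigest(),
          }

      def verify(record: dict, source_bytes: bytes) -> bool:
          # Later check: do these bytes match what the record claims?
          return hashlib.sha256(source_bytes).hexdigest() == record["source_sha256"]

      doc = b"The quick brown fox."
      rec = ground(doc, "A fox is mentioned.")
      print(verify(rec, doc))          # True: same bytes as when grounded
      print(verify(rec, b"tampered"))  # False: source bytes differ
      ```

      Note the hash only pins down *which* bytes were used — as the reply below points out, it says nothing about whether the model read them correctly.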

      Eh. This reads very much like your headline is massively over-promising clickbait. If your fix for an LLM bullshitting is that you have to check all its sources, then you haven’t fixed LLM bullshitting.

      If it does that more than twice, straight in the bin. I have zero chill any more.

      That’s… not how any of this works…