• bradd@lemmy.world · 19 days ago

    I guess it depends on your models and toolchain. I don’t have this issue, but I have seen it for sure in the past with smaller models, no tools, and legal code.

    • Nalivai@lemmy.world · 8 days ago

      You do have this issue; you can’t not have this issue. Your LLM, no matter how big the model is and how much tooling you use, has no criterion for truth. The fact that you’ve made this invisible to yourself is worse, so much worse.

      • bradd@lemmy.world · 8 days ago

        If I put text into a box and out comes something useful, I couldn’t give a shit whether it has a criterion for truth. LLMs are a tool, like a mannequin: you can put clothes on it without thinking it’s a person, but you don’t seem to understand that.

        I work in IT. I can write a bash script to set up a server, or pivot to an LLM and ask for a Dockerfile that does the same thing, and it gets me very close. Sure, I need to read over it and make changes, but that’s just how it works in the tech world. You take something that someone wrote, read over it, and adapt it to fit your use case. Sometimes you find that real people make really stupid mistakes, sometimes college-educated people write trash software, and that’s a waste of time to look at and adapt… much like working with an LLM. No matter what you’re doing, buddy, you still have to use your brain.
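
        As a rough sketch of what I mean (the nginx image and the ./site path are just placeholders for whatever the server actually runs), the kind of Dockerfile that comes back looks something like this, and still gets a human read-through before it goes anywhere:

        ```
        # Hypothetical example of an LLM-drafted Dockerfile for a basic web server;
        # review and adapt before using, same as any code you didn't write yourself.
        FROM nginx:alpine

        # Copy a static site into nginx's default document root
        COPY ./site /usr/share/nginx/html

        # Expose the standard HTTP port
        EXPOSE 80

        # Run nginx in the foreground so the container keeps running
        CMD ["nginx", "-g", "daemon off;"]
        ```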