• tornavish@lemmy.cafe
    14 hours ago

    There have also been a number of studies saying that if a person knows how to use an LLM and provides it with a good prompt, it can give them something they can use.

    The biggest issue that I’ve seen with LLMs is that nobody knows how to write a prompt. Nobody knows how to really use it to their benefit. There is absolutely a benefit to someone who is proficient in writing. Just like there is absolutely a benefit to someone who is proficient in writing code.

    I’m guessing you belong in the category that cannot write a good prompt?

    • 7toed@midwest.social
      13 hours ago

      No, I’ve done my actual work while people convinced they have “good prompts” weighed my whole team down (and promptly got laid off). We’ve burnt enough OpenAI tokens and probed models on our own hardware to ascertain their utility in my field. Manual automation with simple systems and hard logic is what the industry has run on, and certainly will continue to.

      Explain what makes a prompt good. As long as you’re using any provided model and not a sandbox, you’re stuck with their initiating prompt. Change that, and you still have their parameters. Run an OS model with your own parameter tunings, and you’re still limited by your temperature. What is a good temperature for rigid logic that doesn’t result in unexpected behavior but can adapt to user input well enough? These are questions every AI corp is dealing with.
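      For anyone unfamiliar with why temperature matters here, a minimal sketch of temperature-scaled softmax sampling (pure Python, hypothetical logits — not any vendor’s API) shows the trade-off: temperature near 0 collapses to deterministic argmax, higher values flatten the distribution and invite unexpected outputs.

```python
import math
import random

def sample(logits, temperature):
    """Pick a token index from raw scores, scaled by temperature."""
    if temperature == 0:
        # Greedy decoding: always the highest-scoring token (deterministic).
        return max(range(len(logits)), key=lambda i: logits[i])
    scaled = [score / temperature for score in logits]
    # Numerically stable softmax over the scaled scores.
    peak = max(scaled)
    exps = [math.exp(s - peak) for s in scaled]
    total = sum(exps)
    probs = [e / total for e in exps]
    # Higher temperature -> flatter probs -> more randomness.
    return random.choices(range(len(logits)), weights=probs)[0]

logits = [2.0, 1.0, 0.5]  # hypothetical next-token scores
print(sample(logits, 0))  # greedy: always index 0
```

      At temperature 0 (or very close to it) the output is effectively fixed, which is what rigid logic wants; but then the model can’t adapt its phrasing to varied user input, which is the tension the comment above describes.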

      For context, all we were trying to do was implement some copilot/gpt shit into our PMS to handle customer queries, data entry, scheduling information and notifications, and some other open-ended suggestions. C-suite was giddy, IT not so much, but my team was told to keep an open mind and see what we could achieve… so we evaluated it. As of about 6 months ago the Cs finally stopped bugging us since they had bigger fires to put out, and we had worked out a Power Automate routine (without the help of Copilot… it’s unfunnily useless even though it’s implemented right into PA), making essentially all the effort put into working the AI from an LLM into an “agentic model” completely moot, despite the tools the company bought into and everything.

      I’m guessing you belong in the category that hasn’t actually worked at a facility where part of your job is to deploy things like AI, but you like to have an affirmative stance anyway.

      • tornavish@lemmy.cafe
        13 hours ago

        Yawn. Let’s do this, it’s even better: You tell me a task that you need to accomplish. Then you tell me the prompt you would give an LLM to accomplish that task.

        • 7toed@midwest.social
          11 hours ago

          Clearly heavy LLM usage inhibits reading comprehension; I stated the use case my employer wanted to implement. Sorry normal people aren’t as dogmatic as your AI friends lmao

          • tornavish@lemmy.cafe
            11 hours ago

            Give me an example and the exact prompt. My reading is very good. You are refusing to do it.