• luciole (they/them)@beehaw.org
    18 hours ago

    It’s hard having two decades of experience in a domain I suddenly find myself at odds with. Reading about others having the same qualms reassures me that I’m not going crazy. On the other hand, I feel drawn further into an untenable, contradictory position.

    Once in a while I give in. It’s typically when I’m faced with a non-trivial problem that I realize will take me days of learning before I have any chance of tackling it. My colleagues start suggesting it or share some slop to “help out”. So I think: fuck it, I’ll study later, for now AI will solve it, I need this ticket closed ASAP. I fire up a “decent” paid model and I start feeding it context. Every time it’s a nightmare. Hours of trying stuff that doesn’t stick, of questioning, of arguing with a chatbot, of wading through “here are the facts” and “good catch” and “I owe you an apology”. It’s not a shortcut, it’s a fucking dead end. Then the bitter aftertaste can only be cleansed with cold, hard, time-consuming actual learning.

    • resipsaloquitur@lemmy.cafe
      15 hours ago

      At least after hours of arguing with a bot and burning tons of money and energy, you have a pile of code you can’t understand without paying a chatbot.

      • luciole (they/them)@beehaw.org
        15 hours ago

        But will the chatbot understand itself? It’s fun when you start questioning the LLM line by line about its own slop in the same session and it starts flagging all sorts of things it did wrong. Why didn’t it write it correctly in the first place? Or is the fix wrong? Who knows? People, I guess. The model is fed on knowledge, but whether that knowledge will activate in response to your prompt and come back unadulterated is a coin toss.