And the real maddening part is that search engines have been so enshittified to make way for AI that's wrong like 9 times out of 10, so you're forced to rely on it for answers, because if you try google, the snake wraps around and eats its own tail and gives you an AI answer! stalin-stressed

  • TreadOnMe [none/use name]@hexbear.net · 8 days ago

    It's fine for simple boilerplate programs. However, it will often make mistakes even for those, so you have to know what you are looking at. It still saves time, though idk whether, once you account for the actual energy usage etc., it's actually saving you time and money without free money propping the whole thing up.

    However, I have seen people write big programs with it and then be surprised that they don't work. Even more worrying, though, is when they do work, but then I walk through the code with whoever wrote it and they cannot explain how or why it is working.

    It's real engineering logic.

    • Le_Wokisme [they/them, undecided]@hexbear.net · 8 days ago

      though idk whether, once you account for the actual energy usage etc., it's actually saving you time and money without free money propping the whole thing up.

      llm end-user energy consumption is pretty low. whether it actually pays off probably comes down to the provider rates and your dev salaries.
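
      Rough back-of-the-envelope sketch of that trade-off; every number here (tokens per task, provider rate, salary, minutes saved) is an assumption for illustration, not anyone's real figures:

```python
# Back-of-the-envelope check: does LLM-assisted coding pay for itself?
# Every number here is an illustrative assumption; plug in your own.

TOKENS_PER_TASK = 20_000        # prompt + completion tokens per coding task (assumed)
PRICE_PER_1K_TOKENS = 0.01      # provider rate, dollars per 1k tokens (assumed)
DEV_HOURLY_RATE = 60.0          # fully loaded developer cost, dollars per hour (assumed)
MINUTES_SAVED_PER_TASK = 10     # net time saved after reviewing the model's mistakes (assumed)

llm_cost = TOKENS_PER_TASK / 1_000 * PRICE_PER_1K_TOKENS
time_value = DEV_HOURLY_RATE * MINUTES_SAVED_PER_TASK / 60

print(f"LLM cost per task: ${llm_cost:.2f}")
print(f"Dev time saved:    ${time_value:.2f}")
print("pays off" if time_value > llm_cost else "does not pay off")
```

      With those made-up numbers the saved dev time dwarfs the token bill; shrink the minutes-saved figure (or count the time spent fixing its mistakes) and the math flips.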

      • neo [he/him]@hexbear.net · 8 days ago

        Yeah, but inference cannot exist without the prohibitively expensive up-front cost of training. And of course, the larger the model, the more costly the inference. That's why you read stories like “new trend in SV: pay in tokens.” Opus 4.6 is gonna mop the floor with a 2B-param model designed to run on an edge PC, but the cost of getting to the point where it can be used, and of actually using it, is still very high.
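
        As a rough illustration of that point, here's a sketch that amortizes an assumed training bill over an assumed lifetime of served tokens; none of these figures are real numbers for Opus or any other model:

```python
# Why cheap per-query inference doesn't tell the whole story:
# amortize an assumed training bill over an assumed lifetime of served tokens.
# All figures are illustrative assumptions, not real numbers for any model.

TRAINING_COST = 100_000_000      # up-front training spend, dollars (assumed)
LIFETIME_TOKENS = 10**13         # tokens served over the model's deployment life (assumed)
INFERENCE_COST_PER_1M = 1.00     # marginal serving cost, dollars per million tokens (assumed)

amortized_training_per_1m = TRAINING_COST / (LIFETIME_TOKENS / 1_000_000)
all_in_per_1m = INFERENCE_COST_PER_1M + amortized_training_per_1m

print(f"Marginal inference cost / 1M tokens: ${INFERENCE_COST_PER_1M:.2f}")
print(f"Amortized training cost / 1M tokens: ${amortized_training_per_1m:.2f}")
print(f"All-in cost / 1M tokens:             ${all_in_per_1m:.2f}")
```

        Under those assumptions the amortized training cost is an order of magnitude bigger than the metered inference cost, which is exactly why somebody upstream has to keep covering the up-front spend.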