• setsubyou@lemmy.world · 5 points · 13 hours ago

    > If you want bug-ridden code with security issues which is not extensible and which no one understands, then sure, it’s a practical use case.

    This assumes you never review it, meaning it’s at best an argument against vibe coding. It’s not an argument against using LLMs for coding in general.

    Additionally, I’ve been writing software for a living for almost 30 years, and I could say the exact same thing about a lot of human-generated code I’ve reviewed during that time. I don’t even know how often I’ve explained basic stuff like “security goes in the backend, not in the frontend” to humans.
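    That “backend, not frontend” point can be shown with a minimal sketch (TypeScript purely for illustration; all names here are invented): a frontend check only affects what the UI shows, while the backend check is the one that actually enforces the rule.

```typescript
// Hypothetical example of why "security goes in the backend".
// The frontend check improves UX, but anyone can bypass it by
// sending a request directly (curl, browser devtools), so the
// backend must enforce the rule itself.

// Frontend (advisory only -- easily bypassed):
function canDelete(user: { role: string }): boolean {
  return user.role === "admin"; // hides the delete button, nothing more
}

// Backend (the actual security boundary):
function handleDelete(session: { role: string }, resourceId: string): number {
  if (session.role !== "admin") {
    return 403; // enforce server-side, regardless of what the UI showed
  }
  // ...delete resourceId here...
  return 200;
}

console.log(canDelete({ role: "user" }));           // false
console.log(handleDelete({ role: "user" }, "r1"));  // 403
console.log(handleDelete({ role: "admin" }, "r1")); // 200
```

    Anyone can skip the frontend entirely and call the endpoint directly, which is why the 403 check has to live server-side.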

    > Let’s face it, the only reason you’re saying “coding is a practical use case” is because you yourself don’t code, and don’t understand it.

    I certainly do code, and if I don’t understand what the LLM outputs, it doesn’t go in the project.

    > I can’t see another reason why you would assume the problems experienced in other domains somehow don’t apply to coding.

    I’m a software engineer, so I can’t judge LLMs in most other domains. I also don’t think there are no problems. A tool doesn’t have to be 100% problem-free to be useful, as long as you recognize its limitations.

    > So you’re going to have to pick your way through every single line it generates in order to have the same confidence you would have if you wrote it.

    I don’t see a problem with this. The post even mentions pulling code from Stack Overflow, which is the same thing, yet nobody ever argued that Stack Overflow has no uses in coding just because you still have to read the code.

    Honestly, at this point any article that just flat out dismisses LLMs for coding only reads to me like the author isn’t even trying to stay up to date. Which is understandable if they don’t like AI, but it makes posting about it a bit pointless.

    A year ago I would have had a similar opinion to the author’s, but in the last 3-4 months specifically it feels like AI-based tools have made a huge leap. I went from using short snippets for learning to letting AI implement entire features and actually being happy with the result.

    There is, however, still a pretty big difference between what it produces for common problems vs. what it produces for specialized, difficult ones. It’s also inherently better at some languages than others, based on the availability of up-to-date training material. So you need some amount of breadth in your projects to judge it accurately.

    If you only try some AI service in free mode on one thing every month, for example, you’ll just end up with a very polarized opinion, either “AI is useless” or “AI can do everything”, but you won’t have a good idea of what it can and can’t do.

  • Passerby6497@lemmy.world · 1 point · 6 hours ago

      > A year ago I would have had a similar opinion to the author’s, but in the last 3-4 months specifically it feels like AI-based tools have made a huge leap. I went from using short snippets for learning to letting AI implement entire features and actually being happy with the result.

      Maybe if you’re only working with languages and features that are well documented and have a lot of examples out there. I’ve been trying to use LLM coding to assist me with a process automation at work, and the results are a couple of steps up from dog vomit more often than not.

      AI code assistants aren’t making big strides; you’re likely just seeing them refine common scenarios to the point where they become very usable for your specific use cases.

    • setsubyou@lemmy.world · 1 point · 5 hours ago

        Sure. How much the language or its features change is also important. For example, Claude can build entire iPhone apps in Swift, but you can bet they’re going to be full of warnings about patterns that are no longer allowed, and if there’s any concurrency involved, you can bet it’s going to be a wild mix of every async style that has ever existed in Swift. It makes sense, too, because LLMs are trained on code that’s, on average, outdated.

        But knowing what it’s good at and what it’s not good at is just part of using AI, like any other tool. I have projects too where it can at best replace Google, so I don’t try to make it implement those by itself.

  • The_Decryptor@aussie.zone · 7 points · 11 hours ago

      > A year ago I would have had a similar opinion to the author’s, but in the last 3-4 months specifically it feels like AI-based tools have made a huge leap.

      I’ve seen this claim made basically weekly for the last couple of years. If we were really having “generational leaps” monthly, these LLMs would actually be capable of doing what people claim they can.

    • setsubyou@lemmy.world · 1 point · 5 hours ago

        It’s just my experience as someone who was pretty much forced by my employer to use AI for coding over the last few years. For the longest time it was completely useless, and then it suddenly wasn’t. I’m sure you’ll keep hearing this kind of story, though, because people have different requirements, and AI-assisted coding (or even agents) doesn’t have to start working for everybody at the same time.

  • ZILtoid1991@lemmy.world · 5 points · 10 hours ago

      > This assumes you never review it

      Too many people assume that, since genAI is a machine, it’ll never make any mistakes.