• Passerby6497@lemmy.world · 6 hours ago

    A year ago I would have had a similar opinion as the author, but in the last 3-4 months specifically, it feels like AI-based tools made a huge leap. I went from using short snippets for learning to letting AI implement entire features and actually being happy with the result.

    Maybe if you’re only working with languages and features that are well documented and have a lot of examples out there. I’ve been trying to use LLM coding to assist me with a process automation at work, and the results are a couple steps up from dog vomit more often than not.

    AI code assistants aren’t making big strides; you’re likely just seeing them refine common scenarios to the point where they become very usable for your specific use cases.

    • setsubyou@lemmy.world · 5 hours ago

      Sure. How much the language or its features change over time also matters. For example, Claude can build entire iPhone apps in Swift, but you can bet they’re going to be full of warnings about things that are illegal now, and you can bet that if there’s any concurrency stuff it’s going to be a wild mix of every async style that ever existed in Swift. It makes sense, too, because LLMs are trained on code that’s, on average, outdated.
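      As a sketch of what that “wild mix” can look like (hypothetical code, not from the thread — the function names and the wrapping pattern are illustrative assumptions), here are three generations of Swift concurrency side by side in one file, which is roughly what a model trained on code from different eras tends to blend:

      ```swift
      import Foundation

      // Older GCD/completion-handler style: hypothetical fetcher
      // hopping to a background queue and calling back.
      func fetchUser(completion: @escaping (String) -> Void) {
          DispatchQueue.global().async {
              completion("user-42")
          }
      }

      // Modern async/await (Swift 5.5+), bridging the old API
      // with a checked continuation.
      func fetchUser() async -> String {
          await withCheckedContinuation { continuation in
              fetchUser { user in
                  continuation.resume(returning: user)
              }
          }
      }

      // Call site introduces yet another idiom: an unstructured Task.
      Task {
          let user = await fetchUser()
          print(user)
      }
      ```

      Each piece compiles and works, but mixing all three idioms in fresh code is exactly the kind of output that triggers deprecation warnings and concurrency-checking complaints in current toolchains.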

      But what it’s good at and what it’s not good at is just part of what you need to know when using AI, just like with any other tool. I have projects too where it can at best replace Google, so I don’t try to make it implement those by itself.