…and I still don’t get it. I paid for a month of Pro to try it out, and it is consistently and confidently producing subtly broken junk. I had tried this before but gave up because it didn’t work well. I thought that maybe this time it would be far enough along to be useful.

The task was relatively simple, and it involved some 3D math. The solutions it generated were almost right every time, but critically broken in subtle ways, and any attempt to fix the problems would either introduce new bugs or regress to old ones.
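For illustration only (this is a hypothetical sketch, not the actual code from my task): the kind of subtle breakage I mean is something like a flipped sign in a rotation, which passes trivial checks but rotates the wrong way.

```python
import math

def rotate_z(v, angle):
    # Rotate vector v = (x, y, z) counter-clockwise about the Z axis
    # by `angle` radians.
    x, y, z = v
    c, s = math.cos(angle), math.sin(angle)
    return (c * x - s * y, s * x + c * y, z)

def rotate_z_buggy(v, angle):
    # Subtle bug: the sine terms have flipped signs, so this rotates
    # clockwise instead. It still looks plausible at a glance.
    x, y, z = v
    c, s = math.cos(angle), math.sin(angle)
    return (c * x + s * y, -s * x + c * y, z)

# At angle 0 the two functions agree, so a shallow test misses the bug...
assert rotate_z((1, 0, 0), 0.0) == rotate_z_buggy((1, 0, 0), 0.0)

# ...but at 90 degrees the buggy version sends +X to -Y instead of +Y.
good = rotate_z((1, 0, 0), math.pi / 2)
bad = rotate_z_buggy((1, 0, 0), math.pi / 2)
```

Code like this compiles, runs, and looks correct in review, which is exactly why it took so long to notice.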

I spent nearly the whole day yesterday going back and forth with it, and felt like I was in a mental fog. It wasn’t until I had a full night’s sleep and reviewed the chat log this morning that I realized how much I had been going in circles. I tried prompting a bit more today, but stopped when it kept doing the same crap.

The worst part of this is that, throughout all of it, Claude was confidently responding. When I said there was a bug, it would “fix” the bug and provide a confident explanation of what was wrong… except it was clearly bullshit, because it didn’t work.

I still want to keep an open mind. Is anyone having success with these tools? Is there a special way to prompt it? Would I get better results during certain hours of the day?

For reference, I used Opus 4.6 Extended.

  • Oisteink@lemmy.world · 18 hours ago

    I do apps that work, and patches that are production quality. Half the CS world does… I do full-stack AI debugging of ESP32 projects.

    It’s a powerful tool; you just need to learn its strong and weak points, like any other tool you use.

    • Kissaki@programming.dev · 6 hours ago

      Half the CS world does…

      What’s the basis for this claim? I’m doubtful, but I don’t have broad data on it.

      • Oisteink@lemmy.world · 4 hours ago

        A rough estimate from my personal connections only. Some workplaces can’t use AI at all, but everyone who has made a real effort reports good code. You need to work with what it is: a word generator that sometimes gives correct results. Make it do research rather than trust its training. Never let it act on its own; require a plan and reasoning. Make it evaluate its own work and plan.

        Most issues I have stem from models being too eager. Restrain them and remove the “I can do this next…” behaviour.

        Context is king, so set up proper MCP and documentation that is agent-facing. I use Serena, since I can get LSP support for YAML and markup, and I keep those docs that way.