ChatGPT has a style-over-substance trick that seems to dupe people into thinking it's smart, researchers found

Developers often prefer ChatGPT's responses about code to those submitted by humans, despite the bot frequently being wrong, researchers found.

  • @the_medium_kahuna@lemmy.world · 11 months ago

    But the fact is that you need to check every time to be sure it isn’t one of those rare inaccuracies. Even if it could cite sources, how would you know it was interpreting the source’s statements accurately?

    IMO, it’s useful for outlining and getting ideas flowing, but beyond that high level the utility falls off pretty quickly.

    • DreamButt · 11 months ago

      Ya, it’s great for exploring options. Anything that’s purely textual is good enough to give you a general idea, and more often than not it will catch a mistake in its explanation if you ask for clarification. But actual code? Nah, it’s about 50/50 whether it gets it right the first time, and even then the style is never to my liking.