“The chatbot gave wildly different answers to the same math problem, with one version of ChatGPT even refusing to show how it came to its conclusion.”

It’s getting worse. And because it’s a black-box model, they don’t know why. The computer science professor here likens it to how human students make mistakes… but human students make mistakes because they don’t have perfect recall, mishear things being told to them, or are tired and/or not paying attention… a bunch of reasons that basically relate to having a human body that needs food, rest, and water. Things a computer does not have.

The only reason ChatGPT should be getting math wrong is that it’s getting inputs that are wrong, but without visibility into the model they can’t figure out where it’s going wrong or who fed it the wrong info.
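One side note on the “wildly different answers” part: chat models usually *sample* each next token from a probability distribution instead of always picking the single most likely one, so identical prompts can legitimately produce different outputs even when the model itself hasn’t changed. Here’s a minimal sketch of temperature sampling; the tokens and scores are made up for illustration and don’t reflect ChatGPT’s actual internals:

```python
import math
import random

def softmax(logits, temperature=1.0):
    """Turn raw scores into a probability distribution; a higher
    temperature flattens it, making unlikely tokens more probable."""
    scaled = [x / temperature for x in logits]
    m = max(scaled)  # subtract the max for numeric stability
    exps = [math.exp(x - m) for x in scaled]
    total = sum(exps)
    return [e / total for e in exps]

def sample_token(tokens, logits, temperature, rng):
    """Draw one token from the temperature-scaled distribution."""
    probs = softmax(logits, temperature)
    return rng.choices(tokens, weights=probs, k=1)[0]

# Hypothetical next-token candidates and scores for the prompt "2 + 2 = ".
tokens = ["4", "five", "four", "22"]
logits = [5.0, 1.0, 2.5, 0.5]

rng = random.Random(0)  # seeded only so the demo is repeatable
answers = {sample_token(tokens, logits, 1.5, rng) for _ in range(50)}
print(answers)  # typically more than one distinct "answer"
```

With temperature above zero the same question can come back with different completions run to run; that alone doesn’t explain the model getting *worse* over months, but it does explain inconsistency on a single math problem.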

  • Frog-Brawler
    16 · 1 year ago

    Huh… so after months of being exposed to people who aren’t quite as smart as world-class computer scientists and engineers, it gets dumber. Maybe it’s more human than I previously thought.

    • paper_clip
      5 · edited · 1 year ago

      it gets dumber

      In six months, ChatGPT will be talking up Brawndo, because it’s got the electrolytes that plants crave.

    • C4RP3_N0CT3M
      4 · 1 year ago

      I wonder if it is in fact learning from people’s prompts; I didn’t think that was part of the operation. That’s a huge design flaw if so.

      • Frog-Brawler
        3 · 1 year ago

        ChatGPT existing is a design flaw. Just because we can do something, doesn’t mean we should.