I love to show that kind of shit to AI boosters. (In case you’re wondering, the numbers were chosen randomly and the answer is incorrect).

They go waaa waaa it's not a calculator, and then I can point out that it got the leading 6 digits and the last digit correct, which is a lot better than it did on the “softer” parts of the test.

  • CodexArcanum@lemmy.dbzer0.com

    One of the big AI companies (Anthropic, with Claude? Yep!) wrote a long paper detailing some common LLM issues, and it gets into why LLMs do math wrong and then lie about it in “reasoning” mode.

    It’s actually pretty interesting, because you can’t say they “don’t know how to do math” exactly. The stochastic mechanisms that allow it to fool people with written prose also allow it to do approximate math. That’s why some digits are correct, or it gets the order of magnitude right but still does the math wrong. It’s actually layering together several levels of approximation.

    The “reasoning” is just entirely made up. We barely understand how LLMs actually work, so none of them have been trained on research about that, which means LLMs don’t understand their own functioning (not that they “understand” anything strictly speaking).

    • diz@awful.systemsOP

      Thing is, it has tool integration. Half of the time it uses Python to calculate it. When it uses a tool, it writes out a string that isn’t shown to the user; that string triggers the tool, and the tool’s result is appended to the stream (rough sketch of the loop below).
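
      For anyone who hasn’t seen tool use up close, here’s a minimal sketch of that loop in Python. Everything in it (the tag format, the function names, the toy “model”) is made up for illustration; the point is just that the tool call and its result live in the hidden stream, not in what the user sees.

      ```python
      # Toy sketch of an LLM tool-call loop. The <tool:python>...</tool> tag
      # format and all names are hypothetical; real products use their own
      # wire formats and a real model backend instead of fake_model().
      import re

      def fake_model(transcript: str) -> str:
          """Stand-in for the LLM. A real system would call the model here."""
          if "<tool_result>" not in transcript:
              # First pass: emit a hidden tool call instead of an answer.
              return "<tool:python>286 * 319</tool>"
          # Second pass: the tool result is already in the stream, so answer from it.
          result = re.search(r"<tool_result>(.*?)</tool_result>", transcript).group(1)
          return f"The answer is {result}."

      def run_tool(code: str) -> str:
          # Toy executor: only evaluates a bare arithmetic expression.
          return str(eval(code, {"__builtins__": {}}, {}))

      def chat(user_message: str) -> str:
          transcript = user_message
          while True:
              output = fake_model(transcript)
              call = re.search(r"<tool:python>(.*?)</tool>", output)
              if not call:
                  return output  # plain answer, shown to the user
              # Hidden from the user: run the tool, append its result to the stream.
              transcript += output + f"<tool_result>{run_tool(call.group(1))}</tool_result>"

      print(chat("What is 286 * 319? Be precise."))  # -> The answer is 91234.
      ```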

      What is curious is the causal order. You’d expect a request for precision (or really any request to do math) to make it invoke the tool, and then the presence of the tool tokens to make it claim that a tool was used. Instead, the request for precision causes it to claim that a tool was used, directly.

      Also, all of it is highly unnatural text, so it has to be coming either from fine-tuning or from training data contamination.

    • scruiser@awful.systems

      We barely understand how LLMs actually work

      I would be careful how you say this. Eliezer likes to go on about giant inscrutable matrices to fearmonger, and the promptfarmers use the (supposed) mysteriousness as another avenue for crithype.

      It’s true that reverse engineering any specific output or task takes a lot of effort, requires access to the model’s internal weights, and hasn’t been done for most tasks, but the techniques for doing so exist. And in general there is a good high-level conceptual understanding of what makes LLMs work.

      which means LLMs don’t understand their own functioning (not that they “understand” anything strictly speaking).

      This part is absolutely true. If you catch them in a mistake, most of their data about how to respond comes from how humans respond (or, at best, from fine-tuning on other LLM output), and they have no way of inspecting their own internals, so whatever they say in response to being caught is just more bs unrelated to what actually happened inside them.