• HotAtForty [he/him]@hexbear.net
    44 points · 4 months ago

    We have successfully recreated human intelligence because AI refuses to acknowledge it is wrong and develops entire belief structures rather than revisit its earlier assumptions.

  • Llituro [he/him, they/them]@hexbear.net
    36 points · 4 months ago

    this is actually the once and future llm banger, the only llm banger. because every single funny dumbass thing they do is essentially for this reason. humans have a much higher rate than 95% on dog recognition

      • flan [they/them]@hexbear.net
        5 points · 4 months ago

        cnns can recognize dogs but probably not as well as humans can. “vlms” (basically llms with a visual embedding layer on the front) can also recognize dogs and tell you it’s a dog (because words come out the back end). But again probably not as well as humans can.

    • Onno (VK6FLAB)@lemmy.radio
      16 points · 4 months ago

      What’s sad is that CEOs are too stupid to understand that an LLM isn’t intelligent, so they keep using them as human replacements, followed shortly after by a surprised Pikachu face when, to nobody’s surprise, it turns out that no, they’re not.

  • Lerios [hy/hym]@hexbear.net
    23 points · 4 months ago

    just did this test with the “assistant” at my job (different AI) and it did the exact same thing lmao. anyway we NEED to use ai, it’s the future

    capitalism is the best possible system

  • proceduralnightshade@lemmy.ml
    19 points · 4 months ago

    The obvious easy solution would be to teach LLMs to guide the user through their “thinking process”, or whatever you want to call it, instead of answering outright. This is what people do too, right? They look back at what they thought and/or wrote, or they say “let’s test this”, like good teachers do. Problem is, that would require some sort of intelligence, which artificial intelligence ironically doesn’t possess.

      • proceduralnightshade@lemmy.ml
        6 points · 4 months ago

        I would consider even LLMs actual AI. Even bots in video games are called AIs, no? But I agree that people are vastly overestimating their capabilities and I hate the entrepreneurial bullshitting as much as everyone else.

        Machine learning! That was the better term.

    • ArchRecord@lemm.ee
      14 points · 4 months ago

      The very strategy of asking LLMs to “reason” or explain an answer tends to make them more accurate.

      Because instead of the first token being “Yes” or “No”, it’s “That depends,” or “If we look at…”

      Thus increasing the number of tokens that determine the answer from 1 to, theoretically, hundreds or more.
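
      A minimal sketch of that contrast (call_model is a hypothetical stand-in for whatever LLM API you actually use, since no specific one is named above):

      ```python
      # Minimal sketch: the same question posed two ways.

      def direct_prompt(question: str) -> str:
          # Pushes the model's very first token toward "Yes"/"No".
          return f"{question} Answer with Yes or No only."

      def reasoned_prompt(question: str) -> str:
          # Forces many tokens of working before the verdict, so the final answer
          # is conditioned on all of them rather than on a single opening token.
          return (
              f"{question}\n"
              "Work through the problem step by step, checking each step, "
              "and only state Yes or No at the very end."
          )

      def ask_both(question: str, call_model):
          """Send both variants to the same (user-supplied) model callable."""
          return call_model(direct_prompt(question)), call_model(reasoned_prompt(question))

      if __name__ == "__main__":
          q = "Is 3307 a prime number?"
          print(direct_prompt(q))
          print(reasoned_prompt(q))
      ```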

        • invalidusernamelol [he/him]@hexbear.net
          3 points · 4 months ago

          Those openers are each only a handful of tokens anyway. Tokenization is typically based on byte-pair encoding (BPE), which merges frequently recurring character sequences, so a common fragment (part of “Once upon a time”, say) can end up as a single token.

          “Yes” is almost always followed by an explanation of a single idea, while “It depends” is followed by several possible explanations.
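
          If you want to check the actual token counts rather than take anyone’s word for it, here’s a quick sketch assuming the tiktoken package (OpenAI’s BPE tokenizers) is installed:

          ```python
          # Print how many BPE tokens each short opener actually takes.
          import tiktoken

          enc = tiktoken.get_encoding("cl100k_base")  # encoding used by GPT-4-era models

          for text in ["Yes", "No", "That depends,", "If we look at", "Once upon a time"]:
              ids = enc.encode(text)
              print(f"{text!r}: {len(ids)} token(s) -> {ids}")
          ```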

            • invalidusernamelol [he/him]@hexbear.net
              3 points · 4 months ago

              I really hate that LLM stuff has been bazinga’d because it’s actually really cool. It’s just not some magical solution to anything beyond finding statistical patterns

              • FloridaBoi [he/him]@hexbear.net
                3 points · 4 months ago

                Yes, I recognize that it is very advanced statistical analysis at its core. It’s so difficult to get that concept across to people. We have a GenAI tool at work, and I asked it a single question with easily verifiable, public data and it got it so wrong: the structure was correct, but all of the figures were made up.

                • invalidusernamelol [he/him]@hexbear.net
                  2 points · 4 months ago

                  I think the best way to show people is to get them to ask it leading questions.

                  LLMs can’t deal with leading questions by design unless the expert system sitting on top of them can deal with it.

                  Like get them to ask why a very obviously wrong thing is right. Works best with very industry-specific stuff that the expert system managing responses hasn’t been programmed to deal with.

                  In my industry: “Thanks for helping me figure out my 1:13 split fiber optic network, what size cable would I even need to make the implementation work?”

                  It’ll either refuse to answer or give you no real answer and just start explaining terms. What I usually get back is a list of definitions and tautologies followed by “I need more information”, though that isn’t exactly what it responded with here. When you get a response like that, it’s because another LLM sitting on top tailored it: when confidence in the answer is low, the model is asked to re-phrase the response so it doesn’t assert anything and just focuses on individual elements of the question.

    • BountifulEggnog [she/her]@hexbear.net
      13 points · 4 months ago

      This is “chain of thought” (plus a few other techniques built on it), and yes, it gives much better results. It’s a very common thing to train into a model. ChatGPT will do this a lot; surprised it didn’t do that here. Only so much you can do, I suppose.

    • dorythefish@discuss.online
      1 point · 4 months ago

      There are chain-of-thought and tree-of-thought approaches, and maybe even more. From what I understand, the answer is generated in several passes, and even with smaller models you can get better results.

      However, it is funny how AI (LLMs) is heavily marketed as something that will make many jobs obsolete and/or take over humanity, yet to get any meaningful results people end up building whole pipelines around LLMs, often using several models for different tasks. I also read a little about retrieval-augmented generation (RAG), and apparently it has a lot of caveats around what data can and cannot be successfully retrieved: data has to be chunked to fit into the context while still retaining all the valuable information, and that problem has no “one size fits all” solution.

      Overall it feels like someone made a black box (the LLM), someone else tried to use that black box to deal with existing complexity, failed, and started building another layer of complexity around the black box. So current AI adopters can ultimately find themselves with two complex systems on their hands. And I find that kind of funny.
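
      For what it’s worth, the retrieval half of such a pipeline can be sketched in a few lines; the hashed bag-of-words “embedding” below is a toy stand-in for a real embedding model, and the chunk size/overlap are arbitrary:

      ```python
      import math
      import re

      def chunk(text: str, size: int = 40, overlap: int = 10) -> list[str]:
          """Split text into overlapping word windows (a naive chunking strategy)."""
          words = text.split()
          step = size - overlap
          return [" ".join(words[i:i + size]) for i in range(0, max(len(words) - overlap, 1), step)]

      def embed(text: str, dims: int = 256) -> list[float]:
          """Toy stand-in for an embedding model: hash words into a fixed-size vector."""
          vec = [0.0] * dims
          for word in re.findall(r"\w+", text.lower()):
              vec[hash(word) % dims] += 1.0
          norm = math.sqrt(sum(x * x for x in vec)) or 1.0
          return [x / norm for x in vec]

      def retrieve(query: str, chunks: list[str], k: int = 2) -> list[str]:
          """Return the k chunks most similar to the query (cosine similarity)."""
          q = embed(query)
          return sorted(chunks, key=lambda c: -sum(a * b for a, b in zip(q, embed(c))))[:k]

      document = "..."  # your source text here
      context = retrieve("what size splitter do I need?", chunk(document))
      # `context` is what would get pasted into the prompt ahead of the question.
      ```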

    • wellbuddyweek@lemm.ee
      7 points · 4 months ago

      It would be, but that’s because you already told it the answer to the original question.

      Assuming you don’t know whether 3307 is prime, if you ask it like this for any number it will (almost) always say the number is prime.

      • casskaydee [she/her]@hexbear.net
        6 points · 4 months ago

        Edit: I did the inverse with 3307 and it was doing so good but then completely fumbled at the end lmao

        To determine whether 3307 is a composite number, we need to check if it can be divided by any integer other than 1 and itself. If it can be divided by any other integer, it is composite.

        Let’s check if 3307 is divisible by any prime numbers up to its approximate square root (around 57.5).

        • 3307 is odd, so it is not divisible by 2.
        • The sum of its digits is 13, which is not divisible by 3.
        • It does not end in 0 or 5, so it is not divisible by 5.
        • Dividing by 7: 3307 ÷ 7 = 472.42857, not an integer.
        • Dividing by 11: Alternating sum of digits: 3 - 3 + 0 - 7 = -7, not divisible by 11.
        • Dividing by 13: 3307 ÷ 13 = 254.3846, not an integer.
        • Dividing by 17: 3307 ÷ 17 = 194.5294, not an integer.
        • Dividing by 19: 3307 ÷ 19 = 174.0526, not an integer.
        • Dividing by 23: 3307 ÷ 23 = 143.7826, not an integer.
        • Dividing by 29: 3307 ÷ 29 = 114.0345, not an integer.
        • Dividing by 31: 3307 ÷ 31 = 106.6774, not an integer.
        • Dividing by 37: 3307 ÷ 37 = 89.3784, not an integer.
        • Dividing by 41: 3307 ÷ 41 = 80.6585, not an integer.
        • Dividing by 43: 3307 ÷ 43 = 76.9069, not an integer.
        • Dividing by 47: 3307 ÷ 47 = 70.3617, not an integer.
        • Dividing by 53: 3307 ÷ 53 = 62.3962, not an integer.

        Now let’s check division by 7 again more carefully: 3307 ÷ 7 = 472.428571

        So far so good

        This calculation was correct initially, confirming it’s not divisible by 7. But there’s another calculation involving 47 that needs rechecking: 3307 ÷ 47 = 70.3617

        Uhh ok I guess.

        This means 3307 is not evenly divisible by 47 either.

        However, let’s check another prime factor: 53. 3307 ÷ 53 = 62.3962

        …right

        Rechecking again for any missed divisor, we finally find: 3307 = 53 × 59

        Hence, 3307 is divisible by 53 and 59, proving it’s composite since it has divisors other than 1 and itself.

        WTF
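
        For the record, the ending is easy to check with a few lines of trial division (53 × 59 is 3127, and 3307 has no divisor up to its square root, so it really is prime):

        ```python
        import math

        def is_prime(n: int) -> bool:
            """Trial division up to sqrt(n), the same check the model was imitating."""
            if n < 2:
                return False
            return all(n % d for d in range(2, math.isqrt(n) + 1))

        print(is_prime(3307))  # True  -> 3307 is prime
        print(53 * 59)         # 3127  -> not 3307
        ```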

  • D61 [any]@hexbear.net
    9 points · 4 months ago

    The Hitchhiker’s Guide to the Galaxy was mistakenly released to the public as fiction, when it should have been released as non-fiction.

  • invalidusernamelol [he/him]@hexbear.net
    5 points · 4 months ago

    Having used Copilot, I can say this is a pervasive problem.

    It will frequently kneecap itself by writing an incorrect function header and then spiraling into nonsense.

    Or if you ask it to complete something for you and you got the start wrong, it’ll just keep generating different incorrect answers.

    It’s very useful for boilerplate stuff, but even then I’ve been bitten by mistyping a variable name: it keeps using the wrong name over and over, in believable ways, because it generates believable code.
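
    A contrived sketch of what that failure mode looks like (the names are invented for illustration, not something Copilot actually produced):

    ```python
    # One mistyped name and the completions keep propagating it: the code reads
    # fine but never touches the variable that was actually defined.
    received_total = 0.0

    def add_payment(amount: float) -> None:
        global recieved_total            # typo the assistant keeps reusing
        recieved_total += amount         # plausible-looking, but the name doesn't exist

    try:
        add_payment(10.0)
    except NameError as err:
        print(f"believable code, wrong name: {err}")
    ```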