• Sterile_Technique@lemmy.world · 17 points · 11 months ago

    You’re right, but so is the previous poster. Actual AI doesn’t exist yet, and when/if it does it’s going to confuse the hell out of people who don’t get the hype over something we’ve had for years.

    But calling things like machine learning algorithms “AI” definitely isn’t going away… we’ll probably just end up making a new term for it when it actually becomes a thing… “Digital Intelligence” or something. /shrug.

    • tegs_terry@feddit.uk · 12 points · 11 months ago

      It isn’t human-level, but you could argue it’s still intelligence of a sort, just ersatz.

      • OpenStars@kbin.social · 5 points · 11 months ago

        I dunno… I’ve heard that argument, but when something gives you >1000 answers, among which the correct answer might be buried somewhere, and a human is paid to dig through it and return something that looks vaguely presentable, is that really “intelligence”, of any sort?

        Aka, 1 + 1 = 13, which is the kind of result that AI can offer, and almost certainly has offered recently.

        People are right to be excited about the potential that generative AI offers in the future, but we are far from that atm. Also it is vulnerable to misinformation presented in the training data - though some say that that process might even affect humans too (I know, you are shocked, right? well, hopefully not that shocked:-P).

        Oh wait, nevermind, I take it all back: I forgot that Steve Huffman / Elon Musk / etc. exist, and if that is considered intelligence, then AI has definitely passed that level of Turing equivalence, so you’re absolutely right, ersatz it is, apparently!?

        • tegs_terry@feddit.uk · 1 point · 11 months ago

          What’s the human digging through answers thing? I haven’t heard anything about that.

          • OpenStars@kbin.social · 1 point · 11 months ago

            ChatGPT was caught (and I think later admitted to) not actually using fully automated processes to determine those answers, iirc. Instead, a real human would curate the answers before they went out. That human might reject answers to a question like “Computer: what is 1+1?” ten times before finally accepting one of the given answers (“you’re mother”, hehe with improper apostrophe intact:-P). So really, when you were asking for an “AI answer”, what you were asking was another human on the other end of that conversation!!!

            Then again, I think that was a feature of an earlier version of the program that might no longer be necessary? On the other hand, if they SAY that they aren’t using human curation, but that is also what they said earlier, before they admitted that they had lied, do we really believe it? Watch any video of these “tech bros” and it’s obvious in less than a minute - these people are slimy.

            And to some extent it doesn’t matter, because you can download some open-source AI programs and run them yourself, but in general, from what I understand, when people say things nowadays like “this was made by an AI”, it seems like it is always a hand-picked item from among the set of answers returned. So like, “oooh” and “aaaahhhhh” and all that, that such a thing could come from AI, but it’s not quite the same thing as simply asking a computer for an answer and it returning the correct answer right away! “1+1=?” giving the correct answer of 13 is MUCH less impressive when you find out that out of a thousand attempts at asking, it was only returned a couple of times. And the situation gets even worse(-r) when you find out that ChatGPT has been getting stupider(-est?) for a while now - https://www.defenseone.com/technology/2023/07/ai-supposed-become-smarter-over-time-chatgpt-can-become-dumber/388826/.
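The “hand-picked item from among the set of answers returned” idea above is basically best-of-N sampling with a curation step. Here is a toy Python sketch of that point; the “generator” is a fake random stand-in I made up for illustration, not a real model or any real API:

```python
import random

def unreliable_answer(rng):
    # Mostly wrong, occasionally right, like the 1 + 1 = 13 example above.
    return "2" if rng.random() < 0.05 else str(rng.randint(3, 99))

def best_of_n(n, seed=42):
    """Sample n candidate answers, then 'curate': keep the correct one if it appeared at all."""
    rng = random.Random(seed)
    candidates = [unreliable_answer(rng) for _ in range(n)]
    return "2" if "2" in candidates else None

print(best_of_n(1000))  # across 1000 tries the correct answer almost certainly appears at least once
```

The point of the sketch: a generator that is right only 5% of the time still looks impressive if you only ever see the curated survivor of a thousand samples.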

            • tegs_terry@feddit.uk · 1 point · 11 months ago

              There’s no way that’s the case now, the answers are generated way too quickly for a human to formulate. I can certainly believe it did happen at one point.

              • OpenStars@kbin.social · 1 point · 11 months ago

                Yes, and the fact that the quality suddenly declined a while back (the article I linked to explains more) tracks along those lines as well: when humans were curating the answers it took longer, whereas now the algorithm is unchained, hence able to move faster, yet with far less accuracy than before.

            • Ookami38@sh.itjust.works · 1 point · 11 months ago

              So reading through your post and the article, I think you’re a bit confused about the “curated response” thing. I believe what they’re referring to is the users’ ability to give answers a “good answer” or “bad answer” flag that would then later be used for retraining. This could also explain the AI’s drop in quality, if enough people are upvoting bad answers or downvoting good ones.
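The feedback loop described there can be sketched in a few lines of Python. The names here (FeedbackStore, etc.) are my own invention for illustration, not anything from OpenAI’s actual pipeline:

```python
from dataclasses import dataclass, field

@dataclass
class FeedbackStore:
    """Collects user good/bad flags so they can later be used as retraining labels."""
    records: list = field(default_factory=list)

    def flag(self, prompt, answer, good):
        # Each user flag becomes a (prompt, answer, label) training example.
        self.records.append({"prompt": prompt, "answer": answer, "label": 1 if good else 0})

    def retraining_batch(self):
        # Everything users labeled gets fed back into fine-tuning, for better or worse.
        return list(self.records)

store = FeedbackStore()
store.flag("What is 1+1?", "2", good=True)
store.flag("What is 1+1?", "13", good=False)  # enough mislabeled flags could drag quality down
print(len(store.retraining_batch()))  # 2
```

Note the failure mode the comment describes: the store trusts whatever label users give, so systematically wrong flags become systematically wrong training data.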

              The article also describes “commanders” reviewing responses and having the code team change the algorithm in response. Again, this isn’t picking responses for the AI. Instead, it’s reviewing responses it has given, deciding whether they’re good or bad, and making changes to the algorithm to get more accurate answers in the future.

              I have not heard anything like what you’re describing, with real people generating the responses in real time for GPT users. I’m open to being wrong, though, if you have another article.

              • OpenStars@kbin.social · 1 point · 11 months ago

                I might be guilty of misinformation here - perhaps it was a forerunner to ChatGPT, or even a different (competing) chatbot entirely, where they would read an answer from the machine before deciding whether to send it on to the end user, whereas the novelty of ChatGPT was in throwing off the shackles present in that older incarnation? I do recall a story along the lines I mentioned, but I cannot find it now, which lends some credence to that thought. In any case it would have been multiple generations behind the modern ones, so you are correct that it is not so relevant anymore.

    • lad@programming.dev · 4 points · 11 months ago

      This problem was kinda solved by adding the term AGI, meaning “AI, but not what we now call AI - what we imagined AI to be”.

      Not going to say that this helps with the confusion much 😅 and to be fair, stuff like autocomplete in office software was called AI a long time ago, but it was far from the LLMs of today.

    • Klear@sh.itjust.works · 2 points · 11 months ago

      Enemies in Doom have AI. We’ve been calling simple algorithms of a handful of lines of code AI for a long time; the trend has nothing to do with language models etc.
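For reference, the kind of enemy “AI” meant here really can fit in a handful of lines. This is a toy sight-and-chase routine in Python, loosely in the spirit of classic shooter monster logic - an illustrative sketch, not actual id Software code:

```python
def enemy_step(enemy_x, player_x, can_see_player):
    """Return the enemy's next x position: chase the player when visible, else idle."""
    if not can_see_player:
        return enemy_x        # idle until the player is spotted
    if player_x > enemy_x:
        return enemy_x + 1    # step toward the player
    if player_x < enemy_x:
        return enemy_x - 1
    return enemy_x            # already at the player's position: hold and attack

print(enemy_step(0, 5, True))   # 1 (chasing)
print(enemy_step(0, 5, False))  # 0 (idle)
```

A few conditionals like these were, and still are, routinely called “AI” in games, which is the whole point being made above.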