As soon as Apple announced its plans to inject generative AI into the iPhone, it was as good as official: The technology is now all but unavoidable. Large language models will soon lurk on most of the world’s smartphones, generating images and text in messaging and email apps. AI has already colonized web search, appearing in Google and Bing. OpenAI, the $80 billion start-up that has partnered with Apple and Microsoft, feels ubiquitous; the auto-generated products of its ChatGPTs and DALL-Es are everywhere. And for a growing number of consumers, that’s a problem.

Rarely has a technology risen—or been forced—into prominence amid such controversy and consumer anxiety. Certainly, some Americans are excited about AI, though a majority said in a recent survey, for instance, that they are concerned AI will increase unemployment; in another, three out of four said they believe it will be abused to interfere with the upcoming presidential election. And many AI products have failed to impress. The launch of Google’s “AI Overview” was a disaster; the search giant’s new bot cheerfully told users to add glue to pizza and that potentially poisonous mushrooms were safe to eat. Meanwhile, OpenAI has been mired in scandal, incensing former employees with a controversial nondisclosure agreement and allegedly ripping off one of the world’s most famous actors for a voice-assistant product. Thus far, much of the resistance to the spread of AI has come from watchdog groups, concerned citizens, and creators worried about their livelihood. Now a consumer backlash to the technology has begun to unfold as well—so much so that a market has sprung up to capitalize on it.


Obligatory “fuck 99.9999% of all AI use-cases, the people who make them, and the techbros that push them.”

  • Zaktor@sopuli.xyz · 18 points · 6 months ago

    Sometimes. Sometimes it’s more accurate than anyone in the village. And it’ll be reliably getting better. People relying on “AI is wrong sometimes” as the core plank of opposition aren’t going to have much runway before it’s so much less error-prone than people that the complaint is irrelevant.

    The jobs and the plagiarism aspects are real and damaging and won’t be solved with innovation. The “AI is dumb” line is already only selectively true, and almost all the technical effort is going toward reducing that. ChatGPT launched a year and a half ago.

    • Lvxferre@mander.xyz · 22 points · 6 months ago

      Sometimes. Sometimes it’s more accurate than anyone in the village.

      So is the village idiot, sometimes. Or a tarot reader. Or a coin toss. And you’d still be a fool if your writing relied on the output of those three. Or of an LLM bot.

      And it’ll be reliably getting better.

      You’re distorting the discussion from “now” to “the future”, and then vomiting certainty on future matters. Both things make me conclude that reading your comment further would be solely a waste of my time.

      • Zaktor@sopuli.xyz · 15 points · 6 months ago

        You’re lovely. Don’t think I need to see anything you write ever again.

    • Ilandar@aussie.zone · 10 points · 6 months ago

      Yes, I always get the feeling that a lot of these militant AI sceptics are pretty clueless about where the technology is and the rate at which it is improving. They really owe it to themselves to learn as much as they can so they can better understand where the technology is heading and what the best form of opposition will be in the future. As you say, relying on “haha Google made a funny” isn’t going to cut it forever.

      • Zaktor@sopuli.xyz · 11 points · 6 months ago

        Yeah. AI making images with six fingers was amusing, but people glommed onto it like it was the savior of the art world. “Human artists are superior because they can count fingers!” Except then the models updated and it wasn’t as much of a problem anymore. It felt good, but it was just a pleasant illusion for people with very real reasons to fear the tech.

        None of these errors are inherent to the technology; they’re just bugs to correct, and there’s plenty of money and attention focused on fixing bugs. What we need is more attention focused on either preparing our economies to handle this shock or greatly strengthening copyright enforcement (to stall development). A label like the one this post is about is a good step, but given that artistic professions already weren’t particularly safe and “organic” labeling has only modest impacts on consumer choice, we’re going to need more.

        • Sonori@beehaw.org · 12 points · 6 months ago

          Except when it comes to LLMs, the fact that the technology fundamentally operates by probabilistically stringing together the next most likely word to appear in the sentence, based on the frequency with which said words appeared in the training data, is a fundamental limitation of the technology.

          So long as a model has no regard for the actual, you know, meaning of the word, it definitionally cannot create a truly meaningful sentence. Instead, in order to get coherent output, the system must be fed training data that closely mirrors the context; this is why groups like OpenAI have met with so much success by keeping the algorithm simple while progressively scraping more and more of the internet into said systems.

          I would argue that a similar inherent technological limitation applies to image generation: until a generative model can both model a four-dimensional space and conceptually understand everything it has created in that space, a generated image can only be as meaningful as the parts it has regurgitated of the work of the tens of thousands of people who do those things effortlessly.

          This is not required to create images that can pass as human-made, but it is required to create ones that are truly meaningful on their own merits and not just the merits of the material they were created from, and nothing I have seen said by experts in the field indicates that we have found even a theoretical pathway to get there from here, much less that we are inevitably progressing along that path.

          Mathematical models will almost certainly get closer to mimicking the desired parts of the data they were trained on with further training, but it is important to understand that this is not a pathway to any actual conceptual understanding of the subject.
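The “next most likely word” mechanism debated above can be sketched as a toy. Every word and probability in the table below is invented for illustration; a real LLM derives these distributions from billions of learned weights conditioned on the whole context, not from a lookup table:

```python
import random

# Invented follow-word probabilities -- an illustration of probabilistic
# next-word selection, not how a real LLM stores or computes anything.
NEXT_WORD = {
    "the": [("cat", 0.5), ("dog", 0.3), ("moon", 0.2)],
    "cat": [("sat", 0.6), ("ran", 0.4)],
    "dog": [("barked", 0.7), ("slept", 0.3)],
}

def generate(start, max_words=5):
    """Repeatedly sample the next word from the current word's table."""
    words = [start]
    while len(words) < max_words:
        options = NEXT_WORD.get(words[-1])
        if not options:  # no known continuation -> stop
            break
        tokens, weights = zip(*options)
        words.append(random.choices(tokens, weights=weights)[0])
    return " ".join(words)

print(generate("the"))  # output varies run to run
```

Each word here is chosen purely from what tends to follow the previous word, which is the property the comment above objects to.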

          • Zaktor@sopuli.xyz · 8 points · 5 months ago

            Except when it comes to LLMs, the fact that the technology fundamentally operates by probabilistically stringing together the next most likely word to appear in the sentence, based on the frequency with which said words appeared in the training data, is a fundamental limitation of the technology.

            So long as a model has no regard for the actual, you know, meaning of the word, it definitionally cannot create a truly meaningful sentence.

            This is a misunderstanding of what “probabilistic word choice” can actually accomplish, and of the non-probabilistic components that are incorporated into these systems. People also make mistakes and don’t actually “know” the meaning of words.

            The belief system that humans have special cognizance unlearnable by observation is just mysticism.

            • Sonori@beehaw.org · 2 points · 5 months ago

              To note the obvious, a large language model is by definition, at its core, a mathematical formula plus a massive collection of values from zero to one which, when combined, give a weighted probability that word B follows word A, crossed against another weighted word cloud given as the input ‘context’.

              A neuron in machine-learning terms is a matrix (i.e., a table) of numbers between zero and one. By contrast, a single human neuron is a biomechanical machine with literally hundreds of trillions of moving parts that dwarfs any machine humanity has ever built in terms of complexity. And that is just a single one of the 86 billion neurons in an average human brain.

              LLMs and organic brains are completely different in design, complexity, and function, and to treat them as closely related, much less synonymous, betrays a complete lack of understanding of how one or both of them fundamentally functions.

              We do not teach a kindergartner to write by having them read for thousands of years until they recognize the exact mathematical odds that string of letters B comes after string A and is followed by string C x percent of the time. Indeed, humans don’t naturally compose sentences one word at a time starting from the beginning; instead we start with the key concepts we wish to express and then fill in the phrasing and grammar.

              We also would not expect that going from hundreds of years’ worth of reading to thousands would improve things, and the fact that this is the primary way we’ve seen progress in LLMs in the last half decade is yet another example of why animal learning and a word cloud are very different things.

              For us, a word actually correlates to a concept of what that word represents. We might make mistakes and misunderstand which concept a given word maps to in a given language, but we do generally expect it to correlate to something. To us a chair is an object made to sit on, and not just the string of letters that comes after the word ‘the’ in 0.0021798 percent of cases, weighted against the 0.0092814 percent of cases related to the collection of strings being used as the ‘context’.

              Do I believe there is something about human thought that is intrinsically impossible for a mathematical program to replicate? Probably not. But this is not that, and it is nowhere close to that on a fundamental level. It’s comparing apples to airplanes and saying that soon this apple will inevitably take anyone it touches to Paris because they’re both objects you can touch.

              • Zaktor@sopuli.xyz · 2 points · edited · 5 months ago

                None of these appeals to relative complexity, low-level structure, or training corpora bears on whether a human or a NN “knows” the meaning of a word in some special way. A lot of your description of what “know” means could be mistaken for a description of how Word2Vec encodes words. This just indicates ignorance of how ML language processing works. It’s not remotely on the same level as a human brain, but your view of how things work and what its failings are is just wrong.

          • localhost@beehaw.org · 2 points · 5 months ago

            technology fundamentally operates by probabilistically stringing together the next most likely word to appear in the sentence, based on the frequency with which said words appeared in the training data

            What you’re describing is a Markov chain, not an LLM.

            So long as a model has no regard for the actual, you know, meaning of the word

            It does; that’s the entire point of word embeddings.
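The word-embedding idea invoked above can be sketched with hand-picked vectors. Real embeddings have hundreds of learned dimensions; these 3-D values are invented purely to show that related words end up geometrically close:

```python
import math

# Invented 3-D "embeddings"; real models learn these vectors from data.
EMB = {
    "chair": (0.9, 0.8, 0.1),
    "stool": (0.85, 0.75, 0.15),
    "cat":   (0.1, 0.2, 0.95),
}

def cosine(a, b):
    """Cosine similarity: 1.0 for identical directions, near 0 for unrelated."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

print(cosine(EMB["chair"], EMB["stool"]))  # close to 1.0
print(cosine(EMB["chair"], EMB["cat"]))    # much lower
```

Whether this geometric closeness counts as “regard for meaning” is exactly what the rest of the thread disputes.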

            • Sonori@beehaw.org · 1 point · 5 months ago

              Generally the term “Markov chain” is used to describe a model with a few dozen weights, while the “large” in “large language model” refers to having millions or billions of weights, but the fundamental principle of operation is exactly the same; they just differ in scale.

              Word embeddings are when you associate a mathematical vector with a word as a way of grouping similar words together. I don’t think anyone would argue that the general public can even solve a matrix equation, much less that they can only comprehend a stool by going down a row in a matrix to get the mathematical similarity between a stool, a chair, a bench, a floor, and a cat.

              Subtracting vectors from each other can give you a lot of things, but not the actual meaning of the concept represented by a word.
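For contrast with the embedding picture, a bigram Markov chain of the kind invoked above really is nothing but frequency tables over a corpus (the corpus here is invented for illustration):

```python
from collections import Counter, defaultdict

# Count which word follows which in a toy corpus; these frequency
# tables ARE the entire "model" in a bigram Markov chain.
corpus = "the cat sat on the mat and the cat slept".split()

counts = defaultdict(Counter)
for a, b in zip(corpus, corpus[1:]):
    counts[a][b] += 1

# Most likely word after "the", by raw frequency.
print(counts["the"].most_common(1))  # [('cat', 2)]
```

An LLM, by contrast, conditions on the whole context through learned weights rather than a literal lookup table; how deep that difference runs is the point under debate here.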

              • localhost@beehaw.org · 2 points · 5 months ago

                I don’t think anyone would argue that the general public can even solve a matrix equation, much less that they can only comprehend a stool by going down a row in a matrix to get the mathematical similarity between a stool, a chair, a bench, a floor, and a cat.

                LLMs rely on billions of precise calculations and yet perform poorly when tasked with calculating numbers. Just because we don’t consciously calculate anything to get at the meaning of a word doesn’t mean that no calculations are done as part of our thinking process.

                What’s your definition of “the actual meaning of the concept represented by a word”? How would you differentiate a system that truly understands the meaning of a word vs a system that merely mimics this understanding?

                • Sonori@beehaw.org · 2 points · 5 months ago

                  No part of a human or animal brain operates by subtracting tables of cleanly defined numbers from each other, so I think it’s pretty safe to say that matrix calculations on a handful of numbers are not part of, much less our sole means of, understanding concepts or objects.

                  I don’t know exactly how one could tell true understanding from mimicry; far smarter and better-researched people than me have debated that for decades. I’m just pretty sure that what we think kindness is boils down to something a bit more complex than a high-school math problem describing a word cloud.

                  • localhost@beehaw.org · 1 point · 5 months ago

                    So you’re basically saying that, in your opinion, tensor operations are too simple of a building block for understanding to ever appear out of them as an emergent behavior? Do you feel that way about every mathematical and logical operation that a high school student can perform? That they can’t ever in whatever combination create a system complex enough for understanding to emerge?