• 𝕯𝖎𝖕𝖘𝖍𝖎𝖙
    33
    9 months ago

    AI / LLM only tries to predict the next word or token. It cannot understand or reason; it can only sound like someone who knows what they are talking about. You said elephants and it gave you elephants. The “no” modifier makes sense to us but not to the AI. It could, if we programmed it with if/then statements, but that’s not an LLM, that’s just coding.

    AI is really, really good at bullshitting.

    • Turun
      32
      edit-2
      9 months ago

      AI / LLM only tries to predict the next word or token

      This is not wrong, but also absolutely irrelevant here. You can be against AI, but please make the argument based on facts, not by parroting some distantly related talking points.

      Current image generation is powered by diffusion models. Their inner workings are completely different from those of large language models. The part failing here in particular is the text encoder (CLIP). If you learn how it works and think about it, you’ll be able to deduce how the image generator is forced to draw this image.

      Edit: Because it’s an obvious limitation, negative prompts have existed pretty much since diffusion models came out.
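
      You can see the limitation directly in what the text encoder receives. A minimal sketch with Hugging Face transformers (the checkpoint name is just an illustrative CLIP model, not necessarily the one a given image generator ships with):

      ```python
      from transformers import CLIPTokenizer

      # The kind of tokenizer that feeds a diffusion model's CLIP text encoder.
      tok = CLIPTokenizer.from_pretrained("openai/clip-vit-base-patch32")

      print(tok.tokenize("a large empty room with no elephants"))
      # "elephants" is still right there as a token in the encoded prompt;
      # nothing in the text encoder treats "no" as a removal operator,
      # so the concept gets handed to the image generator anyway.
      ```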

        • Turun
          5
          9 months ago

          No, it does not. At least not in the same way that generative pre-trained transformers do. It is handling natural language though.

          The research is all open source if you want details. For Stable Diffusion you’ll find plenty of pretty graphs that show how the different parts interact.

          • 𝕯𝖎𝖕𝖘𝖍𝖎𝖙
            4
            9 months ago

            There would still need to be a corpus of text and some supervised training of a model on that text in order to “recognize” with some level of confidence what the text represents, right?

            I understand the image generation works differently. I sort of gather it starts with noise and a random seed, and then, via learnt networks, the model takes pathways (“automagic” goes here) based on what has been recognized with NLP on the text. Something in the end like “elephant (subject) 100% confidence, big room (background) 75% confidence, windows (background) 75% confidence”. I assume it then “merges” the things it thinks make up those tokens along with the noise and (more “automagic” goes here) puts them where they need to go.
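
            In code, my rough mental picture is something like this sketch (every name here, unet, scheduler, text_embedding, is a made-up stand-in, not any real library's API):

            ```python
            import torch

            def sample(text_embedding, unet, scheduler, seed=0):
                # Start from pure noise, fixed by the random seed.
                g = torch.Generator().manual_seed(seed)
                latents = torch.randn(1, 4, 64, 64, generator=g)
                # Denoise step by step; the text embedding steers each step
                # ("automagic" goes here).
                for t in scheduler.timesteps:
                    noise_prediction = unet(latents, t, text_embedding)
                    latents = scheduler.step(noise_prediction, t, latents)
                return latents  # a separate decoder turns the latents into pixels
            ```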

            • Turun
              2
              9 months ago

              There would still need to be a corpus of text and some supervised training of a model on that text in order to “recognize” with some level of confidence what the text represents, right?

              Correct. The CLIP encoder is trained on images and their corresponding descriptions. Therefore it learns the names for things in images.

              And now it is obvious why this prompt fails: there are no images of empty rooms tagged as “no elephants”. This can be fixed by adding a negative prompt, which subtracts the concept of “elephants” from the image in one of the automagical steps.
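
              A minimal sketch of where that subtraction happens (it is the classifier-free guidance step; names and the default scale are illustrative):

              ```python
              import torch

              def guided_noise(noise_positive: torch.Tensor,
                               noise_negative: torch.Tensor,
                               guidance_scale: float = 7.5) -> torch.Tensor:
                  # Each denoising step predicts noise twice: once conditioned on
                  # the prompt, once on the negative prompt. The final prediction
                  # moves away from the negative-prompt direction, which
                  # "subtracts" that concept from the image.
                  return noise_negative + guidance_scale * (noise_positive - noise_negative)
              ```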

      • @Z4rK@lemmy.world
        3
        9 months ago

        All these examples are not just using Stable Diffusion, though. They are using an LLM to create a generative image prompt for DALL-E / SD, which then gets executed. In none of these examples are we shown the actual prompt.

        If you instead instruct the LLM to first show the text prompt, review it, make sure the prompt does not include any elephants, revise it if necessary, and then generate the image, you’ll get much better results. Now, ChatGPT is horrible at following instructions like these if you don’t set up the prompt very specifically, but it will still follow more of the instructions internally.

        Anyway, the issue in all the examples above does not stem from Stable Diffusion, but from the LLM generating an ineffective prompt for the Stable Diffusion model by attempting to include some simple negative word for elephants, which does not work well.
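
        A rough sketch of that review step in code (ask_llm here is a stand-in for whatever chat model you use, not a real API):

        ```python
        import re

        def make_image_prompt(user_request: str, ask_llm, banned=("elephant",)) -> str:
            # Ask the LLM for a prompt that only describes what should be visible.
            prompt = ask_llm(
                f"Write a short image-generation prompt for: {user_request}. "
                "Describe only what should be in the picture; never mention "
                "anything that should be absent."
            )
            # Review step: drop any sentence that still names a banned concept,
            # instead of passing a useless "no elephants" through to the image model.
            sentences = re.split(r"(?<=[.!?;])\s+", prompt)
            kept = [s for s in sentences if not any(b in s.lower() for b in banned)]
            return " ".join(kept)
        ```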

        • Turun
          2
          9 months ago

          If you prompt Stable Diffusion for “a room without elephants in it” you’ll get elephants. You need to add elephants to the negative prompt to get a room without them. I don’t think LLMs have been given the ability to add negative prompts.
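
          For comparison, this is roughly the direct call with the diffusers library, where the negative prompt is an explicit argument (assuming the Stable Diffusion 1.5 weights; model name and settings are only an example):

          ```python
          import torch
          from diffusers import StableDiffusionPipeline

          pipe = StableDiffusionPipeline.from_pretrained(
              "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
          ).to("cuda")

          # Asking for absence in the positive prompt tends to produce elephants;
          # the unwanted concept goes into the negative prompt instead.
          image = pipe(
              prompt="a large empty room with windows",
              negative_prompt="elephant",
          ).images[0]
          image.save("room.png")
          ```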

    • DarkThoughts
      1
      9 months ago

      That’s what negative prompts are for in those image-generating AIs (I have never used DALL-E, so no idea if it supports negative prompts). I guess you could have an LLM interpret a sentence like OP’s to extract possible positive & negative prompts based on sentence structure, but that would always be less accurate than just specifying them separately yourself. Because once you spend some time with those chatbot LLMs, you notice very quickly just how fucking stupid they actually are. And unfortunately things like larger context / token sizes won’t change that, and would scale incredibly badly in terms of hardware anyway. When you regenerate replies a few times you kinda understand how much guesswork they make, and how often they completely go wrong in interpreting the previous tokens (including your last reply). So yeah, they’re definitely really good at bullshitting. Can be fun, but it is absolutely not what I’d call “AI”, because there’s simply no intelligence behind it, and it’s certainly pretty overhyped (not to say that there aren’t actually useful fields for those algorithms).
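
      As a toy illustration of that extraction idea, a naive regex sketch that moves negated phrases into a separate negative prompt (nowhere near what an LLM or a real parser would do):

      ```python
      import re

      NEGATION = re.compile(r"\s*(?:with\s+)?\b(?:no|without)\s+([a-z ]+?)(?=[,.]|$)")

      def split_prompts(request: str) -> tuple[str, str]:
          # Pull negated phrases ("no X", "without X") out of the request
          # and hand them to the image model as a negative prompt instead.
          text = request.lower()
          negatives = NEGATION.findall(text)
          positive = NEGATION.sub("", text).strip(" ,.")
          return positive, ", ".join(negatives)

      print(split_prompts("A large empty room with no elephants."))
      # ('a large empty room', 'elephants')
      ```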