• FatCrab · 1 day ago

      Regardless of training data, it isn’t matching your prompt to anything it’s found and squiggling shit up, or whatever was implied. Diffusion models are trained to iteratively convert noise into an image, conditioned on the text prompt and the current iteration’s features. This is why they take multiple passes, and why the image visibly transforms over those steps, starting as an undifferentiated soup of shape and color that gradually resolves. My point was that they aren’t doing some search across the web, either externally or via internal storage of scraped training data, to “match” your prompt to something. They iterate from static noise through multiple passes to a “finished” image, where each pass’s transformation of the image components is a complex, dynamic probabilistic function built from, but not directly mapping to in any way we’d usually mean it, the training data.
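      The iterative refinement described above can be sketched as a toy loop. This is not any real model’s code: the `denoise_step` blend toward a fixed target is a hypothetical stand-in for the learned, prompt-conditioned network, purely to show that generation is repeated transformation of noise, not retrieval:

```python
import numpy as np

def denoise_step(x, step, total_steps):
    # Stand-in for the learned transformation a real diffusion model
    # applies each pass. Here it just blends the current image toward a
    # fixed "target" so the loop structure is visible.
    target = np.full_like(x, 0.5)       # hypothetical finished image
    blend = 1.0 / (total_steps - step)  # later passes commit harder
    return (1.0 - blend) * x + blend * target

def generate(total_steps=50, size=8, seed=0):
    rng = np.random.default_rng(seed)
    x = rng.standard_normal((size, size))  # start from pure static noise
    for step in range(total_steps):        # iterative refinement, not lookup
        x = denoise_step(x, step, total_steps)
    return x

img = generate()
```

      Nothing in the loop consults a database of images; each pass only transforms the array it was handed, which is the point being made above.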

      • SoftestSapphic@lemmy.world · edited · 1 day ago

        Oh ok so training data doesn’t matter?

        It can generate any requested image without ever being trained?

        Or does data not matter when it makes your argument invalid?

        Tell me how your moving the bar proves that AI is more intelligent than the sum of its parts?

        • FatCrab · 19 hours ago

          Ah, you seem to be engaging in bad faith. Oh well, hopefully those reading are at least now closer to understanding what these models are doing and can engage in more informed and coherent discussion on the subject. Good luck or whatever to you!