ComradeSharkfucker, because I love you, I went and found these AI images from the brief period when Bing hardcoded “Ethnically Ambiguous” into its prompt field

    • Kefla [she/her, they/them]@hexbear.net
      10 days ago

      Generative AI stuff doesn’t have any concept of what a word is. It’s just trying to jam stuff together that looks vaguely like what its weird black box algorithm is telling it the result of the prompt should look like. And it’s not consistent at all lol, these pictures have “ethnically anbigrious”, “ethinically ambigaus”, “ethnically ambigauus”, “etthnicicllly anbiguauu”, and “ethnniclly ambigaus”

    • Owl [he/him]@hexbear.net
      10 days ago

      They trained the early models on whatever images they found on the internet. People noticed that “doctor” made a white dude and “basketball player” made a black dude, and made a stink about the racism in this*. Microsoft went into cover-my-ass mode and put in a simple AI model that checks whether the prompt sounds like it’s about people and, if it does, inserts “ethnically ambiguous” into the prompt before passing it to the image model**. The image models were shit, so they’d leak random parts of their prompt into any on-screen text all the time.

      * Note that this wasn’t some intentional racist thing; it’s just a model thoughtlessly replicating the racism in its training data, after being made by thoughtless people who didn’t foresee this problem (i.e., exactly how racism usually works)

      ** IIRC, Google’s AI did the same thing, but instead of always adding “ethnically ambiguous”, it had some percent chance of adding a random racial descriptor. So you’d ask for George Washington and there’d be a 10% chance it made him black
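The prompt-rewriting step Owl describes can be sketched roughly like this. Everything specific here is made up for illustration: the real people-detector was an ML model, not a keyword list, and the descriptor list and the 10% probability are just stand-ins for whatever the actual systems used.

```python
import random

# Toy stand-in for the "does this prompt sound like it's about people?"
# classifier; the real thing was a model, not a keyword list.
PEOPLE_WORDS = {"doctor", "player", "man", "woman", "person", "portrait"}

def mentions_people(prompt: str) -> bool:
    return any(word in prompt.lower().split() for word in PEOPLE_WORDS)

def rewrite_prompt_bing(prompt: str) -> str:
    """Always append 'ethnically ambiguous' when the prompt mentions people."""
    if mentions_people(prompt):
        return prompt + ", ethnically ambiguous"
    return prompt

def rewrite_prompt_google(prompt: str, p: float = 0.1) -> str:
    """Sometimes append one random descriptor instead (list is illustrative)."""
    descriptors = ["black", "white", "Asian", "Hispanic"]
    if mentions_people(prompt) and random.random() < p:
        return prompt + ", " + random.choice(descriptors)
    return prompt
```

Either way, the rewritten string is what the image model actually sees, which is why the injected words leaked into on-screen text: the image model can’t tell the injected part of the prompt from the user’s part.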