• KairuByte@lemmy.dbzer0.com · 13 hours ago

    I feel the need to point out that enough shots from enough angles, in anything other than multiple layers or sweats, is essentially going to produce an “x-ray” effect. Yeah, it won’t know the exact hue of your nipples, or the precise angle of your dangle, but it’s going to be close enough to end a career.

      • KairuByte@lemmy.dbzer0.com · 2 hours ago

        I’m not sure I get what your comment is referencing. If you feed enough data into an “ai” meant to generate lifelike images, you’re going to get a relatively close approximation of the person. Will it be exact? Absolutely not. But again, it will be enough to put your job in danger in many situations.

        • The Octonaut@mander.xyz · 1 hour ago

          The people - very, very many of them literal school children - doing this are not training image AI models, or even LoRAs or whatever, on their home servers by feeding them images of a person from multiple angles with different parts exposed. They’re just taking a single image and uploading it to some dodgy Android store app or, y’know, Grok. Which then colours in the part it identifies as clothes with a perfectly average image from the Internet (read: heavily modified in the first place and skewed towards unrealistic perfection). The process is called in-painting.

          The same models use the same technique if you just want to change the clothes, and people find that a brief amusement. If you want to replace your bro’s soccer jersey with a jersey of a team he hates to wind him up, you are not carefully training the AI to understand what he’d look like in that jersey. You just ask the in-painter to do it, and assuming it has already been fed what the statistical average combination of pixels for “nude girl” or “Rangers jersey” looks like, it applies a random seed and starts drawing, immediately and quickly (see the sketch below).
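          A minimal sketch of that in-painting flow, sticking to the benign jersey-swap example, using the open-source diffusers library (the checkpoint name, filenames, and prompt here are illustrative assumptions, not anything taken from the apps described above):

          ```python
          # Hedged in-painting sketch with Hugging Face diffusers.
          # Assumptions: a public SD2 inpainting checkpoint, hypothetical
          # input files, and a CUDA GPU. The mask is white where the model
          # should repaint (the jersey) and black everywhere else.
          import torch
          from PIL import Image
          from diffusers import StableDiffusionInpaintPipeline

          pipe = StableDiffusionInpaintPipeline.from_pretrained(
              "stabilityai/stable-diffusion-2-inpainting",
              torch_dtype=torch.float16,
          ).to("cuda")

          image = Image.open("bro_at_the_match.png").convert("RGB").resize((512, 512))
          mask = Image.open("jersey_mask.png").convert("RGB").resize((512, 512))

          # One prompt, one random seed, a few seconds on a consumer GPU:
          # the model fills the masked region with its statistical average
          # of what the prompt looks like. No per-person training anywhere.
          result = pipe(
              prompt="man wearing a Rangers football jersey",
              image=image,
              mask_image=mask,
              generator=torch.Generator("cuda").manual_seed(0),
          ).images[0]
          result.save("jersey_swapped.png")
          ```

          Note what the code confirms about the argument: there is no per-person training step, just a mask, a prompt, and a seed.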

          That’s the problem. It has always been possible to make a convincing fake nude of someone. But there was a barrier to entry - Photoshop skills (or money to pay someone with Photoshop skills), time, footprint (you’re not going to be doing this on dad’s PC).

          Today that barrier to entry is massively reduced, which has put this means of abuse in the hands of every preteen with a smartphone. A fake can be made in a matter of seconds, and shared with a whole peer group in a few seconds more.

          It’s the exact same logic that explains why I occasionally find a use for image generation tools. Yes, I can probably draw an orc with a caltrop stuck up his nose, but I can’t do that mid-session of D&D, and if it’s for a 10-second bit, why bother? Being able to create and share it within seconds is a large part of the selling point of these tools. Did I just steal from an artist? Maybe. Was I going to hire an artist to do it for me? No. Was I going to Google the words “orc” and “caltrop” and overlay the results for a cheap laugh? Maybe. Is that less stealing? Maybe. Am I getting way off the point that these people aren’t training image generation AIs with fragments of photos in order to make a convincing fake? Yes.
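          And for what it’s worth, the whole “10 second bit” workflow really is about this much code - again a hedged sketch, with the checkpoint and prompt as illustrative assumptions:

          ```python
          # Text-to-image sketch with diffusers; checkpoint and prompt
          # are assumptions for illustration, not a specific tool's API.
          import torch
          from diffusers import StableDiffusionPipeline

          pipe = StableDiffusionPipeline.from_pretrained(
              "stabilityai/stable-diffusion-2-1",
              torch_dtype=torch.float16,
          ).to("cuda")

          # One prompt in, one orc out, in seconds on a consumer GPU.
          image = pipe("an orc with a caltrop stuck up his nose, fantasy art").images[0]
          image.save("orc_caltrop.png")
          ```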