• @atx_aquarian@lemmy.world

    From my understanding of the article, it’s more about associating misleading terms with images to confuse the associations the model learns (a rough sketch of that idea follows below this comment). I didn’t see anything in the article about some sneaky way of tainting the images themselves, unless it means a server serves bogus images when a client fails the “are you a robot” test.

    Curious to learn if anyone knows more about what it’s actually doing.
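
    A minimal sketch of the caption-poisoning idea described above, assuming the tool works by pairing images with misleading text before a scraper picks them up; the decoy terms, `POISON_MAP`, and `poison_caption` are made-up names for illustration, not anything taken from the article:

    ```python
    # Hypothetical illustration only: swap the true subject for a decoy term
    # in the caption/alt text a scraper would pair with the image, so the
    # model learns the wrong image-text association.

    POISON_MAP = {
        "dog": "toaster",
        "car": "banana",
    }

    def poison_caption(true_label: str, caption: str) -> str:
        """Replace mentions of the true subject with a misleading decoy term."""
        decoy = POISON_MAP.get(true_label, true_label)
        return caption.replace(true_label, decoy)

    # A scraper that trusts the surrounding text now pairs a dog photo
    # with a caption about a toaster.
    print(poison_caption("dog", "a photo of a dog in the park"))
    # -> "a photo of a toaster in the park"
    ```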

    • @hypnicjerk@lemmy.world

      yes, to me it read like it was manipulating metadata somehow, not the images themselves, but the article directly contradicts that. and metadata alone would be useless as soon as someone saves it as a flat image file, or screenshots it and crops it out. i’m assuming that for this tool to work it needs to change the image directly through some sort of watermark-like system.
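
      A minimal sketch of the distinction being drawn here, assuming Pillow and NumPy are available; the random +/-2 noise is just a stand-in for whatever targeted perturbation a real tool would compute, and the filenames are placeholders:

      ```python
      import numpy as np
      from PIL import Image

      img = Image.open("original.jpg").convert("RGB")

      # Metadata route: EXIF/IPTC tags are dropped by a plain re-encode here
      # (and a screenshot doesn't carry them over), so they can't reliably
      # poison a scraper.
      img.save("flat_copy.jpg")  # saved without the original metadata

      # Pixel route: a small perturbation baked into the pixel values
      # survives re-saving, screenshots and cropping, which is why a
      # watermark-like scheme has to modify the image data itself.
      pixels = np.asarray(img, dtype=np.int16)
      rng = np.random.default_rng(0)
      noise = rng.integers(-2, 3, size=pixels.shape)  # imperceptible +/-2 shift
      poisoned = np.clip(pixels + noise, 0, 255).astype(np.uint8)
      Image.fromarray(poisoned).save("poisoned.jpg")
      ```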