A new tool lets artists add invisible changes to the pixels in their art before they upload it online so that if it’s scraped into an AI training set, it can cause the resulting model to break in chaotic and unpredictable ways.

The tool, called Nightshade, is intended as a way to fight back against AI companies that use artists’ work to train their models without the creator’s permission.
[…]
Zhao’s team also developed Glaze, a tool that allows artists to “mask” their own personal style to prevent it from being scraped by AI companies. It works in a similar way to Nightshade: by changing the pixels of images in subtle ways that are invisible to the human eye but manipulate machine-learning models to interpret the image as something different from what it actually shows.
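
Both tools belong to the broader family of adversarial perturbations: tiny, bounded changes to pixel values that barely register to a human eye but push a model's interpretation of the image somewhere else. Below is a minimal, generic sketch of that idea, a targeted, epsilon-bounded projected-gradient perturbation in PyTorch. It is not Nightshade's or Glaze's published method; the ResNet-18 surrogate model, the target class, and the epsilon and step values are all illustrative assumptions.

```python
# Sketch of an epsilon-bounded adversarial perturbation (the general technique
# Nightshade and Glaze build on; NOT either tool's actual algorithm).
# Assumptions: a ResNet-18 surrogate classifier, epsilon = 4/255, 20 steps.
import torch
import torch.nn.functional as F
import torchvision.models as models


def perturb(image: torch.Tensor, target_class: int,
            epsilon: float = 4 / 255, steps: int = 20) -> torch.Tensor:
    """Nudge `image` (shape [1, 3, H, W], values in [0, 1]) toward being
    classified as `target_class`, keeping every pixel within +/- epsilon
    of the original so the change stays near-invisible."""
    # Illustrative surrogate model; real use would also apply the weights'
    # preprocessing transforms (weights.transforms()) before the forward pass.
    model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT).eval()
    target = torch.tensor([target_class])
    adv = image.clone().detach()
    step_size = epsilon / 4

    for _ in range(steps):
        adv.requires_grad_(True)
        loss = F.cross_entropy(model(adv), target)
        grad, = torch.autograd.grad(loss, adv)
        # Step *down* the loss for the target class (targeted attack),
        # then project back into the epsilon-ball around the original.
        adv = adv.detach() - step_size * grad.sign()
        adv = image + (adv - image).clamp(-epsilon, epsilon)
        adv = adv.clamp(0.0, 1.0)
    return adv


# Toy usage: the maximum per-pixel change never exceeds epsilon.
original = torch.rand(1, 3, 224, 224)
poisoned = perturb(original, target_class=281)  # 281 = ImageNet "tabby cat"
print(float((poisoned - original).abs().max()))
```

The key property is the per-pixel clamp: however far the optimization wanders, the result is projected back to within epsilon of the original image, which is why the edit stays imperceptible while still steering the model.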

  • Uriel238 [all pronouns]@lemmy.blahaj.zone · 1 year ago

    I remember, back in the early 2010s, reading an article like this one on openai.com about the dangers of using AI in image search engines to moderate unwanted content. At the time the concern was CSAM salted to prevent its detection (along with other content salted with CSAM to generate false positives).

    My guess is that, since we’re still training AI with pools of data-entry people who tag pictures with what they appear to be, the AI ends up reading more into images than its human trainers do (the proverbial man inside the Mechanical Turk).

    This is going to be an interesting technology war.