A new tool lets artists add invisible changes to the pixels in their art before they upload it online so that if it’s scraped into an AI training set, it can cause the resulting model to break in chaotic and unpredictable ways.

The tool, called Nightshade, is intended as a way to fight back against AI companies that use artists’ work to train their models without the creator’s permission.
[…]
Zhao’s team also developed Glaze, a tool that allows artists to “mask” their own personal style to prevent it from being scraped by AI companies. It works in a similar way to Nightshade: by changing the pixels of images in subtle ways that are invisible to the human eye but manipulate machine-learning models to interpret the image as something different from what it actually shows.
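
The mechanism both tools describe is a targeted adversarial perturbation: a change to the image small enough to stay below human perceptual thresholds but large enough to push a model’s interpretation toward a different label. Neither excerpt publishes the tools’ internals, so what follows is only a minimal FGSM-style sketch of that general idea in PyTorch; the model, file names, perturbation budget, and target class are illustrative assumptions, not Nightshade’s or Glaze’s actual method.

```python
# Minimal sketch of an imperceptible targeted perturbation (FGSM-style).
# NOT the actual Nightshade/Glaze algorithm; the model, file names, eps,
# and target class below are illustrative assumptions.
import torch
import torch.nn.functional as F
import torchvision.models as models
import torchvision.transforms.functional as TF
from PIL import Image

model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT).eval()

# Load the artwork as a [1, 3, H, W] float tensor with values in [0, 1].
image = TF.to_tensor(Image.open("artwork.png").convert("RGB")).unsqueeze(0)
image.requires_grad_(True)

# Hypothetical target: nudge the model toward ImageNet class 207.
target = torch.tensor([207])

# One gradient step *toward* the target label (a targeted attack), so we
# descend the loss with respect to the input pixels.
loss = F.cross_entropy(model(image), target)
loss.backward()

eps = 2 / 255  # per-channel pixel budget, small enough to be invisible
poisoned = (image - eps * image.grad.sign()).clamp(0, 1).detach()

# 'poisoned' looks unchanged to a human viewer, but the model's reading
# of it has shifted toward `target`.
TF.to_pil_image(poisoned.squeeze(0)).save("artwork_poisoned.png")
```

The real tools iterate a far more careful optimization than this single step, but the core idea the articles describe, tiny pixel-space changes with an outsized effect on the model, is what the eps bound here is meant to illustrate.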

  • @Zeth0s@lemmy.world · 1 year ago

    Don’t worry, it is normal.

    People don’t understand AI. Nearly every article I have read about it in the mainstream media has been wrong in some way. It often feels like reading a political journalist discussing quantum mechanics.

    My rule of thumb is: always assume that articles on AI are wrong. I know it isn’t nice, but that’s the sad reality. Society is not ready for AI because too few people understand it. Even AI creators don’t fully understand AI (this is why you often hear about “emergent abilities” of models; it means “we really didn’t expect this and we don’t understand how it happened”).

    • ElleOP · 1 year ago

      Nearly every article I have read about it in the mainstream media has been wrong in some way. It often feels like reading a political journalist discussing quantum mechanics.

      Yeah, I view science/tech articles from sources without a tech background this way too. I expected more from this one given that it’s literally MIT Tech Review, much as I’d expect more from other tech/science-focused sources, though I’m aware those require scrutiny as well (e.g. Popular Science, Nature, etc. have spotty records from what I gather).

      Also, regarding your last point, I’m increasingly convinced AI creators (or at least their business execs/spokespeople) are trying to have their cake and eat it too in terms of how much they claim not to know/understand how their creations work while also promoting how effective they are. On one hand, they genuinely don’t understand some of the results; on the other, they know enough about how these systems work to have an idea of how/why those results came about. It’s to their advantage to pretend otherwise insofar as it may mitigate their liability/responsibility should the results lead to collateral damage or legal issues.

      • @Zeth0s@lemmy.world · 1 year ago

        Kind of true. Check the law proposals on encryption around the world…

        Technology is difficult and most people don’t understand it; the result is awful laws. AI is even more difficult, because even its creators don’t fully understand it (see emergent behaviors, i.e. capabilities that no one expected).

        Computers, luckily, are much easier: a random teenager can know how to build one and what it can do. But you are right, many are not yet ready even for computers.

        • @joel_feila@lemmy.world · 1 year ago

          I read an article the other day about managers complaining that zoomers don’t even know how to type on a keyboard.

      • @GenderNeutralBro@lemmy.sdf.org · 1 year ago

        That was certainly true in the 90s. Mainstream journalism on computers back then was absolutely awful. I’d say that only changed in the mid-2000s or 2010s. Even today, tech literacy in journalism is pretty low outside of specialist outlets like, say, Ars.

        Today I see the same thing with new tech like AI.