I heard about C2PA and I don’t believe for a second that it’s not going to be used for surveillance and all that other fun stuff. What’s worse is that they’re apparently trying to make it legally required. It also really annoys me when I see headlines along the lines of “Is AI the end of creativity?!1!” or “AI will help artists, not hurt them!1!!” or something to that effect. So it got me thinking, and I tried to come up with some answers that actually benefit artists and their audience rather than just you know who.

Unfortunately, my train of thought keeps barreling out of control to things like “AI should do the boring stuff, not the fun stuff” and “if people didn’t risk starvation in the first place…” So I thought I’d find out what other people think (search engines have become borderline useless, haven’t they).

So what do you think would be the best way to satisfy everyone?

  • Spiracle@kbin.social · 1 year ago

    It’s a very difficult topic, and I don’t see any satisfying real-world solutions. Two big issues:

    1. Obvious solutions are impossible. Generative AI is impossible to “undo”. Much of the basic tech, and many simpler models, are already spread far and wide. Research, likewise, is spread out both globally and across varying scales, from large megacorps down to small groups of researchers. Even severe attempts at restricting it would, at most, punish the small guys.

    I don’t want a world where corporations like Adobe or Microsoft hold sole control over legal, “ethically trained” generative AI. However, that is where insistence on copyright for training sets, or insistence on censored “safe” LLMs, would lead us.

    2. Many of the ethical and practical concerns sit on sliding scales, and we are already at the edges of those scales. When does machine assistance become unethical? When does imitating the specific style of an artist become wrong? Where does inspiration end and intellectual property infringement begin? At what point does reducing racial and other biases in LLMs tip over into turning them into biased propaganda machines?

    There are dozens of questions like these, and I have found no satisfying answers to any of them. Yet answers to at least some of them are required in order to produce reasonable solutions.