• jacksilver@lemmy.world
    11 months ago

    I am familiar with how LLMs work and are trained. I’ve been using transformers for years.

The core question I'd ask is: if the copyrighted material isn't essential to the model, why not just train the models without that data? And if it is core to the model, can you really say the models aren't derivative of that content?

I'm not saying the models don't do something more, just that the "more" is built upon copyrighted material. In any other commercial situation, you'd have to license or get approval for the underlying content if you were packaging it up. When sampling music, for example, the output may differ greatly from the original song, but because you are building off someone else's work, you must compensate them.

It's why content laundering is a great term. The models intermix so much data that it's hard to tell whether the output originated from copyrighted material, just as money laundering makes it difficult to determine whether money came from illicit sources.