OpenAI now tries to hide that ChatGPT was trained on copyrighted books, including J.K. Rowling’s Harry Potter series: A new research paper laid out ways in which AI developers should try to avoid showing that LLMs have been trained on copyrighted material.

  • TwilightVulpine@lemmy.world · 1 year ago

    You joke, but AI advocates seem to forget that people have fundamentally different rights than tools and objects. A photocopier doesn’t get the right to “memorize” and “learn” from a text the way a human being does. As much as people may argue that AIs work differently, AIs are still not people.

    And if they ever become people, the situation will be much more complicated than whether they can imitate some writer. But we aren’t there yet; even their advocates just use them as tools.

      • TwilightVulpine@lemmy.world · 1 year ago

        But this falls exactly under what I just said. To say that using Machine Learning to imitate an artist without permission is fine because humans are allowed to learn from each other is making the mistake of assigning personhood to the system, as if it ought to have the same rights that human beings do. There is a distinction between the rights of humans and the rights of tools, so saying that an AI can’t be trained on someone’s works to replicate their style doesn’t need to apply to people.

        Even if you support that reasoning, that still doesn’t help the writers and artists whose jobs are threatened by AI models based on their work. That it isn’t an exact reproduction doesn’t change that it relied on using their works to begin with, and it doesn’t change that it serves as a way to undercut them, providing a cheaper replacement for their work. Copyright law as it stands wasn’t envisioned for a world where Machine Learning exists. It doesn’t really solve the problem to say that technically it isn’t supposed to cover ideas and styles. The creators will be struggling just the same.

        Either the law will need to emphasize the value of human authorship first, or we will need to go through drastic socioeconomic changes to ensure that these creators can keep creating despite losing their market to AI. Otherwise, to simply say that AI gets to do this and change nothing else will cause enormous damage to all sorts of creative careers and to wider culture. Even AI will become more limited with fewer fresh new creators to learn elements from.

          • TwilightVulpine@lemmy.world · 1 year ago

            There is a difference between “analyzing” and creating derivative works. The authorship of AI-created works also doesn’t belong to the user; it takes more than a prompt for that, and that seems to be the conclusion courts are leaning towards.

            Still, even if that turns out to be technically correct, it doesn’t help the creators who are being undercut and might be driven out of their careers by AI.

              • TwilightVulpine@lemmy.world · 1 year ago

                They do specify that the human’s involvement needs to be more extensive than prompting for a certain image or text. The output itself is not copyrightable. If we are speaking about the process of “analysis” that the ML model does, then the user does not get the rights over it.

                This discussion is becoming increasingly specific and getting away from my point. My sole concern in all this is: what happens to the artists who’ll have to compete with AI?

                • Even_Adder@lemmy.dbzer0.com · 1 year ago

                  They do specify that the human’s involvement needs to be more extensive than prompting for a certain image or text. The output itself is not copyrightable. If we are speaking about the process of “analysis” that the ML model does, then the user does not get the rights over it.

                  It says:

                  In other cases, however, a work containing AI-generated material will also contain sufficient human authorship to support a copyright claim. For example, a human may select or arrange AI-generated material in a sufficiently creative way that “the resulting work as a whole constitutes an original work of authorship.” Or an artist may modify material originally generated by AI technology to such a degree that the modifications meet the standard for copyright protection.

                  And you do get rights to your own original analysis of data. That isn’t even in question.

                  This discussion is becoming increasingly specific and getting away from my point. My sole concern in all this is: what happens to the artists who’ll have to compete with AI?

                  I guess all I have to say here is that generative models are a free and open source tool anyone can use. It took us 100,000 years to get from cave drawings to Leonardo da Vinci. This is just another step, like the camera obscura.

                  • TwilightVulpine@lemmy.world · 1 year ago

                    When you call the output itself “analysis”, that’s not what they say.

                    In February 2023, the Office concluded that a graphic novel comprised of human-authored text combined with images generated by the AI service Midjourney constituted a copyrightable work, but that the individual images themselves could not be protected by copyright.

                    This is in your own link. Simply prompting Midjourney doesn’t get the user copyright.

                    I guess all I have to say here is that generative models are a free and open source tool anyone can use. It took us 100,000 years to get from cave drawings to Leonardo da Vinci. This is just another step, like the camera obscura.

                    That is not something many of the people whose work is being used to enable it even want to use. Not to mention, if AI art were the “next evolution” of media, which it isn’t since it outputs the same medium, there wouldn’t be a need for as many AI prompters as there are artists right now. This glosses over the issue entirely.

    • kmkz_ninja@lemmy.world · 1 year ago

      How do you see that as a difference? Tools are extensions of ourselves.

      Restricting the use of LLMs is only restricting people.

      • TwilightVulpine@lemmy.world · 1 year ago

        When we get to the realm of automation and AI, calling tools just an “extension of ourselves” doesn’t make sense.

        Especially not when the people being “extended” by Machine Learning models did not want to be “extended” to begin with.