Comedian and author Sarah Silverman, along with authors Christopher Golden and Richard Kadrey, is suing OpenAI and Meta in separate US District Court cases over claims of copyright infringement.

  • FaceDeer@kbin.social

    Even if they did train the model on the entire text of the book, that’s still not necessarily a copyright violation. I would think not, since the resulting model doesn’t actually have a copy of the book embedded within it.

      • FaceDeer@kbin.social

        How do we “know” anything where the answers are just being made up as part of humanity’s collective cultural game of Calvinball?

        Courts in various jurisdictions will make various rulings. Judges will interpret them in various ways. Legislators will chime in with new legislation and new treaties. Internet arguments will churn away with a whole range of assumptions about what is true or false that may or may not have anything to do with reality.

        I present my opinion here. I feel it is well informed and I can back it up in various ways when challenged. But nobody “knows” anything because these aren’t laws of physics or math that we’re talking about here.

        Or did you mean whether we know if a copy of the book is embedded in the model? That can be more objectively tested, at least.

      • secrethat@kbin.social

        AFAIK it takes these large bodies of text and, rather than digesting them and keeping them in some sort of database, it holistically (and I’m generalising here) looks at how often certain words are strung together and takes note of that. Let’s call those the weights.

        Then users can prompt something, and the ‘magic’ here is that it is able to pick out words of different weights based on the prompt, whether you’re writing an angry email to your boss, code in Python, or the structure for a book.

        But it is unable to recreate the book from a prompt.

        People who know the topic more intimately, please correct me if I’m wrong.
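
        The word-frequency idea above can be sketched with a toy bigram model in pure Python. This is nothing like a real LLM (real models learn continuous weights over tokens, not raw counts), and the corpus and function names here are made up purely for illustration:

        ```python
        from collections import Counter, defaultdict

        def train_bigrams(text):
            """Count how often each word follows another (our crude 'weights')."""
            words = text.split()
            follows = defaultdict(Counter)
            for prev, nxt in zip(words, words[1:]):
                follows[prev][nxt] += 1
            return follows

        def predict_next(follows, word):
            """Pick the most common follower of `word`, if any."""
            if word not in follows:
                return None
            return follows[word].most_common(1)[0][0]

        corpus = "the cat sat on the mat and the cat ran"
        model = train_bigrams(corpus)
        print(predict_next(model, "the"))  # "cat" follows "the" twice, "mat" once -> cat
        ```

        The point of the sketch: the model stores statistics about which words tend to follow which, not the corpus itself, which is roughly the intuition the comment above is gesturing at.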

    • Doomhammer458@kbin.social

      But the server used to train the model would have a copy of it. If training an AI model is not fair use, then the mere act of loading a book you don’t have a license for onto the server would be copyright infringement, like a textbook: it’s an unauthorized digital copy. It’s all very untested legal ground, and it seems like lots of people want to be the first to test it. Not everyone has a great case, but if the courts interpret things a certain way there are gonna be lots of payouts, so maybe it’s best to get in line early?

      • FaceDeer@kbin.social

        Perhaps, but that’s a separate legal issue from the model itself. You might have committed a breach of copyright in the process of gathering the material that the AI was trained on, but the model itself is not a copy of that material and so is not itself illegal to train or use. And perhaps not even that, since downloading a pirated book is not the illegal part (uploading it is).

        As you say, there’s some untested legal waters here. But it seems likely to me that the best that Silverman will accomplish is some nibbling and quibbling around the edges.

        • Ferk@kbin.social

          If you can give some vague prompts to the model and obtain something close enough to a significant chunk of the work that, had it been written by a human, it would have been considered plagiarism… then I’d say the same laws protecting against plagiarism should operate there.

          It doesn’t matter whether it’s really stored there in some form or not (in fact, it’s probably OK to store copyrighted material on a private server as long as it’s lawfully obtained), but whether the output being distributed to third parties violates the license of the work or not.

          • FaceDeer@kbin.social

            If you can give some vague prompts to the model and obtain something close enough to a significant chunk of the work that, had it been written by a human, it would have been considered plagiarism… then I’d say the same laws protecting against plagiarism should operate there.

            Perhaps, but that’s not even remotely what’s being accused in this case. They’re asking ChatGPT for a summary of the book and it’s generating a summary a couple of pages long. Nothing is even close to verbatim, and I don’t know enough about any of the books to know if those summaries are even accurate. In my experience ChatGPT often ends up hallucinating a lot of details when asked stuff like this.

        • Doomhammer458@kbin.social

          Right, but you can sue for what happened on the training server. I’m guessing the training server still exists; I doubt they wiped it completely before the next round of training. If the training server infringes copyright then you still lose the suit. Maybe. Remember that copyright law was not written with the internet in mind. If you have a “copy” and it’s not authorized, that might just be enough for a backwards court to find infringement.

          I think of it in extremes. Imagine you had a video producing model of the future. Could you then load up every MLB game recorded and train the model to make novel baseball games based on that or would the MLB be pissed you had a server full of every MLB game ever recorded?

    • magic_lobster_party@kbin.social

      It’s difficult to tell to what extent books are encoded into the model. The data might be there in some abstract form or another.

      During training it is kind of instructed to plagiarize the text it’s given. The instruction is basically “guess the next word of this unfinished excerpt”. It probably won’t memorize all input it’s given, but there’s a nonzero chance it manages to memorize some significant excerpts.
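
      That “guess the next word” instruction can be made concrete by listing the training pairs an excerpt generates. This is a toy sketch: real models work on tokens with a long fixed context window, both of which are simplified away here, and the excerpt is just an example:

      ```python
      def next_word_examples(text, context_size=3):
          """Turn raw text into (context, target) training pairs: the model
          sees the context and is scored on how well it guesses the target."""
          words = text.split()
          pairs = []
          for i in range(1, len(words)):
              context = words[max(0, i - context_size):i]
              pairs.append((" ".join(context), words[i]))
          return pairs

      excerpt = "it was the best of times"
      for ctx, target in next_word_examples(excerpt):
          print(f"{ctx!r} -> {target!r}")  # e.g. 'it was the' -> 'best'
      ```

      A model scored on every such pair from a book is, in effect, rewarded for reproducing the book word by word, which is exactly where the memorization risk comes from.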

      • FaceDeer@kbin.social

        It’s difficult to tell to what extent books are encoded into the model. The data might be there in some abstract form or another.

        This is a court case so the accusers are going to have to prove it.

        The evidence provided is that ChatGPT can produce two-page summaries of the books. The summaries are of unknown accuracy; I haven’t read the books myself, so I have no idea how much of those summaries is hallucination. This is very weak.

        • Doomhammer458@kbin.social

          They have to prove it, but if the case gets far enough they will have the right to ask for discovery, and then they can see for themselves what was included. That’s why it might just settle quietly, to avoid discovery.

          • FaceDeer@kbin.social

            The important question is not what was in the training data. The important question is what is in the model. The training data is not magically compressed into the model like some kind of physics-defying ultra-Zip; the model does not contain a copy of the training data.

            There are open-source language models out there, you can experiment with training them. Unless you massively over-fit it on a specific source document (an error that real AI training procedures do everything they can to avoid) you won’t be able to extract the source documents from the resulting model.
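
            The over-fitting point can be illustrated with a deliberately crude toy: a bigram chain (nothing like real LLM training, and the sample texts are invented). Trained on a single document it replays that document verbatim; trained on more varied text, its output diverges from any one source:

            ```python
            from collections import Counter, defaultdict

            def train(texts):
                """Count word-follower frequencies across all training texts."""
                follows = defaultdict(Counter)
                for text in texts:
                    words = text.split()
                    for prev, nxt in zip(words, words[1:]):
                        follows[prev][nxt] += 1
                return follows

            def generate(follows, start, length):
                """Greedily emit the most common follower at each step."""
                out = [start]
                for _ in range(length):
                    options = follows.get(out[-1])
                    if not options:
                        break
                    out.append(options.most_common(1)[0][0])
                return " ".join(out)

            doc = "a wizard walked into the dark forest"

            # Over-fit case: trained on one document, it replays it verbatim.
            memorized = train([doc])
            print(generate(memorized, "a", 6))  # a wizard walked into the dark forest

            # With more varied training data, the chain diverges from the source.
            varied = train([doc, "a knight rode into the cave", "the cave was empty"])
            print(generate(varied, "a", 6))
            ```

            The single-document model is the pathological over-fit case that real training pipelines try hard to avoid; the varied model only retains blended statistics, not any one text.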