In addition to the possible business threat, forcing OpenAI to identify its use of copyrighted data would expose the company to potential lawsuits. Generative AI systems like ChatGPT and DALL-E are trained using large amounts of data scraped from the web, much of it copyright protected. When companies disclose these data sources, it leaves them open to legal challenges. OpenAI rival Stability AI, for example, is currently being sued by stock image maker Getty Images for using its copyrighted data to train its AI image generator.

Aaaaaand there it is. They don’t want to admit how much copyrighted material they’ve been using.

  • cendawanita

    @chemical_cutthroat

    If I do a book report based on a book that I picked up from the library, am I violating copyright? If I write a movie review for a newspaper that tells the plot of the film, am I violating copyright?

    The first conceptual mistake in this analogy is assuming the LLM entity is “writing”. A person, or any sentient being, who writes is doing intellectual work, which is why your example book report and movie review won’t be accused of plagiarism. Plagiarism, very very basically, is stealing someone’s output when that output isn’t legally owned by anyone; once it is, you’re in copyright infringement territory.

    LLMs produce text based on statistical probability, meaning they are quite literally aping/replicating the aesthetic form of a known genre of textual output, and those source works are given the legal status of intellectual property. So yes, an LLM-generated text in the form of a book report or movie review looks the way it does by copying previous works of the genre with no creative intent. It’s the same way YouTube video essays get taken down when they’re just a collection of movie clips stitched together to sound like a full dialogue. Of course, with that example clip, if you can argue it’s a creative output where an artist is forming a new piece out of a collage of previous media, the rights owners of those movie clips might lose their claim against the video. You can’t make that defence with OpenAI.
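    To make the “statistical probability” point concrete, here is a minimal sketch of generating text purely from observed word frequencies: a toy bigram model in Python. (This is only an illustration, not how a real LLM works internally; real models learn their distribution with a neural network over an enormous scraped corpus. But the generation loop, sampling the next token from learned probabilities, is the same basic idea.)

    ```python
    import random
    from collections import Counter, defaultdict

    # Toy "statistical" text generator: a word-level bigram model.
    # It has no understanding of the text; it only counts which word
    # tends to follow which, then samples from those counts.
    corpus = (
        "the film opens with a quiet scene . "
        "the film closes with a loud scene . "
        "the plot follows a quiet detective ."
    ).split()

    # Count how often each word follows each other word.
    following = defaultdict(Counter)
    for current, nxt in zip(corpus, corpus[1:]):
        following[current][nxt] += 1

    def generate(start, length=10):
        """Sample a continuation one word at a time from the counted
        next-word frequencies."""
        words = [start]
        for _ in range(length):
            options = following.get(words[-1])
            if not options:  # no observed continuation for this word
                break
            tokens, counts = zip(*options.items())
            words.append(random.choices(tokens, weights=counts)[0])
        return " ".join(words)

    print(generate("the"))
    # Every word in the output was lifted from the training text; the model
    # only recombines what it has already seen, weighted by frequency.
    ```

    The toy version can only parrot its training text back in new orderings; scaling the same loop up to a web-sized corpus produces output that looks original but is still shaped entirely by the works it was trained on.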

    @stopthatgirl7

    • chemical_cutthroat

      If you can truly tell me how our form of writing is any different than how an AI writes, I’ll do a backflip. Humans are pattern seekers. We do everything based on one. We can’t handle chaos. Here’s an example.

      Normal sentence:

      Jane walked to the end of the road and turned around.

      Chaotic Sentence:

      The terminal boundary of the linear thoroughfare did Jane ambulate toward, then her orientation underwent a 180-degree about-face, confounding the conventional concept of destinational progression.

      On first pass, I bet you zoned out halfway through that second sentence because there was no pattern or rhythm to it; it was word salad. It still works as a sentence, but it’s chaotic and strange to read.

      The first sentence is a generic sentence. Subject, predicate, noun, verb, etc. It follows the pattern of English writing that we are all familiar with because it’s how we were taught. An AI will do the same thing. It will generate a pattern of speech the same way that it was taught. Now, if you were taught in a public school and didn’t read a book or watch a movie for your entire life, I would let you have your argument that

      @cendawanita

      an LLM-generated textual output that is in the form of a book report or movie review looks the way it does by copying with no creative intent previous works of the genre.

      However, you can’t say that a human does any different. We are the sum of our experience and our teachings. If you get truly granular with it, you can trace the genesis of every sentence a human writes, or even every thought a human thinks, back to a point of inception, where the human learned how to write and think in the first place, and it will always be based on some sensory experience that the human has had, whether through reading, listening to music, watching a movie, or any other way we consume the data around us. The second sentence is an example of this. I thought to myself, “how would a pedantic asshat write this sentence?” and I wrote it. It didn’t come from some grand creative well of sentience that every human can draw from when they need a sentence; it came from experience and learning, just like the first, and from the same well of knowledge that an AI draws from when it writes its sentences.

      • cendawanita

        @chemical_cutthroat
        Again, all of your analogical effort presumes that an LLM is synthesizing. When I say, specifically, that they generate outputs based on statistical probability, that is not at all the same as a sentient process of iterative learning from available knowledge.

        If you can’t get that distinction, then responding to you further will demand too much effort from me (personally; I wish the best to anyone else who’d like to try). If you’re really sincere though, honestly this has been best elaborated by Timnit Gebru and Emily Bender in their writing on the “stochastic parrot”. Please do have a read: https://dl.acm.org/doi/10.1145/3442188.3445922
        @stopthatgirl7