• essell@lemmy.world (OP) · 32 points · 8 months ago

      And yet so many of the debates around this new formation of media and creativity come down to the grey space between what is inspiration and what is plagiarism.

      Even if everyone agreed with your point, and I think broadly they do, it doesn’t settle the debate.

      • Wet Noodle@sopuli.xyz · 11 points · 8 months ago

        The real problem is that AI will never be able to make art without using content copied from other artists, which is absolutely plagiarism.

        • SleepyPie@lemmy.world · 8 points · 8 months ago

          But an artist cannot be inspired without content from other artists. I don’t agree with the word “copied” here either, because it is not copying when it creates something new.

          • frezik@midwest.social · 4 points · 8 months ago

            Yeah, unless they lived in a cave with some pigments, everyone started by being inspired in some way.

            • Wet Noodle@sopuli.xyz · 3 points · 8 months ago

              But nobody starts by downloading and pulling elements from the works of all living and dead artists without reference or license. It is not the same.

              • SleepyPie@lemmy.world · 2 points · 8 months ago

                I’m sure many artists would love having ultimate knowledge of all art relevant to their craft - it just hasn’t been feasible. Perhaps if art-generating AI could correctly cite its references, it would be more acceptable for commercial use.

    • Octopus1348@lemy.lol · 25 points · 8 months ago (edited)

      Humans learn from other creative works, just like AI. AI can generate original content too if asked.

      • Prunebutt@slrpnk.net · 30 points · 8 months ago

        AI creates output from a stochastic model of its training data. That’s not a creative process.

          • Prunebutt@slrpnk.net · 22 points · 8 months ago

            LLMs analyse their inputs and create a stochastic model (i.e.: a guess of how randomness is distributed in a domain) of which word comes next.

            Yes, it can help in a creative process, but so can literal noise. It can’t “be creative” in itself.
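            To make “stochastic model” concrete, here is a toy sketch in Python. It is a bigram counter, nothing like a real transformer in scale or mechanism, but the object it produces is the same kind of thing the comment describes: a probability distribution over the next word, which you then sample from.

```python
import random
from collections import Counter, defaultdict

# Toy stand-in for "a stochastic model of which word comes next":
# count, for each word, what follows it, then sample from those counts.
# A real LLM learns the distribution with a transformer over tokens,
# but its output is the same kind of object: a next-token distribution.
corpus = "the rose is red the rose is beautiful the sky is blue".split()

following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def next_word(word, rng):
    # Sample the next word in proportion to how often it followed `word`.
    counts = following[word]
    words = list(counts)
    return rng.choices(words, weights=[counts[w] for w in words], k=1)[0]

# Generate a short chain starting from "the".
rng = random.Random(0)
word, out = "the", ["the"]
for _ in range(3):
    word = next_word(word, rng)
    out.append(word)
print(" ".join(out))
```

Sampling from noise and sampling from this model differ only in how the distribution was shaped, which is the point being argued either way.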

            • Even_Adder@lemmy.dbzer0.com · 13 points · 8 months ago

              How does that preclude these models from being creative? Randomness within rules can be pretty creative. All life on Earth is the result of selection on random mutations. Its output is way more structured and coherent than random noise. That’s not a good comparison at all.

              Either way, generative tools are a great way for the people using them to create with; no model has to be creative on its own.

              • Prunebutt@slrpnk.net · 16 points · 8 months ago

                How does that preclude these models from being creative?

                They lack intentionality, simple as that.

                Either way, generative tools are a great way for the people using them to create with; no model has to be creative on its own.

                Yup, my original point still stands.

          • irmoz@reddthat.com · 12 points · 8 months ago (edited)

            A person sees a piece of art and is inspired. They understand what they see, be it a rose bush to paint or a story beat to work on. This inspiration leads to actual decisions being made with a conscious aim to create art.

            An AI, on the other hand, sees a rose bush and adds it to its rose bush catalog, reads a story beat and adds it to its story database. These databases are then shuffled and things are picked out, with no mind involved whatsoever.

            A person knows why a rose bush is beautiful, and internalises that thought to create art. They know why a story beat is moving, and can draw out emotional connections. An AI can’t do either of these.

            • agamemnonymous@sh.itjust.works · 10 points · 8 months ago

              A person is also very much adding rose bushes and story beats to their internal databases. You learn to paint by copying other painters, adding their techniques to a database. You learn to write by reading other authors, adding their techniques to a database. Original styles/compositions are ultimately just a rehashing of countless tiny components from other works.

              An AI understands what it sees, otherwise it wouldn’t be able to generate a “rose bush” when you ask for one. It’s an understanding based on a vector space of token sequence weights, but unless you can describe the actual mechanism of human thought beyond vague concepts like “inspiration”, I don’t see any reason to assume that our understanding is not just a much more sophisticated version of the same mechanism.

              The difference is that we’re a black box, AI less so. We have a better understanding of how AI generates content than how the meat of our brain generates content. Our ignorance, and use of vague romantic words like “inspiration” and “understanding”, is absolutely not proof that we’re fundamentally different in mechanism.

              • irmoz@reddthat.com · 6 points · 8 months ago

                A person painting a rose bush draws upon far more than just a collection of rose bushes in their memory. There’s nothing vague about it, I just didn’t feel like getting into much detail, as I thought that statement might jog your memory of a common understanding we all have about art. I suppose that was too much to ask.

                For starters, refer to my statement “a person understands why a rose bush is beautiful”. I admit that maybe this is vague, but let’s unpack it.

                Beauty is, of course, in the eye of the beholder. It is a subjective thing, requiring opinion, and AIs cannot hold opinions. I find rose bushes beautiful due to the inherent contrast between the delicate nature of the rose buds and the almost monstrous nature of the fronds.

                So, if I were to draw a rose bush, I would emphasise these aspects, out of my own free will. I might even draw it in a way that resembles a monster. I might even try to tell a story with the drawing, one about a rose bush growing tired of being plucked, and taking revenge on the humans who dare to steal its buds.

                All this, from the prompt “draw a rose bush”.

                What would an AI draw?

                Just a rose bush.

                • agamemnonymous@sh.itjust.works · 6 points · 8 months ago

                  “Beauty”, “opinion”, “free will”, “try”. These are vague, internal concepts. How do you distinguish between a person who really understands beauty, and someone who has enough experience with things they’ve been told are beautiful to approximate? How do you distinguish between someone with no concept of beauty, and someone who sees beauty in drastically different things than you? How do you distinguish between the deviations from photorealism due to imprecise technique, and deviations due to intentional stylistic impressionism?

                  What does a human child draw? Just a rosebush, poorly at that. Does that mean humans have no artistic potential? AI is still in relative infancy, the artistic stage of imitation and technique refinement. We are only just beginning to see the first glimmers of multi-modal AI, recursive models that can talk to themselves and pass information between different internal perspectives. Some would argue that internal dialogue is precisely the mechanism that makes human thought so sophisticated. What makes you think that AI won’t quickly develop similar sophistication as the models are further developed?

                  • irmoz@reddthat.com · 3 points · 8 months ago (edited)

                    Philosophical masturbation, based on a poor understanding of what is an already solved issue.

                    We know for a fact that a machine learning model does not even know what a rosebush is. It only knows the colours of pixels that usually go into a photo of one. And even then, it doesn’t even know the colours - only the bit values that correspond to them.

                    That is it.

                    Opinions and beauty are not vague, nor are free will and trying, especially in this context. You only wish them to be for your argument.

                    An opinion is a value judgment. AIs don’t have values, and we have to deliberately restrict them to stop actual chaos from happening.

                    Beauty is, for our purposes, something that the individual finds worthy of viewing and creating. Only people can find things beautiful. Machine learning algorithms are only databases with complex retrieval systems.

                    Free will is also quite obvious in context: being able to do something of your own volition. AIs need exact instructions to get anything done. They can’t make decisions beyond what you tell them to do.

                    Trying? I didn’t even define this as human-specific.

              • Prunebutt@slrpnk.net · 4 points · 8 months ago

                You’re presupposing that brains and computers are basically the same thing. They are fundamentally different.

                An AI doesn’t understand. It has an internal model which produces outputs, based on the training data it received and a prompt. That’s a different category than “understanding”.

                Otherwise, Spotify or YouTube recommendation algorithms would also count as understanding the contents of the music/videos they supply.

                • agamemnonymous@sh.itjust.works · 3 points · 8 months ago

                  An AI doesn’t understand. It has an internal model which produces outputs, based on the training data it received and a prompt. That’s a different category than “understanding”.

                  Is it? That’s precisely how I’d describe human understanding. How is our internal model, trained on our experiences, which generates responses to input, fundamentally different from an LLM transformer model? At best we’re multi-modal, with overlapping models which we move information between to consider multiple perspectives.

              • irmoz@reddthat.com · 2 points · 8 months ago

                Yeah, I know it doesn’t actually “see” anything, and is just making best guesses based on pre-gathered data. I was just simplifying for the comparison.

      • steakmeoutt@sh.itjust.works · 13 points · 8 months ago

        LLM AI doesn’t learn. It doesn’t conceptualise. It mimics, iterates and loops. AI cannot generate original content with LLM approaches.

        • Quik@infosec.pub · 2 points · 8 months ago

          Interesting take on LLMs, how are you so sure about that?

          I mean I get it, current image gen models seem clearly uncreative, but at least the unrestricted versions of Bing Chat/ChatGPT leave some room for the possibility of creativity/general intelligence in future sufficiently large LLMs, at least to me.

          So the question (again: to me) is not only “will LLM scale to (human level) general intelligence”, but also “will we find something better than RLHF/LLMs/etc. before?”.

          I’m not sure on either, but I’d assign roughly a 2/3 probability to the first and, given the first event and AGI in reach in the next 8 years, a comparatively small chance to the second.

    • ReCursing@kbin.social · 20 points · 8 months ago

      This is true but AI is not plagiarism. Claiming it is shows you know absolutely nothing about how it works

      • Prunebutt@slrpnk.net · 26 points · 8 months ago

        Correction: they’re plagiarism machines.

        I actually took courses in ML at uni, so… Yeah…

        • bort@sopuli.xyz · 8 points · 8 months ago

          At the ML course at uni they said verbatim that they are plagiarism machines?

          Did they not explain how neural networks start generalizing concepts? Or how abstractions emerge during the training?

          • Prunebutt@slrpnk.net · 10 points · 8 months ago

            At the ML course at uni they said verbatim that they are plagiarism machines?

            I was refuting your point of me not knowing how these things work. They’re used to obfuscate plagiarism.

            Did they not explain how neural networks start generalizing concepts? Or how abstractions emerge during the training?

            That’s not the same as being creative, tho.

      • Sylvartas@lemmy.world · 14 points · 8 months ago

        Please tell me how an AI model can distinguish between “inspiration” and plagiarism, then. I admit I don’t know that much about them, but I was under the impression that they just spit out whatever they “think” is the best match for the prompt based on their training data, and thus could not make this distinction in order to actively avoid plagiarism.

            • ReCursing@kbin.social · 3 points · 8 months ago

              Go read how it works, then think about how it is used by people, then realise you are an absolute titweasel, then come back and apologise

              • Prunebutt@slrpnk.net · 9 points · 8 months ago

                I know how it works. And you obviously can’t admit that you can’t explain how latent diffusion is supposedly a creative process.

                • ReCursing@kbin.social · 2 points · 8 months ago

                  Not my point at all. Latent diffusion is a tool used by people in a creative manner. It’s a new medium. Every argument you’re making was made against photography a century ago, and against pre-mixed paints before that! You have no idea what you’re talking about and can’t even figure out where the argument is, let alone that you lost it before you were born!

                  Or do you think no people are involved? That computers are just sitting there producing images with no involvement and no-one is ever looking at them, and that that is somehow a threat to you? What? How dumb are you?

                  • Sylvartas@lemmy.world · 9 points · 8 months ago

                    Dude I am actively trying to take your arguments in good faith but the fact that you can hardly post an answer without name calling someone is making it real hard to believe you are being genuine about this

                  • Prunebutt@slrpnk.net · 6 points · 8 months ago

                    I repeatedly agreed that AI models can be used as a tool by creative people. All I’m saying is that it can’t be creative by itself.

                    When I say they’re “plagiarism machines”, I’m claiming that they’re currently mostly used to plagiarise by people without a creative bone in their body who directly use the output of an AI, mistaking it for artwork.

        • Ragdoll X@lemmy.world · 4 points · 8 months ago (edited)

          Please tell me how an AI model can distinguish between “inspiration” and plagiarism then.

          […] they just spit out whatever they “think” is the best match for the prompt based on their training data and thus could not make this distinction in order to actively avoid plagiarism.

          I’m not entirely sure what the argument is here. Artists don’t scour the internet for any image that looks like their own drawings to avoid plagiarism, and often use photos or the artwork of others as reference, but that doesn’t mean they’re plagiarizing.

          Plagiarism is about passing off someone else’s work as your own, and image-generation models are trained with the intent to generalize - that is, being able to generate things it’s never seen before, not just copy, which is why we’re able to create an image of an astronaut riding a horse even though that’s something the model obviously would’ve never seen, and why we’re able to teach the models new concepts with methods like textual inversion or Dreambooth.

          • irmoz@reddthat.com · 3 points · 8 months ago

            Both the astronaut and the horse are plagiarised from different sources; it’s definitely “seen” both before.

          • Sylvartas@lemmy.world · 3 points · 8 months ago

            I get your point, but as soon as you ask them to draw something that has been drawn before, all the AI models I fiddled with tend to effectively plagiarize the hell out of their training data unless you jump through hoops to tell them not to

            • Quik@infosec.pub · 2 points · 8 months ago

              You’re right; as far as I know, we have not yet implemented systems to actively reduce similarity to specific works in the training data past a certain point. But if we chose to do so in the future, it would raise the question of formalising when plagiarism starts, which I suspect will be challenging in the near future, as society seems to not yet have a uniform opinion on the matter.
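              As a toy illustration of how such a similarity threshold might be formalised, here is one crude candidate metric (word n-gram overlap). The function and its use as a plagiarism test are illustrative assumptions for this comment, not a deployed system:

```python
def word_ngrams(text, n=3):
    """Set of n-word sequences appearing in the text."""
    words = text.lower().split()
    return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}

def overlap_ratio(candidate, source, n=3):
    """Fraction of the candidate's n-grams that appear verbatim in the
    source; 1.0 means every n-word run is copied."""
    cand = word_ngrams(candidate, n)
    if not cand:
        return 0.0
    return len(cand & word_ngrams(source, n)) / len(cand)

source = "the quick brown fox jumps over the lazy dog"
print(overlap_ratio("the quick brown fox runs away", source))       # 0.5
print(overlap_ratio("an entirely unrelated sentence here", source))  # 0.0
```

Even a metric this simple exposes the hard part: choosing the threshold at which overlap becomes plagiarism is a social question, not a technical one.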

    • TimeSquirrel@kbin.social · 18 points · 8 months ago (edited)

      This argument was settled with electronic music in the 80s/90s. Samples and remixes taken directly from other bits of music to create a new piece aren’t plagiarism.

        • Xhieron@lemmy.world · 10 points · 8 months ago

          And you’re absolutely right about that. That’s not the same as saying LLMs are incapable of composing anything written in a novel way, but the fact that they will readily, with very little prodding, regurgitate complete works verbatim is definitely a problem. That’s not a remix. That’s publishing the same track and slapping your name on it. Doing it two bars at a time doesn’t make it better.

          It’s so easy to get ChatGPT, for example, to regurgitate its training data that you could do it by accident (at least until someone published it last year). But, the critics cry, you’re using ChatGPT in an unintended way. And indeed, exploiting ChatGPT to reveal its training data is a lot like lobotomizing a patient or torture victim to get them to reveal where they learned something, but that really betrays that these models don’t actually think at all. They don’t actually contribute anything of their own; they simply have such a large volume of data to reorganize that it’s (by design) impossible to divine which source is being plagiarised at any given token.

          Add to that the fact that every regulatory body confronted with the question of LLM creativity has so far decided that humans, and only humans, are capable of creativity, at least so far as our ordered societies will recognize. By legal definition, ChatGPT cannot transform (term of art) a work. Only a human can do that.

          It doesn’t really matter how an LLM does what it does. You don’t need to open the black box to know that it’s a plagiarism machine, because plagiarism doesn’t depend on methods (or sophisticated mental gymnastics); it depends on content. It doesn’t matter whether you intended the work to be transformative: if you repeated the work verbatim, you plagiarized it. It’s already been demonstrated that an LLM, by definition, will repeat its training data a non-zero portion of the time. In small chunks that’s indistinguishable, arguably, from the way a real mind might handle language, but in large chunks it’s always plagiarism, because an LLM does not think and cannot “remix”. A DJ can make a mashup; an AI, at least as of today, cannot. The question isn’t whether the LLM spits out training data; the question is the extent to which we’re willing to accept some amount of plagiarism in exchange for the utility of the tool.

      • snooggums@midwest.social · 7 points · 8 months ago

        The samples were intentionally rearranged and mixed with other content in a new and creative way.

        When sampling took off, the copyright situation was sorted out, and the end result is that there are ways to license samples. Some samples are produced like stock footage that could be purchased inexpensively, which is why a lot of songs by different artists have the same samples included. Samples of specific songs have to be licensed, so a hip-hop song with a riff from an older famous song had some kind of licensing or it wouldn’t be played on the radio or streaming services. They might have paid one time, or paid an artist group for access to a bunch of songs, basically the same kind of thing as covers.

        Samples and covers are not plagiarism if they are licensed and credit their source. Both create something new, but use and credit existing works.

        AI is doing the same sampling and copying, but trying to pretend that it is somehow not sampling and copying, and the companies running AI don’t want to credit the sources or license the content. That is why AI is plagiarism.

      • irmoz@reddthat.com · 6 points · 8 months ago (edited)

        Not even remotely the same. A producer still has to choose what to sample, and what to do with it.

        An AI is just a black box with a “create” button.

    • BruceTwarzen@kbin.social · 12 points · 8 months ago

      Ray Parker Jr.’s “Ghostbusters” is inspired by Huey Lewis and the News’ “I Want a New Drug”. But actually it’s just blatant plagiarism. Is it okay because a human did it?

    • dudinax@programming.dev · 4 points · 8 months ago

      We’ll soon see whether or not it’s the same thing.

      Only 50 years ago or so, some well-known philosophers of AI believed computers would write great poetry before they could ever beat a grandmaster at chess.

        • dudinax@programming.dev · 5 points · 8 months ago

          The formalization of chess can’t be practically applied. The top chess programs are all trained models that evaluate a position in a non-formal way.

          They use neural nets, just like the AIs being hyped these days.
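          For contrast, the older “formal” approach was a hand-written evaluation function. A minimal sketch, assuming nothing beyond standard material values (real classical engines layered hundreds of handcrafted terms on top of this before trained networks like Stockfish’s NNUE replaced them):

```python
# Hand-written ("formal") evaluation: fixed material values, summed.
# This is the rule-based style of position judgement that top engines
# have since swapped for a small trained neural network.
PIECE_VALUES = {"P": 1, "N": 3, "B": 3, "R": 5, "Q": 9, "K": 0}

def material_eval(pieces):
    """pieces: letters on the board, uppercase = white, lowercase = black.
    Positive score favours white."""
    score = 0
    for p in pieces:
        value = PIECE_VALUES.get(p.upper(), 0)
        score += value if p.isupper() else -value
    return score

# White is "up the exchange" (rook for knight): +5 - 3 = +2
print(material_eval(["K", "R", "P", "k", "n", "p"]))  # 2
```

The learned evaluators do the same job, produce a score for a position, but the judgement is encoded in trained weights rather than explicit rules.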

              • Buddahriffic@lemmy.world · 4 points · 8 months ago

                I think the relevant point is that chess is discrete while art isn’t. Or they both are, but the problem space that art can explore is much bigger than the space chess can (chess has 64 squares on the board and 7 possibilities for each square; that would be a tiny image that an NES could show more colours for, or a poem with 64 words where you can only select from 7 words).

                Chess is an easier problem to solve than art is, unless you define a limited scope of art.
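                Putting rough numbers on that gap, under the comment’s own simplification of 7 states per square, and using a small RGB image as the comparison:

```python
import math

# The comment's simplification: 7 possible states per chess square.
chess_states = 7 ** 64

# A small 64x64 RGB image: 256 values per channel, 3 channels per pixel.
pixels = 64 * 64
image_states = 256 ** (pixels * 3)

# Compare orders of magnitude rather than the raw (huge) integers.
print(f"chess positions (simplified): ~10^{int(math.log10(chess_states))}")
print(f"64x64 RGB images:             ~10^{round(pixels * 3 * math.log10(256))}")
```

Even this tiny image space dwarfs the simplified chess space by tens of thousands of orders of magnitude, which is the asymmetry being described.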

                • dudinax@programming.dev · 1 point · 8 months ago

                  We could use “Writing a Sonnet” as a suitably discrete and limited form of art that’s undeniably art, and ask the question “Can a computer creatively write a sonnet”? Which raises the question “Do humans creatively write sonnets?” or are they all derivative?

                  Humans used to think of chess as an art and speak of “creativity” in chess, by which they meant the expression of a new idea on how to play. This is a reasonable definition, and going by it, chess programs are undeniably creative. Yet for whatever reason, the word doesn’t sit right when talking about these programs.

                  I suspect we’ll continue to not find any fundamental difference between what the machines are doing and what we are doing. Then unavoidably we’ll either have to concede that the machines are “creative” or we are not.