Setting aside the usual arguments in the anti- and pro-AI art debate, and questions about the nature of creativity itself, perhaps the negative reaction the Redditor encountered is part of a sea change in opinion: many people now see corporate AI platforms as exploitative and extractive because their datasets rely on copyrighted material used without the original artists’ permission. And that’s before getting into AI’s drag on the environment.

  • barsoap@lemm.ee · 9 months ago

    Unless you are creating your own AI model from scratch and training it purely on your own artworks, I don’t see how you can, in good conscience, claim the results to be your own.

    Did you create all the textures you put onto your 3D models? Did you use Substance Painter? Any sort of asset library? If you’re working in 2D, did you create your own brush textures?

    Did you create colour and perspective theory from scratch? If not, how can you call yourself a painter?

    Did Duchamp study the manufacture of ceramics before putting a factory-made urinal on a pedestal and calling it a piece of art?

    • deepblueseas@sh.itjust.works · 9 months ago

      Wow, nice rhetorical questions you got there, bud.

      What the fuck do you think?

      If you had enough reading comprehension to read through my whole response, you would have gotten to the part where I said creating art is about the culmination of choices you make in each part of the process.

      Maybe you can point it out to me, but I don’t recall the part where I said you have to recreate the fucking wheel every time you create something.

      That particular quote you pointed out was specific to generative AI, because you don’t make those same choices. The model and the training data are what produce those results for you.

      But since you asked, yes, I do have the knowledge to create textures by hand without Substance Painter. I’ve been doing 3D art since 2003, before that shit even existed and we had to do it all manually in Photoshop.

      No, I didn’t fucking create color and perspective theory. What do you think I am… like a fucking immortal from ancient times? But I did have to learn that shit and took multiple classes dedicated to each of those topics.

      Lastly, you must have skipped your art history for the last one, because the whole concept of that particular piece was that it was absurdist: an everyday object raised to the status of art by the artist. He didn’t fucking sculpt the urinal himself. So it would have been more appropriate to say he was a janitor who got lucky. Nice try, though.

      • barsoap@lemm.ee · 9 months ago

        That particular quote you pointed out was specific to generative AI, because you don’t make those same choices. The model and the training data are what produce those results for you.

        And for a photographer, their surroundings are what produce many of the results, leading them not to make choices about those things. They focus on other things; they don’t express themselves in the arrangement of leaves on a tree, they leave that stuff to chance.

        The important part is not that choices are made for you, but that you do make, at the very least, a choice. One single choice suffices to have intent. It is not even necessary to make that choice during the creation of the piece: splattering five buckets of paint onto five canvases and choosing the one that sparks the right impression is a choice.

        because the whole concept of that particular piece was that it was absurdist: an everyday object raised to the status of art by the artist.

        Yes, precisely. That one concept, the single choice, “yep a urinal should be both provocative and banal enough”, is what made it art.

        There is no minimum level of craft necessary for art.

        • deepblueseas@sh.itjust.works · 9 months ago

          Ah, very interesting that you want to focus on photography as a comparison. To me, this just implies that you are not familiar with the kinds of creative choices photographers actually make. Having endless amounts of subject matter readily at their disposal does not make the process any easier or any different from other types of art.

          Photographers still consider composition, lighting, area of focus, color, etc., along with a large number of other factors such as camera body, film back, lens, f-stop, ISO, flash, supplemental lighting, post-processing; the list goes on.

          Again, all of these choices are actively made when creating the work, using one’s critical thinking, decision-making, experience, and knowledge to inform each choice and how it will affect the outcome.

          Generative AI is not that and will never be that, no matter how much you argue otherwise. You are entering a prompt; the model interprets it and generates a result that it calculates to be most statistically likely. Your choice of words is not an artistic choice; those words are, at most, requests or instructions. If you iterate, you are not in control of what changes. You only find out what has changed after the result has been generated.

          Again, you are totally missing the point of Fountain and using it as a false equivalence. It was made as a critique of the art world, to show the absurdity of what art critics said was valid art at the time. Whereas today, generative AI is not being made as a critique of anything. It’s being made for profit, to replace skilled labor, using the work of the same people it’s trying to replace. Hopefully you can see how the two are different.

          • barsoap@lemm.ee · 9 months ago

            If you iterate, you are not in control of what changes. You only find out what has changed after the result has been generated.

            If you think that’s the case, then you don’t understand the medium. Once you’ve explored a model, seen into its mind, and come to understand how it understands things, you can direct it quite precisely. At least as precisely as a photographer taking a picture of a tree: yes, if you care about the arrangement of leaves, it might take a couple of tries until the wind moves them just right, but you’ve made a point of going to the right tree, in the right season, on a day with the right weather, at a time with the right light.

            Whereas today, generative AI is not being made as a critique of anything.

            I’m not claiming that. There’s an incidental artistry in the sense that some progressives now have their underwear in a twist just as conservatives had theirs in a twist about Fountain, but I’ll readily grant that there was no human intent behind it. Sometimes it’s not artists who troll people but the general machinations of the world. Still worthy of appreciation, but calling it “art” is not a hill I would die on.

            What I’m claiming is that you can’t judge art by the level of craft involved: It can be zero and still be art. Any argument involving craft is literally missing the point of what art is.

          • Deceptichum@sh.itjust.works · 9 months ago

            Photographers still consider composition, lighting, area of focus, color, etc., along with a large number of other factors such as camera body, film back, lens, f-stop, ISO, flash, supplemental lighting, post-processing; the list goes on.

            You do know many of those things are considered in AI-generated images as well, right?

            And there is so much more to it than a simple text prompt. Even something as basic as which nodes I feed into which others, and in what order or at what weight, can have vast impacts: do I want to use a depth map based on a 3D mannequin I’ve rigged up in Blender to set my pose, or go with a canny line filter to keep the form as the focus? Should I overlay the image cutout layer before filling in the background and running a detailer node on top, or merge them together and see how that goes? And so on.
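
For readers who have never touched these tools, here is a rough sketch of the kind of workflow Deceptichum is describing, written against the Hugging Face diffusers library rather than a node graph. The model IDs, file names, and conditioning weights are illustrative assumptions, not anything taken from the thread; the point is only that the depth map, the edge map, and their relative weights are explicit inputs the person chooses, not something the text prompt conjures on its own.

```python
# Illustrative only: a depth-plus-canny ControlNet pipeline using Hugging Face
# diffusers. Model IDs and input files below are assumptions for the sketch.
import torch
from diffusers import ControlNetModel, StableDiffusionControlNetPipeline
from diffusers.utils import load_image

# Conditioning images prepared elsewhere (hypothetical file names): a depth
# render of a posed mannequin (e.g. exported from Blender) and a canny edge
# map that locks down the subject's outline.
depth_map = load_image("mannequin_depth.png")
canny_map = load_image("subject_canny.png")

# Two ControlNets, one per conditioning signal.
controlnets = [
    ControlNetModel.from_pretrained(
        "lllyasviel/sd-controlnet-depth", torch_dtype=torch.float16
    ),
    ControlNetModel.from_pretrained(
        "lllyasviel/sd-controlnet-canny", torch_dtype=torch.float16
    ),
]

pipe = StableDiffusionControlNetPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",  # illustrative base model
    controlnet=controlnets,
    torch_dtype=torch.float16,
).to("cuda")  # assumes a CUDA GPU is available

result = pipe(
    prompt="portrait of a figure in a cluttered workshop, soft window light",
    image=[depth_map, canny_map],
    # The per-net scales are the "weight" choice: here the pose (depth map)
    # steers the result more strongly than the edge map.
    controlnet_conditioning_scale=[1.0, 0.6],
    num_inference_steps=30,
).images[0]

result.save("result.png")
```

Whether turning those knobs amounts to authorship is exactly what the thread is arguing about; the sketch only shows that the knobs exist.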