I just looked at the campaign to get back in the game nooooooooo

  • yoink [she/her]@hexbear.net · 8 points · 4 months ago (edited)

    Games are a great example there: a given texture, or mesh, or sound, etc is technically “art” because it is a thing created by labor related to art, but it’s not itself a complete and coherent whole, it’s not the piece itself nor does it have any more purpose than that the greater work it’s a part of needed it.

    I’m a game dev, you’re speaking to the exact wrong person for this example lmao, seeing as I’m someone who believes that each piece IS important and requires active decision making - I’m picking specific menu UI sounds for a reason. You know where this sort of ‘it doesn’t matter, it’s just filler’ does happen far more often? In large, AAA game development spaces, the places that inherently intersect with capital and which lead to thinking ‘oh this part doesn’t matter, just use the AI, who cares’. I’m sorry, but that really exemplifies exactly what I’m talking about - yes, there are things that are called ‘art’ that are more functional than culturally relevant, and I agree there is a degree of ‘misnomer’ around this. But I think you’ll agree it’s not just from one side here - a lot of people who advocate for AI art rely on this blurry line, because they want to be able to generate Art pieces with minimal effort on their part, relying on a defense of ‘it’s just functional, you’re thinking too hard’, while also wanting to be conferred the respectability that ‘art as a meaningful thing’ gets.

    Hell, if we’re bringing up games, this discourse is eerily reminiscent of the ‘Games are Art’ discourse. We can say that ‘oh, AI art is just meant for functional art’, but you have to agree that that is not how it’s being treated, how it’s being used, nor what most people who want AI art actually believe - if they did, then this whole discussion wouldn’t be such a pain point, and we wouldn’t be talking about ‘democratising art’ - unless people specifically want to democratise corporate clip art for some reason?

    • KobaCumTribute [she/her]@hexbear.net · 11 points · 4 months ago

      You know where this sort of ‘it doesn’t matter, it’s just filler’ does happen far more often? In large, AAA game development spaces,

      Devs at all levels rely on stock asset libraries for generic sound effects and the like, just like the film and animation industries do. If anything a AAA dev is more likely to be able to have some foley artists to produce these sorts of things than a small dev is. In fact, that sort of thing is the exact distinction I was talking about: the production of sound effects is fascinating and ingenious, but it’s ultimately just about creating a functional bit of audio to complete some greater work. As far as the greater whole is concerned, there is no real difference between grabbing a sound bite from a stock library and distorting it until it fits, or hiring a professional foley artist to go out and make a bespoke effect; all that matters is getting a piece that fits within the resources the dev has on hand.

      a lot of people who advocate for AI art rely on this blurry line, because they want to be able to generate Art pieces with minimal effort on their part, relying on a defense of ‘it’s just functional, you’re thinking too hard’, while also wanting to be conferred the respectability that ‘art as a meaningful thing’ gets.

      There’s a reason I stress that the machine itself is a tool and is neutral in and of itself, instead of defending the field. Corporate AI people are 100% pure unbridled grift, and the hobbyist scene is at least 90% grifters and worse, and every time I interact with or look at that community all I can think is quite literally kind-vladimir-ilyich pikmin-carry-l bazinga pikmin-carry-r pikmin-carry-l no-mouth-must-scream pikmin-carry-r barbara-pit.

      The generators themselves, on the other hand, are fascinating and controllable machines with massive still-untapped potential. Right now we mostly just have a rush of grifters churning out generic images with them and chasing a pseudo-photorealistic style that looks like absolute dogshit at best, but the machines themselves are 100% capable of being used for more than that and I cannot argue strongly enough that the left and individual artists should be seizing upon them and learning to exploit them before the pipeline gets smoothed out and it becomes a standardized corporate tool for animation.

      Because even within the context of how TV shows, for example, already get animated, the local models we have now that can run on any modern midrange gaming computer can easily slot in and replace or streamline certain roles, to the point that you could probably almost reduce an animation team down to the storyboarders and some techs. In corporate hands that’s going to look like absolute dogshit, and the same in a grifter’s hands, but it also means that suddenly projects that would never get the sort of institutional support that entangling a bunch of different contracted studios requires can conceivably get off the ground.

      And that corporate adoption of the tech is going to happen; for all we know it’s already going on, and the projects coming out of it will start showing up in the next year or two. All the bad shit is going to happen because of corporate involvement. There’s no stopping that, so it has to be understood as inevitable at this point. So, having established that the baseline moving forwards is “very bad,” there is nothing to be gained by turning one’s nose up at the machinery and letting corporations keep a monopoly on it, nor by leaving the hobbyist AI scene to the dipshits currently filling it. Literally the only mitigation option at this point is to seize the tools and adapt to use them as well, and to do so as quickly as possible.

      And to be clear, the current tools aren’t just the prompt boxes that churn out some random image vaguely matching the description. With local models there’s a whole suite of controls over exactly what gets generated and how it gets laid out, and that can easily be meshed with established traditional methods to composite generated bits into a scene, turning tens of hours of work into a single hour of work or less. There’s so much untapped potential there for a small team or an individual artist to punch way above their weight, in a way that all the “please oh infernal machine give me a waifu in the style of Norman Rockwell” or “edgy pixar meets artstation” schlock produced by techbros treating the generate button as a gacha pull just does not adequately represent.
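      For what it's worth, the "composite generated bits into a scene" part is just ordinary layer work. Here is a minimal sketch assuming (hypothetically) a hand-made background and a generated element exported as an RGBA layer; the images and pixel values are stand-ins, and Pillow does the traditional merge:

```python
from PIL import Image

# Stand-ins: a painted background and an AI-generated element with transparency.
background = Image.new("RGBA", (512, 512), (40, 40, 60, 255))
generated = Image.new("RGBA", (512, 512), (0, 0, 0, 0))

# Fill a region of the generated layer so there is something to merge.
for x in range(100, 200):
    for y in range(100, 200):
        generated.putpixel((x, y), (200, 80, 80, 255))

# Traditional compositing: the generated element is just another layer
# stacked over the hand-made background.
merged = Image.alpha_composite(background, generated)
merged.save("composited_scene.png")
```

      The point being that the generator's output slots into an ordinary layer stack, so everything downstream (masking, color grading, export) works exactly as it does for hand-made elements.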

      • yoink [she/her]@hexbear.net · 5 points · 4 months ago

        I don’t want my message to get confused here, and since you seem to actually be responding in good faith (instead of jumping straight to ‘Luddite’) I’m happy to engage. Fundamentally, I think we are, weirdly enough, on the same page - I agree, the machine in and of itself is fascinating and incredibly important for progress and for the continued development and innovation of human creativity. Despite what I’ve said elsewhere on this site and in this thread, I do believe that there exists a place for AI (or rather, what we currently want to call AI), and I do believe there exists a world in which it isn’t fully captured and diverted towards capital interests - it would be kinda foolish not to believe that. I am, after all, first and foremost a computer scientist on some level - I am endlessly interested in the innovations on offer for us, and have honestly contemplated a Masters in the field with a focus on AI in the past. But as you’ve touched on, and as I’ve alluded to, I’m incredibly hesitant due to the fact that so much of this is bad actors dressing up their attempt to build the Exploitation Machine 5000™ by couching it in ‘nice’, FOSS-adjacent, communist-adjacent language.

        If anything a AAA dev is more likely to be able to have some foley artists to produce these sorts of things than a small dev is.

        I will say though, I kinda disagree with this - it’s the case right now, but I think you’d agree that once AI is completely normalised, it’s the AAA company which could hire an artist but wants to cut corners that will turn to AI, rather than the small dev who is more likely to stick to their ideological guns (and less likely to want to engage in exploiting a fellow artist). I mean, you touch on it too - that this corporatisation is inevitable. I agree, ideologically, with not allowing a monopoly to naturally form, but I also can’t shake the feeling that doing so is the same as simply helping to build their machine for them - there doesn’t exist a world in which we could sufficiently stop that from happening, at least not under our current system. Maybe that doesn’t matter; maybe I’m overthinking it and it’s worth doing regardless. Again, trying my best to shake off the Luddite accusations here hahaha

        And while I agree there’s a way here for smaller, more communist-focused and more anti-corporate teams to punch above their weight, it still (as it stands) relies on the work of people outside those teams, who must necessarily have their work fed to the AI in order for it to be useful. We can talk about the degrees of modification here, and we can draw comparisons to things like collage and sampling, and perhaps in that intentionality is some sort of answer, but I also can’t lie and say it feels good to me. Maybe there’s some path to genuinely ethical AI, maybe an actually novel AI comes through and this is just no longer a concern whatsoever, but from here there still seems to be a lot of work to do before we approach that point.

        • KobaCumTribute [she/her]@hexbear.net · 7 points · 4 months ago

          trying my best to shake off the Luddite accusations here hahaha

          The wildest thing about those sorts of accusations is that the literal Luddites themselves were basically taking the position “it sucks when the rich bastards own all the machines and screw over all the skilled laborers, because otherwise these machines are actually pretty cool, and if they were ours instead that would make all this much better” and then engaging in sabotage as a form of class warfare. “The workers should take this for themselves and also OpenAI should be redacted in minecraft” is basically the modern equivalent of that.

          couching it in ‘nice’, FOSS-adjacent, communist-adjacent language.

          Yeah, the reactionary bent in the FOSS scene is shitty, and I feel like there’s a broader point to be raised there, related to a different point I bring up all the time about these sorts of libertarian chauvinists who are at odds with big business and other reactionary institutions because those are standing between them and things that they personally want - their entire worldview just revolves around cynical self-interest, and they just happen to be the (at least comparatively) little guy in that scenario.

          And just as expected, a lot of the ideological “everyone should have free access to these tools” stuff is a shallow lie in the open source AI scene: for all that there is absolutely a ton of work being done just for the sake of making better tools and sharing them, there’s also an entire ecosystem of circling grifters trying to monetize and enclose that work as much as they can while still blending in. There’s also a huge chunk of grifters hoping to win big by getting some startup cash and maybe selling out to a big tech company or winning some big corporate contract.

          but I think you’d agree that once AI is completely normalised, it’s the AAA company which could hire an artist but wants to cut corners that will turn to AI, rather than the small dev who is more likely to stick to their ideological guns

          Honestly I’d say it’s a toss-up: AAA companies have absurd budgets and employ small armies of artists and techs, to the point that they can afford to be indulgent and try to compete with each other on quality as a prestige thing, but they can also just as easily cannibalize themselves and chase the minimum viable product they can get away with. Similarly, indie studios can be extremely dedicated and indulgent within their resources, or they can be running on a shoestring budget and trying to get by with stock assets anywhere they can’t cover with their own personal labor. The reasons why or why not may be different, but I don’t see the products of AI as meaningfully distinct from stock asset libraries there - better in some ways, potentially worse in others, and ultimately dependent on how and why they’re used.

          it still (as it stands) relies on the work of people outside those teams, who must necessarily have their work fed to the AI in order for it to be useful. We can talk about the degrees of modification here, and we can draw comparisons to things like collage and sampling, and perhaps in that intentionality is some sort of answer, but I also can’t lie and say it feels good to me. Maybe there’s some path to genuinely ethical AI, maybe an actually novel AI comes through and this is just no longer a concern whatsoever, but from here there still seems to be a lot of work to do before we approach that point

          The key thing there is to think about how that meshes with proprietary models trained on licensed material, or on material owned by the company in question: if OpenAI or Google or Adobe pay someone like Reddit or Imgur or DeviantArt or whoever for the right to train on content they host, does that make the end result more ethical? If Disney has a model trained on its own properties, is that an ethical generator? If a huge company were to hire a bunch of artists to produce enough material to train a machine on, would that machine be ethical? Of course not, because these would all be controlled by the corporations and put to the same ruinous ends.

          And that’s what the IP angle is for: it’s a way for big media hosts and property holders to attach themselves to the bubble and extract wealth from it through licensing fees or other agreements, while at the same time laundering its effects. Because that’s the conclusion they’re angling for: the idea that properly licensed training data does make the model ethical and okay, regardless of its use. The labor-crushing machine is fine and dandy as long as property rights are respected and the right people get to own it; that’s what the whole media push about “AI stealing art” is for.

          So I just reject it out of hand. The corporations are trying to enclose all of human culture and turn it into a neat little commodity to exploit, and that enclosure relies on property rights. Simply not taking part doesn’t stop them, and seizing upon tools outside of their control to try to compete doesn’t help them. If there is stolen surplus value embedded in the very being of those tools, I don’t think that affects the ethicality of using them once they are already made. All that matters is how and why you are using it.