I don’t know exactly where to start here, because anyone who claims to know the shape of the next decade is kidding themself.
Broadly:
AI will democratize creation. If technology continues at the same pace it has for the last few years, we will soon start to see movies and TV with Hollywood-style production values being made by individuals and small teams. The same will go for video games. It’s certainly disruptive, but I seriously doubt we will want to go back once it happens. To use the article’s examples, most people prefer a world with Street View and Uber to one without them.
The same goes for engineering.
That’s putting millions of people out of a job with no real replacement. The ones that aren’t unemployed will be commanding significantly smaller salaries.
I seriously doubt this technology will pass by without a complete collapse of the labor market. What happens after is pretty much a complete unknown.
We’re seriously at a crossroads, with the people driving the ship desperately trying to steer into “Kill all the unproductives after we automate their work” territory
I think it’s fair to assert that society will shift dramatically, though the climate will have as much to do with that as AI.
It’s actually not as easy as you think. It “looks” easy because all you’ve seen is the result of survivorship bias, like Instagram photographers who don’t post their failed shots. Seriously, go download a Stable Diffusion model, type in your prompts, and see how well you can direct the AI to produce what you want. It’s real work, and I’d bet a good photographer with a good model can do whatever you ask, faster, with a director (even factoring in green screens and so on).
I dabbled in Stable Diffusion a bit to see what it’s like. On my machine (16 GB VRAM), a batch of 30 generations only yields maybe 2–3 that are considered “okay”, and those still need further Photoshop work. And we’re talking about resolution so low most games can’t even use it as a texture (slightly bigger than 512x512, so usually mip 3 for a modern game engine). And I was already using the most popular photoreal model people have mixed together (now consider how much time people spent to train that model to that point).
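To put the resolution point in numbers: each mip level halves the texture resolution, so a ~512px AI output only covers mip 3 of a 4096x4096 texture chain (4096 as the base is an assumption here, but it’s a common hero-asset size in modern engines):

```python
import math

def mip_level(base_res: int, image_res: int) -> int:
    """Mip level at which a base_res texture chain reaches image_res.
    Mip n has resolution base_res / 2**n, so n = log2(base/image)."""
    return int(math.log2(base_res / image_res))

# A 512px generated image sits three halvings below a 4096px base.
print(mip_level(4096, 512))  # 3
```

In other words, the generator is producing what the engine treats as a mid-distance downscale, not a usable base texture.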
That’s just the graphic art/photo side of generative AI: it looks dangerous, but it’s NOT there yet, very far from it. Okay, so how about auto-coding with LLMs? Welp, it’s similar: the AI doesn’t know about the mistakes it makes, especially where specific domain knowledge is involved. If we had an AI trained on domain-specific journals and papers that actually understood how the math operates, it would be a nice tool, because, like all generative AI output, you have to check the results and fix them.
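A minimal sketch of what that checking step looks like, assuming a hypothetical `generated` string of LLM output: run it in an isolated namespace and verify the domain facts yourself, because the model won’t.

```python
def check_generated(code: str, checks: dict) -> bool:
    """Execute generated code in an isolated namespace, then verify
    each named result against a human-supplied expectation."""
    ns: dict = {}
    exec(code, ns)  # sketch only; sandbox untrusted code in real use
    return all(ns.get(name) == expected for name, expected in checks.items())

# Hypothetical LLM output: convert a 12 kN load to newtons.
generated = "load_n = 12 * 1000"
print(check_generated(generated, {"load_n": 12000}))  # True
print(check_generated(generated, {"load_n": 12}))     # False: units slipped
```

The point is that the acceptance criterion comes from the reviewer’s domain knowledge, not from the generator.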
The transition won’t be as drastic as you think. It’s more or less like other manufacturing: when the industry chases lower labour costs, local people find alternatives. And look at how the creative/tech industries tried outsourcing to lower-cost countries: it’s really inefficient and sometimes costs more, with slower turnaround. Now, post a job asking an artist to “photoshop AI results to production quality” and let’s see how that goes. I’d bet 5 bucks the company gets blacklisted by artists, and you end up with the really desperate or low-skilled giving you subpar results.
Somehow the same artist:
It’s like the Google DeepDream dogs, but for hands. lol, I’ve seen my fair share of anatomy “inspirations” while experimenting with posing prompts (then later learned there are 3D posing extensions). If it’s an uphill battle for more technical people like me, it would be really hard for artists. The ones I know who use Midjourney just think it’s fun, not something really worth worrying about. A good hybrid tool for fast prototyping/iteration with specific guidance rules would be neat in the future.
i.e. 3D DCC for base-model posing and material selection/lighting -> AI generates imagery -> photogrammetry (pretty hard, because the AI doesn’t know how to generate the same thing from different angles, lol) to convert the generated images back into 3D models and textures -> iterate.
People are working on other parts, like building replacement or actor replacement; I bet there are people working on the above as well.
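The loop above can be sketched as plain orchestration code. Every function here is a hypothetical stand-in for a real DCC/generation/photogrammetry tool, and the quality metric is a placeholder; the only real content is the shape of the iteration:

```python
def dcc_pose_and_light(scene: dict) -> dict:
    """Stand-in for posing a base model and setting materials/lighting in a 3D DCC."""
    return {**scene, "posed": True}

def ai_generate(scene: dict) -> dict:
    """Stand-in for image generation guided by the posed scene (8 views per pass)."""
    return {**scene, "images": scene.get("images", 0) + 8}

def photogrammetry(scene: dict) -> dict:
    """Stand-in for reconstructing models/textures from the generated views.
    In practice this is the weak link: generators struggle to keep the
    same subject consistent across viewpoints."""
    return {**scene, "quality": scene["images"] / 24}

def iterate(scene: dict, target_quality: float = 1.0) -> dict:
    # Loop the DCC -> generate -> reconstruct chain until good enough.
    while scene.get("quality", 0.0) < target_quality:
        scene = photogrammetry(ai_generate(dcc_pose_and_light(scene)))
    return scene

result = iterate({"asset": "prop_barrel"})
print(result["images"])  # 24: three passes of 8 views before reconstruction converges
```

The stubs are trivial, but the structure shows why view consistency matters: the reconstruction step consumes everything the generation step produced, so any cross-view drift poisons the whole iteration.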
Yup. We should start preparing ideas for how we’re going to deal with that.
One thing we can’t do is stop it, though. Legislation prohibiting AI is only going to slow the transition down a bit while companies move themselves to other jurisdictions that aren’t so restrictive.
I can’t wait to drive over a bridge where the construction parameters and load limits were creatively autocompleted by a generative AI.
Generative design is already a mature technology; NASA already uses it for spacecraft parts. It’ll probably be used for bridges once large-format 3D printers can manage the complexity it introduces.
It’s still just a tool for engineers, though. Half of the job is determining the design requirements; another quarter is figuring out what general scheme (e.g. water vs. air cooling) best meets those requirements. Tools like this are great, but all they really do is connect point A to point B, freeing up man-hours for higher-level work.
There’s a guy at the maker-space I work out of who’s been using ChatGPT to do engineering work for him. There was some issue with residue being left on the parking-lot pavement, and he came forward saying it had to do with “ChatGPT giving him a bad math number,” whatever the hell that means. It’s also not the first time he’s said something like this, and it’s always hilarious.
It will shift a lot of human effort from generation to review. The core role of an engineer in many ways already is validation of a plan; that will become nearly the only role.
That assumes the class of problems AIs can solve remains stagnant. I don’t think that’s a good assumption, especially given that GPT-4 can already self-review and refine its output.
It will take a very long time for people to believe and trust AI; that’s just the nature of trust. It may well surpass humans in every way soon, but trust will take much longer. What would be required for an AI-designed bridge to be accepted without review by a human engineer?
We’ll probably see sooner or later.