That's not a good criticism of society's priorities, because those things are not mutually exclusive, and making a show on Netflix is much easier than curing cancer.
That’s a nice strawman, there. Why don’t you ask the direct question: Are advances in laundry and dish washing technology easier or harder than in image generation?
You made no argument as to why automating art should actually be easier – all you provide is an assertion. Consider the billions sunk into buying shovels from nvidia, how much progress could’ve been made on the laundry front?
Then, your whole line of inquiry is missing the point: It’s not so much about relative complexity of the tasks, but the fact that we do enjoy the one, but not the other. What does it say about a society when we focus on automating what we like to do at the expense of stuff that we don’t like to do?
Or, differently put: How much advancement in AI do we need before it can ask us what a dabbling beginner artist can already ask us, namely why the hell we are making this choice and not the other? Art is the science of choice; give it some respect.
What you’re doing there is not being smart, being all so enlightened rational STEM-brained; what you’re doing is going to extreme lengths to ignore a simple question. Was that choice conscious?
Why don’t you ask the direct question: Are advances in laundry and dish washing technology easier or harder than in image generation?
Image generation, being purely software, is far easier than automating physical tasks. There’s very little risk of danger, you can iterate much faster, and costs are lower. Not really a clear analog, but Boston Dynamics’ Spot robot is like $75k, whereas most image generation models are downloadable for free. Once you start acting in the physical world, things get expensive and hard.
Consider the billions sunk into buying shovels from nvidia, how much progress could’ve been made on the laundry front?
Automating laundry would’ve also required this, as the shovels are for general machine learning. In fact, as far as I can tell, these GPUs aren’t being bought for image generation but for large language models now.
There’s very little risk of danger, you can iterate much faster, and costs are lower. Not really a clear analog, but Boston Dynamics’ Spot robot is like $75k, whereas most image generation models are downloadable for free.
I was thinking more of development costs. The product price of the robot, I’m sure, can be brought down significantly with sufficient automation.
Automating laundry would’ve also required this, as the shovels are for general machine learning.
Those LLM fucks are working towards AGI; I don’t think my washing machine needs to be sentient to do a good job. Or to read Reddit and conclude that it should add glue to the detergent.
Engineering-wise improvements in laundry are way easier to deal with because the problem space is well-defined. Want your clothes to come out folded? Sure, not easy, but at least we know what behaviour we’re looking for and how to assess it.
Like I said, not really a clear analog, but I’m hoping this shows the difference working in the real world makes when it comes to R&D costs. Idk how accurate this website is, but we’re talking about literal billions in R&D. Versus most stable diffusion models are trained up by a grad student or something, and subsequently released for free, potentially as a reflection of the R&D price.
I don’t think my washing machine needs to be sentient to do a good job.
But you will need them to be able to recognize the type of clothing and its orientation in 3D space before manipulating it, and that’s where training models comes in.
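(As an aside, a minimal, assumption-laden sketch of why the “recognize the type of clothing” half is the comparatively easy part: Fashion-MNIST ships with torchvision, and a tiny CNN gets passable garment classification in one training pass. What it does not give you is the 3D pose of a crumpled shirt, which is what a laundry robot actually needs.)

```python
# Minimal sketch: garment *type* classification on Fashion-MNIST (a real
# torchvision dataset). Orientation/pose estimation of crumpled clothing,
# the part the robot needs, has no comparable off-the-shelf dataset.
import torch
import torch.nn as nn
from torch.utils.data import DataLoader
from torchvision import datasets, transforms

train_data = datasets.FashionMNIST(root="data", train=True, download=True,
                                    transform=transforms.ToTensor())
loader = DataLoader(train_data, batch_size=64, shuffle=True)

model = nn.Sequential(                              # small CNN, nothing clever
    nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Flatten(), nn.Linear(32 * 7 * 7, 10),        # 10 garment classes
)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

for images, labels in loader:                       # one pass, demo only
    optimizer.zero_grad()
    loss = loss_fn(model(images), labels)
    loss.backward()
    optimizer.step()
```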
Versus most stable diffusion models are trained up by a grad student or something
Most SD models are fine-tunes of stuff StabilityAI produces. Training those things from scratch is neither cheap nor easy. PonyXL is the one coming closest, as there’s more of its own weights in there than of the SDXL base model, but that wasn’t a single-person enterprise: one lead dev, yes, but they had a community behind them.
If it were that easy, people wouldn’t be griping about StabilityAI removing any and all NSFW content from their dataset, so that the community has to work for months and months to make a model, erm, “usable”.
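(For a sense of what “fine-tune of the base model” means in practice, a hedged sketch using the diffusers library: the SDXL base checkpoint is real, the LoRA repo name is hypothetical, and many community models are merges rather than LoRAs.)

```python
# Sketch: community models are typically layered on top of the SDXL base,
# not trained from scratch. The LoRA identifier below is made up.
import torch
from diffusers import StableDiffusionXLPipeline

pipe = StableDiffusionXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0",     # the actual base model
    torch_dtype=torch.float16,
).to("cuda")
pipe.load_lora_weights("some-community/sdxl-style-lora")  # hypothetical fine-tune

image = pipe("an oil painting of a fox doing laundry",
             num_inference_steps=30).images[0]
image.save("fox.png")
```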
I wasn’t able to find exact R&D costs for Stability AI; however, all the numbers mentioned in this article are at least an order of magnitude lower than Boston Dynamics’ R&D costs.
Again, I assert that working in the physical realm brings with it far more R&D cost, on the basis that there’s far more that can go wrong and has to be accounted for. Yes, as you described, the problem itself is well defined; however, the environment introduces countless factors that need to be accounted for before a safe product can be released to consumers.
As an example, chess is potentially the most well-defined problem, as it is a human game with set rules. However, a chess robot did grab and break a child’s finger a while back. There’s a reason why we don’t really have general manipulator arms in the house yet.
There is no threat of physical harm in image generation.
The robot had been publicly used for over a decade with grandmasters and this is the first time an accident like this has occurred.
I think that’s the problem there. A decade ago they were only just beginning to figure out how to integrate physical feedback; that robot might not even have any of it, but it’s vital for fine motor skills and any kind of reactivity. The hardware part isn’t the difficult stuff (well, it might be difficult, but not research difficult); the control software is. Really, hardware isn’t the problem (that thing is human-operated).
But anyhow, I think all this is kinda missing the point of the quip. It’s neither about laundry nor furry porn, but the more general “automating what we like doing vs. automating what we don’t like to do”. And that might have a deeper reason buried in Silicon Valley culture and psychology: Going for AGI is the big stuff. Impressing people is the big stuff. Dazzling people with hype is the big stuff. Throwing techbro pipe dreams at problems is the big stuff. Building cars with sub-10-micron tolerance is the big stuff. Solving everyday problems using proven methods? Try getting VC funding for that. Silicon Valley doesn’t do anything short of “this will bring about the singularity”.
Or, differently put: Forget about doing the laundry. How about having to work in a washing machine factory to be able to afford following your creativity in your off time, while our tech is more than good enough to run a washing machine factory with just a small team of supervisors? Even the software needed for automatic inspection of parts can nowadays be gotten off the shelf; no assembly or QA workers necessary.
That kind of stuff has a gigantic ROI – alas, it’s also quite long-term; short-term, it’s cheaper to hire human workers, not to mention human workers in the global south. But it’s going to get cheaper as more and more of this becomes standard engineering practice.
The hardware part isn’t the difficult stuff (well, it might be difficult, but not research difficult); the control software is.
That’s exactly the “AI” aspect of this, which is why I’ve been saying it’s a harder problem. Your control software has to account for so much, because the real world has many unforeseen environmental issues.
Solving everyday problems using proven methods? Try getting VC funding for that.
Fwiw, one of my friends works at John Deere helping with automation by working on depth estimation via stereo cameras, I believe for picking fruit. That’s about as “automate menial labor” as you can get, imo. The work is being done, it’s just hard.
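(A rough sketch of the stereo-depth idea, not John Deere’s actual pipeline; the file names and camera parameters below are assumptions. Disparity between a rectified left/right pair converts to depth via z = f * B / d.)

```python
# Sketch: block-matching disparity on a rectified stereo pair, then
# depth = focal_length * baseline / disparity. Inputs are hypothetical files.
import cv2
import numpy as np

left = cv2.imread("left.png", cv2.IMREAD_GRAYSCALE)
right = cv2.imread("right.png", cv2.IMREAD_GRAYSCALE)

stereo = cv2.StereoBM_create(numDisparities=64, blockSize=15)
disparity = stereo.compute(left, right).astype(np.float32) / 16.0  # OpenCV returns fixed-point

focal_px, baseline_m = 700.0, 0.12       # assumed camera intrinsics/extrinsics
valid = disparity > 0
depth_m = np.zeros_like(disparity)
depth_m[valid] = focal_px * baseline_m / disparity[valid]
# depth_m now holds per-pixel distance estimates a picking planner could use
```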
Going for AGI is the big stuff. Impressing people is the big stuff. Dazzling people with hype is the big stuff.
I think you’ve introduced some circular thinking in here: the Spot and Atlas robots from Boston Dynamics do dazzle people. In fact, I’d argue that with all the recent glue memes, people are disillusioned with AGI.
At the end of the day, manipulating things in the real world is simply harder than manipulating purely digital things.
Once we’ve established that, it makes sense why digital image generation is tackled before many physical things. As a particularly relevant example, you’ll notice that there’s no robot physically painting (not printing!) the images generated by image generation, because it’s that much harder.
Ultimately, however, I don’t think society is ready for robots that do our menial labor. We need some form of UBI, otherwise the robots will in fact just be terking our jerbs.
Illinois (John Deere). Massachusetts (Boston Dynamics). I mean, they may recruit in the valley and have offices there, but they’re not part of the business/VC culture there. And neither seeks trillions of dollars to sink into silicon.
Ultimately, however, I don’t think society is ready for robots that do our menial labor. We need some form of UBI, otherwise the robots will in fact just be terking our jerbs.
UBI is going to come one way or the other, question being whether we will have to fight neofeudal lords first.
And it won’t be Musk or any of the valley guys (ketamine and executing on evil plans don’t mix), it might not even be American, it might be… wait, yes: a Nestle+Siemens merger buying up Boston Dynamics and other companies with actually useful products. Nestle has evilness nailed down; Siemens is a big investment bank (with an attached household appliance factory) that’s still miffed that laws forbid them from bribing foreign officials; and the rest provide the tech.
The Chinese are another possibility, as in they have the capacity, though I can’t quite see tankies actually going for the abolishment of work.
As a particularly relevant example, you’ll notice that there’s no robot physically painting (not printing!) the images generated by image generation, because it’s that much harder.
I don’t think it’s that hard. The robot would be a slightly more involved plotter or a standard 6-axis arm. To train the model you don’t need to hook it up to the robot; you could hook it up to a painting program. We’re quite good at simulating oil paint, including brush angle and rotation and everything; graphics tablets can detect those things, and programs have been making use of it for quite a while. It might not go the whole way, but far enough to only need fine-tuning once you hook up the robot.
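(A sketch of that “slightly more involved plotter” idea, under loud assumptions: the model’s output is taken to be plain 2D polyline strokes with a pressure value, and pressure is crudely mapped to pen height. A real painting model would also emit tilt, rotation, and brush/colour changes, but the point is that the robot half can be as dumb as G-code.)

```python
# Sketch: turn hypothetical brush strokes into G-code for a pen plotter.
from dataclasses import dataclass

@dataclass
class Stroke:
    points: list[tuple[float, float]]   # canvas coordinates in mm
    pressure: float                      # 0..1, mapped to pen depth below

def strokes_to_gcode(strokes: list[Stroke], lift_z: float = 5.0) -> str:
    lines = ["G21 ; millimetres", "G90 ; absolute coordinates"]
    for s in strokes:
        x0, y0 = s.points[0]
        z = -2.0 * s.pressure                    # heavier pressure pushes deeper
        lines.append(f"G0 Z{lift_z:.2f}")        # lift pen, travel to stroke start
        lines.append(f"G0 X{x0:.2f} Y{y0:.2f}")
        lines.append(f"G1 Z{z:.2f} F300")        # lower pen onto the canvas
        for x, y in s.points[1:]:
            lines.append(f"G1 X{x:.2f} Y{y:.2f} F1500")
        lines.append(f"G0 Z{lift_z:.2f}")        # lift again between strokes
    return "\n".join(lines)

print(strokes_to_gcode([Stroke([(10, 10), (40, 12), (60, 30)], pressure=0.7)]))
```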
That’s not what a strawman is.
Why don’t you ask the direct question: Are advances in laundry and dish washing technology easier or harder than in image generation?
Harder, obviously.
You made no argument as to why automating art should actually be easier
Apologies, I assumed people had common sense, but you’ve made it clear I was incorrect. My bad
Consider the billions sunk into buying shovels from nvidia,
Shovels?
Then, your whole line of inquiry is missing the point: It’s not so much about relative complexity of the tasks, but the fact that we do enjoy the one, but not the other. What does it say about a society when we focus on automating what we like to do at the expense of stuff that we don’t like to do?
It’s actually like talking to a brick wall.
I need you to understand that people don’t sit in meetings and say “should we casually create robots that can do all our daily chores and free the working class from the chains of capitalist oppression, or do we automate furry porn???” We are not automating art at the expense of laundry.
We are doing both,
and I hate that you’re making me repeat this over and over, because you have Tumblr levels of reading comprehension. It’s orders of magnitude easier to write a computer program than it is to build a fucking robot.
What you’re doing there is not being smart, being all so enlightened rational STEM-brained
The absolute. fucking. Irony.
simple question. Was that choice conscious?
You’re still missing the ENTIRE FUCKING POINT of my comment. There isn’t a choice; this isn’t one or the other. For the love of god, learn to read.
Consider the billions sunk into buying shovels from nvidia,
Shovels?
“When there’s a gold rush, be the one who sells shovels”.
“should we casually create robots that can do all our daily chores and free the working class from the chains of capitalist oppression, or do we automate furry porn???”
Why not? Why aren’t we doing it? Why aren’t you doing it? Why are you reinforcing us not doing it by putting other perspectives down as “braindead artist takes”?
There isn’t a choice; this isn’t one or the other.
Maybe not a binary choice, but there’s definitely a priority. People question that priority. Is that question, in your perspective, valid? Would it hurt humanity if we replaced, say, 0.1% of pointless meetings with meetings discussing robots vs. furry porn? Be bold, ask that question.
Why not? Why aren’t we doing it? Why aren’t you doing
For the exact same reason neither YOU nor anyone else is curing cancer, or sending people to Mars, or letting us live forever, or creating usable fusion power plants.
And I have told you those reasons MULTIPLE FUCKING TIMES. But your head is so far up your own arse it’s like you are physically incapable of reading anything that goes against your dog shit, braindead world view.
So I will reiterate for the umpteenth time:
We are doing it.
It’s really fucking difficult so it takes time.
Do you understand this time or do you need me to draw you a fucking picture?
I need you to understand that people don’t sit in meetings and say “should we casually create robots that can do all our daily chores and free the working class from the chains of capitalist oppression, or do we automate furry porn???”
There you’re saying we aren’t doing it.
We are doing it.
Now you’re saying we are doing it.
But I think we should set that aside for a second. The actual question I am interested in getting an answer for from you is “should we be doing it”.
I think I would have an easier time explaining this to my dog.
That’s my entire fucking point. That we are in fact trying to do it. But because it’s a complicated problem, we haven’t done it yet.
Do you understand what I’m saying???
That we are trying to do those things, right? It’s not like people working on AI image generation means we can’t also work on automating other things.
Like can you understand that the 8 billion or so humans on this planet are capable of working on at least 2 things at once?
As I’ve said multiple times: yes and we already are.
Do you understand the word “yes”? Do you need me to explain it to you a couple more times??
Not what I was asking. I was asking whether we should be thinking more actively about our priorities and choices.
I can actually feel my braincells fucking killing themselves from every interaction I have with you.
So instead of explaining the same fucking thing for the FIFTH FUCKING TIME to someone that is either a troll or someone with less mental capacity than a cold bowl of soup, I’m just moving on.
So you don’t want to answer the question, got it. You could’ve been straightforward with that; I’d have respected it.