No, I do mean literally your family. Not because I’m trying to be mean to you; I’m just trying to highlight that you’d agree to a contract when you think the price does not apply to you… But in reality the price will apply to someone, whether they agreed to the contract and enjoy the benefits or not
It’s the exact same situation in real life with the plane manufacturers. They lobby the government to allow recalls to be handled during the planes’ regular maintenance instead of immediately. This saves money, but it literally means that some planes keep flying with known defects that will not be addressed for months (or years, depending on the maintenance schedule)
Literally, people who’d never have a loved one on one of those flights decided that was acceptable to save money. They agreed it’s OK to put your life at risk, statistically, because they want more money
Then it’s not a fair question. You’re not comparing 40k vs 20k; you’re comparing 40k vs literally my family dying (like the hypothetical trolley-diversion problem). That’s fear-mongering, not a valid argument.
The risk does not go up for my family because of self-driving cars. That’s inherent in the 40k vs 20k numbers.
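For what it’s worth, the "twice as safe" claim is just per-person arithmetic. A minimal sketch (the ~330M population figure is an illustrative assumption, and it assumes the road risk is spread evenly across everyone):

```python
# Rough per-person annual risk under each scenario.
# ~330M people is an illustrative assumption, not an official figure.
POPULATION = 330_000_000

def annual_risk(deaths: int, population: int = POPULATION) -> float:
    """Probability that a given individual is among the year's fatalities."""
    return deaths / population

human_risk = annual_risk(40_000)  # status quo
ai_risk = annual_risk(20_000)     # hypothetical self-driving scenario

print(f"human drivers: {human_risk:.6%} per person per year")
print(f"self-driving:  {ai_risk:.6%} per person per year")
print(f"risk ratio: {human_risk / ai_risk:.1f}x")  # 2.0x
```

Whatever population number you plug in, the ratio stays 2:1, which is the whole point of the 40k vs 20k framing.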
So the proper question is: if your family was killed in an accident, what would be your reaction if it was a human driver vs AI? For me:
human driver - incredibly mad because it was probably preventable
AI - still mad, but supportive of self-driving improvements because it probably can be patched
The first would make me bitter and probably anti-driving, whereas the second would make me constructive and want to help people understand the truth of how it works. I’m still mad in both cases, but the second is more constructive.
Yes, it’s a thought experiment… Not a fair question, just trying to put it in perspective
Anyone who understands stats would agree 40k deaths are worse than 20k, but it also depends on other factors. All things being equal to today, the 20k proposition is pure benefit
But when we look into the nuance and the details emerge, the formula changes. For example, it’s been discussed here that there may be nobody liable. If that’s the case, we win by halving deaths (absolutely a win), but the remaining 20k may be left with no justice… Worse, without liability exposure it absolutely creates a perverse incentive for these companies to do whatever maximizes profit
So, not trying to be a contrarian here… I just want to avoid the polarization that is now the rule online… Nothing is just black and white
But they’d get restitution through insurance. Even if nobody is going to jail, there will still be insurance claims.
I agree that there is nuance here, and I think it can largely be solved without a huge change to much of anything. We don’t need some exec or software developer to go to jail for justice to be served, provided they are financially responsible. If the benefits truly do outweigh the risks, this system should work.
Tesla isn’t taking that responsibility, but Mercedes seems to be. Drivers involved in an accident while the self-driving feature was engaged have the right to sue the manufacturer for defects. That’s not necessarily the case for Level 2 driving, since the driver is responsible for staying alert and needs to keep contact with the steering wheel. With Level 3, that goes away, so the driver could legitimately not be touching the wheel at all while the car is in self-driving mode. My understanding is the insurance company can sue on their customer’s behalf.
So the path forward is to set legal precedent assigning fault to manufacturers to get monetary compensation, and let the price of cars and insurance work out the details.
But they’d get restitution through insurance. Even if nobody is going to jail, there will still be insurance claims.
And that’s what I’m aiming at… If Mercedes decides, like Ford did before them, that it’s cheaper to pay out the insurance claims they lose than to fix their bugs, then innocent people will have to die so Mercedes can keep up their profit margins.
That’s exactly the point I’m trying to make
You seem to argue that, on the unproven premise that current AI is better than human drivers, we should let corporations test it out in the real world even if they are never criminally liable. For me, that’s a bad deal.
Now, imagine we go down this rabbit hole… It’s already 10x cheaper to lobby US politicians to limit Mercedes’ liability than it would be for them to actually start paying wrongful death claims
In Texas, if your doctor shows up drunk for surgery and leaves you quadriplegic or kills you, the biggest liability exposure has been limited to $250k
I love tech, and I do believe science, knowledge and the tech they can produce could improve our lives in unimaginable ways… But as long as our approach to it continues to be profit over people, socializing the risk while privatizing the profit, and corporations being citizens in all aspects except liability, we will never get there
You seem to argue that, on the unproven premise that current AI is better than human drivers, we should let corporations test it out in the real world even if they are never criminally liable
I’m arguing on the assumption that it is proven.
Until it’s proven, the driver takes the responsibility if the corporation doesn’t, and insurance costs should reflect that. There are reasons I don’t own a car equipped with self-driving features, and this is one of the big ones: it’s unproven.
But as long as our approach to it continues to be profit over people, socializing the risk
We’ve gotten really far with prioritizing profit, but I agree that socializing the risk is a big problem. However, criminal acts generally require motive, so we’re unlikely to see actual jail time without provable, malicious intent.
So I think we should do the next best thing: fine them. Increase the fines for each infraction in a given year until the problem is fixed. Force them to continue to improve.
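As a sketch of what "escalating fines" could look like in practice (the base amount and multiplier below are made-up numbers for illustration, not from any actual regulation):

```python
def fine_for(infraction_number: int,
             base_fine: float = 250_000.0,
             multiplier: float = 2.0) -> float:
    """Hypothetical escalating fine: each infraction in a given year
    costs double the previous one, resetting annually."""
    if infraction_number < 1:
        raise ValueError("infraction_number starts at 1")
    return base_fine * multiplier ** (infraction_number - 1)

# A year with four infractions gets progressively more expensive:
print([fine_for(n) for n in range(1, 5)])  # 250k, 500k, 1M, 2M
```

The point of the exponential schedule is that ignoring the problem quickly becomes more expensive than fixing it, which is exactly the incentive a flat fine fails to create.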
Seeing someone go to jail doesn’t fix anything.
If there are 20k deaths vs 40k, my family is literally twice as safe on the road, why wouldn’t I take that deal?
Read the proposition… It’s a thought experiment what we were discussing
The proposition is stupid. If you told me that ALL future accidents would be prevented if I agreed to kill my family, I would still not do it; that’s just a bad-faith trolley problem. Let alone just reducing it by half.
I reduced it to a more realistic experiment, where my family might be killed, with the same probability as anyone else’s.
Oh the depth of reasoning in social media
That is exactly the point… Anyone would be 100% happy taking any proposition as long as they don’t have to pay the cost. I was just trying to highlight that
In this case, it was all about liability… We have not even come close to proving the current driverless tech is actually better than people’s skills… We all know that automated driving should be safer, but we have no clue if we are even taking the right steps to get there
But I am paying the cost. I accept that my family might be killed in an accident, with the same probability as anyone else.
If that’s your point, that’s a stupid point, and you should do better.
Again, if you are not willing to engage in a discussion with more nuance than black vs white, move along
Black or white, as in "kill your family without consideration of probability (aka grey zones)"?
I’m tired of explaining what a thought experiment is and the point I was trying to discuss… You can just disagree and we can move on with our lives
Have a great week ahead bud