People in Asia are less willing to kill the elderly because they don't have a Republican party.
Everything in the article groups Japan and China together, but on the question of whether to prioritize passengers over pedestrians they are at opposite extremes. (I'm reading that chart correctly, aren't I?)
I have no insight into this -- I just thought it was weird.
The results showed that participants from individualistic cultures, like the UK and US, placed a stronger emphasis on sparing more lives given all the other choices--perhaps, in the authors' views, because of the greater emphasis on the value of each individual.
People who put more emphasis on the value of each individual are more prone to turn them into aggregate numbers for moral purposes?
People who put more emphasis on the value of each individual are more prone to say
- 4 people = 4 units of human life
- 2 people = 2 units of human life
- 4 is greater than 2
- therefore, kill the 2 people
And less prone to factor in the merits of the people to society (e.g. 1 doctor = 4 puny non-doctors).
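The two rules being contrasted above can be sketched in a few lines. This is a toy illustration, not anything from the paper; the merit weights are invented to match the doctor joke.

```python
# Toy sketch (not from the study): the pure "count lives" rule
# vs. a rule that weights people by an arbitrary merit score.

def spare_by_count(group_a, group_b):
    """Pure counting: spare whichever group has more people."""
    return group_a if len(group_a) >= len(group_b) else group_b

def spare_by_merit(group_a, group_b, merit):
    """Weight each person by a (made-up) merit score instead."""
    score = lambda group: sum(merit.get(p, 1) for p in group)
    return group_a if score(group_a) >= score(group_b) else group_b

merit = {"doctor": 4}  # 1 doctor = 4 puny non-doctors, per the joke above
print(spare_by_count(["a", "b", "c", "d"], ["doctor", "e"]))
print(spare_by_merit(["a", "b", "c", "d"], ["doctor", "e"], merit))
```

The first rule spares the four; the second spares the doctor's side, since 4 + 1 outweighs 4 × 1.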
What's their net worth?
This is super interesting data! It is a shame that the main thrust of the article is focused on inferring cultural differences from the survey results rather than asking a couple of even very broad questions about why people made their choices. The inferences seem to hold in spite of significant outliers within the groups they created (Japan vs. China, as mentioned in 2).
Summary, I guess: a cool data set creates some data; a lazy framework fitting stereotypes is overlaid on it; then the articles summarize that half-proposed result as an actual result. It's a common problem I see with this type of write-up.
If these are self driving cars then the first person they should run over is Elon Musk
It's behind a paywall, but in the paper they clustered countries based on how similarly they answered. There are some things that aren't surprising--most of the Anglophone countries are clustered together--but a few that are: the Anglophone outlier is not Ireland but New Zealand, which is grouped with Cyprus (nowhere near Greece or Turkey); China and a few other Asian outliers are grouped with the Eastern European nations instead of with the rest of Asia. I dunno if there's anything meaningful here.
My favorite autonomous vehicle trolley problem is "should this robot car hit a school bus carrying 28 children, or drive itself off a cliff thereby only endangering its single passenger?"
That's not a problem. That's an opportunity for car rental companies.
"Would you like to ensure you are fully covered for any damage to the vehicle? Only $19.95 a day."
"Would you like to rent a GPS navigation unit? $3.95/day."
"Would you like the Vanity Upgrade? $42.50/day and the car's ethics module will be informed that you are traveling with 6 children, one of whom has a 15% chance of curing cancer if she grows up. The others have tapeworms that are endangered species."
"If you slip me a twenty, I will be sure my fingers don't slip and type 'Ann Coulter carrying Pauly Shore's baby.'"
9.last: Your basic commie has no regard for human life.
Huge regard for pumpkin spice lattes, though.
Virtually all the scenarios I see discussed in which self-driving cars would have to make these decisions seem highly unlikely.
ISTM that if they are driving at a speed appropriate to road conditions - which they are going to be doing much more consistently than a human driver - these scenarios become vanishingly rare, and usually reduce to clobbering another car or a child (which is an easy decision).
Counterpoint: in the world of action, what do something like 100% of respondents do when they are actually driving a car? Brake.
In the world of the future, what will something like 100% of robots do when they are actually driving? Brake.
@16 Yeah, so robotic cars are moving slower (near perfect appreciation of road conditions) and thus have to brake less. So the truly catastrophic scenarios should be greatly reduced.
Or, everybody says robot cars are so great we can increase road speed until things are as unsafe for the people in the cars as they are now, but less safe for those in other cars.
Or that. In which case these extended trolley problems are incidental to the economic case for trading a few more lives for extra speed.
As I noted before: how many lives could be saved if the person-hours and funds devoted to wittering about trolley problems could be devoted instead to actually improving road safety (or for that matter vaccination or something)?
21: But all of that philosophical work led to the Good Place episode "The Trolley Problem", which brought me more joy than any mere road or vaccine.
Facial recognition + databases of personal information, including financial and insurance information + liability laws + machine learning + amoral optimization strategies = car's "choice" is whatever minimizes payouts.
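A minimal sketch of that grim equation, assuming the "choice" reduces to expected-payout minimization. Every name and number here is invented for illustration.

```python
# Hypothetical sketch of the point above: an "ethics" routine that is
# really just liability minimization. All figures are made up.

def choose_victim(candidates):
    """Pick the collision option with the smallest expected payout."""
    return min(candidates, key=lambda c: c["liability"] * c["odds_of_suit"])

options = [
    {"who": "uninsured pedestrian", "liability": 500_000, "odds_of_suit": 0.2},
    {"who": "well-lawyered pedestrian", "liability": 5_000_000, "odds_of_suit": 0.9},
]
print(choose_victim(options)["who"])  # → "uninsured pedestrian"
```

The machine-learning part just makes the inputs to this one-liner sharper; the amorality is in the objective function.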