There are 12.5 deaths per billion vehicle miles driven. This is the statistic that self-driving cars have to beat in order to be "safer." Based on this, Uber has had about 3 million test miles (under mostly safer than average road conditions, but I'll leave that aside). They're now at 333 deaths per billion miles driven. Gonna have to go a few more days without a death before they're "safer."
This is stupid enough to actually cause me physical pain.
The backup driver thing makes me nuts. It is obviously not going to happen that anyone is going to be consistently paying close enough attention to the road to intervene in an emergency if they're not really driving -- that's impossible, and pointless as a safety mechanism. Yes for short, intense test periods, but not for real driving around.
Also, I think it's important to note that this happened where the speed limit was 35 -- the woman wasn't walking her bike across a high-speed highway in the middle of nowhere. If the speed limit was that low, it was because it was a populated area where pedestrians were likely.
1: Walk me through what makes it so stupid?
The video seems to show an objectively difficult situation for a human driver; it also seems to show the car not reacting at all, even when the woman stepped into the light (and it should have been able to see her earlier with non-visible-light sensors). So I'd say there's some pretty serious questions about WTF went wrong with Uber's tech. Weirdly, this is not one of the esoteric edge cases folks have been worrying over. Rule #1 for a self-driving AI is, "Detect a thing in the road and try not to hit it."
Am I supposed to be writing a poem?
The Uber cars are very common here. I rarely go somewhere without seeing one.
Wait - Uber self-driving cars? or just Uber cars in general are common?
The Uber self-driving cars. They have a very visible roof-mounted turbine-looking thing.
I watched the video. I didn't find the "looking down" bit incriminating. She was glancing around, and I don't think she could have done anything anyway. I did not get a "not paying attention" vibe out of that, and she wouldn't have seen anything in time to react BECAUSE THE CAR WAS DRIVING TOO FAST FOR THE RANGE OF THE HEADLIGHTS.
Seriously, by the time you could see the pedestrian it was too late. The car should have A) been going slower and B) maybe used lidar or some shit so it could see in the dark.
A couple of weeks ago, one was in the paper because it hit a pothole so hard the computer said "Fuck this noise" and just stopped.
I found this to be very good, and the links to some of the tweets showing that intersection were particularly enlightening.
3: Extrapolating from deaths per 3 million to deaths per billion without including a confidence interval?
The backup driver thing makes me nuts.
I think the backup driver thing started when they were testing on highways and the idea wasn't, "Oh, shit, that truck jack-knifed! Do something!" It was, "There's some congestion up ahead and you'd better take over." Since then, the human backup has become a totem of not handing our lives entirely to our new AI overlords.
Sample size.
Also, why limit it to self-driving Uber cars? Why not include all the mileage (which is significant) from all the other self-driving cars? Well, obviously, because you want to make the numbers as scary as possible.
You could use exactly the same argument to say "yes, at one fatality in three million miles driven, Uber is clearly far more dangerous than human-driven cars. But, with zero fatalities in more than five million miles, Waymo's self-driving cars are equally clearly far safer".
4: I think the video overstates the difficulty for a human driver. First, it increases the contrast between brightly lit and less well-lit parts of the scene compared to human vision -- there are streetlights, so it wasn't as if she stepped out of pitch-blackness into headlights. And a person driving would have had much better peripheral vision than the constrained frame of the video suggests. It's not certain that a person would have avoided the crash, but I think it's likely they could have.
Again, remember that it's a 35 zone, which to me means 'this is an area where you're very conscious of things possibly happening', for a human driver. Where the speed limit's higher, cruising along on mental autopilot is certainly a thing that happens, but where you're at a slow speed in a built-up area, you should be watching the sides of the road.
2 And the Uber was speeding. It was also an area a human driver would instinctively be on higher alert with an adjacent bike and pedestrian lane. What kind of sensor was it using anyway? Surely LIDAR would have picked her up.
Yeah, a human can't be made responsible for slamming on the brakes at the last second. That's the car's job. The human is for dealing with weird and temporary changes to traffic patterns.
12: Isn't the point that the sample size is way too small to say that it's safer until you hit the point where deaths start happening? And when they do start happening, while it would be an overstatement to draw solid conclusions at this point, this does seem early for a first death if Uber cars were safer.
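To put a number on "way too small": with one death in roughly 3 million miles, the exact Poisson 95% confidence interval on the rate is enormous. A minimal sketch -- the 1-death and 3-million-mile figures are the ones from the post, and the interval formula is textbook stats, nothing Uber-specific:

```python
# Exact Poisson 95% CI for the observed Uber fatality rate.
from scipy.stats import chi2

deaths = 1
miles = 3e6
human_rate = 12.5  # deaths per billion vehicle miles, from the post

# Exact CI for a Poisson count k: [chi2.ppf(.025, 2k)/2, chi2.ppf(.975, 2k+2)/2]
lo = chi2.ppf(0.025, 2 * deaths) / 2
hi = chi2.ppf(0.975, 2 * deaths + 2) / 2

per_billion = 1e9 / miles
print(f"point estimate: {deaths * per_billion:.0f} deaths per billion miles")
print(f"95% CI: {lo * per_billion:.1f} to {hi * per_billion:.0f}")
# -> roughly 8 to 1857 deaths per billion miles. The interval only just
#    reaches down to the human rate of 12.5, which is why "333 vs 12.5"
#    manages to be scary and statistically thin at the same time.
```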
Since then, the human backup has become a totem of not handing our lives entirely to our new AI overlords.
It's oft-said around here that sexual harassment training videos make perfect sense when you remember that they're designed primarily to protect the company from liability and their effect on preventing sexual harassment is incidental. Seems like this is the same phenomenon.
Wait, Uber cars don't have lidar?
I don't know if they have lidar or not but lidar could and should have prevented this specific accident from happening.
15: Speeding is funny -- 38 in a 35 is 'speeding', but it's also normal competent human driving. If there are going to be autonomous cars on the road and they're not going to exceed the speed limit, that's going to change the experience for everyone else a whole lot.
Wait, Uber cars don't have lidar?
Yeah, funny story about that.
http://www.businessinsider.com/a-judge-just-banned-ubers-former-head-of-self-driving-cars-from-lidar-related-work-2017-5
19: Looking at the video, my first thought was that if this isn't the kind of collision that a computer is better at avoiding than a human is, somebody programmed the computer incorrectly. There didn't look to be enough time for a person to have reacted after the pedestrian was in the headlights, but I don't know why the computer couldn't have seen her earlier. It doesn't seem like it would take very exotic technology.
Self-driving cars surely don't rely on posted speed limits to determine safe driving speeds, right?
I don't have a problem with humans or AI doing 38 in a 35 during daylight hours, but if you are driving at a speed where you can't slam on your brakes the second something appears in your headlights and stop in time, that's speeding regardless of the speed limit.
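For the record, the arithmetic on that is easy to sketch. The 1.5 s reaction time, 0.7 g braking, and ~160 ft low-beam throw below are generic rules of thumb I'm assuming, not measurements from this crash:

```python
# Back-of-the-envelope check of the "outdriving your headlights" claim.
MPH_TO_MPS = 0.44704
G = 9.81  # m/s^2

def stopping_distance_ft(speed_mph, reaction_s=1.5, braking_g=0.7):
    v = speed_mph * MPH_TO_MPS
    reaction = v * reaction_s             # distance covered before braking starts
    braking = v**2 / (2 * braking_g * G)  # distance covered while braking
    return (reaction + braking) * 3.28084

LOW_BEAM_THROW_FT = 160  # rough figure for typical low beams

for mph in (25, 35, 38, 45):
    d = stopping_distance_ft(mph)
    print(f"{mph} mph: ~{d:.0f} ft to stop, margin vs low beams ~{LOW_BEAM_THROW_FT - d:.0f} ft")
# At 38 mph the stopping distance is ~153 ft, leaving a single-digit margin:
# anything appearing at the edge of the light is effectively unavoidable.
```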
Were the headlights working right? Because that didn't seem like as much warning as I would expect at 38 MPH.
20: I don't recall ever hearing about the policy on that. It would be weird to specifically program a computer to break the law. But it's also true that human traffic is habitually 5-10mph over the limit. On congested highways I'm sure it makes sense to drive with the flow; on empty suburban streets, not so much.
I do recall a Waymo anecdote about 4-way stops. The cars don't have to break the law exactly, but they had to be programmed to be a little aggressive or humans would just continually steal their right of way.
The self-driving vehicle defense force has been maddening for years. And now they're all like "Sigh, how interesting that you have now noticed that cars are dangerous. If we got upset about every death caused by you puny human drivers, what a better world this would be".
The maddening part is that their base premise is that the current situation could not get any worse. Like the charter school fans who have convinced themselves that public school teachers are the worst teachers imaginable, so replacing them with any random group of humans will be an improvement. Not only that, but so much of an improvement that it's worth breaking up existing schools and their community structure, shuffling kids around all the time, constantly changing the rules and incentive structure... Gotta do something! It's an emergency!
24, 25: See my 13 -- I think visibility looks much worse on video than it is in real life. You can see streetlights in the video, but they don't look as if they're illuminating anything -- if you were actually there I think you would have been able to see her by the streetlights much earlier.
but I don't know why the computer couldn't have seen her earlier
It's possible the image detection algorithm was thrown off by the combination of human+bicycle+low light. Like maybe it misclassified the reflection from the bicycle spokes as road markings or something.
I think visibility looks much worse on video than it is in real life.
Right, but to the computer driving the thing, the video visibility is what there is.
21: Huh?
From what I've read on the recent lawsuit, Google (Waymo) might have its own better/cheaper proprietary version, while Uber's attempts to do better than its off-the-shelf Velodyne solution spurred the lawsuit, but that's not speaking to 14/19.
If Waymo wins, the company would most likely seek an immediate injunction against Uber from using its stolen LIDAR technology. This won't derail Uber's autonomous driving tests that much, considering the ride-hail company says it buys its LIDARs off-the-shelf from Velodyne. It would only affect the LIDAR that Uber is trying to produce on its own, and sources indicate that Uber is already trying to pivot away from the designs brought over by Levandowski.
28 makes a lot of sense. You notice this in general on footage from dashboard cameras at night - the contrast is much higher than it appears to the human eye.
30: OK then, in for a penny, in for a pound: It's like the invasion of Iraq. And the overthrow of Qaddafi. Things can't get any worse in those countries, right? The only question is how MUCH better the new government, if there is one, will be.
But on the other hand, a human driver might have seen the pedestrian but also assumed that the pedestrian had seen the oncoming car on the otherwise empty road and wasn't just going to walk straight in front of it, because that's not normally what humans do. Or at least not what most humans do. It may be the accepted way to cross roads in Arizona - you just stride boldly out into the traffic and expect it to stop for you.
31: I honestly don't know, but I doubt this? Even not counting non-optical sensors, if the optical input to the car is as bad as that footage compared to what a person can see, it would seem insane to rely on it. I mean, seriously, they'd be crazy not to at least have a much broader field of view.
36: Well... sometimes people do, and you can usually predict pretty well from their body language whether they're going to. Human drivers make this sort of judgment all the time.
27, 30: Ned is my new favorite commenter.
37: the link says that the cars have seven cameras, including close and far field forward cameras. It's even possible that the video is from a separate camera that isn't used for driving at all (like the interior camera). I don't know.
I find it appalling that the victim was smeared almost instantly. Why won't Uber release their data? This has got to be the best documented car accident in human history. Let's have at that data. Full transparency and all that.
Transparent humans are hard to see.
21 Then too bad for Uber but their robot cars aren't safe for our streets.
Caltrops, people. Get some and defend yourselves.
It's even possible that the video is from a separate camera that isn't used for driving at all
This is my assumption.
Which, to make it clear, makes me think that this wasn't an impossible situation where the car couldn't possibly have perceived the victim. This looks to me like something a human driver could probably have handled fine, and an autonomous car should have been able to handle too -- the pedestrian should have been perceptible to its sensors -- except that something went wrong.
45 Which makes it all the more maddening that Uber and the authorities are all like "nothing to see here, move along."
The speed limit has been reported elsewhere as 40 or even 45.
Here's Arizona news saying 35. I mean, it could still be wrong.
I watched the video, and 1) if your "self-driving" car can't handle that, then it doesn't belong on the road and 2) the "driver" could have prevented the accident, if that was in fact the driver's job. If you're looking down most of the time, you're not going to see the flash or movement, or whatever it happens to be, that alerts your brain that there's something on the road.
Basically, I think these things are ridiculous and I can't believe they're legally on the roads.
21: You answer your argument in 12.2 here. You focus on Uber cars because Uber isn't using the same technology as everyone else, except to the extent that it is stealing technology. The company is, in general, a bad actor that appears to be less interested in public safety and general human decency than other corporations. (Although it's certainly possible that Uber is just less good at concealing its sociopathy.)
Per LB's 17, I'm wondering if you are arguing that it is unreasonable, at this stage, to monitor Uber's safety record at all.
A naive person could over-read Atrios' point the way you insist that we must, and Atrios doesn't offer us the more precise statistical methods that are available to evaluate evidence when data are sparse.
But, making allowances for the normal parameters of blog snark, Atrios' point seems completely sound. I would characterize that point as: "Uber's safety record doesn't look good so far."
As for Bitcoin did you see that there's child pornography on the blockchain now? So all the bitcoin bros are pedos now.
You answer your argument in 12.2 here. You focus on Uber cars because Uber isn't using the same technology as everyone else, except to the extent that it is stealing technology.
But Atrios doesn't conclude "we should take Uber off the road". He concludes "we should take self-driving cars off the road". It's cherry-picking.
And Uber, like all the others, is using a combination of radar, machine vision and lidar.
Some say the world will end in fire
All I'm saying is maybe the Democrats should try doing what I want
Remember 2004? I was right.
Self driving cars, I live in a city.
I would characterize that point as: "Uber's safety record doesn't look good so far."
Yes, but wait until humans start mixing meth and vodka for breakfast.
As for Bitcoin did you see that there's child pornography on the blockchain now?
There is also a picture of Tank Man, which means it should be banned in China.
This seemed right to me, from a former Googler:
Something that surprised me a few years ago was how a bunch of companies were suddenly making self-driving cars. I *knew* how many years Google's cars had been tested for before they got anywhere near a public road; how did others move so fast? I'm increasingly suspecting, "recklessly."
But Atrios doesn't conclude "we should take Uber off the road". He concludes "we should take self-driving cars off the road".
This is not what the linked material says. And I read a lot of his stuff on the subject and I don't recall him ever having said this.
Also, there is a certain series of digits in pi that translates to a pornographic Tank Man jpeg.
I think the solution is for self-driving cars to actually be self-driving flying cars. Collision avoidance is much easier when you are in the air and there are fewer things to bump into. And you really don't want flying cars to be human-piloted anyway.
Basically, I think these things are ridiculous and I can't believe they're legally on the roads
See, that's what I think about a frighteningly large number of human drivers. Human beings (not all of you, you folks are great, I'm sure) suck at driving. And self-driving cars are far less likely to try to run me down on purpose, which has happened to me twice in the past year.
This is not what the linked material says.
Maybe I misread him.
"It's going to take a long time to make the stats for robot cars look better than the stats for human cars after one death.
There are 12.5 deaths per billion vehicle miles driven. This is the statistic that self-driving cars have to beat in order to be "safer."
Based on this, Uber has had about 3 million test miles (under mostly safer than average road conditions, but I'll leave that aside). They're now at 333 deaths per billion miles driven. Gonna have to go a few more days without a death before they're "safer."
I concede that he doesn't actually say that they should be taken off the road. He could simply be arguing that they're much more dangerous than humans but should be allowed on the road nonetheless.
43: 21 is misleading. Uber has lidar. Its systems' quality or people's ethics may be at issue, certainly.
He could simply be arguing that they're much more dangerous than humans but should be allowed on the road nonetheless.
Or he could be arguing that anyone who says they're 'safer' than human drivers doesn't have support for it yet. Which is what he literally seems to me to be saying, and which seems solidly reasonable.
The companies could all rent out that empty section of California City that Mythbusters used and pay people to live, drive, walk around, and be potential targets.
For the purposes of this sort of discussion, I tend to put technological problems in two baskets: Information-related, and everything else. And my assumption is the non-information-related problems (travel to Mars! nuclear fusion!) are going to take longer to solve than is commonly thought, while the information-related ones (GPS navigation, for instance) turn out to be much easier than one might expect.
Atrios is probably right that the developers are making sunnier projections than are justified, but contra Atrios, I can't imagine that we won't have real self-driving cars in wide use in, say, 20 years.
But if there weren't already plenty of good reasons to stay out of Pennsylvania, Moby's 5 is the clincher.
63 Well their software sucks then. As ogged and others have pointed out, she should have been easily detectable, even to a human driver and especially to a robot car equipped with lidar. There is no excuse for this, and Uber should be made to release all of the data related to this incident. That they haven't suggests to me that they have something to hide.
but contra Atrios, I can't imagine that we won't have real self-driving cars in wide use in, say, 20 years
Of course, we will! Also, walking across the street will be illegal and if a person gets run over by a self-driving car, the person's heirs will be responsible for any damage to the vehicle.
I live about a mile from where that pedestrian was hit. That stretch of road is definitely not one that you drive expecting to encounter pedestrians or jaywalkers, and while the speed limit is 35 the design of the road and the nature of the area is such that most functional adults will be doing 45-50. That video scares the shit out of me, because I'm not sure *I* would be able to cleanly avoid someone slow-walking their bike across that street in the dark away from any crosswalks or streetlights.
That said, I don't have LIDAR eyes and it definitely seems like any street legal self-driving system should be able to handle this situation better than this one did.
If the problem per 61 is inadequate enforcement of laws against driving cars into people:
A) this, like many of our problems, is something that is unusually bad in the United States, not an inevitable property of human nature
B) is that going to get better when there isn't even a person or entity that can imaginably be held responsible for the incident except Uber Technologies, Inc.? It doesn't seem like an improvement, incentives-wise.
Some say the world will end with bots,
Some say with tweets.
With automation like as not
I hold with those who favor bots.
But if again we press delete,
I've seen enough of Putin's lies
To say that for disruption tweets
This KPI
Would also meet.
I want to dodge you so hear me out
I want to show you what autonomous driving's all about
Darling tonight
Now I've got you in my sights
With these lidar eyes
One scan of you and I can't disguise
I've got lidar eyes
I feel the repulsion between you and I
43- Nah, the real defense is to design your streets based on cattle paths so that it's too hard a puzzle for self-driving cars to solve. I've never seen any test cars in this city.
this, like many of our problems, is something that is unusually bad in the United States, not an inevitable property of human nature
It's definitely something that can and has been overcome in other countries, but it also seems like a kind of infrastructure-development trap countries can fall in and stay in. Per the WHO, the US is ranked 120th in road traffic fatalities per population. Those over double our rate are mostly low-income countries, but also Thailand, Iran, Saudi Arabia, South Africa, Vietnam, Brazil, Malaysia. Those 110%-199% of our rate include Russia, China, India, Egypt, Mexico, and even Korea.
I rode in a self-driving car four years ago and the main reason the backup driver was there was because there was no guarantee the car could make it around its city street loop on its own. Testing has expanded since then, but millions of miles of self-driving cars are the same miles over and over.
Anyway, the car I rode in kept dropping out of self-driving mode, not always for obvious reasons, and the default action if no one took over was to slow down to a stop and flash the hazard lights. So someone needed to be ready to take over, and there was an alert sound when the car switched modes. I don't know what the backup driver would have done if it looked like the car would cause an accident.
Since this was part of demonstrations for the public, they talked a lot about safety and how the car was set to default to slow down and/or stop if there was any doubt about a situation. We could see some of the LIDAR data and if the car's model indicated a pedestrian might step off the road or another car might change lanes in front, it would slow in anticipation.
This was not an Uber car.
To be clearer: many of the millions of miles driven by self-driving cars are the same miles, chosen for testing. Probably shouldn't write long comments on a phone.
That video scares the shit out of me, because I'm not sure *I* would be able to cleanly avoid someone slow-walking their bike across that street in the dark away from any crosswalks or streetlights.
The lack of crosswalks and streetlights was also part of the problem here. Sure, Uber fucked up, but so did the urban planning people for the City of Tempe.
What are the odds of records being revealed showing that Uber cars had some of the safety features described in 77 but found they limited performance, so they made the engineers take them out?
69 is the whole commie truth. If I made a robot car and set it loose in town, I'd be arrested for some kind of recklessness in a second, no matter how much testing I'd done. But if I had a billion dollars and rolled out 10,000 cars, it'd be "and what should be the mandatory minimum for ambulating, sir?"
Now that ogged has endorsed it, I'll claim 69.
Caltrops, people. Also, I would totally support firebombing the fuck out of these cars, unoccupied of course.
Caltrops is a new word for me. The book I just finished reading used the word "cantrips" a lot, which was also new, and which I surely will confuse with caltrops. I kept misreading it as catnip, which at least is an old word for me.
Also do you just put the caltrops down as you're walking to cross the street and then pick them up afterwards? What's the movie where they toss sand into the abyss so that the invisible path will be revealed by where the sand lands? Is it going to be like that going forward?
85: You've never played Dungeons & Dragons?
86: Indiana Jones and the Last Crusade.
I'll totally pay to see Indiana Jones vs the self driving cars if it ever gets made.
Heebie has clearly never planned the getaway portion of a heist.
No, you're thinking of pylons. I know those are different.
Nylons in a heist, pylons in the streets afterwards. As they say.
Indiana Jones and the Tempe of Doom
76: remember that your road fatality rate is going to consist of three things:
how frequent are road accidents?
how likely are road accidents to cause injury?
how likely are those injuries to be fatal?
Now, the first is going to be affected by vehicle design, road design, driver skill, and a few geographical/cultural things like weather and drunkenness. The second is going to be vehicle design and motorbike helmets and so on. And the third is going to be all about emergency services response times and quality of health care.
So it's quite possible that, say, Vietnam has safe roads and excellent drivers but just terrible ambulance coverage. Etc.
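The decomposition is just a product, which is what makes the Vietnam point work: very different mixes can produce the same headline rate. A toy illustration -- every input invented:

```python
# Two hypothetical countries with identical headline fatality rates
# but completely different underlying problems.
def deaths_per_billion_miles(crashes_per_million_miles,
                             injuries_per_crash,
                             deaths_per_injury):
    return crashes_per_million_miles * 1000 * injuries_per_crash * deaths_per_injury

# Country A: lots of crashes, but good vehicle design and fast ambulances.
a = deaths_per_billion_miles(2.0, 0.25, 0.025)
# Country B: few crashes, but poor vehicle safety and slow emergency response.
b = deaths_per_billion_miles(0.5, 0.5, 0.05)
print(a, b)  # both come out at 12.5 deaths per billion miles
```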
Speed is going to impact all three. Which means you can't write off the impact of driver skill on #2 or #3.
GPS beacons will be mandatory for all bicyclists. And heck, why not pedestrians too.
I own a self-driving car company and to my mind that video is absolutely egregious. There are multiple failsafes that should have been in place to prevent that from happening. Road design there is certainly horrible as well, but that car should never, ever have been on the road with the safety thresholds set so that something like that was possible.
It's not terribly surprising that it's Uber, but that's not because their technology team isn't good.
AMA!
The thing about reduction in road fatalities is that the US used to be a road-fatality leader, but in the past 30 years or so Europe and Japan have had drops that have massively outpaced the US. So whatever the factors are (my view - most of the difference is stricter driver education and licensing requirements) it's not some immutable feature of US geography. But all driving is massively safer now, in all rich countries, than it was even in 1990.
"used to be a leader" in the sense of "used to be the safest."
99: Hiya! What's the movie that I always confuse with GalaxyQuest?
I own a car. So, pretty much the same thing.
Thanks! That was driving me crazy when I made the cylon joke. But those aren't cylons, I think.
I think the problem is that Starship Troopers sounds like it should be the funny dopey movie, like Super Troopers. GalaxyQuest sounds more or less just generically scifi.
61 and 68 are exactly what Ned is talking about in 27. "Human drivers are so terrible that literally anything would be better."
Sadly, no.
I wish I'd played 102 a little more straight, for the humor of it.
Anyway, aside from everything else--and everything else is a lot--the failure of the robot car ever to apply its brakes is completely devastating to any claim that they are safer right now.
Yes. Based on the video, I can't say I would have missed that pedestrian, but I'm sure I would have hit the brake.
109: Exactly. A non-distracted driver probably spots the ped/cyclist as a potential issue before she leaves the curb* and is, on some level, prepared to react, and at least brakes and swerves when she surprisingly crosses the road. Even if avoidance isn't possible, maybe it's a glancing blow or the speed is 28 and not 38, and the person survives.
*I'm rolling with the assertion that human vision on that stretch of road is better than the video's; IMO the cyclist would have been visible, even if not clearly, from some distance
To the extent that being in Arizona counts as "living."
46 is probably underplaying the general sense that cops seem to have that anyone who gets hit by a car while walking (or, God forbid, on a bike) might represent a tragedy but should have known they were taking their life into their hands by not driving.
108- Actually I'm curious what happened after- did the robot car continue on its merry indifferent-to-organic-life way until the human driver overcame her shock and took over manually, or did it at least realize there had been an anomaly and alert the backup driver to take control?
I wish 99 were real. If so, I would ask:
1) What sort of requirements does your insurance company have for you in order to insure you? Is it just standard liability stuff or do they insist on psychological screening of your engineers or something more off-the-wall?
2) What processes do you have in place to avoid creating the soap dispenser problem of your tech not recognizing darker-skinned people? What about other (literal or figurative) blind spots your tech might have, like whether your car might need to have a "scrupulous driving" mode that could be used by drivers who suspect they will be vulnerable to (racially motivated) traffic stops by police if their car is speeding even a little?
3) How do you feel about the fact that the most fervent evangelizers of your technology are at best off-putting and at worst enraging to a good chunk of the public? Do you feel it is something you have responsibility for addressing, from a business survival/success standpoint?
(N.b. ned speaks for me in this thread)
It's real, and those are good questions that I can't answer immediately, but I will in a bit.
Surely not---it doesn't say anything about parking.
112. I bike pretty often in suburban DC-- the two times that I've been hit which resulted in police interaction, I've found the cops to be perfectly reasonable. Both times the drivers were cited, one of which went to traffic court where I was a witness.
So maybe things are improving a little. Still way too many ghost bikes around.
I have a question for Robot Abraham Lincoln--given this accident, which seems like it should be a situation where a computer-assisted car could perform better than a human, do you think we're going to see wider deployment of LIDAR for collision warnings before there are autonomous cars? (I'm also curious as to why Tesla is only using cameras for their stuff, but I assume the answer is because Tesla is a gigantic dog and pony show.)
99. Why not focus on truck convoys with only the lead truck human driven or late-night bulk deliveries when driving like a proverbial grandmother would be OK?
Robot cars with human passengers in busy places seem like a risky, low-profit nightmare-- the robots can't eject hostile drunks, monitor for needle use or other crime in the car, or clean or even report vomit. How does cost-benefit for market segments work?
the robots can't eject hostile drunks
Is that a feature or a bug?
What processes do you have in place to avoid creating the soap dispenser problem of your tech not recognizing darker-skinned people?
I would be surprised if this were an issue; there's not much visual difference overall between a black pedestrian and a white one, because most of both of them will be covered in clothes. (Also it's using senses other than just visual input.)
Robot cars with human passengers in busy places seem like a risky, low-profit nightmare-- the robots can't eject hostile drunks, monitor for needle use or other crime in the car, or clean or even report vomit.
And yet we manage with robot trains. The Docklands Light Railway is not some sort of Judge Dredd soup of sleaze, drugs and bodily fluids.
99. Actually, rethinking-- for delivering goods, anything more valuable than a load of shitty pizza and frappucinos seems like an invitation to thievery.
How are there more use cases than convoys of semis? Maybe also golf carts in low-traffic areas for unhurried passengers, say, assisted living or recreation spots.
126. Other pax report serious problems on trains. Robot cars are mobile public restroom stalls.
124: A FEATURE, YOU DUMB FUCK. I OUGHTTA KICK THE CRAP OUT OF YOU.
Why not focus on truck convoys with only the lead truck human driven or late-night bulk deliveries when driving like a proverbial grandmother would be OK?
I also had this question but figured it was more of a marketing implementation than getting-the-bugs-out question. I am not particularly scared of a world where there is a dedicated robot lane on the interstate and trucking is no longer one of the deadliest professions.
130. I am afraid. Truck driver is the most common occupation in many rural counties, so this is a scheme to transfer yet more wealth from the countryside to the cities.
Other pax report serious problems on trains.
Specifically on driverless trains? More serious problems than on human-driven trains?
for delivering goods, anything more valuable than a load of shitty pizza and frappucinos seems like an invitation to thievery.
Congratulations, you've just proved why postmen are impossible. (Or at the very least why they need to be armed.)
Robot cars are also part of the panopticon, and current models assume users have a credit card. Sure, you can wreck a car's interior but they'll know who you are.
I guess that's true. It's annoying when the reason not to do something is that we're generally completely unwilling to take the most basic steps (as a society) to mitigate the effect on the people who would suffer. I'm sure the truckers realized this and voted for Clinton.
Robot cars are also part of the panopticon, and current models assume users have a credit card. Sure, you can wreck a car's interior but they'll know who you are.
Similar for the hubless sharebikes that now litter Seattle and just arrived in Oakland. (Warded against theft; not necessarily vandalism, from recent experience.)
136: Bikes don't have an interior that's hard to get into without permission. More analogous would be exterior damage--waiting until you see a driverless car at a stop light and spray painting it, or hitting it with a baseball bat or brick. We have a pretty strong taboo against pedestrians damaging cars on the road; I would be surprised if driverless cars have much of an effect on that.
I maintain bikes do have an interior, but that it's really hard to get into even if you have permission.
1) What sort of requirements does your insurance company have for you in order to insure you? Is it just standard liability stuff or do they insist on psychological screening of your engineers or something more off-the-wall?
Psychological screening? No. Not at all. There are fairly sophisticated safety and testing standards for software that gets into actual production series automobiles, from both a functionality and a process/development standpoint. There are also sort of patchwork state regulations about what bars you need to clear to test autonomous vehicles; one of the reasons you see so much testing in Arizona and Nevada is that they have a much lower regulatory bar to clear, to their great detriment.
2) What processes do you have in place to avoid creating the soap dispenser problem of your tech not recognizing darker-skinned people? What about other (literal or figurative) blind spots your tech might have, like whether your car might need to have a "scrupulous driving" mode that could be used by drivers who suspect they will be vulnerable to (racially motivated) traffic stops by police if their car is speeding even a little?
As ajay pointed out above, it's not necessarily the biggest problem with AVs that have sensors other than vision -- lidar doesn't see in color, so a lidar-based detection system probably won't be racist. However, it's very much the case that training set bias can lead to unexpected and unwelcome behavior. Some companies in the space have a tightly geography-linked approach (e.g. they will get really good at Mountain View or Singapore before even attempting anywhere else) in part because there is so much that's unique to specific circumstances.
3) How do you feel about the fact that the most fervent evangelizers of your technology are at best off-putting and at worst enraging to a good chunk of the public? Do you feel it is something you have responsibility for addressing, from a business survival/success standpoint?
Oh, it's terrible. Honestly, one of the reasons I'm in this business is because I can see very clearly that there are some subset of people interested who have no idea what kind of dystopia they're trying to create. I think it's super important to have vehicles that are as close as possible to perfectly safe and which can play well with everybody else on the road, including pedestrians and bicycles and other vehicle types. It's one of the reasons the Uber video is so infuriating.
given this accident, which seems like it should be a situation where a computer-assisted car could perform better than a human, do you think we're going to see wider deployment of LIDAR for collision warnings before there are autonomous cars?
Maybe? There are inexpensive-ish laser sensors that are already deployed. That said, for an advanced driver assist collision warning feature, radar is going to be just as good as lidar (and probably substantially better than inexpensive lidar) and orders of magnitude less expensive. The "good" lidars -- the Velodynes that Moby sees on Uber cars in Pittsburgh -- run about $100,000 each.
I'm also curious as to why Tesla is only using cameras for their stuff
See above; lidars with enough resolution to meaningfully supplant cameras for visual tasks are much too expensive (and fragile) to put into a production vehicle. For what it's worth, Teslas also have radars. Also, there is a substantial community in the AV world who thinks that cameras should be sufficient -- they're actually excellent sensors, if difficult to interpret. Raquel Urtasun at Uber is a big proponent of this (I know, I know) as are some of the other players (big and small) like AutoX and MobilEye.
A person who can distinguish Storm Troopers from Starship Troopers from Super Troopers is surely the kind of person known to be an avid biker in big city traffic.
Why not focus on truck convoys with only the lead truck human driven or late-night bulk deliveries when driving like a proverbial grandmother would be OK?
There are definitely companies focusing on both of these applications. Truck platooning is actually more difficult than it looks; if you keep the trucks a safe distance from each other then you have to deal with people merging in. It doesn't make the problem as much easier as you'd expect. One of the other problems with automated trucks is that you still need a driver for safety and for the last mile. Safety drivers are per se problematic, as was seen in the Tempe crash, but also if you have a driver in the truck it's hard to figure out where the value is. One idea that people are exploring is if you can use the highly automated highway driving to reduce the exhaustion level of the driver, thus allowing longer shifts. (That this would probably be awful for truck drivers is generally not part of the conversation; if I was in a job that involved driving a vehicle I'd be thinking really hard about how to get rolling now with labor activism.)
The last-mile delivery application is also something people are very interested in -- a company called Nuro.ai, which is doing that, just came out of stealth mode -- but the big problem is what you do when you actually arrive. Do you deliver boxes to the street outside somebody's house? Do they have to come down the stairs to carry their groceries in? Again, it's not really clear how you get the benefit of the autonomy. But people are trying.
Maybe also golf carts in low-traffic areas for unhurried passengers, say, assisted living or recreation spots.
This, again, is definitely something people are working on. There's a very good case to be made that limited speed, geofenced applications like retirement communities and college campuses are where you're going to see the first large-scale deployments, and certainly those locations are home to some of the most practically-oriented current pilots.
If I got a cheap, inflatable sex doll, taped some weight to it, and put clothing on so it looked like a person, could I give the people riding in the Uber robot car a heart attack by throwing it in front of them on a blind corner?
Wouldn't that work for people in any car?
Yes but you'll be five years older in five years whether or not you go back to car-driver-scientist school. Stop and think about it.
I think we can all agree that the most important part of this thread was heebie outing herself as someone who had never played D&D.
Thanks to Robot President (Stephen Byerley?) for the comments.
Said the bathroom floor to the hostile drunk.
McMaster out, Bolton in. No need now to worry about death by anything so slow or uncertain as Uber.
155: I was gonna say. I guess Atrios is right: No self-driving cars, unless they are invented by super-intelligent cockroaches.
A thing I honestly don't get about the video of the driver: we've had computer vision systems that are really good at eye-tracking for years now. Given the problems we know human backup drivers will have in the normal course of operations, why tf aren't these cars designed to safely suspend operations when the human backup driver is distracted?
The last-mile delivery application is also something people are very interested in
My good friend is worried that autonomous vehicles are going to undo all of the policy work he has done to bring down VMT. He worries that people will come to live in small infill apartments and send an autonomous vehicle out a few miles to the warehouse where they keep their surfboards or large items. He thinks that almost no one is thinking about the climate change impacts of eliminating almost all human costs of vehicle travel. (If you are going to point to someone to show that indeed, people are concerned about this, there's a fair chance that you'll end up pointing to him.)
157: Honestly, why don't regular cars tell the driver to look at the road if they aren't doing so?
158: I think it is much more likely they'll live in giant exurban estates with all their large items and then commute to work in autonomous vehicles. I agree that it won't help climate change.
"You got your white flight in my accelerated carbon emissions."
"No, you got your accelerated carbon emissions in my white flight."
I'm very much an autonomous-vehicle skeptic, but AFAIK we haven't yet had one smash into a bridge.
My good friend is worried that autonomous vehicles are going to undo all of the policy work he has done to bring down VMT. He worries that people will come to live in small infill apartments and send an autonomous vehicle out a few miles to the warehouse where they keep their surfboards or large items. He thinks that almost no one is thinking about the climate change impacts of eliminating almost all human costs of vehicle travel.
I'm dubious about the likelihood of this exact scenario, but it seems like widespread adoption of electric vehicles would at least mitigate it. (Depending on how the power is generated, of course.)
I didn't find the "looking down" bit incriminating.
Dude, at rough glance she looks down for about four seconds. That's a fucking criminal thing to do in a car. Even at a moderate speed like 40 mph she's still covering hundreds of feet.
Christ, the article says she looked away for six seconds. That's an insanely unsafe thing to do.
155 is disastrous news. It'll be war with Iran or North Korea, or, what's most likely, both.
I think the problem is that Starship Troopers sounds like it should be the funny dopey movie
It is!
158
That ship is sailing already. Per capita VMT increased drastically from 2014-2016, and is now midway between its 2005 peak and its 2014 trough. It has stalled recently, but I give it better than even odds it'll reach a new record in the next 5 years.
Halford's comment from a 2014 thread on self-driving cars is very on point.
I'm very pro self-driving cars in principle and will gladly be ferried around narnia in one because I can't drive. I think non-drivers and handicapped people find this idea more appealing. I have barely ever had my license, and thus barely ever been a driver. I was an OK driver for four months? however, given that I will be living here in the land of left-hand drive, truly challenging driving test, and no car for years more (like it or no), I don't feel like I could ever learn to be a confident driver so late in life. I mean, driving is in some sense easy because every asshole does it. on the other hand, a lot of people are terrible drivers.
so, I'm inclined to say the car should have some sensors that make it better than humans at avoiding pedestrians, and shouldn't be allowed on the road until it is. BUT, if I have not been on that very road in tempe, I have been on one very like it. everyone is going 45-50 for real, and the idea of pushing your bike across eight lanes not in a crosswalk is actually pretty nuts. even the crosswalks are probably dangerous. arizona having lax standards for self-driving car testing is of a piece with its nutsball roads that seem indifferent to human life generally. phoenix has the craziest freeways I have ever been on by far.
this is not a sarcastic question but a real one born of ignorance: what would you have done to fix the situation, actual drivers? it seems like slamming on the brakes would be too late to do any good. see her coming while she's still on the median and slam on the brakes then? or accelerate madly at that point and get past her? zip into the adjacent lane for a second and just get around her? but then there might be a car there, right? what's the solution that everyone thinks they could implement that the car failed to do? (it seems odd that the car did the same thing an inattentive or bad human driver would do; choke, basically--register the existence of the obstacle and then just plough through an actual human being.)
"More analogous would be exterior damage--waiting until you see a driverless car at a stop light and spray painting it, or hitting it with a baseball bat or brick."
But this virtually never happens. My street is full of driverless cars and none of them have been spray painted or hit with bricks, even in the middle of the night.
"I guess it's just that most people aren't murderers..."
My good friend is worried that autonomous vehicles are going to undo all of the policy work he has done to bring down VMT
If I were trying to design policy that would reduce VMT even in an age of autonomous vehicles, it would look much the same as current urbanist approaches to increasing pedestrian/walking safety in cities; narrower travel lanes, increased real-estate devoted to non-vehicle uses, lower speed limits, reduced parking requirements. Interestingly, these are also conditions that make autonomous vehicles much more likely to be dramatically safer than humans, even in the near-term. Autonomous vehicles do much better in situations where the amount of uncontrolled interaction with other cars is limited, and are perfectly good at following a tightly constrained track, and of course never get frustrated by things like low speeds or having to wait behind another car. One of the problems in Tempe, as has, I believe, been noted above, is that the vehicle was trying to behave in a plausibly human-like way on a road designed so that human behavior is per se unsafe for vulnerable road users.
170.3
First, most humans would pay more attention than the imperfectly programmed car and the inattentive backup driver in the Tempe case.
Second, for perspectives on what humans would do in a similar situation, try Moral Machine. Lots of Trolley Problems.
Third, cars with drivers are getting more hazard detection and warning equipment, like "forward collision warning" and "lane deviation warning" systems. My car has both but without even better "obstacle detection" (as recounted over and over above) cars won't detect humans in their path. The car in this case either had bad hardware or bad software or both. My opinion is that long before we have widespread use of self-driving cars we will have widespread hazard/obstacle/human/puppy detection in human-driven cars. It's one of the few things that the aging Baby Boomers are causing that's really good for society.
I knew trolley problems would come up. This is a complete red herring; most drivers never face anything remotely like a trolley problem. The number of actual trolley-problem accidents ever recorded probably doesn't even make it into double figures. As in: there are more trolley problems in that MIT study than there have ever been on the roads of America. Historically, drivers on US roads have been more likely to be strafed by a passing Zero than they are to face a kill three kids/swerve into two old people dilemma.
Yes. The answer to the trolley problem is "brake". There's a reason why driving instructors don't teach an "emergency swerve" into five orgasmic rats or whatever but do teach an emergency stop.
170.3: As I said way upthread, I believe that a human driver would have spotted the ped-cyclist on the median from some distance (at least 1/8 mile) and registered it as an unusual thing, heightening awareness. At that point, a good driver bleeds a little speed, but even a poor (but not awful) driver keeps an eye on the person as someone who is in a place where she is likely to, at some point, try to cross (a median is not a sidewalk).
For either driver, as soon as she steps into the road, brakes are applied and possibly lanes changed. As I said, without an IRL reënactment, it's impossible to say for sure that complete avoidance was possible, but I feel pretty certain that her death was. If the car gets under 25 mph, the fatality likelihood drops by 50% or more.
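On the under-25-mph claim: the commonly cited AAA Foundation (Tefft 2013) anchors for a pedestrian struck by a car are roughly 10% risk of death at 23 mph, 25% at 32, 50% at 42, and 75% at 50. Interpolating between those anchors is my simplification, not the study's model, but it backs up the intuition:

```python
# Rough check that scrubbing speed before impact changes the outcome.
import numpy as np

speeds_mph = [23, 32, 42, 50]          # AAA/Tefft anchor speeds
death_risk = [0.10, 0.25, 0.50, 0.75]  # corresponding risk of death

for mph in (38, 28, 25):
    r = np.interp(mph, speeds_mph, death_risk)
    print(f"struck at {mph} mph: ~{r:.0%} risk of death")
# ~40% at 38 mph vs ~13% at 25 mph: braking that shaves off 10-13 mph
# plausibly turns "likely dead" into "likely alive".
```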
Oh yeah--and honking. In a car with an automatic transmission, panic stopping while honking the horn isn't really hard or unusual. Yeah, my God, I hadn't even thought about that. As soon as she steps off the curb, why isn't the car's horn sounding?
omg do we have to teach the robots to honk?
You can teach a robot to honk, but I'm still going to give it the finger if it honks at me for not starting quick enough when the green light comes on.
I had completely missed this accident last month in Pittsburgh. I know that intersection very, very well (in fact, I made the left turn that the human driver did just last Sunday to pick up some ground lamb from Salem's), so I can picture exactly what happened. And it seems like, once again, a situation where the Uber simply didn't do anything to prepare for the possibility of another car doing something unexpected.
If you see an oncoming car with a left turn signal on, you have to at least consider that they're going to, you know, turn. You read the cues, you judge the speed (are they slowing enough to wait for me to pass, or are they hustling to get ahead of me?), you maybe lift your foot off the gas, you definitely prepare to brake. It just seems like the Uber did none of these (this is aside from the turn signal issue, which seems like a bit of a red herring).
To be clear, the human driver almost certainly screwed up, even if you concede that the signaling by the Uber was misleading. But, again, we have a story of the Uber just barreling ahead with no apparent caution.
FWIW, when I interact with these things, whether as cyclist or driver, I assume that they are very, very stupid. I most definitely don't assume that they're better drivers than hoi polloi.
My car's horn inexplicably stopped making a loud noise; it's now just the faintest of beeps. TBH it's a bit of a safety issue, but mostly I'm just glad that I'm at no risk of pissing anyone else off a la 181.
Especially when cycling on country roads, drivers will often honk as they pass to make sure I know they're there, but A. it's often so loud as to be startling, and B. some of them are being assholes, but you never know for sure.
Just in case, I give the finger every time. You can't be too careful these days.
For either driver, as soon as she steps into the road, brakes are applied and possibly lanes changed.
You're suggesting that the response should be to immediately swerve into a different lane? No, it shouldn't. Immediately slow, yes. But only change lane if you have checked first that there's no one coming up behind. As 178 says, there's a reason why driving instructors don't teach the "emergency swerve".
re: 178
And I think most competent drivers probably have a pretty good sense at any time whether swerving is likely to be a safe option. If I'm driving at any speed, I'll be flicking a look at the driver side mirror regularly, to check if I am able to abort into the next lane in the event of something stupid.
And if I'm approaching a pedestrian, a bike, another car that's driving erratically, I'll be doing that even more regularly. Similarly, if I'm approaching another car, and I can see that they are about to come up behind a slower moving vehicle, and thus there's a higher than zero chance they'll swerve out, because I'm in or approaching their blind spot, I'll be constantly checking for an exit into another lane, and if there's not, I'll either speed up to clear them before I think they might pull out, or slow down, to ensure I have stopping distance.
None of that is advertising above average competence, I think that's the sort of thing that competent drivers are constantly doing, even if they aren't necessarily consciously thinking of it all the time.
So, if I was on that road, and assuming reasonable visibility,* I'd know,** before the person pushing the bike got anywhere near my lane, whether I was going to have to brake, pull right (or left in the US), or what.
* which may not be a fair assumption
** assuming not super-tired, or surrounded by a lot of other erratic drivers, or whatever.
185 is basically a long-winded way of saying what JRoth already said in 179.
155: Was just disappointed to find out that Bolton's only 70. I keep hoping for natural death for the fuckers, but they're too young and rich.
If Trump dies in office, we don't have to pretend to be sad, do we?
184: checking the mirror for traffic was one of the things done as soon as the ped was spotted on the median. Total situational awareness.
And I'm not joking even a tiny, little bit. Any time I see something I don't like, my eyes go immediately to the mirror, even if there's just one lane. If there's a car behind me, I may pump my brakes just so they'll see the lights and back off a bit.
Maybe a poor driver doesn't do exactly this, but I think that almost any driver looks in the mirror if they think things might get weird. It's subtle, not a full-on alert, but it's updating your awareness.
Of course, I thought people knew how to position their mirrors, so what do I know?
And now I've repeated much of ttaM's 185 in turn. It's like a waltz of pwnage and redundancy.
One thing I am vaguely wondering about regarding 182 is what you might call epistemic confidence.
If you are surrounded by unpredictable actors, however good your estimate of their current vector in three-dimensional space it would be rational to put less weight on your prediction of where they will end up than if they were, for example, billiard balls or rocks.
This isn't even a human/machine interaction issue; other self-driving cars may have different control laws, may have a different subset of the available information, may hit a patch of black ice or aquaplane, or the interaction between SDCs might give rise to some sort of emergent weirdness, so you won't be able to rely on their behaviour even when everything works fine.
At some point the error term you need to account for that is going to spend a lot of the gains from the cool sensors and fast processing.
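A toy version of that error-term point, with every number invented: however precisely you measure an actor's current state, the buffer you have to leave grows with how erratic the actor might be over your prediction horizon:

```python
# Position uncertainty from an unknown acceleration grows like 0.5*a*t^2;
# the planner has to leave a buffer of a few standard deviations of that.
def buffer_m(horizon_s, sigma_accel_mps2, z=3.0):
    return z * 0.5 * sigma_accel_mps2 * horizon_s**2

for actor, sigma in [("billiard ball", 0.05), ("other SDC", 0.5), ("pedestrian", 2.0)]:
    print(f"{actor}: keep ~{buffer_m(2.0, sigma):.1f} m extra at a 2 s horizon")
# 0.3 m vs 3 m vs 12 m: cool sensors and fast processing don't help much
# if the actor model forces pedestrian-sized margins everywhere.
```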
narrower travel lanes, increased real-estate devoted to non-vehicle uses, lower speed limits, reduced parking requirements. Interestingly, these are also conditions that make autonomous vehicles much more likely to be dramatically safer than humans
Isn't increased pedestrian & bike traffic exactly the situation automated cars are really bad at handling (short of the "geotag them all" solution mentioned above)?
a median is not a sidewalk
Actually in this case, according to a daylight picture someone posted, the place where the biker was crossing was a nicely landscaped path inviting pedestrian usage but with a sign that said "don't cross here."
Funny horn story: I was driving a rental car on a left-driving Caribbean island with crazy blind switchbacks, so the practice was to honk going around every sharp curve. Heading back to the airport at the end of the trip, I honked around one curve and the horn wouldn't stop when I let go. I tried pulling on the piece in the middle of the steering wheel, thinking it was stuck down, and the whole center piece popped off in my hand (fortunately not the whole steering wheel). So I returned the car to the rental company at the airport with the horn stuck on and handed them the extra piece of steering wheel. They were quite understanding; apparently such things are common. (This was our second car from them; the first overheated in traffic the previous day.)
pull right (or left in the US)
FYI, she was coming from the left. If I'm not mistaken, the situation was this: divided road, four lanes all in one direction. Curb lane #1, median lane #4. Uber was in 2, she crossed 4 and 3 to reach 2.
And actually, saying it like that, I think she was behaving less irresponsibly than I'd first thought: she was about 70% of the way across when she was hit. Meaning that, had the car merely taken its "foot" off the gas when she stepped into the road, she would have reached the curb lane safely; if it had tapped the brakes, she'd have been mounting the curb by the time it passed. It's still risky to ever assume a car will slow, especially at night, but this is so damn far from the original bullshit claim that she just burst in front of the Uber that I can feel anger rising.
I knew trolley problems would come up. This is a complete red herring; most drivers never face anything remotely like a trolley problem.
I once got rear-ended when I was sitting at a stoplight - my car totaled. It was raining, the roads were slick, and the other driver explained, "I couldn't stop and I didn't want to hit the mailbox."
Isn't increased pedestrian & bike traffic exactly the situation automated cars are really bad at handling (short of the "geotag them all" solution mentioned above)
Yes and no. One of the reasons low-speed environments like college campuses seem promising for early deployments is that at low speeds autonomous vehicles can just err on the side of stopping. Lidar (for instance) is extremely good at telling you that something is in front of the car, even if it isn't always good at telling you what it is. If you are going 20mph on a road with a single travel lane and you detect something in the road in front of you, but you can't tell what it is, you stop. The discomfort for passengers is low, the chance of getting catastrophically rear-ended is low. It's a very different situation than going 40mph on a multi-lane road.
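The low-speed rule is almost embarrassingly short when you write it down. A sketch of the policy I mean (Python, every name hypothetical, obviously nobody's actual control code):

    def react(detected, identified, speed_mph, travel_lanes):
        if not detected:
            return "proceed"
        if speed_mph <= 20 and travel_lanes == 1:
            # Err on the side of stopping: low passenger discomfort,
            # low risk of being catastrophically rear-ended.
            return "stop"
        if identified:
            return "brake_or_change_lane"
        # At 40mph on a multi-lane road, stopping dead is itself a
        # hazard, so you have to keep moving while you keep sensing.
        return "slow_and_keep_sensing"

It's the first branch that makes the campus case easy; the hard part is the last one.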
193: Somebody might have mailed a baby.
A couple months ago--it must have been December or January as we started talking about Ladybird--I ran into a senior Uber engineer in a bar. I don't know what code he works on. At some point we got into a discussion where he was defending the Agile methodology to me. He said its origins came from a Bell Labs study which showed that coders worked harder if they thought they were being experimented on; that is, constantly changing irrelevant things (like light levels) outside of the workers' control led to consistently high productivity.
It struck me as a deeply amoral argument. No concern for externalities.
You could probably turn it into a trolley problem.
196. Yeah, that's a pretty old study, but IIRC the conclusion was not that people worked harder if they thought they were being experimented on (Why would they? If I thought I was being experimented on I'd either try to keep my work patterns as regular as possible if I was in favour of the research, or make them vary randomly if I was opposed to it.) The takeaway from the Bell Labs work was that people work harder if they think somebody is paying attention to their working environment, because they feel more valued.
"I couldn't stop and I didn't want to hit the mailbox."
There you are. Most drivers wouldn't recognise a trolley problem because they plainly ascribe no value whatever to the lives of others. Kill one, kill five, swerve and kill two... who cares?
198: yes, that's right as far as I recall.
The takeaway from the Bell Labs work was that people work harder if they think somebody is paying attention to their working environment, because they feel more valued.
People are so emotionally needy, especially engineers.
I had no idea this many self-driving cars were already on the road.
Are they doing well with snow, ice, mud, and potholes yet?
198: I cared less about the truth of the historical explanation than his interpretation of it.
Actually in this case, according to a daylight picture someone posted, the place where the biker was crossing was a nicely landscaped path inviting pedestrian usage but with a sign that said "don't cross here."
Thanks for clarifying.
Any time I see something I don't like, my eyes go immediately to the mirror, even if there's just one lane.
Two weeks after the accident described in 193, in my new (used) car, I was motionless waiting to make a left turn and got rear-ended again! This time, the other guy got the worst of it. My bumper was displaced a few inches, but he had trouble getting out of his car.
Those experiences taught me to be very conscious about what was going on behind me, and I think I saved myself at least one accident after that by hitting the gas at a stoplight when a nitwit behind me clearly had failed to recognize that I was stopping.
202: It actually made the papers a couple weeks ago because an Uber approached a big pothole* and didn't just stop, but actually refused to reëngage. The driver had to bring it back to the facility.
They are ubiquitous enough here that I don't notice them when walking (when cycling or driving, I'm more attuned to vehicles).
*it's been an awful pothole season this year. Bitter cold for 10 days after Xmas, and then a zillion freeze/thaw cycles since. This week, after the biggest snow of the year, we're having 3-4 straight days of highs above freezing and lows of 20 or below. Oh yeah, and in February we had all-time record rain, washing out a lot of patches.
203. I can see that. It would have been a "finish my beer and go home" moment for me.
The not wasting the beer is what makes you awesome. No Uber jerk can take that away from you!
There's no need to go overboard. Bars are big enough places (usually) that you can talk to somebody else.
The way society deals with car crashes is by looking at the last few seconds before the collision in order to establish who made the last mistake and so apportion legal responsibility. But if your goal is to reduce the number of casualties rather than assign blame, then this process is hopeless. A collision is the result of the whole system of transportation and infrastructure, and if you only look at the last few seconds then you'll miss nearly all the reasons for the collision. For example, the Ladbroke Grove rail crash in 1999 was "caused" by a train driver passing a red signal, but the failings went much deeper than this one error--the investigation criticised signal visibility management, driver training, normalization of deviance, the decision not to implement Automatic Train Protection, and the inappropriate use of cost-benefit analysis to defend that decision. The inquiry led to a complete reorganization of rail safety institutions.
So in the Uber crash, the immediate cause (the last mistake before the crash) was the robot's failure to spot the pedestrian and apply the brakes. But the failings must go much deeper than that: why did the victim try to cross the road at that point? why were attractive-looking sidewalks constructed on the median? why was the attempted remedy a tiny "no pedestrians" sign instead of, say, digging up the sidewalks and planting bushes? what kind of internal recklessness at Uber led to the deployment of these cars on public roads at this stage of development? why are the police so quick to blame the victim? etc.
Isn't anyone considering that the AI that secretly controls the world needed that woman killed, and used Uber to do it? Wake up, sheeple!
Per 179, here's dashcam footage of the Tempe accident site with less blown out lighting levels, and it's increasingly clear that Uber and the cops were both engaged in spinning this to sound less like an appalling, plausibly criminal failure than it actually was.
212 was me. Stupid anonymous windows.
Yeah, I feel weird having strong opinions about normal driving conditions because I don't drive much, but the video as released looked pretty unbelievably dark for a road in a built-up place. I've driven in conditions with visibility that bad, but it was a cloudy night on a back road with trees and no streetlights -- never an ordinary road.
I think of ordinary roads as being lined with trees and not having street lights.
Only where deliberately planted. But outside the city here, even in some of the suburbs, streetlights are kind of rare and trees are everywhere. North Carolina was similar, except with different types of trees.
But a multilane road is almost always going to have lights, right? No lights is a two-lane road kind of thing. (I'm asking, rather than making a strong claim. I last drove a car... last September.)
Streetlights correspond pretty well with sidewalks, though. If you're on, say, Glass Run Road you aren't going to get many streetlights but also basically no pedestrians.
176. I brought up the Trolley Problems site not because I thought it was a match for what happened in Tempe, but because it was (a) interesting and (b) something that the nerds who write self-driving car software probably think about and spend more time on than "boring" stuff like recognizing pedestrians.
179. The lane change/swerve might work if and only if you are a disciple of JRoth and have your mirrors properly pointed, and you actually look in them. It is incredibly easy to miss cars to the right and especially to the left (reverse that for the UK), even if your side mirrors are aligned. One thing self-driving cars should be able to do much better and faster than humans is maintain 360° situational awareness 100% of the time. That makes the Uber thing an incredible fail.
196. I think that guy at Uber is full of shit. You can read about the origins of Agile (and Scrum) online. Of course, what he said is probably pretty cynical and damning vis a vis Uber's self-driving cars effort. "Yeah, we're gonna say we're doing agile, but really we're just poking the monkeys to keep them interested."
220: Usually at an intersection or ramp, there are lights. Or if there is a stretch with businesses.
I brought up the Trolley Problems [as] something that the nerds who write self-driving car software probably think about and spend more time on than "boring" stuff like recognizing pedestrians
This is definitely not the case. Detection (and prediction, in a limited sense) is a not-yet-solved problem of vital, ongoing importance to practical self-driving vehicles. Trolley problems, insofar as anybody actually working on practical implementations cares about them at all, are a curiosity. There are VCs that care about them, but not very serious ones.
something that the nerds who write self-driving car software probably think about and spend more time on than "boring" stuff like recognizing pedestrians.
That's funny because the philosophers who are desperately arguing about how self-driving car designers need to think about trolley problems are doing so because they think that the nerds are not going to be interested in important things like ethical philosophy at all and so they desperately need to be philosophosplained to about how killing people is wrong.
222: The Uberist quote sounds like a reference to the Hawthorne effect (meaningless interventions seem to have a transient positive effect on productivity), but I have never heard anyone cite it as a source for Agile, not least because the term "Hawthorne effect" dates to 1958.
it seems like slamming on the brakes would be too late to do any good
How can someone possibly know that? Or more to the point, how can someone in the moment discard the possibility that it might do any good?
It might not prevent a collision, but even a tap on the brakes at the last half-second could make the difference between injury and death.
The problem here is Uber, as a company, not self-driving cars in principle or even in practice. Uber's basic approach to business is "let's be 'disruptive' by ignoring regulations, treating people like shit, and cornering the market by running at a massive loss on the backs of VC funding; once we've sufficiently fucked things up for everyone else we'll figure out how to monetize this thing properly." That is, needless to say, a horrible business model for developing something like a self-driving car. So I'm happy to say that Uber shouldn't be allowed to put self-driving cars on the road, but I'm still very pro- self-driving cars.
The real challenge isn't the software/hardware one (again: human drivers suck); it's setting up and enforcing the right regulatory/liability regime so that Uber can't pull this kind of shit. That's a big challenge, but I am more confident that that is possible than I am confident that there's a way to make human drivers suck less.
Why would recognizing pedestrians be boring? It's a potentially neat computer vision problem, assuming you give extra weight to pedestrians over other similarly sized road obstructions. And the only reason you'd do that is if you're concerned about trolley problem-type stuff.
229 before seeing 224, where "trolley problem-type stuff" is a stand-in for all ethical questions. Computer vision in general is interesting; if you need to distinguish people from non-people, that's more interesting--and actually, thinking harder, yes, of course you do, because people are mobile and unpredictable in ways that other objects (mostly) aren't. The set of spacetime points they could at some point co-occupy with the car is larger than for other things.
I wouldn't call 'not hitting pedestrians' an ethical problem, exactly. That is, it's an ethical issue, but the thinking you need to do about it isn't ethical thinking. The ethical answer is easy -- "Don't hit the pedestrian." All the problem solving is technical, not ethical.
You don't get into ethical problem solving until you get into the implausible trolley situations, mostly.
People on the sidewalks do get hit by drivers leaving the road to avoid hitting a car. It's not common, but it does happen. Of course, you don't have time to do any ethical reasoning if you are driving. Even if you did, you would only have estimated probabilities of the outcomes for each choice, not a firm number killed each way.
Yes, that's fair. And I underestimated the degree of interesting technical problems besides recognition. That something is there is just the first question, and whether it's a person (or an animal) is second. Then, can you determine possible intention from their eyes, head direction, or other body language? Or, even lacking that, build a set of coordinates they could reasonably be expected to occupy (assuming they're not Usain Bolt is reasonable), which the car needs to avoid. But it'd be better if you could put stronger likelihood weights on it from inferred intentionality.
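The "set of coordinates" part is actually the easy bit, at least in toy form: cap the pedestrian's speed and the reachable set is just a disc that grows with time. A sketch (Python, numbers invented):

    import math

    MAX_PED_SPEED = 3.0  # m/s -- a brisk jog, i.e. "not Usain Bolt"

    def could_reach(ped_xy, point_xy, t_seconds):
        # Worst case, the pedestrian covers MAX_PED_SPEED * t in any
        # direction, so the reachable set is a disc of that radius.
        dx = point_xy[0] - ped_xy[0]
        dy = point_xy[1] - ped_xy[1]
        return math.hypot(dx, dy) <= MAX_PED_SPEED * t_seconds

    print(could_reach((0, 0), (4, 3), 2.0))  # 5 m away, 6 m reachable: True

Inferred intent (gaze, body direction) is what would let you shrink that disc down to a wedge and put the stronger likelihood weights on it.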
And I think the ethical portion of the problem might be bigger for robots than for people. "Don't hit the pedestrian" is a good model for people, who are either going to see a pedestrian or not. A robot has to be programmed with what a pedestrian is, and it seems possible to me that you'd need some kind of ethical grounding for how certain it has to be that a blip is a person before it avoids that blip at the cost of hitting another blip with its own set of probabilities of being something you aren't supposed to hit.
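You can see exactly where the ethics sneaks in if you write the decision as expected cost. A toy version (Python; the cost numbers are invented, and choosing them IS the ethical grounding):

    COST = {"person": 1000.0, "bike": 500.0, "debris": 1.0}

    def expected_cost(blip):
        # blip: dict of class -> probability, straight from the classifier
        return sum(p * COST[c] for c, p in blip.items())

    blip_a = {"person": 0.7, "debris": 0.3}
    blip_b = {"person": 0.1, "bike": 0.2, "debris": 0.7}

    # If hitting one of the two is unavoidable, steer toward the blip
    # with the lower expected cost of impact.
    label, _ = min([("A", blip_a), ("B", blip_b)],
                   key=lambda kv: expected_cost(kv[1]))
    print("steer toward blip", label)  # -> B

Everything upstream of that min() is engineering; the COST table is the part that needs the grounding.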
Ostensibly, the trolley problem (in this context) is about whether the software should put the driver at risk to avoid harming more than one pedestrian (say). In other words, it's a "should we design the car to protect vulnerable road users even if that puts the driver at more risk?" question. Once we put it like that, a) the 'trolley problem' is a red herring, and b) the issue is not about self-driving cars at all. We've clearly accepted, over the past few decades, that car design can prioritize the safety of drivers over the safety of other road users. That's just what an SUV is: something much more likely to hurt a pedestrian (or another driver) but a bit less likely to hurt the occupants. Now, if we want to have that conversation, fine, but if we do, let's make it about whether we should ban Escalades and F-250s from city streets, not about software.
Oh, sure, everything is weighted probabilistically. Although humans do that too. In safe conditions I will brake or swerve to avoid animals. Sometimes those animals end up just being leaves. Usually I'm pretty good at recognizing humans*.
* might be lizard-people wearing human skins
226. Actually, according to Jim Coplien, he was inspired to get into his "Engineering Patterns" research by a paper written in 1934. Agile was (for him) an application of his Patterns work.
Why would recognizing pedestrians be boring?
Especially since one of them might be your boss or mother in law, so you need to be careful about giving them the finger.
232: This is where the trolley problem concept is actually valuable and applicable: it's understandable why a human driver would mow down peds rather than collide with another car, but that's actually the wrong decision, because occupants of the other vehicle are much less likely to die than pedestrians (with very few exceptions). Self-driving cars should actually use their steely, rational determination to hit other cars and avoid pedestrians at (basically) all costs.
Humans in the moment aren't capable of that, but computers should be.
I don't have time to look this up, so it could be wrong, but I don't think the tradeoff posited in 235 is accurate, because SUVs and light trucks are not actually safer than modern smaller cars (light trucks/SUVs are more prone to roll over, and rollovers are more likely to kill you). At highway speeds, where driver fatalities occur, the extra cladding on an Escalade isn't going to create benefits worth the extra cost of rolling over. It's better thought of (I think) as a "light trucks and SUVs are bad" issue than an "ethical tradeoff" issue.
229. Why would recognizing pedestrians be boring?
Because it's been done, that's why. Where's the fun in that? Since the Hawthorne Effect software development model is in vogue at Uber, they switch them to trolley problems every now and then to make sure they don't get bored.
Because it's been done, that's why.
Has it. Really. That's just a solved problem in programming, then?
242: the tradeoff I had in mind is the one that people think they're making when they buy a vehicle, not the one they're actually making. And people buy Escalades etc... because they feel safer, thereby making the rest of us less safe.
In fact, they are safer, in one sense: if you're going to get in a crash, it's better to be in an Escalade than in a Civic. (It's probably true that you're more likely to get in a crash in the first place in the Escalade; people aren't great at that kind of reasoning, though.)
I'm pretty sure that light trucks and SUVs are much safer wrt roll-overs and such than they were even 15 years ago, as well.
it's understandable why a human driver would mow down peds rather than collide with another car
I would be dubious that the drivers are actually doing this. What they're doing is getting out of the way of the oncoming car. They're not making a conscious decision that they would rather definitely kill three pedestrians than hit another car. That's why the trolley problem is stupid in general and particularly stupid in this case; because it involves the kind of certainty about outcomes that you never have in reality, and because it assumes perfect knowledge and time to make a conscious decision. In reality the driver's thinking SHITACAR and has made the decision in less than a second.
Isn't the goal of AI/autonomous vehicles research to create vehicles that will use their own superior judgment to determine if it is ethical to run over a pedestrian?
Pedestrian detection is more interesting than 93% of all programming problems. It's not really a solved problem, it's of obvious significance, and it is intellectually challenging.
if you're going to get in a crash, it's better to be in an Escalade than in a Civic.
I don't believe that even this is true, given the roll-over risk. Could be wrong.
242. The study articles I've found (at Consumer Reports and at Access magazine) agree with you.
However, the latter one points out that the database doesn't include whether the drivers (or passengers) were wearing seat belts. A rollover if you aren't wearing a seat belt is much more dangerous than if you are, I would suspect. Since SUVs are rollover-prone, non-seat belt use may inflate the danger. Alternatively, maybe SUVs have terrible roll cages and so it wouldn't matter much as the roof would scrunch you.
I mean, assuming highway speeds. Obviously at lower speeds you're safer, but basically all modern cars these days are extremely safe (for the person inside the car) at non-highway speeds.
The way society deals with car crashes is by looking at the last few seconds before the collision in order to establish who made the last mistake and so apportion legal responsibility. But if your goal is to reduce the number of casualties rather than assign blame, then this process is hopeless.
That's how we assign individual responsibility for insurance and judicial purposes, but I feel compelled to note we do usually have non-liability-based mechanisms for analyzing and acting on systemic problems - in the US, it's the NHTSA, for example. It might not be optimal, but it has done a lot of good over the years.
(Probably true you're more likely to get in a crash in the first place in the Escalade: people aren't great at that kind of reasoning, though.)
That's what Malcolm Gladwell said (14 years ago). Summarized here.
Bottom line is that the benefits of an SUV's size and weight in an accident are more than offset, much more than offset, by the higher risk of getting into an accident in the first place due to less precise and responsive handling, longer braking distances (AWD notwithstanding), and fewer signals to the driver of poor road conditions. And that's despite the marginal advantage of visibility due to the height of SUVs, itself offset by a higher risk of rollover.
by looking at the last few seconds before the collision into order to establish who made the last mistake and so apportion legal responsibility
I dunno that it matters much, but this is not actually how legal responsibility gets apportioned.
249: I'm thinking more in terms of collisions between vehicles-- the kind of crashes we typically see on surface roads, not highways. In those, safer in the Escalade. (An Escalade hitting a Civic head-on decapitates the Civic driver.) But it could be true that an SUV is more dangerous in a crash because more prone to roll over *in any given crash*. If so, we'd need to know the relative rates of roll-overs to other kinds of crashes to know which was safer in a crash. And then we'd need to know their respective crash rates. That's a lot of math: I'd rather just ban the goddamned things.
the trolley problem is stupid in general
At the risk of being pedantic--the trolley problem and the identical-outcome transplant-surgeon problem were devised to show that utilitarian reasoning is completely inconsistent with our moral instincts. The problem is part of an argument against broadly applied utilitarianism, just as the Banach-Tarski paradox is intended to show problematic aspects of the axiom of choice rather than to provide a recipe for doubling available fruit.
246. less than a second
251. assuming highway speeds
At 35mph you travel about 15m in a second. Not much time to calculate anything, unless you are a computer and your pedestrian-recognition software actually works and maybe not enough even then. You are probably safer but the pedestrian is still gonna die. If it's car on car the crumple zones are probably going to mitigate the danger somewhat.
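Running the numbers (textbook-ish figures, so treat as approximate):

    MPH_TO_MPS = 0.44704

    speed = 35 * MPH_TO_MPS      # ~15.6 m/s, i.e. the ~15 m/s above
    reaction = 1.5               # s, typical human perception-reaction
    decel = 7.0                  # m/s^2, hard braking on dry pavement

    reaction_dist = speed * reaction            # ~23 m
    braking_dist = speed ** 2 / (2 * decel)     # ~17 m
    print(round(reaction_dist + braking_dist))  # ~41 m to a full stop

So call it 40m to stop from 35mph, more than half of it burned before the brakes even engage. A computer that reacts in a tenth of a second instead of a second and a half saves about 20 of those metres--which is the whole case for the tech, and exactly what didn't happen here.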
If you explain the trolley problem in great detail, you will be killed by the audience. If you don't explain it, a fat man will fall from a bridge and kill three pedants.
I wouldn't call 'not hitting pedestrians' an ethical problem, exactly.
It really depends on the pedestrian. Can self-driving cars be taught to recognize Trump, and if so, what should they do when they see him crossing the street?
Admittedly, that's not a trolley problem. It's more of a do-you-strangle-Hitler-in-the-crib problem. But still.
259 also makes me reflect on the fact that I once passed within strangling distance of John Bolton. How many lives will I have on my conscience for my failure to act?
How come everybody wants to kill baby Hitler instead of raising him decently and maybe giving him a scholarship to art school?
(Stolen from SMBC)
I ran into a senior Uber engineer in a bar
Sorry your LIDAR wasn't working.
People who harp for too long on the trolley problem are forced after death to act it out in person over and over.
Oops, right episode, wrong clip. This one.
Also in that clip the trolley briefly passes a movie theater where the marquee has "Strangers Under a Train" and "Bend It Like Bentham".
I had no idea this many self-driving cars were already on the road.
Are they doing well with snow, ice, mud, and potholes yet?
When Uber announced they were testing them in Phoenix one of the reasons they gave was the relative lack of those things. (Also pedestrians, which, well.) Another reason was surely lax regulation, but they didn't mention that one.
Uber: being the change we want to see.
Uber was really bad at this.
Waymo, formerly the self-driving car project of Google, said that in tests on roads in California last year, its cars went an average of nearly 5,600 miles before the driver had to take control from the computer to steer out of trouble. As of March, Uber was struggling to meet its target of 13 miles per "intervention" in Arizona, according to 100 pages of company documents obtained by The New York Times and two people familiar with the company's operations in the Phoenix area but not permitted to speak publicly about it.
Many have commented that it seems weird for Uber to be in the self-driving car technology business in the first place. My theory is Uber's entire self-driving car operation is an effort to discredit the idea of self-driving cars, to get them outlawed before they threaten Uber's real business (hiring people who already own cars to use them as taxis and give Uber a cut of the proceeds).
227: I don't, in fact, know this, because I don't know how to drive. It just looked like she'd get hit anyway, but maybe it would have been at a much slower speed and she would have lived?
||I have been praying for a pro-Cobra Kai comeback film for almost 35 years, and now it seems that the Lord has granted my prayers. They're really making a return-of-Johnny-Lawrence series with Zabka's Johnny as the dispossessed hero. I have that strange feeling you get when you finally receive something you've wanted for so long and don't know if it can match the fantasy your hopes have built up in your head over the years.
https://www.facebook.com/rottentomatoes/videos/10156106052852357/
|>
270: I thought Uber's stock price is only as high as it is because of the (probably false) promise of self-driving cars and future profits.
Cutting wages by replacing workers with robots or cutting wages by calling your employees "contractors." Get yourself stock in a company that can do both.
I think it came out during the Waymo-Uber trial, or in the preparation leading up to the trial, that one of the big reasons Uber jumped into self-driving cars was because they saw them as a looming threat to their business model, which even if they're not directly paying drivers still requires drivers who get paid. I was a little surprised it sounded like a defensive move, given Uber's usual offensive and anti-labor image. They may have overestimated how quickly self-driving cars would become a real threat.
I thought Uber's stock price is only as high as it is because of the (probably false) promise of self-driving cars and future profits.
They jumped in when that became a Thing, but I think the original intention that could still be viable is to drive taxi companies and enough other competitors out of business and then jack up prices to rake it in.
Update: Uber is deliberately worse than the competition.
I saw where they have withdrawn from their permit to test in California.
Think of all the money they saved on the LIDAR bill.
And everybody has a share.
Just to be clear, 246 is exactly what I meant by "understandable" in 241. "Why" in this case simply means the reason it happens, not the reasoning done by a thinking person.
264-266: Heh. A college friend of mine is a philosophy professor and loves that show. (We may complain about Facebook but I wouldn't know any part of that without it.) I like the show too but lack the professional perspective.