
Uber Self-Driving Crash

November 7th, 2019
tech, transit
Content warning: discussion of death

A year and a half ago an Uber self-driving car hit and killed Elaine Herzberg. I wrote at the time:

The dashcam video from the Uber crash has been released. It's really bad. The pedestrian is slowly walking their bike left to right across a two-lane street with streetlights, and manages to get to the right side of the right lane before being hit. The car doesn't slow down at all. A human driver would have vision with more dynamic range than this camera, and it looks to me like they would have seen the pedestrian about 2s out, enough time to slow down dramatically even if not stop entirely. But that doesn't matter here, because this car has LIDAR, which generates its own light. I'm expecting that when the full sensor data is released it will be very clear that the system had all the information it needed to stop in time.

This is the sort of situation where LIDAR should shine, equivalent to a driver on an open road in broad daylight. That the car took no action here means things are very wrong with their system. If it were a company I trusted more than Uber, I would say "at least two things going wrong, like not being able to identify a person pushing a bike and then not being cautious enough about unknown input," but with Uber I think they may just be aggressively pushing out immature tech.

On Tuesday the NTSB released their report (pdf) and it's clear that the system could easily have avoided this accident if it had been better designed. Major issues include:

  • "If we see a problem, wait and hope it goes away." The car was programmed to, when it determined things were very wrong, wait one second. Literally. Not even gently apply the brakes. This is absolutely nuts. If your system has so many false alarms that you need to include this kind of hack to keep it from acting erratically, you are not ready to test on public roads.

  • "If I can't stop in time, why bother?" When the car concluded emergency braking was needed, and after waiting one second to make sure it was still needed, it decided not to engage emergency braking because that wouldn't be sufficient to prevent impact. Since lower-speed crashes are far more survivable, you definitely still want to brake hard even if it won't be enough.

  • "If I'm not sure what it is, how can I remember what it was doing?" The car wasn't sure whether Herzberg and her bike were a "Vehicle", "Bicycle", "Unknown", or "Other", and kept switching between classifications. This shouldn't have been a major issue, except that with each switch it discarded past observations. Had the car maintained this history it would have seen that some sort of large object was progressing across the street on a collision course, and had plenty of time to stop.

  • "Only people in crosswalks cross the street." If the car had correctly classified her as a pedestrian in the middle of the road you might think it would have expected her to be in the process of crossing. Except it only thought that for pedestrians in crosswalks; outside of a crosswalk the car's prior was that any direction was equally likely.

  • "The world is black and white." I'm less sure here, but it sounds like the car computed "most likely" categories for objects, and then "most likely" paths given their categories and histories, instead of maintaining some sort of distribution of potential outcomes. If it had concluded that a pedestrian would probably be out of the way it would act as if the pedestrian would definitely be out of the way, even if there was still a 49% chance they wouldn't be.

This is incredibly bad, applying "quick, get it working even if it's kind of a hack" programming in a field where failure has real consequences. Self-driving cars have the potential to prevent hundreds of thousands of deaths a year, but this sort of reckless approach does not help.

(Disclosure: I work at Google, which is owned by Alphabet, which owns Waymo, which also operates driverless cars. I'm speaking only for myself, and don't know anything more than the general public does about Waymo.)

Comment via: facebook, lesswrong
