When a self-driving car is involved in a fatal accident, who is at fault? Is it the "safety driver," the company behind the car, or the lawmakers who allowed the self-driving car to hit the road in the first place?
When Uber's autonomous car fatally struck 49-year-old Elaine Herzberg in Arizona on March 18th, this previously abstract question rapidly came into stark focus.
Now, an exclusive report by tech outlet The Information gets us closer to an answer. In the moments leading up to the crash, the AV's software reportedly "decided" to ignore the object in front of it. That is, it "saw" the woman but determined that "it didn't need to react right away."
The report states that, to remedy an overwhelming number of "false positives" (obstacles in the road that pose no real threat, like a piece of cardboard), the sensitivity of Uber's software was "tuned" so low that even a grown woman with a bicycle did not trigger an immediate response.
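To see why that tuning matters, here is a minimal, purely illustrative Python sketch (this is not Uber's code, and every name and number in it is hypothetical) of how a single confidence threshold trades false positives against false negatives in an obstacle-detection pipeline:

```python
# Hypothetical sketch: a single reaction threshold in a perception stack.
# None of these names or numbers come from Uber's actual system.
from dataclasses import dataclass

@dataclass
class Detection:
    label: str         # classifier's best guess, e.g. "cardboard", "pedestrian"
    confidence: float  # how sure the system is that this is a real hazard

# Tuned high to stop the car from braking for cardboard, plastic bags, etc.
REACTION_THRESHOLD = 0.9

def should_brake(detection: Detection) -> bool:
    """React only if hazard confidence clears the tuned threshold."""
    return detection.confidence >= REACTION_THRESHOLD

# With the threshold set this aggressively, a genuine hazard that the
# classifier is less than certain about is silently ignored.
for obj in [Detection("cardboard", 0.30),
            Detection("pedestrian with bicycle", 0.75)]:
    print(f"{obj.label} -> {'brake' if should_brake(obj) else 'ignore'}")
# cardboard -> ignore
# pedestrian with bicycle -> ignore   (a false negative)
```

Raise the threshold and the car stops slamming on the brakes for every piece of cardboard, but it also stops reacting to real pedestrians the classifier is unsure about. The tuning choice, made by humans, determines which failure mode the system accepts.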
In a comment sent to The Information and to The Verge, Uber declined to get into specifics.
But if The Information is right, this is bad news for the industry as a whole, and worse news still for self-driving technology itself: the on-board software saw the woman crossing the road and decided not to take action.
Answers to the other questions the crash raises ("Is it too early for fully autonomous vehicles? Is the technology not ready?") seem to be "yes," even though it was the software's tuning, determined by humans, that caused the crash.
But what about the "safety driver," who was behind the wheel during the crash?
According to the New York Times, Uber's robotic vehicle project "was not living up to expectations months before [the crash]."
To make Uber's problems worse, the company cut the number of "safety drivers" from two per vehicle down to one around October of last year.
The company had even scaled back its sensor hardware before the crash, from seven LIDAR units to a single sensor mounted on the roof.
There's no question that Uber's fatal crash in Arizona was a huge setback for autonomous vehicle testing, and this most recent report, if substantiated, could set the effort back even further.
Uber will likely keep testing its autonomous vehicles in controlled environments until it can prove, without a shadow of a doubt, that they are able to drive safely in the real world.
This article was originally published by Futurism.