Autonomous, self-driving vehicles have plenty of barriers to cross before they can become a reality on our roads, but while much of the focus has been on technological factors and the safety of driverless cars, a whole lot of questions remain about where they stand legally.
This awesome photo taken by Facebook user Zandr Milewski has been doing the rounds on the Internet this week, as it neatly captures a rare and puzzling sight: one of the test vehicles in Google's Self-Driving Car Project, seemingly pulled over, with a traffic officer standing beside the car and asking a few questions of the human passenger (who is required by law to be in the vehicle, in case something goes wrong and the car needs a human to take over).
The photo didn't go unnoticed by Google either, which swiftly provided an explanation for what you can see in the picture. "Driving too slowly? Bet humans don't get pulled over for that too often," wrote a representative on the Google Self-Driving Car Project site. "We've capped the speed of our prototype vehicles at 25 mph [40 km/h] for safety reasons. We want them to feel friendly and approachable, rather than zooming scarily through neighbourhood streets."
According to Google, it may have been this purposefully slow speed - in addition to the vehicle's unusual appearance - that captured the policeman's attention in this instance, as opposed to anything illegal or dangerous taking place.
"Like this officer, people sometimes flag us down when they want to know more about our project," says Google. "After 1.2 million miles [more than 1.9 million km] of autonomous driving (that's the human equivalent of 90 years of driving experience), we're proud to say we've never been ticketed!"
Whatever happened here didn't result in a ticket either, and that's an undeniably impressive driving record - 1.9 million km without a single citation for a traffic violation. But that doesn't mean we shouldn't be asking questions about incidents like this one. After all, if a traffic violation were to occur, who would be at fault?
We know that self-driving cars have the potential to save hundreds of thousands of lives, and the environmental advantages could be substantial too, with one study estimating they could cut vehicle-produced emissions by as much as 90 percent.
But apart from refining the technology, we also need to come to grips with how we feel about these machines making life-or-death decisions on the road. And aside from the puzzling legal implications of a minor traffic violation, there's much more challenging moral territory to consider.
In a study conducted this year, researchers examined when, if ever, self-driving vehicles should be programmed to kill. It sounds like a ridiculous proposition - after all, surely the answer is never?
But what about a theoretical moral dilemma, where a self-driving car, due to unforeseeable circumstances, is caught in an emergency situation and needs to make a split-second decision: does it intentionally crash itself into a wall, likely killing its single occupant? Or should it instead plough into a crowd of pedestrians and likely kill many more?
It's thought-provoking stuff. In that study, surveys suggested people generally agreed that the car should minimise the overall death toll, even if that meant killing the occupant - and perhaps the owner - of the vehicle in question.
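To make that survey result concrete, here's a minimal, purely hypothetical sketch of what a "minimise the overall death toll" rule might look like in code. The Manoeuvre class, the fatality estimates and the choose_manoeuvre function are all assumptions invented for this example - they don't reflect how Google or any manufacturer actually programs its vehicles.

```python
from dataclasses import dataclass


@dataclass
class Manoeuvre:
    """A hypothetical emergency option and the harm it is expected to cause."""
    name: str
    expected_fatalities: float  # a rough estimate, purely for illustration


def choose_manoeuvre(options):
    """Utilitarian rule from the survey result: pick the option with the lowest expected death toll."""
    return min(options, key=lambda m: m.expected_fatalities)


if __name__ == "__main__":
    dilemma = [
        Manoeuvre("swerve into the wall", expected_fatalities=1.0),    # likely kills the occupant
        Manoeuvre("continue into the crowd", expected_fatalities=5.0), # likely kills several pedestrians
    ]
    print(choose_manoeuvre(dilemma).name)  # prints "swerve into the wall"
```

Even this toy version makes the discomfort plain: whoever supplies those fatality estimates is effectively deciding, in advance, who the car is prepared to sacrifice.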
These are confronting questions, and we'll need to answer them soon. But in the meantime, let's just be grateful that the technology at least holds such a truly remarkable driving record.