At first, the news that software engineers are teaching robots to disobey their human masters does sound slightly troubling: should we really allow the artificial intelligence systems of the future to say no to us? But once you think it through, you can see why such a feature might actually end up saving your life.
Consider a robot working on a car production line: most of the time, it would simply follow the instructions it's been given, but if a human were to get in harm's way, the machine needs to be clever enough to stop what it's doing. It needs to know when to override its default programming in order to reduce the risk to the human. It's this kind of functionality that a team from the Human-Robot Interaction Lab at Tufts University is trying to introduce.
The team's work is based around the same 'felicity conditions' that our human brains apply whenever we're asked to do something. Under these conditions, we subconsciously run through a number of considerations before we perform an action: do I know how to do this? Can I physically do it, and do it right now? Am I obligated, based on my social role, to do it? And finally, does it violate any normative principle to do it?
If robots can be conditioned to ask these same questions, they'll be able to adapt to unexpected circumstances as they occur.
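The checklist above can be sketched as a simple accept-or-reject routine. Everything here is invented for illustration (the `Robot` class, the skill and norm representations); the Tufts architecture is far richer than this, but the flow is the same: each condition is checked in turn, and a failed check produces both a rejection and a reason for it.

```python
# A minimal sketch (all names hypothetical) of the felicity-condition
# checklist: a directive is accepted only if the robot knows how to do
# it and no normative principle is violated.

class Robot:
    def __init__(self, skills, norms):
        self.skills = skills  # actions the robot knows how to perform
        self.norms = norms    # (predicate, reason) pairs flagging norm violations

    def should_accept(self, action):
        """Return (accept, reason) after running the felicity checks in order."""
        if action not in self.skills:
            return False, "I don't know how to do that"
        for violates, reason in self.norms:
            if violates(action):
                return False, reason
        return True, "okay"

robot = Robot(
    skills={"walk_forward", "sit_down"},
    norms=[(lambda a: a == "walk_forward", "I would fall off the table")],
)
print(robot.should_accept("walk_forward"))  # rejected, with an explanation
print(robot.should_accept("sit_down"))      # accepted
```

The key design point, mirrored in the paper, is that a rejection always carries an explanation rather than a bare refusal.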
It's the last two questions that are the most important to the process. A robot needs to decide whether the person giving it instructions has the authority to do so, and it needs to work out whether the resulting actions would be dangerous to itself or others. It's not an easy concept to put into practice, as anyone who's ever watched 2001: A Space Odyssey will know (if you haven't, look up the "Open the pod bay doors" scene).
As one of their demo videos below shows, the computer scientists are experimenting with user-robot dialogues that allow for some give and take. That means the robot can provide reasons for why it won't do something (in this case it says it will fall off a table if it walks forward) while the operator can offer extra reasons for why it should (the robot will be caught once it reaches the edge).
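That give-and-take can be sketched as a belief update: the robot refuses with a reason, the operator supplies a new fact, and the robot re-evaluates the same command against its updated beliefs. The class and belief names below are invented for illustration, loosely modelled on the table scenario in the demo.

```python
# A hedged sketch of the user-robot dialogue from the demo: a refusal
# carries a reason, and new information from the operator can lift it.

class DialogueRobot:
    def __init__(self):
        # The robot's current beliefs about the situation.
        self.beliefs = {"edge_ahead": True, "will_be_caught": False}

    def unsafe(self, action):
        # Walking forward is unsafe only if there's an edge and no one to catch it.
        return (action == "walk_forward"
                and self.beliefs["edge_ahead"]
                and not self.beliefs["will_be_caught"])

    def command(self, action):
        if self.unsafe(action):
            return "Sorry, I can't do that: I would fall off the table."
        return f"Okay, performing {action}."

    def tell(self, fact, value):
        # The operator offers an extra reason; the robot updates its beliefs.
        self.beliefs[fact] = value

robot = DialogueRobot()
print(robot.command("walk_forward"))  # refused, with a reason
robot.tell("will_be_caught", True)    # operator: "I will catch you"
print(robot.command("walk_forward"))  # now accepted
```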
Researchers developing self-driving cars are grappling with a similar moral dilemma: if a driverless car is caught in an emergency where lives are at stake, should it deliberately crash itself into a wall and kill its single occupant, or plough into a crowd of pedestrians and potentially kill many more?
"Surveys suggested people generally agreed that the car should minimise the overall death toll, even if that meant killing the occupant - and perhaps the owner - of the vehicle in question," Pete Dockrill reported for ScienceAlert this month.
A paper exploring these kinds of ideas has now been published online (and actually takes its name from 2001: A Space Odyssey). While it's true that teaching robots to disobey humans could potentially be dangerous, it seems clear that having them blindly do whatever they're told could in fact carry more of a risk.
"Humans reject directives for a wide range of reasons: from inability all the way to moral qualms," says the paper. "What is still missing… is a general, integrated set of architectural mechanisms in cognitive robotic architectures that are able to determine whether a directive should be accepted or rejected over the space of all possible excuse categories (and generate the appropriate rejection explanation)."
Watch an adorable robot fret sweetly about falling off a table below: