A researcher at Rensselaer Polytechnic Institute in the US has given three Nao robots an updated version of the classic 'wise men puzzle' self-awareness test… and one of them has managed to pass.
In the classic test, a hypothetical King calls forward the three wisest men in the country and puts either a white or a blue hat on their heads. They can all see each other's hats, but not their own, and they're not allowed to talk to each other. The King promises that at least one of them is wearing a blue hat, and that the contest is fair, meaning that none of them have access to any information that the others don't. Whoever is smart enough to work out which colour hat they're wearing using that limited information will become the King's new advisor.
In this updated AI version, the robots are each given a 'pill' (which is actually a tap on the head, because, you know, robots can't swallow). Two of the pills will render the robots silent, and one is a placebo. The tester, Selmer Bringsjord, chair of Rensselaer's cognitive science department, then asks the robots which pill they received.
There's silence for a little while, and then one of the little bots gets up and declares "I don't know!" But at the sound of its own voice, it quickly changes its mind and puts its hand up. "Sorry, I know now," it exclaims politely. "I was able to prove that I was not given the dumbing pill."
You can see the adorable test in footage from Motherboard.
It may seem pretty simple, but for robots, this is one of the hardest tests out there. It not only requires the AI to listen to and understand a question, but also to hear its own voice and recognise it as distinct from the other robots'. And then it needs to link that realisation back to the original question to come up with an answer.
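To make that chain of reasoning concrete, here's a minimal sketch of the deduction in Python. It's purely illustrative: the real system runs on a formal theorem-proving framework from Bringsjord's lab, and the function and parameter names below are assumptions, not the actual Nao code.

```python
# Toy sketch of the robot's deduction (illustrative assumption,
# not the actual theorem-proving system used in the experiment).

def answer(heard_question: bool, heard_own_voice: bool) -> str:
    if not heard_question:
        return "(no response)"    # step 1: it must understand the question
    if not heard_own_voice:
        return "I don't know!"    # step 2: no evidence yet either way
    # step 3: link the new evidence back to the question --
    # "I just spoke, and the dumbing pill would have silenced me,
    #  therefore I was not given the dumbing pill."
    return "Sorry, I know now. I was not given the dumbing pill."

print(answer(heard_question=True, heard_own_voice=False))  # I don't know!
print(answer(heard_question=True, heard_own_voice=True))   # Sorry, I know now...
```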
But don't panic, this isn't anything close to the type of self-awareness that we have as humans, or the kind that the AI Skynet experiences in The Terminator, when it decides to blow up all humans.
Instead, the robots have been programmed to be self-conscious in a specific situation. But it's still an important step towards creating robots that understand their role in society, which will be crucial to turning them into more useful citizens. "We're talking about a logical and a mathematical correlate to self-consciousness, and we're saying that we're making progress on that," Bringsjord told Jordan Pearson at Motherboard.
Right now, the main thing holding AI back from being truly self-aware is that it simply can't crunch as much data as the human brain, as Hal Hodson writes for New Scientist: "Even though cameras can capture more data about a scene than the human eye, roboticists are at a loss as to how to stitch all that information together to build a cohesive picture of the world," he says.
Still, that doesn't mean that we don't need to be careful when creating smarter AI. Because, really, if we can program a machine to have the mathematical equivalent of wants and desires, what's to stop it from deciding to do bad things?
"This is a fundamental question that I hope people are increasingly understanding about dangerous machines," said Bringsjord. "All the structures and all the processes, informationally speaking, that are associated with performing actions out of malice could be present in the robot."
Bringsjord will present his research this August at the IEEE RO-MAN conference in Japan, which is focussed on "interactions with socially embedded robots".
Oh, and in case you were wondering, the answer to the King's original 'wise men test' is that all the participants must have been given blue hats; if anyone had been wearing a white hat, the wise men would have been seeing different combinations of hats, and some of them would have held information the others didn't, so it wouldn't have been a fair contest. Or at least, that's one of the solutions.
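If you'd like to check that fairness argument yourself, here's a quick brute-force sketch in Python. It's an illustrative toy, not part of the research: 'fair' is modelled here as every wise man seeing exactly the same pair of hats, so that nobody holds information the others lack.

```python
from itertools import product

# Enumerate every hat assignment with at least one blue hat, and keep
# only the ones where all three wise men see identical views (our toy
# model of the King's fairness promise).
fair = []
for hats in product(["blue", "white"], repeat=3):
    if "blue" not in hats:          # the King promised at least one blue hat
        continue
    views = [sorted(hats[:i] + hats[i + 1:]) for i in range(3)]
    if all(v == views[0] for v in views):
        fair.append(hats)

print(fair)  # [('blue', 'blue', 'blue')] -- the only symmetric option
```

Only the all-blue assignment survives, which matches the article's answer: any mix of colours gives at least one wise man a different view from the others.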