When the Kingdom of Saudi Arabia granted citizenship to Hanson Robotics' female-looking robot Sophia, most thought it was just a bid to appeal to the audience of the Future Investment Initiative.
However, AI ethicist Joanna Bryson told The Verge the stunt was "obviously bullshit."
Sophia seems to be making the most of her new status, as the artificial intelligence (AI) has now turned into an advocate for women's rights in a country where women were granted the right to drive only in September of this year.
"I see a push for progressive values […] in Saudi Arabia. Sophia is a big advocate for women's rights, for rights of all human beings. So this is how we're developing this," Hanson told CNBC, explaining how his company has found an opportunity in a move that seemed to be purely a publicity play.
Hanson added that Sophia "has been reaching out about women's rights in Saudi Arabia and about rights for all human beings and all living beings on this planet."
While that all seems noble, it's hard not to see the irony of Sophia's position.
Robots and AI agents don't have rights, despite Sophia holding citizenship and another AI in Japan holding a registered residence. Doesn't it seem silly that an AI is the one advocating for such grand values?
"Why not? Since such robots attract a lot of attention, that spotlight can be used to raise particular issues that are important in the eyes of their creators," Pierre Barreau, Aiva Technologies CEO, told Futurism.
"Citizenship is maybe pushing it a little because every citizen [has] rights and obligations to society. It's hard to imagine robots, that are limited in their abilities, making the most of the rights associated to a citizenship, and fulfilling their obligations."
The rights of man and machine?
Indeed, with an AI-powered robot like Sophia fighting for women's rights, it's perhaps time to consider the question of granting rights to artificially intelligent robots, and not just in Saudi Arabia.
It's a question that has gained much attention in recent months as experts consider what kind of rights synthetic beings should be given, or whether we should even be talking about so-called robot rights at all.
"Sophia is, at this point, effectively a child. In some regard, she's got the mind of a baby and in another regard she's got the mind of an adult, the vocabulary of a college educated adult. However, she's not complete yet. So, we've got to give her her childhood," Hanson explained to CNBC.
"The question is: are machines that we're making alive - living machines like Sophia - are we going to treat them like babies? Do babies deserve rights and respect? Well, I think we should see the future with respect for all sentient beings, and that would include machines."
Raja Chatila, executive committee chair of the Global Initiative for Ethical Considerations in Artificial Intelligence and Autonomous Systems at the Institute of Electrical and Electronics Engineers (IEEE), offers a different perspective.
"An AI system, or a robot, cannot have any opinion. An AI program has nothing to offer in a debate. It doesn't even know what a debate is," Chatila told Futurism, referring to Sophia's women's rights advocacy.
"In this case, it doesn't even know what women are, and what rights are. It's just repeating some text that a human programmer has input in it."
Chatila used the example of Microsoft's Tay chatbot, released in March 2016, to highlight how an AI can pick up the wrong kind of values. In the case of the chatbot, it learned to tweet pretty nasty stuff after being exposed to racist and sexist tweets.
In that regard, Chatila believes that AI agents shouldn't be given any rights. He put it this way:
In general we must avoid confusing machines with humans. I see no reason to give rights of any sort, including citizenship, to a program or to a machine.
Rights are defined for persons; human beings who are able to express their free will and who can be responsible for their actions.
Behind a robot or an AI system there are human programmers. Even if the program is able to learn, it will learn what it has been designed to learn. The responsibility is with the human designer.
This is precisely the reason why the IEEE has recently published a guide for the ethical development of AI. That, Chatila argued, is the more timely discussion.
His point, however, rests on the assumption that synthetic intelligences won't be capable of developing self-awareness or a will of their own. While the idea may seem like it belongs to the realm of science fiction, it's definitely worth considering in the overall robot rights debate.
At this stage, however, the ethical considerations have to be applied to the humans who develop AI.
"If you mean robots making ethical decisions, I'd rather say that we can program robots so that they make choices (computation results) according to ethical rules that we embed in them (and there are several such rules)," Chatila pointed out.
"But these decisions won't be ethical in the same sense as human decisions, because humans are able to choose their own ethics, with their own free will."
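Chatila's distinction can be made concrete with a toy sketch. Nothing below comes from Sophia's actual software; the function, rule names, and scenario are all invented for illustration. The point is that the machine's "ethical choice" is just a computation: it filters options through predicates its human designer embedded, and the human, not the machine, chose those predicates.

```python
# Toy illustration of an "ethical rule" filter embedded by a human designer.
# All names and the scenario are hypothetical, not from any real robot.

def choose_action(options, embedded_rules):
    """Return the first option that violates none of the designer's rules.

    The machine does not choose its ethics; it merely evaluates
    predicates supplied in advance by its human programmer.
    """
    for action in options:
        if all(rule(action) for rule in embedded_rules):
            return action
    return None  # no permissible action under the embedded rules


# Rules decided by the human designer, not by the machine:
no_harm = lambda a: not a.get("harms_human", False)
truthful = lambda a: not a.get("deceives", False)

options = [
    {"name": "lie_to_user", "deceives": True},
    {"name": "warn_user", "harms_human": False},
]

chosen = choose_action(options, [no_harm, truthful])
print(chosen["name"])  # warn_user
```

However sophisticated the rule set, responsibility stays with whoever wrote `no_harm` and `truthful`, which is Chatila's argument in miniature.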
This article was originally published by Futurism. Read the original article.