A competition pitting artificial intelligence (AI) against human players in the classic video game Doom has demonstrated just how advanced AI learning techniques have become – but it's also caused considerable controversy.
While several teams submitted AI agents for the deathmatch, two students in the US have caught most of the flak after publishing a paper online detailing how their AI bot learned to kill human players in deathmatch scenarios.
The computer science students, Devendra Chaplot and Guillaume Lample, from Carnegie Mellon University, used deep learning techniques to train their AI bot – nicknamed Arnold – to navigate the 3D environment of the first-person shooter Doom.
By playing the game over and over again, Arnold became an expert at fragging its Doom opponents – whether they were other artificial combatants or avatars representing human players.
While researchers have previously used deep learning to train AIs to master 2D video games and board games, the research shows that the techniques now also extend to 3D virtual environments.
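For readers curious what "playing the game over and over" looks like in practice, here's a minimal sketch of a reinforcement learning loop built on ViZDoom, the open research platform behind the Visual Doom AI Competition. The config path and the choose_action policy below are illustrative placeholders, not the students' actual architecture.

```python
# Minimal sketch of a train-by-repetition loop on the ViZDoom platform.
# The policy and scenario path are illustrative stand-ins only.
import random
from vizdoom import DoomGame

game = DoomGame()
game.load_config("scenarios/deathmatch.cfg")  # illustrative scenario config
game.init()

# Example button combinations: [attack, move_left, move_right]
actions = [[1, 0, 0], [0, 1, 0], [0, 0, 1]]

def choose_action(state):
    """Hypothetical policy: a trained agent would map the screen buffer
    (state.screen_buffer) to an action via a deep neural network."""
    return random.choice(actions)

for episode in range(10):                    # repeat many episodes of play
    game.new_episode()
    while not game.is_episode_finished():
        state = game.get_state()             # current frame and game variables
        action = choose_action(state)
        reward = game.make_action(action)    # reward signal drives learning
        # A deep reinforcement learner would update its network weights here,
        # making behaviour that earns reward (such as frags) more likely.
game.close()
```

In a real training run, that loop would repeat over millions of frames, with the agent's network gradually shifting towards whatever actions the reward function favours.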
"In this paper, we present the first architecture to tackle 3D environments in first-person shooter games," the researchers write in their paper. "We show that the proposed architecture substantially outperforms built-in AI agents of the game as well as humans in deathmatch scenarios."
While that's undoubtedly impressive from a technical standpoint, the fact that AI researchers are effectively training machines to view human opponents in the game as 'enemies' and kill them has prompted criticism.
"The danger here isn't that an AI will kill random characters in 23-year-old first-person shooter games, but because it is designed to navigate the world as humans do, it can easily be ported," writes Scott Eric Kaufman at Salon.
"Given that it was trained via deep reinforcement learning which rewarded it for killing more people, the fear is that if ported into the real world, it wouldn't be satisfied with a single kill, and that its appetite for death would only increase as time went on."
While there's probably no realistic prospect that this particular Arnold AI will somehow be 'ported' to the real world to frag actual humans like the Terminator, we're clearly getting into some grey territory here.
And many in the computer science community think that AI could indeed pose a danger to humans if it's not controlled properly – let alone trained up and encouraged to kill in virtual simulations.
Last year, hundreds of robotics and AI experts petitioned the United Nations to support a ban on lethal autonomous weapons systems, and a team of tech industry leaders including Elon Musk founded a non-profit called OpenAI – dedicated to steering AI towards research that will benefit, and not endanger, humanity.
And just this week, a new partnership was announced by Google, Facebook, Microsoft, Amazon, and IBM, designed to establish "best practices on AI technologies".
This new alliance, called Partnership on AI, is guided by a number of tenets, including: "Opposing development and use of AI technologies that would violate international conventions or human rights, and promoting safeguards and technologies that do no harm."
That kind of language calls to mind science fiction writer Isaac Asimov's famous Laws of Robotics, the first of which is: "A robot may not injure a human being or, through inaction, allow a human being to come to harm."
Obviously, the kind of AI research being carried out in this Doom simulation doesn't pose a direct risk to any humans – who were only virtually slain in the game, not in real life.
But from a certain perspective, this is very borderline stuff, and it seems to contradict the guiding principles of at least some high-minded AI research collectives.
"It's just a video game, it's not real. Right?" writes Dom Galeon at Futurism. "Here's the thing, though. The AI is as real as it gets. While it may have only been operating inside an environment of pixels, it does raise up questions about AI development in the real world."
One of the ironies here is that despite Facebook being part of the new Partnership on AI, it also entered an AI bot in the Visual Doom AI Competition. Hmm.
As did Intel, for what it's worth. In fact, the Facebook and Intel bots dominated the field, taking out a top spot each in the two different deathmatch tracks.
Chaplot and Lample's Arnold made a name for itself too, earning second place in both tracks. And for their part, the students don't think they did anything wrong.
"We didn't train anything to kill humans," says Chaplot. "We just trained it to play a game."
The students' paper has not yet been peer-reviewed, but it has been published on the pre-print website arXiv.