Artificial intelligence (AI) has the potential to save lives by predicting natural disasters, stopping human trafficking, and diagnosing deadly diseases. Unfortunately, it also has the potential to take lives.
Efforts to design lethal autonomous weapons - weapons that use AI to decide, without human input, whether to attempt to kill a person - are already underway.
On Wednesday, the Future of Life Institute (FLI) - an organization focused on the use of tech for the betterment of humanity - released a pledge decrying the development of lethal autonomous weapons and calling on governments to prevent it.
"AI has huge potential to help the world - if we stigmatize and prevent its abuse," said FLI President Max Tegmark in a press release.
"AI weapons that autonomously decide to kill people are as disgusting and destabilizing as bioweapons, and should be dealt with in the same way."
One hundred and seventy organizations and 2,464 individuals signed the pledge, committing to "neither participate in nor support the development, manufacture, trade, or use of lethal autonomous weapons."
Signatories of the pledge include OpenAI co-founder Elon Musk, Skype founding engineer Jaan Tallinn, and leading AI researcher Stuart Russell.
The three co-founders of Google DeepMind (Demis Hassabis, Shane Legg, and Mustafa Suleyman) also signed the pledge. DeepMind is Google's top AI research team, and the company recently saw itself in the crosshairs of the lethal autonomous weapons controversy for its work with the US Department of Defense.
In June, Google vowed it would not renew that DoD contract, and later, it released new guidelines for its AI development, including a ban on building autonomous weapons.
Signing the FLI pledge could further confirm the company's revised public stance on lethal autonomous weapons.
It's not yet clear whether the pledge will actually lead to any definitive action.
Twenty-six member states of the United Nations have already endorsed a global ban on lethal autonomous weapons, but several world powers, including Russia, the United Kingdom, and the United States, have yet to get on board.
This also isn't the first time AI experts have come together to sign a pledge against the development of autonomous weapons. However, this pledge does feature more signatories, and some of those new additions are pretty big names in the AI space (see: DeepMind).
Unfortunately, even if all the world's nations agree to ban lethal autonomous weapons, that wouldn't necessarily stop individuals or even governments from continuing to develop the weapons in secret.
As we enter this new era in AI, it looks like we'll have little choice but to hope the good players outnumber the bad.
This article was originally published by Futurism.