First there was gunpowder. Then, there were nuclear arms. Today, Artificially Intelligent weapons are being called the third revolution in warfare.
It’s an issue that seems ripped from video games and science fiction. Sensationalized in films like The Terminator and Blade Runner, Artificial Intelligence is a phrase that sounds to most ears like a punchline. In the real world, however, no one is laughing.
With science progressing at breathtaking speed, some of the world’s most powerful leaders are concerned that technological advancements may become unmanageable. If they do, control may pass to the technology itself rather than to the humans who built it.
Contrary to popular belief, there will be no robotic revolution, no technological takeover – at least, not yet. For the time being, it’s a matter of morality and, beyond that, a matter of security.
In the defence sector, however, the stakes are only heightened.
With hardware of enormous quantity and firepower, Artificial Intelligence in the military can seem a recipe for disaster. Yet powerhouses like the United States, South Korea and Russia – among others – are already building the armies of the future.
At the 50th anniversary of the Sea-Air-Space Exposition, Secretary of the US Navy Ray Mabus announced plans to develop and deploy autonomous drone weapons. Manufacturers in South Korea, meanwhile, have created the Super aEgis II, a sentry gun that can identify, track and fire on targets. And the Russian Platform-M combat robot can engage targets in either automatic or semi-automatic mode.
When human life is at stake, many consider human judgment to be a critical part of the equation. Such concerns have prompted world-renowned experts to sign an open letter calling for restrictions on AI weapons.
“If any major military power pushes ahead with AI weapon development,” the letter warns, “a global arms race is virtually inevitable, and the endpoint of this technological trajectory is obvious: autonomous weapons will become the Kalashnikovs of tomorrow.”
The letter goes on to explain that AI weapons are perfectly poised to become the future of modern warfare. Requiring no hard-to-obtain raw materials, they can be mass-produced and distributed on black markets around the world. Their ability to select and eliminate people who meet pre-set criteria makes them ideal weapons for terrorists and dictators looking to wipe out a specific group of the populace.
The letter’s endorsers include Stephen Hawking, Noam Chomsky and Steve Wozniak – although the latter has recently had a change of heart. At the Freescale Technology Forum in Austin, Wozniak dismissed the concern as an issue of the far-off future, saying, “They’ll be so smart by then that they’ll know they have to keep nature, and humans are part of nature. I got over my fear that we’d be replaced by computers. They’re going to help us. We’re at least the gods originally.”
As with any debate, there is merit to both sides. Should AIs replace soldiers on the battlefield, there is the potential for fewer human casualties. In the same vein, machines that act in soldiers’ stead could reduce the psychological toll on troops.
Ultimately, it all comes down to ethics. Is it possible to create a moral robot? Who takes the blame if and when AI weaponry makes a fatal mistake – the manufacturer, the developer, or the robot itself? Will AIs prove to be an asset or a liability to their human creators?
All of these questions will demand answers eventually. Yet with experts warning of an impending arms race, the answers had better come sooner rather than later.