In the past, humans have turned new technologies into weapons. Given the strategic advantages A.I. possesses, we can predict that someone will use A.I. as a weapon. This presents a security paradox: a sense of insecurity arises not from what others have done but from what we assume they will do in the future.
What is most distinctive about the peace-through-security perspective is the way these security themes generally manifest as their counter-images: fears of insecurity. Several such fears appear repeatedly in the academic and popular literature surrounding A.I.
The first is that A.I. will be used as a weapon. One needs only to look at the invention of dynamite to be reminded of humanity’s tendency to weaponize technology. Drones were originally developed for surveillance yet were quickly turned into weapons, and since the first lethal drone strike in 2002, their use in warfare has grown substantially. Proponents see these technologies as making warfare safer (for those who have the drones); opponents fear that they dehumanize warfare and thereby lower the bar for entering armed conflict. Drones also quickly slipped out of the sole control of states, changing the logic of military security as non-state forces in guerrilla operations repurpose hobby drones.
One major fear is that these “dumb” weapons will be augmented with A.I. so that they no longer require a human operator for target selection and engagement. Such development is already underway, and A.I. safety experts warn that this autonomy will make the weapons dramatically more dangerous.
A.I.-enhanced weapons present novel security considerations, qualitatively different from those of conventional and nuclear weapons. The first is specificity: a bullet cannot decide whom to hit, but a smart drone could. The second is ease of proliferation: nuclear weapons require difficult-to-acquire materials and specialized knowledge to build and deploy, but once drones augmented with A.I. exist, their designs may be easily copied and mass-produced. Such weapons would appeal to totalitarian regimes, warlords, and terrorists. While they could serve as weapons of mass destruction, they could also serve as weapons of highly specific destruction, designed to target particular individuals or groups.
A.I. need not be embodied in drones or other robots to be used as a weapon. Digital weapons have already crossed from the virtual world to cause destruction in the physical one. The Stuxnet worm, malicious computer code discovered in 2010, was perhaps the world’s first digital weapon: in Iran, it infected the computer systems controlling the fast-spinning centrifuges used in nuclear enrichment and caused them to destroy themselves. A cyber weapon augmented with A.I. would be significantly more dangerous.