A.I. and Security: An AI Arms Race
Another set of fears surrounding A.I. stems not from the technology itself, but from the ways in which it may be pursued. The financial, military and political incentives make an A.I. “arms race” seem inevitable, as countries, companies and other actors compete to be the first to develop it.
This race has already appeared on the global stage: the Chinese government has stated that it intends to make China a world leader in A.I. by 2030, and Russia’s Vladimir Putin recently underscored the stakes of A.I. research when he said, “Whoever becomes the leader in [the A.I.] sphere will become the ruler of the world.”
The current lack of norms, regulations and international consensus, combined with strong profit incentives, risks producing a “race to the bottom,” in which proper safeguards are eschewed and corners are cut to develop A.G.I. quickly rather than safely.
So far, movements to address these dynamics have manifested in efforts to ban the use of autonomous weapons and weaponized A.I., or to regulate their development. Support for these efforts has come from experts at leading companies invested in A.I. development. Elon Musk (founder of OpenAI and CEO of Tesla and SpaceX) and Mustafa Suleyman (cofounder of DeepMind) joined 155 other experts in releasing an open letter to the United Nations Convention on Certain Conventional Weapons. The signatories warn that commercial technologies developed by A.I. and robotics companies may be repurposed into autonomous weapons, and they call for international regulation of A.I.
Experts have also made progress towards a consensus on denouncing potentially dangerous developments in A.I. Recently, the Korea Advanced Institute of Science and Technology (KAIST) and its partner Hanwha Systems announced a joint project to develop A.I. technology that their own public relations firm described as being “able to search for, and eliminate, targets without human control.” Such technology, the announcement added, would be “the third revolution in the battleground after gunpowder and nuclear weapons.”[4,5] In response, more than 50 leading academics working on A.I. signed a letter calling for a boycott of both the university and the defense contractor, in an attempt to cut them out of the community of experts.
Efforts to ban A.I. research outright are unlikely to succeed: the technology offers too much potential, both for tremendous worldwide change and for profit. A more skillful approach is to work with the dynamics that are shaping A.I.’s development. This requires an understanding of how technology has shaped our evolution, societies and cultures. Perhaps most importantly, it also requires imagination. Envisioning a symbiotic development of A.I. and humankind demands that we fuse scientific literacy with cautious hope.