A.I. and Security: Threat from the Unknown


At its core, A.I. will be a deeply transformative technology. Current fears about the future development of A.I. come from the intersection of our models for predicting the future, the limitations of those models, and our own projections onto technology.


Much of the discussion surrounding the emergence of A.I. is built upon models derived from studying the past. Central to many A.I. researchers’ predictions is Moore’s Law, the observation in electronics that the number of transistors on a circuit doubles roughly every two years. This trend appears to hold well beyond electronics, showing up in fields such as neuroimaging and genetic sequencing. Trends of this kind form the basis of most predictions about the future of technology.
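
The persuasive pull of such trends comes from simple exponential arithmetic: a quantity that doubles every two years grows as N(t) = N0 * 2^(t/2), with t in years. The following minimal Python sketch illustrates the extrapolation; the starting count is the widely quoted transistor figure for the 1971 Intel 4004, and the time horizons are arbitrary, chosen only to show the shape of the curve.

# Exponential extrapolation behind Moore's Law: a quantity that doubles
# every `doubling_period` years grows as n0 * 2**(years / doubling_period).
def project(n0: float, years: float, doubling_period: float = 2.0) -> float:
    """Extrapolate a quantity that doubles every `doubling_period` years."""
    return n0 * 2 ** (years / doubling_period)

if __name__ == "__main__":
    n0 = 2_300  # transistors on the Intel 4004, released in 1971
    for years in (10, 20, 40, 50):
        print(f"After {years} years: ~{project(n0, years):,.0f} transistors")

Fifty years of doubling multiplies the starting count by 2^25, about 33.5 million; curves of this steepness are what make extrapolation so tempting.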


All models of the future are built by carrying one trend, or several, out to its logical conclusion. By their nature, these models leave out everything else, and the effect of that “everything else” is the unknown. Our models, combined with our relationship to the unknown, set the stage for our projections of the future.


The futures imagined by A.I. experts range widely. A.I. may be an “oracle” computer that answers our questions, or it could become a techno-deity governing a utopia. It may exterminate the human race or keep some of us around in a zoo. It could be harnessed by a totalitarian state as the ultimate surveillance tool, forever halting human progress, or it may hybridize with humans, resulting in something entirely new.


If these scenarios feel familiar, it is because there is something distinctly human about them. They all seem to commit a kind of anthropomorphic fallacy, assuming an A.I. would act the way a human would. More likely, what happens with A.I. will be something no one expected. Fear then comes not from a direct threat but from an unintended consequence.


A frequently cited example is the “paper clip” scenario of Nick Bostrom, a philosopher at the Future of Humanity Institute [1]. An A.I. given the seemingly mundane task of increasing the number of paper clips develops a superintelligent successor, which finds a way to convert all existing matter into paper clips, destroying the universe in the process.


Part of the gravity surrounding A.I. comes from the extent to which it forces us to face what is unknowable. From a modern perspective this produces a deep sense of insecurity, since the world is supposed to be, by definition, understandable. A technology that confronts us with our limitations while simultaneously proposing something without limits holds up a mirror in which we confront ourselves. We may fear destruction because we see our own capacity for destruction.


 
