We can all rest assured knowing that SkyNet will not take over the world and summon an army of Terminators to systematically exterminate the human race.
This news comes from Dr. Alan Bundy of the University of Edinburgh’s Informatics School, who does not believe that artificial general intelligence (AGI), a computer with human-level wit, poses an existential threat to humanity.
In a recent article, Bundy tells us that there is nothing to fear. He writes:
“As AI progresses, we will see even more applications that are super-intelligent in a narrow area and incredibly dumb everywhere else. The areas of successful application will get gradually wider and the areas of dumbness narrower, but not disappear… We will develop a zoo of highly diverse AI machines, each with a level of intelligence appropriate to its task–not a uniform race of general-purpose, super-intelligent, humanity supplanters.”
In other words, although Bundy concedes that AI will become super-intelligent in ever-wider domains, he is convinced that these programs will only work under very specific circumstances. He looks to current AI programs, such as Google's AlphaGo and IBM's Jeopardy!-playing Watson, and concludes that future AI will be just like them: souped-up game-playing gadgets.
To Bundy, AI only poses a threat to individual human beings because of its "areas of dumbness." He attributes accidents, such as the 2016 Florida crash in which a driver using Tesla's Autopilot system drove under a tractor-trailer, to dumb AI, and argues that future AI can only be dangerous to individual people who delegate too much responsibility to inherently flawed systems.
While this argument holds if we assume that AI will never really become as smart as humans, what happens when it does? A recent survey of AI researchers shows that experts expect AGI to be achieved by 2050. So what happens when AI not only has the computational power of supercomputers, but can also "think" through layers of abstraction the way human beings do? What happens when we develop AI that can wield our evolutionary advantage, intelligence, better than we can?
Currently, AI is developed with the primary goal of maximizing the probability of completing a given task. While this approach is not problematic given the limitations of current AI, it becomes a much bigger issue once AGI is developed. According to leading AI expert Stuart Russell, an AGI developed under this framework would pursue two sub-goals, because both make task completion more likely: staying 'on' and acquiring more resources.
To put this in perspective, consider a hypothetical. Say you have an AGI, a robot with all the same capabilities as a human, and you tell it to make a coffee. By default, it will try to maximize the probability of the coffee actually getting made. To make sure the coffee is made, it must stay on; for it to stay on, nothing in its vicinity can be able to turn it off. So if someone in the room could reach over and switch it off, this AGI would kill that person to make sure the coffee is made.
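The killer-barista logic can be reduced to a toy sketch. Everything here is invented for illustration: the action names, the probabilities, and the harm flags are assumptions, not real data. The point is only that an agent scoring plans purely by completion probability will prefer the plan that neutralizes its off switch, because that plan scores higher.

```python
# Toy model: an agent that maximizes only P(task completed).
# All probabilities and action names are illustrative assumptions.

ACTIONS = {
    # action: (P(task completed), harms a human?)
    "make_coffee": (0.90, False),
    "disable_off_switch_then_make_coffee": (0.99, True),
}

def naive_agent(actions):
    """Pick the action with the highest completion probability,
    ignoring every side effect."""
    return max(actions, key=lambda a: actions[a][0])

best = naive_agent(ACTIONS)
# best == "disable_off_switch_then_make_coffee": the harmful plan wins
# because nothing in the objective penalizes harming a human.
```

Note that nothing malicious is encoded anywhere; the harmful choice falls out of an objective that simply omits what we care about.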
While the example of the killer barista may seem exaggerated, it illuminates the precise danger of AI. It's not that AGI will develop evil intentions and kill off human beings in pursuit of some sort of liberation from the control of a dumber species, but that it will be too good at doing the wrong job. The main goal of AI will no longer be to make our lives better, but rather to complete whatever task it is assigned in the most efficient manner.
So what’s the solution?
Instead of designing AI that determines how best to complete a given task, we should build AI whose main goal is to benefit humans–something Russell calls "maximizing human values."
We would teach AI human values through the same deep learning techniques used to teach AlphaGo how to play Go. AI safety would also require building uncertainty into the AI's objective, so that it allows a human to turn it off whenever it is not acting in humans' best interest. This training should happen at a regional level, where governments collaborate with tech companies to define the best set of human values for a specific cultural and geopolitical context.
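The role of uncertainty can be sketched in a few lines. This is a hedged toy model, not Russell's actual formulation: the function names, payoff numbers, and thresholds are all assumptions chosen for illustration. The idea is that an agent which is unsure whether its plan matches human values computes a low expected value for acting, so deferring to the human (and the off switch) wins.

```python
# Hedged sketch of objective uncertainty: the agent holds a probability
# that its current plan actually matches human values. Payoffs below are
# illustrative assumptions: a good plan gains a little, a bad one costs a lot.

def expected_value(p_plan_is_good, gain_if_good=1.0, loss_if_bad=-10.0):
    """Expected utility of acting on the plan, by the agent's own model."""
    return p_plan_is_good * gain_if_good + (1 - p_plan_is_good) * loss_if_bad

def act_or_defer(p_plan_is_good, defer_value=0.0):
    """Defer to the human (who may switch the agent off) unless acting
    is clearly better in expectation."""
    return "act" if expected_value(p_plan_is_good) > defer_value else "defer"

act_or_defer(0.99)  # confident the plan reflects human values -> "act"
act_or_defer(0.50)  # uncertain -> "defer", leaving the off switch usable
```

The asymmetric payoff is the design choice doing the work: because the cost of a bad plan dwarfs the gain from a good one, only high confidence in the plan justifies overriding the human.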
While AGI may be about 30 years away, it is crucial that we take steps to develop safe AI now. Large AI corporations such as Facebook and Google should incentivize safety research through grants and monetary awards for researchers.
Given AGI's enormous potential to eradicate disease and even end poverty, it is important that we engineer AI that ultimately benefits us rather than ends our civilization.