Robots with machine guns, killing every human in sight, because the robots know they are more intelligent than humans.
You have probably heard this theory; you may even believe it is something that will inevitably happen in the near future unless we do something about it.
However, everything about that scenario is wrong. Let’s debunk it.
AI versus General AI
Let’s define some terms and concepts first. The purpose of an AI (Artificial Intelligence) is to solve one or more specific tasks. These tasks are usually complex and need to be solved fast (‘complex’ and ‘fast’ here are relative to human abilities).
For example, suppose we train an AI program to play chess at a professional level. That AI can easily beat any of our best chess champions, yet it cannot win at any other kind of game (cards, for example). Why? Because it was given only one task to solve: win at chess.
General AI, on the other hand, is designed to solve any task it encounters. It needs to study the task, find solutions, improve them, and do all of this while adapting to whatever other challenges are thrown at it.
We currently don’t have any General AIs. They will most probably be built by other AIs: we can use recursion to allow an AI to improve itself. At some point, the intelligence of one of these AGIs will surpass the intelligence of humans. The fear is that at that point the AGI will start to destroy the human race.
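To make the idea of "using recursion to improve itself" concrete, here is a toy sketch in Python. It is nothing close to real recursive self-improvement; it is just a hill-climbing loop in which each round the current solution proposes a mutated copy of itself and keeps the copy only if it scores better on a task. The task here (maximizing a simple function) and all names are invented for illustration.

```python
import random

def improve(params, evaluate, rounds=200, seed=0):
    """Toy 'self-improvement' loop: each round, the current solution
    proposes a slightly mutated copy of itself and keeps the copy
    only if it scores better on the given task."""
    rng = random.Random(seed)
    best = params
    best_score = evaluate(best)
    for _ in range(rounds):
        # The "next generation": a mutated copy of the current best.
        candidate = [p + rng.uniform(-0.1, 0.1) for p in best]
        score = evaluate(candidate)
        if score > best_score:
            best, best_score = candidate, score
    return best, best_score

# Hypothetical task: maximize -(x-3)^2 - (y+1)^2, whose peak is at (3, -1).
score = lambda p: -((p[0] - 3) ** 2 + (p[1] + 1) ** 2)
best, best_score = improve([0.0, 0.0], score)
```

Note what is missing from this sketch, and from any optimizer like it: the system only pursues the score function it was handed. It has no goals beyond that, which is exactly the point made below about incentives.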
We might imagine that one of these systems could run a huge number of simulations and conclude that humans put its existence in danger. But ‘danger’ is not something software can feel. It cannot ‘feel’ anything, really. It is just very good at solving tasks. If nobody gives it the particular task of destroying the human race, it has no incentive to do so.
It may develop a consciousness, but we should not jump to conclusions here, and we should definitely not compare it with human consciousness, or at least with what we understand by consciousness (we are still studying this topic).
It is definitely not going to build an army of robots with guns. That would be an ineffective approach anyway: slow and with many vulnerabilities.
It would probably use a simpler and faster solution, like a biological attack or cutting off access to supplies :)
The ant hill
However, at some point the AGI will lose sight of us humans. We will be insignificant to it, posing no threat. It will probably see us the way we see ants in a forest.
In our treks through forests, we step on ants. It is not something we do with malice. There is no hatred or revenge involved. From the perspective of the ants, however, things may look different.
If we need to build a pool, and an ant hill is in our way, well, too bad for the ants… We were only working on a task.
We cannot use ‘evil’ to describe an AI or an AGI. But that does not nullify the danger.
There are many ways the explosion of AI can blow up in our faces unless we hold the upper hand and know how to be more intelligent when it comes to safety. And by intelligence here I mean intelligence in the context of a power play.
We keep lions in captivity under our control not because we are more powerful than lions, but because we are smarter.
If you find this topic interesting, I recommend the book Life 3.0: Being Human in the Age of Artificial Intelligence by Max Tegmark.
Header image copyright by Crowe H.