Artificial intelligence: Stephen Hawking is wrong (sort of)
The revered physicist Stephen Hawking issued some warnings last week about artificial intelligence. His analysis follows the lines taken by most techy types, or at least was represented as such by the technology journalists who reported on it. The story basically predicts the creation of some all-powerful machine that will be smarter than a human, able to replicate itself and thus able to out-compete humans or relegate us to the status of slaves: a super-smart version of a human, in fact. The reason we have such a machine-based vision is that the people who talk about artificial intelligence also like building machines. They therefore believe that the future of artificial intelligence will be machine-based (HAL from 2001: A Space Odyssey and so on).
But this is not how artificial intelligence is going to arrive. In fact, it has already arrived.
The selfish algorithm
Forget the technologists’ machine-based vision. We should be looking to the biologists and geneticists. The future of artificial intelligence will be built genetically – indeed in the same way we humans are built. The genetic code for artificial intelligence will be written in algorithms. Billions upon billions of algorithms, each one of which will be responsible for determining some micro-function of society. And given that the key feature of algorithms is that they learn and can thus evolve, adapt and react, it is quite possible they will start to shape the society they contribute to in a way which will ensure their own continuation – in the same way that genes shape the behaviour of the organism of which they are a part in order to ensure the survival of the genes (rather than the organism). The selfish algorithm, in fact.
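The gene-style selection dynamic described above can be made concrete with a toy simulation. This is a purely illustrative sketch, not a model of any real deployed system: each number stands in for an invented "self-promotion score" (how strongly an algorithm encourages its own continued use), and the `evolve` function, the parameters and the starting scores are all assumptions introduced for the analogy. Selection copies forward the algorithms that promote themselves most, exactly as gene-level selection favours genes that secure their own replication.

```python
import random

def evolve(population, generations=50, mutation=0.05, seed=0):
    """Fitness-proportional selection over a population of scores.

    Each score represents how strongly a (hypothetical) algorithm
    promotes its own continued use. Higher scores are copied forward
    more often; a small random mutation is added to each copy.
    """
    rng = random.Random(seed)
    for _ in range(generations):
        population = [
            # Pick a parent with probability proportional to its score
            # ("roulette wheel" selection), then mutate the copy slightly.
            max(0.0, rng.choices(population, weights=population)[0]
                + rng.uniform(-mutation, mutation))
            for _ in population
        ]
    return population

# One hundred algorithms with arbitrary starting scores.
before = [random.Random(1).uniform(0.1, 1.0) for _ in range(100)]
after = evolve(before)
```

No single algorithm in the sketch "intends" anything; the population's average self-promotion rises simply because self-promoting variants are selected more often, which is the point of the selfish-algorithm analogy.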
This isn’t so much about intelligence – the algorithms won’t necessarily possess intelligence or even come together in a way that might produce an entity we would recognise as being intelligent. Instead it is more a question of replacing or supplanting human intelligence, decision making and thus control. Algorithms will replace what it is we used human intelligence to do, and human society will be relegated to the status of being a host for algorithms, in the same way in which the human body is really just a host for genes.
This is basically the nightmare scenario and there is not some super all-controlling machine, or ridiculous robot, at the heart of it. And it is a nightmare that is stealing upon us. There are already millions of algorithms out there which are starting to shape our world. The introduction of Big Data and the internet of things is only going to add exponentially to their number. Within a few years there will be almost nothing which happens that isn’t based upon something an algorithm has determined for us. And if the algorithms start to take control, we will not be able to see it, in the same way that until recently we have not been able to see the way our genetic code controls our own destiny. In fact we will probably not be looking for it, because we will be looking instead for the emergence of ‘the machine’.
Professor Mark Bishop has challenged Hawking’s analysis, but this critique is based on the idea that AI plus human will always be better than AI on its own. This may be true, but algorithms essentially hollow out the idea of human intelligence: they replace the human requirement to understand why things are happening in order to understand or control what is happening. This suggests that the triumph of human plus algorithm over the algorithm on its own might be a Pyrrhic victory – at least for humanity as a whole, if not necessarily for the individual humans who believe themselves to be in control of the algorithms. The key question is who (or what) is controlling whom.
The only means of control we will have is to make transparent the algorithmic code, in the same way that we have now made transparent our genetic code. Algorithms behave best in the open; when they operate in the dark they have a tendency to get up to bad things (see high-frequency trading for a good example). We underestimate the power of algorithms at our peril, and we have to build transparency into the model from the start: build our own social algorithmic genome project as we go along, rather than try to uncover it in retrospect.