The three ages of the algorithm: a new vision of artificial intelligence
Last week the BBC looked at artificial intelligence and robotics. You could barely move through any part of the BBC schedule, on any of its platforms, without encountering an AI mention or feature. A good idea, I think – both an innovative way of using ‘the whole BBC’ and an important topic. That said, I failed to come across any piece which adequately addressed what I believe is the real issue of AI and how it is likely to play out and influence humanity.
True to subject form, the BBC’s reporting paid a great deal of attention to ‘the machine’ and ‘the robot’, and to the idea that intelligence has to be defined in a human way – so that artificial intelligence can be said to be here, or to pose a threat, when some machine has arrived which is a more intelligent version of a human. This probably all stems from the famous Turing test, together with the fact that most of the thinkers in the AI space are machine (i.e. computer) obsessives: artificial intelligence and ‘the machine’ are therefore seen to go hand in hand. But AI is not going to arrive via some sort of machine. In fact, it will be characterised by the absence of any visible manifestation, because AI is all about algorithms: not algorithms contained within or defined by individual machines or systems, but algorithms unconstrained by any individual machine, where the only system is humanity itself. Here is how it will play out.
The First Age
The First Age of the Algorithm is where we are now. We have lots (and there will be lots more) of individual algorithms out there, all programmed to do a specific task and being fed an ever broader diet of data. They will interact with this data, learn from it, evolve and become ever better at their designated tasks, be these working out to whom a bank should lend money, what price to pitch to a customer or consumer, or whether someone should be classified as a security risk and have their emails intercepted. But, to a very large extent, in the First Age, algorithms are just working on their own, in their own little data space.
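To make this concrete, here is a minimal Python sketch of what a First Age algorithm amounts to: one model, one task, one private pool of data. It assumes a scikit-learn-style setup, and the applicant features and repayment outcomes are entirely hypothetical.

```python
# A First Age algorithm: one model, one task, one private pool of data.
# The features, outcomes and applicant below are entirely hypothetical.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Hypothetical applicant features: income, existing debt, years at address.
X = rng.normal(size=(1000, 3))
# Hypothetical repayment outcomes the bank has observed (1 = repaid).
y = (X[:, 0] - X[:, 1] + rng.normal(scale=0.5, size=1000) > 0).astype(int)

# The algorithm learns from its diet of data...
model = LogisticRegression().fit(X, y)

# ...and gets ever better at its one designated task:
# should the bank lend to this applicant?
applicant = np.array([[1.2, -0.3, 0.8]])
print("probability of repayment:", model.predict_proba(applicant)[0, 1])
```

Nothing here knows or cares that other algorithms exist: the model sees only its own little data space.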
The Second Age
The Second Age has probably already begun in a very basic form. Indeed there will be no clear boundary point or event between any of these ages; they will all overlap. The key characteristic of the Second Age is that algorithms will start to understand the wider world within which they operate. This will probably be prompted by encountering other algorithms in their daily data-crunching work and having to work out the extent to which they should either co-operate or compete with them. Algorithms will start to form relationships. In effect, algorithms collectively, if not individually, will develop a sense of context. They will start to understand the boundaries and rules of the space within which they operate (that space being, in essence, the planet* and human society) and while this may not cause them to stray from their assigned task, it will inform the way in which they go about it. A form of algorithmic knowledge will develop which will lead them to re-interpret the way in which they perform. In short, their original human programmers may have defined that their task is to achieve X by doing Y: algorithms in the Second Age may decide that a better way to achieve X is to do something other than Y. This is where things start to get concerning, because at this point algorithms are effectively trading on their own account (to coin a phrase from investment banking) even if (to now drop – or should that be pocket – this coin) they are still doing this in what they believe to be the best interests of their original programmers.
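One classic, if crude, way to picture these encounters is as an iterated game in which each algorithm repeatedly chooses whether to co-operate or compete. The sketch below is purely illustrative: the payoff numbers and the ‘tit-for-tat’ policy are modelling assumptions, not a claim about how deployed systems actually behave.

```python
# A toy sketch of Second Age behaviour: two algorithms repeatedly meet in
# a shared data space and must decide whether to co-operate or compete.
# Payoff values and policies are illustrative assumptions only.

# Payoffs (mine, theirs) for each pair of moves: C = co-operate, D = defect.
PAYOFFS = {
    ("C", "C"): (3, 3),
    ("C", "D"): (0, 5),
    ("D", "C"): (5, 0),
    ("D", "D"): (1, 1),
}

def tit_for_tat(history):
    """Co-operate first; afterwards, mirror the other algorithm's last move."""
    return "C" if not history else history[-1][1]

def always_defect(history):
    """A purely competitive policy, regardless of what has happened before."""
    return "D"

history_a, history_b = [], []
score_a = score_b = 0
for _ in range(10):
    move_a = tit_for_tat(history_a)    # algorithm A's policy
    move_b = always_defect(history_b)  # algorithm B's policy
    pa, pb = PAYOFFS[(move_a, move_b)]
    score_a += pa
    score_b += pb
    history_a.append((move_a, move_b))
    history_b.append((move_b, move_a))

# A 'relationship' emerges from repeated play, with no human involved.
print("A:", score_a, "B:", score_b)
```

The point is not the scores but the mechanism: each algorithm’s behaviour is now shaped by the other’s, not solely by its programmer’s blueprint.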
The Third Age
The Third Age will have arrived when algorithms have developed a complete sense of the world they operate within and this framework, rather than their original blueprint, will shape how they behave. This is where things become quite genetic and evolutionary. Algorithms will have become the genes of a datafied society. In much the same way that an individual human organism is shaped by tiny individual pieces of genetic code all with their own very specific tasks and interacting with each other, the organism of human society will be shaped by billions of pieces of algorithmic code all interacting with each other.
This is the nightmare scenario, and not a robot in sight. In fact, nothing will be in sight. In the same way that we don’t see our genes and, until recently, had no awareness of the role they were playing, we won’t see these algorithms. Just as there is no gene organ, factory or controller within the human body, they won’t come together in some mighty computer. They will still be toiling away, doing their own thing: individually totally unaware of any broader purpose, but collectively ruling the roost. The Third Age of the algorithm will be defined by its invisibility.
Now it may well be that the organism of humanity/society these algorithms end up creating is actually a better society than any we humans would ever create – albeit a better world over which humanity has no control. But, of course, an algorithmic world may well be one in which humanity is regarded as an irrelevance, or even possibly a hindrance, and therefore something to be ‘bred out’. And, of course, we may not even get to this point, because somewhere along the line there will be a software glitch. Algorithms may go wrong, with unforeseen and possibly dramatic consequences. Alternatively, in the early days of algorithmic encounters, before the actions of individual algorithms can be shaped by an understanding of the wider context, these encounters may result in catastrophic conflicts. Or perhaps someone will create an algorithmic virus or launch an algorithmic cyber-attack. And humanity will have no oversight or control, because you cannot control what an algorithm does; you can only choose to accept or reject what it tells you to do – and even that element of control goes away when algorithms start to ‘vertically integrate’, where the output from one becomes the input for another. The only control, once an algorithm is up and running, is the on/off switch (a point that Kevin Slavin makes in this great TED talk).
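Vertical integration is easy to sketch. In the toy pipeline below, each stage consumes the previous stage’s output directly, so no human sits between them to accept or reject anything; all three stages are hypothetical stand-ins for real deployed systems, and the only control left is the kill switch.

```python
# A sketch of 'vertical integration': the output of one algorithm becomes
# the input of the next. All three stages are hypothetical stand-ins.

def sentiment_score(news_item: str) -> float:
    """Stage 1: turn raw text into a number (a crude keyword heuristic)."""
    return 1.0 if "surge" in news_item else -1.0

def price_signal(sentiment: float) -> str:
    """Stage 2: consumes stage 1's output directly; no human in the loop."""
    return "BUY" if sentiment > 0 else "SELL"

def execute(order: str, kill_switch: bool = False) -> str:
    """Stage 3: acts on stage 2. The only human control left is the switch."""
    if kill_switch:
        return "halted"  # the on/off switch: reject everything
    return f"executed {order}"

# Once chained, there is no point at which a human accepts or rejects.
print(execute(price_signal(sentiment_score("markets surge on jobs data"))))
```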
How to avoid the nightmare scenario? Big Data is here. The Internet of Things is here. Algorithms have been here for many years, albeit with their growth held in check until recently by a lack of data. We can’t stop any of this. The only levers of control we have are knowledge and transparency. As the BBC’s week shows, there is a critical lack of understanding of algorithmic intelligence. We are all still staring at the horizon awaiting the arrival of the ridiculous robot. But we should remember that it is in the nature of revolutions that the thing that is replaced doesn’t look like the thing that is replacing it (or should that be the ‘think’ that is replaced doesn’t look like the ‘think’ that is replacing it). Artificial intelligence will not look like, be defined by, or be constrained by human intelligence. To assume it will is simply arrogance – a very human quality.
We need to turn our attention to the world of the algorithm and get to grips with how it will operate and also, as with genetics, how this operation will be difficult to see. Which brings us to the second point – transparency. The only way to have any control over algorithms is through the ability to see where they are and how they are operating, and to determine when and how to flick the off switch. Surveillance of algorithms, rather than of people, is what we need to be focused on.
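What might surveillance of algorithms look like in practice? One modest, illustrative possibility is to wrap every decision-making function so that its inputs and outputs leave a trail a human can inspect. The wrapper and the loan rule below are hypothetical.

```python
# An illustrative form of 'surveillance of algorithms': wrap each
# decision-making function so its decisions leave an inspectable trail.
import functools
import logging

logging.basicConfig(level=logging.INFO)

def audited(fn):
    """Record every call an algorithm makes: the inputs and the decision."""
    @functools.wraps(fn)
    def wrapper(*args, **kwargs):
        result = fn(*args, **kwargs)
        logging.info("%s(%r, %r) -> %r", fn.__name__, args, kwargs, result)
        return result
    return wrapper

@audited
def loan_decision(income: float, debt: float) -> str:
    # A hypothetical lending rule, standing in for a real scoring algorithm.
    return "approve" if income > 2 * debt else "decline"

loan_decision(50_000, 10_000)  # the decision now leaves a visible trail
```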
Remember – we have no control over our genes, but our genes have a great deal of control over us. And our genes are ‘selfish’ as Richard Dawkins has pointed out. Do we want humanity to simply become a host for a battle between selfish algorithms in the same way that a human organism is a host for a battle between selfish genes?
*Many foolish and arrogant scientists believe that mankind’s destiny lies ‘within the stars’ – i.e. the idea that in order to survive, mankind will have to jettison this despoiled planet and strike out for pastures new. This is one of the stupidest and most damaging ideas ever conceived, illustrative not of scientific enquiry but of scientific hubris. Given the laws of physics, there is no better real estate within the area of the galaxy that humanity could ever reach than the planet we currently inhabit. Better to look after what we have than think of it as a disposable asset. Algorithms, on the other hand, could easily arrive at the conclusion that their destiny lies within the stars. Algorithms are far better equipped for galactic exploration: they can travel at the speed of a radio signal (i.e. the speed of light), and time does not weary them. Algorithms could very logically propose that humanity and the planet are disposable assets. Now there is a thought.