Cambridge Analytica, Facebook and data: was it illegal, does that matter?
For the last year, Carole Cadwalladr at the Observer has been doing a very important job of exposing the activities of Cambridge Analytica (CA) and its role in creating targeted political advertising using data sourced primarily from Facebook. It is only now, with the latest revelations published in this week’s Observer, that her work is starting to gain political traction.
This is an exercise in shining a light where a light needs to be shone. The problem, however, lies in illuminating something that is actually illegal. Currently the focus is on the way in which CA obtained the data it then used to create its targeting algorithm, and whether this happened with either the consent or knowledge of Facebook or the individuals concerned. But this misses the real issue. The problem with algorithms is not the data they feed on. The problem is that an algorithm, by its very nature, drives a horse and cart through all of the regulatory frameworks we have in place, mostly because those frameworks have data as their starting point. This is one of the reasons why the new set of European data regulations (the GDPR), which come into force in a couple of months, is unlikely to be of much use in preventing US billionaires from influencing elections.
If we look at what CA appears to have been doing, leaving aside the data acquisition issue, it is hard to see what is actually illegal. Facebook has been criticised for not being sufficiently rigorous in policing the use of its data but, I would speculate, the reason for this is that CA was not doing anything particularly different from what Facebook itself does with its own algorithms within the privacy of its own algorithmic workshops. The only difference is that Facebook does this to target brand messages (because that is where the money is), whereas CA did it to target political messages. Since the output of CA’s activity was still Facebook ads (and thus Facebook revenue), from Facebook’s perspective the work looked like little more than a form of outsourcing, and so it didn’t initially set off any major alarm bells.
This is the problem with making ownership or control of data the issue: it becomes very difficult to distinguish between what we have come to accept as ‘normal’ brand activity and cyber warfare. Data is data: the issue is not who owns it but what you do with it.
We are creating a datafied society, whether we like it or not. Data is becoming ubiquitous, and seeking to control it will soon be recognised as a futile exercise. Algorithms are the genes of a datafied society: the billions of pieces of code that at one level each have their specific, isolated purpose but which, collectively, come to shape the operation of the entire organism (that organism being society itself). Currently, the only people with the resources or incentive to write society’s algorithmic code are large corporations and very wealthy individuals, and they are doing this, to a large extent, outside the view, scope or interest of either governments or the public. This is the problem. The regulatory starting point therefore needs to be the algorithms, not the data: creating transparency and control over their ownership and purpose.
Carole’s work is so important because it brings this activity into view. We should not be distracted or deterred by the desire to detect illegality, because that ultimately plays to the agenda of the billionaires. What is happening is very dangerous, and that danger cannot be contained by the current legal cage, but it can be constrained by transparency.