Tagged: data

Cambridge Analytica, Facebook and data: was it illegal, does that matter?

For the last year Carole Cadwalladr at the Observer has been doing a very important job exposing the activities of Cambridge Analytica and its role in creating targeted political advertising using data sourced, primarily, from Facebook. It is only now, with the latest revelations published in this week’s Observer, that her work is starting to gain political traction.

This is an exercise in shining a light where a light needs to be shone. The problem, however, is finding something to illuminate that is actually illegal. Currently the focus is on the way in which CA obtained the data it then used to create its targeting algorithm, and whether this happened with either the consent or knowledge of Facebook or the individuals concerned. But this misses the real issue. The problem with algorithms is not the data that they feed on. The problem is that an algorithm, by its very nature, drives a horse and cart through all of the regulatory frameworks we have in place, mostly because those frameworks have data as their starting point. This is one of the reasons why the new European data regulation – the GDPR – which comes into force in a couple of months, is unlikely to be of much use in preventing US billionaires from influencing elections.

If we look at what CA appear to have been doing, leaving aside the data acquisition issue, it is hard to see what is actually illegal. Facebook has been criticised for not being sufficiently rigorous in how it policed the usage of its data but, I would speculate, the reason for this is that CA was not doing anything particularly different from what Facebook itself does with its own algorithms within the privacy of its own algorithmic workshops. The only difference is that Facebook does this to target brand messages (because that is where the money is), whereas CA does it to target political messages. Since the output of the CA activity was still Facebook ads (and thus Facebook revenue), from Facebook’s perspective their work appeared to be little more than a form of outsourcing and thus didn’t initially set off any major alarm bells.

This is the problem if you make ownership or control of the data the issue – it makes it very difficult to discriminate between what we have come to accept as ‘normal’ brand activities and cyber warfare. Data is data: the issue is not who owns it, it is what you do with it.

We are creating a datafied society, whether we like it or not. Data is becoming ubiquitous, and seeking to control data will soon be recognised as a futile exercise. Algorithms are the genes of a datafied society: the billions of pieces of code that at one level all have their own specific, isolated purpose but which, collectively, can come to shape the operation of the entire organism (that organism being society itself). Currently, the only people with the resources or incentive to write society’s algorithmic code are large corporations or very wealthy individuals, and they are doing this, to a large extent, outside the view, scope or interest of either governments or the public. This is the problem. The regulatory starting point therefore needs to be the algorithms, not the data: creating transparency about, and control over, their ownership and purpose.

Carole’s work is so important because it brings this activity into view. We should not be distracted or deterred by the desire to detect illegality because this ultimately plays to the agenda of the billionaires. What is happening is very dangerous and that danger cannot be contained by the current legal cage, but it can be constrained by transparency.

US Mid-terms: the role of social media in the Republicans’ success

I would not consider myself to be a fan of the Republican Party, but I am a fan of this comment by Lori Brownlee, social media director for the Republican National Committee (RNC). Commenting on the success of their recent campaign, she said “rather than simply using Twitter and Facebook as a broadcast tool, we centered our plan around using social as a strategic listening and data collection tool.”

Check out this article just published in AdAge for more details.  There is so much that brands could learn from this approach – especially the ability to understand, in real-time, what people are talking about or asking.  Social media is a real-time game and it requires that a brand design real-time processes to play it.  This is not a game where you sit down and plan your content in advance – you plan your process in advance and this will then tell you what content you need to have out there right now.  A content strategy needs to be seen as a process that matches brand answers to consumers’ questions in real-time.

Neither do you plan your influencers in advance: people become influential because of what they are doing or saying right now, and you therefore need to identify them in real-time. Someone who is influential today is not necessarily going to be influential tomorrow.

And key to this process are tools and people. Listening and analysis tools (such as Sprinklr, mentioned in the article), but then places (such as newsrooms or command centres) where the tools can be plugged into people who can then process and share the information and make decisions about what to do. Rather than spending time and money simply filling up channels with ‘brandfill’, brands should spend time and money creating (and then staffing and managing) command centres.
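As a rough illustration of what planning the process, rather than the content, might look like, here is a minimal Python sketch of a listening loop that matches incoming consumer questions to a prepared library of brand answers and escalates everything else to the command-centre team. It is not based on Sprinklr or any other real tool’s API; the post source, the keyword rules and the answers are all invented placeholders.

```python
# Hypothetical sketch of a real-time listening loop: match incoming consumer
# questions against a prepared library of brand answers, and route anything
# unmatched to the command-centre team. The post source, rules and answers
# are invented placeholders, not any real listening tool's API.
from typing import Iterable

ANSWER_LIBRARY = {
    "opening hours": "Our stores are open 9am-8pm, seven days a week.",
    "returns": "You can return any item within 30 days - here's how: ...",
    "vegan": "Yes - our full vegan range is listed at example.com/vegan.",
}

def route_post(text: str) -> str:
    """Return a prepared answer if one matches, otherwise escalate to a human."""
    lowered = text.lower()
    for topic, answer in ANSWER_LIBRARY.items():
        if topic in lowered:
            return answer
    return "ESCALATE: no prepared answer - pass to the command-centre team."

def listening_loop(posts: Iterable[str]) -> None:
    # In practice `posts` would be a live stream from a listening tool;
    # here it is just any iterable of strings.
    for post in posts:
        print(f"> {post}\n  {route_post(post)}")

if __name__ == "__main__":
    listening_loop([
        "What are your opening hours on Sunday?",
        "Do you stock vegan options?",
        "Why was my delivery three days late?!",
    ])
```

The point of the sketch is that the content (the answer library) only matters because the process (the loop and the escalation path) exists and is staffed.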

Privacy: let’s have the right conversation

The whole social media, Big Data, privacy thing is getting an increasing amount of air time. This is good, because this is a very important thing to start getting our heads around. However, I don’t think we are really yet having the right conversation.

The predominant conversation out there seems to be focused on the issues concerned with the potential (and reality) of organisations (businesses or governments) ‘spying’ on citizens or consumers by collecting data on them, often without their knowledge or permission.

Our privacy is therefore being ‘invaded’.

But this is an old-fashioned, small-data definition of privacy. It assumes that the way to gain an understanding of an individual, which can then be used in a way that has consequences for that individual, is by collecting the maximum amount of information possible about them: it is about creating an accurate and comprehensive personalised data file. The more comprehensive and accurate the file is, the more useful it is. From a marketing perspective, it is the CRM way of looking at things (it is also the VRM way of looking at things, where the individual has responsibility for managing this data file). It is also a view that gives permission to the idea that if you detach the person from the data (i.e. make it anonymous), it can no longer be used in a way that has consequences for the individual concerned, and is therefore ‘cleared’ for alternative usage.

But this is not the way that Big Data works. The ‘great’ thing about Big Data (or, more specifically, algorithms) is that it requires almost no information about an individual in order to arrive at potentially very consequential decisions about that individual’s identity. Instead, algorithms use ‘anonymised’ information gathered from everyone else. And increasingly this information is not just coming from other people, it is coming from things (see the Internet of Things). The great thing about things is that they have no rights to privacy (yet) and they can produce more data than people.

The name of the game in the world of the algorithm is to create datafied (not digitised) maps of the world. I don’t mean literally geographical maps (although they can often have a geographical / locational component): from a marketing perspective it can be a datafied map of a product sector, or a form of consumer behaviour. These maps are three-dimensional in that they comprise a potentially limitless number of data layers. These layers can be seemingly irrelevant, inconsequential or in no way related to the sector or behaviour that is being mapped. The role of the algorithm is to stitch these layers together, so that a small piece of information in one layer can be related to all the other layers and thus find its position upon the datafied map.

In practical terms, this can mean that you can be refused a loan based on information concerning your usage of electrical appliances, as collected by your ‘smart’ electricity meter. This isn’t a scary, down-the-road sort of thing. Algorithmic lending is already here and the interesting thing about the layers in the datafied maps of algorithmic lenders is the extent to which they don’t rely on traditional ‘consequential’ information such as credit scores and credit histories. As I have said many times before, there is no such thing as inconsequential data anymore: all data has consequences.

Or to put it another way, your identity is defined by other people’s (or things’) data: your personal data file (i.e. your life) is simply a matter of personal opinion. It has little relevance to how the world will perceive you, no matter how factually correct or accurate it is. You are who the algorithm says you are, even if the algorithm itself has no idea why you are this (and cannot explain it if anyone comes asking) and has come to this conclusion based, in no small part, on the number of times you use your kettle every day.
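To make the point concrete, here is a minimal and entirely hypothetical sketch in Python. The data, the feature names (kettle cycles, washing-machine cycles, hours of TV standby) and the model are invented for illustration; real algorithmic lenders do not publish their inputs or methods. The structure is what matters: the model is fitted to one seemingly irrelevant data layer gathered from thousands of other households, and a new applicant is then scored using nothing but their own meter readings.

```python
# Hypothetical sketch: a lending model trained only on an "irrelevant" data
# layer (smart-meter appliance usage) gathered from many *other* households.
# Feature names and data are invented for illustration only.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# One "data layer": daily appliance usage for 10,000 other households
# [kettle cycles, washing-machine cycles, hours of TV standby]
usage = rng.normal(loc=[4.0, 0.7, 6.0], scale=[1.5, 0.3, 2.0], size=(10_000, 3))

# Repayment outcomes observed for those same households (synthetic here)
repaid = (usage @ np.array([0.4, 1.2, -0.1]) + rng.normal(size=10_000)) > 1.5

# The "map": a model that stitches the appliance-usage layer to repayment
model = LogisticRegression().fit(usage, repaid)

# A new applicant is scored using nothing but their own meter readings --
# no credit history, no income statement, no conventionally "consequential" data.
applicant = np.array([[2.0, 0.5, 9.0]])  # kettle, washing machine, TV standby
print("probability of repayment:", model.predict_proba(applicant)[0, 1])
```

Nothing in that sketch knows, or needs to know, who the applicant is; the consequence falls on them all the same.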

The world of the algorithm is a deeply scary place. That is why we need the conversation. But it needs to be the right conversation.