Stop and Think: a note to marketing leadership about the digital revolution

“I often say I’ve seen more change in the past five years as chief marketing and communications officer of Unilever than I did in the 25 years I was in business before that, and it’s not a statement I make just for dramatic effect.” So said Keith Weed in an article published in Marketing Week in May 2018.

Anyone with any experience in marketing must feel that we are living through a period of rapid change that qualifies as a revolution. It is a revolution in information and communication that is transforming the world of brands (as well as the world of pretty much everything else).

Here is a question. Does anyone really feel they are on top of this: that they have cracked the code?

Call in a beer-strike: another reason why DTC is overblown

Mark Ritson has just published a piece in Marketing Week which skewers a lot of the hype around the so-called direct-to-consumer (DTC) revolution. I have also been brewing some thoughts on this one, having read this piece about Heineken’s DTC efforts a couple of weeks ago. It contains the refreshingly honest admission that Heineken doesn’t really know what it is doing in this space.

Heineken believes the problem is that it doesn’t have sufficient data on its consumers. My take is that its real problem might be that it has failed to find data which supports the idea that its consumers actually want to buy a beer online – because such data doesn’t exist. Personally (and, like Heineken, I have no ‘data’ to support this view) I can’t imagine a situation where you would ever want to order a beer online. For beer, and most other FMCG / grocery products, there is almost always going to be an advantage in operating through some form of intermediary. The digital and data revolution may well change who these intermediaries are and how they operate, but it is unlikely to do away with them.

The only instance where it would make sense to order a beer online is if delivery could be guaranteed within around 2 minutes. If, in effect, you could call in a beer-strike. So if there were a bar app that would allow you to order drinks for delivery at your table – great (although in this instance the bar is acting as a form of intermediary). But outside of a bar it is always going to make sense to aggregate your purchases of beer (and many other similar types of products), rather than order them one at a time. Quite possibly you will add them to a voice-activated list one item at a time (or have them algorithmically suggested or added for you), but delivery efficiencies and consumer convenience will always create a system that tends towards batching items for delivery within a minimised number of predetermined time slots.

This is not to say that there isn’t a place for DTC in FMCG in the GDF (Great Digital Future), but it is likely to be restricted to a relatively narrow range of products – those products that are not naturally adapted to aggregated retailing but have had to accept aggregation because of the lack of a business model to support individualised distribution. Remember, razors fit through a letterbox, beer bottles don’t.

It comes back to a focus on the big structural shifts that the digital revolution is creating. Shift number one is the separation of stuff from the thing that distributes that stuff – be that the separation of news from newspapers or banking from bankers. This is often a separation of process (finance) from institutionalised delivery (banks), thus supporting a process of disintermediation or the emergence of new forms of intermediary (Uber).

Second, and of greatest consequence for marketing, is the shift from the world of the audience to the world of the individual where the challenge is behaviour identification and response (a connection challenge rather than a distribution challenge). Don’t use technology to impose behaviours on consumers, use it to respond to identified (rather than assumed) consumer behaviours. If Heineken can deliver a beer to my hand within 120 seconds and at no significant extra cost – fantastic. But until that can happen, Heineken is playing a game that doesn’t exist.

Question: what is TV? Answer: a form of behaviour

Mark Ritson has recently been stirring the pot on TV – challenging broadcasters to take on Netflix and Amazon and predicting that Facebook will buy Netflix within a year.

These are interesting ideas, but in order to make sense of them I think we first need to ask ourselves the question: what exactly is TV? Is it a form of content, a form of distribution, a device (the TV ‘set’), a business model or something else entirely? At the moment we are confusing all of these things.

In the past we haven’t had to ask this question, because TV has been a single thing created from a fusion of all of these elements, albeit we have come to understand it primarily as a form of content. This is why we talk about TV ‘programmes’. In reality traditional TV is a form of distribution that has imprisoned a certain type of video content within it, but we have focused on its content because this has been the basis of difference. There hasn’t (until recently) been an alternative (different) type of distribution and thus alternative content or an alternative place where ‘TV’ content can live.

At its heart the digital revolution is all about the separation of information / content from its means of distribution. This marriage, and consequential relationship, between information and distribution – established nearly 600 years ago by Gutenberg – is coming to an end. The separation allows us to understand that in many instances this was a loveless marriage where distribution wore the trousers and forced content to take its name and adapt to its formatting strictures – hence the TV ‘programme’.

The implication of this divorce for the distribution-dependent business model that was TV is the discovery that much of the content it used to be wedded to can enjoy a life with other distribution partners, and also that it has an opportunity to flirt with content that was formerly imprisoned within other distribution media (such as movie screens). It has also meant that Casanovas such as Netflix can establish themselves in the space previously owned exclusively by the business model known as ‘TV’. Imprisoning content is no longer the best or only route to commercial success.

In order to understand what is going to happen to this thing known as TV we need to develop a new way of defining the problem that TV is there to solve, and thus reconstruct a business model based around providing a solution to that problem, rather than a model designed to preserve as much as possible of the confederation of functions and skills that sit within a TV channel or network.

I look at it in two ways. First, TV as a form of behaviour. The fusion of content, distribution technology and device that we know as TV created a form of behaviour: people (often in groups) sitting in comfortable chairs in their own homes, gathered around a screen in order to be entertained (and to a lesser extent informed), primarily between the hours of 8pm and 11pm, watching content that frequently formed the basis of subsequent online or offline conversations.

The good news is that this form of behaviour is not going to go away anytime soon – and to that extent the behaviour we call TV (and thus TV advertising) is going to endure. The less good news, for the traditional business model associated with TV, is that these people now know that they can expect a much greater choice of content (albeit probably within a more restricted range of content categories) than has traditionally been the fare of what we called TV programmes or can be provided by the things known as TV networks or channels.

The form of content that is best adapted to this form of behaviour is where the future lies. It will tend to be based around long-form storytelling, live sports events, mass entertainment that has an element of either real-time audience participation or real-time social currency and, to a lesser extent, news. This is the space Netflix (and Strictly Come Dancing) is addressing, and it is growing. This is the thing Ritson has identified in his article as a ‘third line’ of ‘autro’ viewing that he defines rather confusingly as being ‘on a TV set but not TV’ and sitting between TV and mobile. This is the wrong way of defining this stuff, as that rather confusing ‘on a TV set but not TV’ description implies. It stems from our inability to separate the differing elements that constitute traditional TV, our conflation of distribution and content, and our obsession with channel (TV versus mobile) – which in itself is a hangover from the world of distribution dominance. This stuff is better defined simply as content that is adapted to TV behaviour but that isn’t currently produced by TV networks.

This brings me onto another way of looking at the broader video space, which is to define it by screen size, which in itself is also allied to behaviour. There is the big screen that sits in front of the home-based comfy chairs and hosts the type of content referred to above. Then there is the personal screen that we will use when we want to behave as an individual (and which is currently provided by the device known as a laptop / tablet). And there will be the palm-sized screen, which we will use when it is not possible to access the other types of screen, or for candy-content (short, sweet, usually consumed ‘to go’). We have also seen, in Google Glass, that there is a new screen-based environment/behaviour – which I guess you could call the real-time, heads-up screen. The device known as Google Glass has obviously not worked as the device to host this type of screen, but the behaviour associated with it remains valid and will probably first be hosted on a palm-sized screen, this time held in front of the face (and on car windscreens) and associated with augmented reality. And all of these screens will be fed by a variety of distribution technologies and content producers.

Note: I haven’t called the palm-sized screen a mobile because that simply compounds the mistake of seeing mobile as a form of channel, when mobile, as per all the above screen types, is best understood as a form of behaviour. Mobile has a huge significance going forward: not as a channel, but as a behaviour detection device.

Also note: last Wednesday I watched the second half of Tottenham Hotspur’s disappointing performance against PSV huddled around my son’s iPhone in Venice Airport. In all respects a sub-optimal situation – but that was the only alternative relevant to our current situation / behaviour. Which is why we need to understand technology / channel in the context of real time behaviours. Behaviours drive selection of technology or channel, not the other way around. Of course a mobile is not appropriate to the environment or behaviour that is living room viewing, but in some situations it will be the best (only) option.

Having defined the forms of behaviour or environment associated with consumption of video, the challenge is to define a model associated with satisfying all, or part, of these behaviours. I think there are four things this model has to address.

First, and of greatest relevance to consumers, there is content aggregation: a mechanism for finding and filtering relevant content. Google is a content aggregator, as is Spotify to a certain extent. However, their models can’t be directly imported because the behaviours associated with video consumption are different. Video consumption (at least for the big-screen living-room behaviour that is TV) is more here-and-now and socially relevant. There is a need to watch what everyone else is watching. If you want to work out how to insulate a loft you don’t need your friends to also watch the video. Likewise, if you want to listen to Freebird by Lynyrd Skynyrd, this is always something best done alone (and preferably in secret).

Aggregators generally are the future for lots of things in the digital world. Uber is an aggregator. We can also see, if we choose to (and many don’t), that the future of retail will be divided between purchase aggregators and providers of consumption experiences.

Second, there is revenue aggregation. Google became mighty because it started off solving a content aggregation problem but found a way of aggregating revenue around the consumer behaviour it was addressing. The current ‘TV’ models of aggregating revenue are not sufficiently consistent with the behaviour consumers will want from a content aggregator. Revenue and content are currently brought together in a portal model: Netflix is really a portal, as is a TV channel – but portals are sub-optimal from a consumer’s perspective. Portals are a way of getting consumers to pay for content they won’t ever watch, albeit this provides a way to manage the third problem: management of risk.

Living room content is expensive to produce. You therefore need up-front money tied to some guarantee of future revenue. YouTube is close to being a functioning content aggregator but its revenue aggregation model only works to support content that is cheap to produce and where producers have the incentive to carry the risk. The risk problem is currently solved via the commissioning process which ties distribution to content. The revenue aggregation solution will probably be defined by the requirement for consumers to pay (via subscription or exposure to advertising) only for the content they wish to view, plus the ability to provide some guarantee of future revenue.

The fourth issue, which is linked to risk management, is promotion / social relevance. A video on insulating your loft will always be relevant (with respect to loft-insulating behaviour). But this year’s ‘Strictly’ winner very quickly becomes last year’s ‘Strictly’ winner. If a content producer has access to content distribution, they can use this to promote their upcoming content. They can also restrict access to this content, via release dates and scheduling, to build anticipation.

Effectively it is only social relevance and risk management that currently tie content to distribution, because the technology already exists to provide content and revenue aggregation (the barriers here are only ones of economic self-interest). But content and distribution will become separated, because this is the way the tectonic plates are shifting and because consumers will demand it.

So – I can’t draw the picture of what the business model that satisfies the behaviour known as TV will look like. But I am pretty sure that the route to finding it will be based around a recognition of the ultimate end-state of content separated from distribution, the connection of content directly to revenue via a process of content and revenue aggregation (rather than through an intermediary portal), the ability to manage risk and the need to generate social relevance. And the starting point is to stop thinking about TV as a form of content or a form of channel or a device, and start thinking about it as a form of behaviour.

P.S. My favourite media analyst, Clay Shirky, tells a great story about TV (I think in his book ‘Here Comes Everybody’). His young daughter was at a friend’s house and was scrambling around behind the TV. Shirky assumed she was looking for the remote, which he gave to her. She looked at it quizzically and said “no daddy, I am looking for the mouse.” She didn’t care about the device known as TV; to her it was just a screen, and a screen which ships without a mouse is broken. A screen without a mouse: that’s not a bad way of summarising the state of the thing we currently call TV. Something that is out of line with consumers’ desired behaviour.

Why the new easyJet digital thingy is all about fantasy

The real opportunity that the digital / data space presents is the ability to target behaviours rather than people. I am not sure that many marketers realise this yet. As evidence I would present the latest initiative from easyJet called Look and Book.

easyJet has a CMO who is five months into the job – i.e. about the amount of time necessary to dump the previous CMO’s agency / campaign, roll out a new ad and develop a bright shiny new digi-data thingy. Look and Book is that new shiny thing (the new campaign aired 14 September). In the words of the new CMO: “You will be able to take a photograph from, say, Instagram and find that destination on our app and go straight on to booking”.

I give it 12 months tops (3 months to discover that it is not driving sales, 3 months to shout louder about it in order to make it drive sales, 3 months of living in denial, 3 months for the CMO to plan how to move on without losing face).

Why? At one level it is app/data-driven-techno-gizmology for the sake of pretending to be at the cutting edge of app/data-driven-techno-gizmology. At another level it is just about channel-chasing and product placement. Instagram is seen as the current hot channel, Instagram is all about photos, so let’s find a way of bridging across from photos to our product. Simples.

Except that Instagram is not a channel. In common with all the new social thingies, Instagram is much better understood as a form of behaviour. To use it effectively (if indeed you use it at all) you have to align the real-time behaviours of people using Instagram with the behaviours that correspond to the purchase behaviour implicit in your customer journey. This, in fact, is the future of data in marketing – behaviour identification and response, rather than simply using data to craft increasingly ‘personalised’ (fragmented) messages.

This is not the way marketers are accustomed to operating. In traditional marketing we aligned product messages with customer demographics and media location. It wasn’t necessarily ideal, but it was necessary because this was the way the channels were structured – and in traditional marketing the channels were the boss. When people sit in front of a screen watching The Apprentice their behaviour is “I want to watch some obnoxious wannabes be humiliated by an obnoxious has-been”, not “I want to buy a car”. Nonetheless, it made sense for a car manufacturer to interrupt their experience with a message about a car if research showed that these were the types of people who buy this car and that The Apprentice represents a media location where a large group of such people can be gathered together. It is an approach based on targeting people on account of who they are and where they are, not what they are doing.

The digital space presents the opportunity to target people according to what they are doing – behaviours. This is where Look and Book falls down. It is insufficiently attentive to behaviour and grounded instead in the old-fashioned channel-dependent idea of demographics and interruption. Are there actually people out there who will see a photo of some location and think “Ooh, that looks nice, I would like to go there, I wonder where it is, let’s hope that easyJet flies there and, if they do and I can afford it, let’s get out my phone (having previously subscribed to the Look and Book app), take a screenshot and book it now”? That’s the behaviour this initiative is aligned against – but I suspect it is a fantasy behaviour. In the real world there are just too many reasons why this is not going to happen, primarily that people almost never simply look and book.

Look and Book is actually an initiative designed to make the new CMO look cool, be on the latest hot platform, use data and digital thingies and deliver an ‘enhanced, data-driven, seamless, integrated, customer experience’ (‘cos that’s what you want when you want to go to Magaluf). It is an idea that is defined by the channel it wishes to sit within, rather than an idea that defines the channels it could sit within (channel-defining ideas being the future in my opinion – see the Nike Kaepernick campaign).

However, you can easily see how you could make the ad for Look and Book (in fact this may form part of the ‘let’s shout about Look and Book’ component inherent in its predicted demise), but behaviours of people in ads are very rarely the behaviours of people in the real world – which is partly the point of advertising.

By the way, you should check out the new ad, which is designed to “deliver a big dollop of emotion”. Hmm – looks like just another non-differentiated category ad to me. As part of a new CEO’s inevitable restructuring, easyJet has also recently separated the marketing function from the sales function. Hmm – looks like they might have actually separated marketing from sales to me.

Agile = legitimised panic

The ‘agile business’ is very much of the moment. Wherever you look you find consultants promoting it and business leaders adopting it (or at least exhorting their troops to become it). The need for agility is usually linked to the ‘rapid pace of technological change’ and that other concept du jour ‘disruption’ (the need to either avoid it or become it).

Here is a slightly disruptive thought. What if our obsession with agility is a present day manifestation of the fact that in the past, not enough organisations spent enough time thinking about the future?

Here is another one. Businesses don’t become successful by being disruptive, they become disruptive by being successful.

I wouldn’t disagree with the claim that we are living through a technological revolution, but we have had this thing called the internet for more than 20 years now. For sure, it has caused huge changes – but they have panned out over that period of 20 years. Most of the fundamental forces that have shaped those changes have been apparent from the earliest times – if people chose to study them. The problem has been that many organisations have spent their time ignoring what has been going on and therefore find themselves now living in a state of perpetual crisis management – a condition which they have sought to dignify with the term agility.

The real problem is that the future is not what it used to be. Dealing with a changing future does not depend on agility, it depends on thinking. In writing this I am reminded of a piece I wrote almost exactly 12 years ago (The future is not what it used to be). This was a series of ten semi-serious predictions designed to get people thinking about how the digital revolution might change things. Almost none of these have panned out exactly as predicted (or within the time-frame predicted), but I look at them quite proudly because they did a pretty good job of nailing the fundamental direction of required thinking. If, as a brand, you had spent some time thinking in this way 12 years ago, you wouldn’t currently find yourself looking down the barrel of disruption while desperately trying to do the agility dance.

Businesses that think about the future don’t need to be agile. Agility is something we have invented to put a positive spin on panic. It is time we started to Think.

Marketing: it’s just a joke

Following the publication of my Stop and Think (think) piece I have been having an email conversation with Stan Magniant. Stan is Digital & Social Communications Director, Western Europe, for The Coca-Cola Company – i.e. a player. I used to work with him at Publicis when we were both bright-eyed early adopters of the whole social digital thing.

One of the issues we got into was the question of what constitutes an audience and specifically what size an audience is. As Stan put it: “To marketers, an audience is synonymous with scale: as big a group of people as I can expose to my brand, synchronously or asynchronously. I’m not clear, in your argument, whether you invite brands to explore new creative ways to gather large audiences (through paid or earned tactics? Likely both), or whether it’s all about aggregated niche audiences (a more “long tail” approach).”

This was a good question and in trying to answer it I stumbled into the analogy of joke telling. The point I was trying to make is that an audience is not defined by size, it is defined by behaviour and/or context. The reason that, to marketers, it has become synonymous with scale is a question of conventional mass-marketing economics.

If you are telling a joke, the person or people you are telling it to is an audience. This is something that is implicit in the nature (behaviour) of joke telling. What is also implicit is that a joke, even if told to only one other person, is based on an element of universal relevance. A joke that only one person finds funny isn’t really a joke, even if you are telling it to the person who is meant to find it funny.

So, the audience for a joke can be one person, or millions of people. However, when it comes to deciding the optimum size of the audience, this is down to the money. If you are a stand-up comedian who has invested time and effort in developing a set, you need to get as many people as possible into your audience. Brands are like stand-up comedians. Their material (campaigns) is time-consuming and expensive to produce – which is why a brand’s definition of an audience has become synonymous with scale.

The joke analogy also helps explain why the concept of aggregated niche audiences doesn’t work. As a stand-up comedian you wouldn’t tell a series of five jokes, each of which would appeal to only 20 percent of the audience. The only way this would work is if you first dis-aggregated (segmented) the audience into those five groups and put each into a separate room – so that when you told the joke 100 percent of the people in each group would find it funny. An audience of aggregated niches may look like an audience in terms of size, but it doesn’t behave like an audience in terms of how you make it laugh.

As a brand you can have an audience of one, but not if you then try to create a joke that only that one person will find funny – which in essence is what most ‘mass personalisation’ strategies try to do (one reason I view these with scepticism). Mass (joke telling) is important, personalisation is important – but mass personalisation could be one of those things Seth Godin has called a ‘meatball sundae’ (ice cream).

But to be the thing we call a brand, you need to make people laugh. Which is why preserving the concept of an audience remains critical. Most social media campaigns (and many digital strategies) are the equivalent of a stand-up comedian telling their jokes to people one person at a time – i.e. a waste of time.

Anyway – a brand manager, an advertising executive and a consumer walked into a bar …

Lies on the line: why MP Lucy Powell’s bill won’t solve the problem of online hate or fake news

Summary: Institutionalised forms of content regulation rest on the realistic assumption that all published content can be monitored and made to conform. If you can’t establish this expectation, this form of regulation becomes instantly redundant. That is why applying the old, publication-based regulatory model to Facebook et al is a distraction that only serves to make politicians feel good about themselves while actually increasing the dangers of online hate and fake news.

In the UK the phrase ‘leaves on the line’ is firmly established within the national conversation as an example of an unacceptable corporate excuse. It is an excuse rail operators use around this time of year to explain delays to trains. The reason it is deemed unacceptable is that leaves fall off trees every year, and have done so for quite some time, and thus there is a realistic expectation that this is a problem rail operators should have cracked by now.

Which brings me to another problem where an expectation is building of a corporate fix: fake news and all forms of inappropriate online content/behaviour. This has clearly become something of a big issue in recent times, to the extent that governments are under considerable pressure to Do Something. And the Thing they are mostly looking to Do is to turn around and tell Facebook, Google, Twitter et al that they need to Do Something. In essence, what governments are looking for them to do is assume a publisher’s responsibility for the content that appears on their platforms. The most recent example of this is the private member’s bill just introduced by the UK Member of Parliament, Lucy Powell (of which more in a moment).

You can see why this is a popular approach. In the first instance, it allows government to deflect responsibility away from itself or, at the very least, create an imagined space where established regulatory approaches can continue to have relevance. It is an approach which finds favour with the traditional media, which has to operate under conventional publication responsibilities and resents the fact that these new players are eating their advertising lunch while avoiding such constraints. To an extent, it even plays to the agenda of Facebook and Google themselves, because they know that in order to attract the advertising shilling, they need to present themselves as a form of media channel, if not a conventional form of publication. Facebook and Google also know that, despite all the regulatory huffing and puffing, governments will not be able to effectively deploy most of the things they are currently threatening to do – because the space they are trying to create is a fantasy space.

The trouble is – this approach will never work. Worse than that, it is dangerous.

Ritson versus Sharp and a story about geomorphology (and a tsunami)

I have just received an email with the confirmed line-up for this year’s Festival of Marketing. My first reaction to this was that I can’t believe it is nearly a year since the last one. It also reminded me that the headline event last year was a battle of the professors between Byron Sharp and Mark Ritson. Unfortunately I missed it, but for those not in the know the basis for said battle is Sharp’s advocacy of mass communication versus Ritson’s focus on targeting and segmentation. I was also reminded of this conflict because Byron Sharp was featured on Adam Fraser’s EchoJunction podcast a couple of weeks ago. I must confess I haven’t read Sharp’s famous book ‘How Brands Grow’, so was hopeful that the podcast might give me a shortcut. I also availed myself of the opportunity to listen to Prof. Ritson’s appearance on the same podcast some time previously.

On the basis of the podcasts, I would say I came out more of a Ritsonist. Of course, as Ritson himself has pointed out, it is not a question of either/or. A brand has to be able to address its entire audience, and its ability to do this essentially defines its status as a brand. But an audience is not homogeneous, either in terms of attitudes or behaviours over time – which creates the requirement for targeting.

In fact, according to my theory of the future of marketing in a digital world, brands face two challenges: the first is redefining the concept of an audience (and indeed a segment of such) and becoming more adept at convening these audiences rather than renting access to them; the second is understanding how to create value from relationships with consumers as individuals (the world of distribution and the world of connection).

I guess the reason I came out on Ritson’s side was because I very much liked his scathing view of social media and his assessment that most marketers have simply been jumping on a series of digital bandwagons, but also because there was something in Sharp’s absolutist approach that I was uncomfortable with. First was his contempt for the idea of the niche and his dismissal of a niche brand as an unsuccessful brand. We are entering a time when the competitive advantages associated with being big are shrinking and the viability of being small is increasing. Many big brands are facing the long-term challenge of death by a thousand niches. Second, while I am all in favour of developing a more rigorous, data-driven approach to marketing, I couldn’t help but get the feeling that Professor Sharp was restricting his field of analysis according to the ability to gather or analyse the available data and disregarding evidence outside of this – not because the data was telling him to do this, but because the data was not available (or not available to measure in the way he wished).

It reminded me of a story about data, measurement techniques and assumptions that was doing the rounds back when I was studying for my degree. I studied geography, with a specialism in geomorphology (rivers, erosion and stuff). At the time, geomorphology had a problem in that we could look around and see evidence that erosion had happened, but couldn’t see it, and measure it, actually happening. (This is a bit like the issue of knowing that half of the marketing budget works, but not knowing which half.) The assumption was therefore that erosion was a very slow process – water dripping on a stone – and the reason we couldn’t detect it was that we didn’t have sufficiently sophisticated techniques or equipment to measure it.

Geomorphology had another problem in that it was, at heart, an observational science: knock-kneed bearded blokes in hobnail boots and khaki shorts wandering around with notebooks looking at things and thinking about stuff. This was deeply unfashionable back in the 60s and 70s at a time when computers were becoming established in academia. You couldn’t be a proper scientist if you didn’t run what we then thought of as large amounts of quantitative data through computer programmes.

So an attempt was made to address these two problems by wiring up a hill slope (geomorphologists were, and probably still are, obsessed with slopes) with all the latest detection equipment, feeding all the data into a computer, pressing the button and finally nailing the causes of erosion. This slope was going to be so closely monitored that an ant couldn’t fart without us knowing about it. Who knows, perhaps farting ants would be revealed as the culprits?

Anyway, the equipment was put in place, turned on, and revealed precisely nothing. A total flatline. No erosion was taking place. And so it continued for weeks on end until after a prolonged period of heavy rain a landslide washed all the equipment away. It was as though the slope was saying “so you wanted to measure me? Well measure this sucker”.

Of course the real issue was one of false assumptions (erosion as a slow process), a restriction of the field of investigation to those areas from which data could be extracted, a desire to use bright shiny new techno things, plus a distaste for conventional, less data-driven analysis. Geomorphologists have subsequently realised that erosion is often not a slow process, but an infrequent, catastrophic one. The slight irony is that an old-fashioned knock-kneed bloke with a certain level of experience, wandering around with a notebook, looking at stuff, noting slope angles, digging some holes to determine soil depth and composition and sticking a finger in the ground to get a sense of soil moisture levels, could actually develop a much more effective functional understanding of what was going on, what had previously happened and what was likely to happen in the future than someone possessed of all the latest measurement techniques and data.

This is not to say that we should eschew evidence-based marketing, but we need to think carefully about what assumptions we make, what evidence we seek and, crucially, not discard evidence simply because it is difficult to measure or crunch through an analysis programme.

And also, in relation to his dismissal of the niche, I have a suspicion that Sharp’s book may come to be regarded more as a piece of historical analysis than as a guide to the future. Perhaps it should be renamed “How Brands Grew”.

However, there is a geomorphological post-script to this story which does favour Prof. Sharp. In relation to catastrophic geomorphological events, we now know that the east coast of Australia was once devastated by a massive tsunami with waves in excess of 100 metres. This was caused by the collapse of one of the islands in the Hawaiian archipelago – a phenomenon known as a long run-out landslide. And the bad news is that this is going to happen again in the not too distant future, geomorphologically speaking. So Prof. Ritson in Melbourne could be in trouble, but Prof. Sharp around the corner in Adelaide should be OK, especially if he sets up house in the Flinders Ranges.

 

A politician who understands the world of the algorithm

Thanks to Jeremy Epstein (go-to for all things blockchain) for drawing my attention to this Wired interview with Emmanuel Macron. Here is a man who understands the world of the algorithm. There are three ways you can tell this. First: he doesn’t talk about trying to lock up access to data – he talks about making data open (with conditions attached – primarily transparency). Second: from a regulatory perspective he focuses on the importance of transparency and shows he understands the dangers of a world where responsibility is delegated to algorithms. Third: he talks about the need for social consent, and how lack thereof is a danger both to society and to the legitimacy (and thus ability to operate) of the commercial operators in the space (I was 7 years ahead of you here, Emmanuel).

As an example, he is opening access to public data on the condition that any algorithms that feed on this data are also made open. This is an issue that I believe could be absolutely critical. As I have said before, algorithms are the genes of a datafied society. In much the same way that some commercial organisations tried (and fortunately failed) to privatise pieces of our genetic code, there is a danger that our social algorithmic code could similarly be removed from the public realm. This isn’t to say that all algorithms should become public property, but they should be open to public inspection. It is the usage of algorithms that requires regulatory focus, not the usage of data.

This is a man who understands the role of government in unlocking the opportunities of AI, but who also recognises the problems government has a duty to manage. It is such a shame that there are so few others (especially in the UK, where the government response is child-like, facile and utterly dismissive of the idea that government has any role to play other than to let ‘the market’ run its course whilst making token gestures of ‘getting tough’).

 

Facebook, the Stasi, KitKats, the NSA and a digital caste system: defining the privacy problem

The GDPR (as played by King Canute) and the rising tide of data (as played by The Sea)

Mark Zuckerberg’s appearance before Congress is a good example of the extent to which politicians and regulators have no idea, to quote The Donald, of “what on earth is going on”. It is not just them: this lack of understanding extends into the communities of thought and opinion framed by academia and journalism. This is a problem, because it means we have not yet identified the questions we need to be asking or the problems we need to be solving. If we think we are going to achieve anything by hauling Mark Zuckerberg over the coals, or telling Facebook to “act on data privacy or face regulation”, we have another think coming.

This is my attempt to provide that think.

The Google search and anonymity problem

Let’s start with Google Search. Imagine you sit down at a computer in a public library (i.e. a computer that has no data history associated with you) and type a question into Google. In this situation you are reasonably anonymous, especially if we imagine that the computer has a browser that isn’t tracking search history. Despite this anonymity, Google can serve you up an answer that is incredibly specific to you and what it is you are looking for. Google knows almost nothing about you, yet is able to infer a very great deal about you – at least in relation to the very specific task associated with finding an answer to your question. It can do this because it (or its algorithms) ‘knows’ a very great deal about everyone or everything in relation to this specific search term.

So what? Most people sort of know this is how Google works and understand that Google uses data derived from how we all use Google to make our individual experiences of Google better. But hidden within this seemingly benign and beneficial use of data is the same algorithmic process that could drive cyber warfare or mass surveillance. It therefore has incredibly important implications for how we think about privacy and regulation, not least because we have to find a way to outlaw the things we don’t like, while still allowing the things that we do (like search). You could call this the Google search problem or possibly the Google anonymity problem, because it demonstrates that in the world of the algorithm, anonymity has very little meaning and provides very little defence.
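
To make the mechanics concrete, here is a minimal sketch in Python (the queries and click counts are entirely invented, and this is an illustration of the principle rather than of how Google actually builds search). The ‘knowledge’ sits in behaviour aggregated across everyone who typed the same thing, which is why an anonymous user can still be served a highly specific answer.

```python
from collections import Counter, defaultdict

# Hypothetical, made-up click log: (query, result_clicked) pairs gathered
# from many different users. No user identifiers are stored at all.
click_log = [
    ("jaguar", "jaguar-cars.example.com"),
    ("jaguar", "jaguar-cars.example.com"),
    ("jaguar", "big-cats.example.org"),
    ("jaguar speed", "big-cats.example.org"),
    ("jaguar speed", "big-cats.example.org"),
    ("jaguar f-type price", "jaguar-cars.example.com"),
]

# 'Lateral' aggregation: one small fact (a click) from a huge number of
# people, grouped by query rather than by person.
clicks_per_query = defaultdict(Counter)
for query, result in click_log:
    clicks_per_query[query][result] += 1

def answer(query: str) -> str:
    """Return the result most people found useful for this exact query."""
    ranked = clicks_per_query.get(query)
    return ranked.most_common(1)[0][0] if ranked else "no aggregate signal yet"

# An 'anonymous' user in a public library gets a very specific answer,
# even though the system knows nothing about them as an individual.
print(answer("jaguar speed"))         # -> big-cats.example.org
print(answer("jaguar f-type price"))  # -> jaguar-cars.example.com
```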

The Stasi problem

When you frame laws or regulations you need to start by defining what sort of problem you are trying to solve or avoid. To date, the starting point for regulations on data and privacy (including the GDPR – the regulation to come) is what I call the Stasi problem. The Stasi was the East German security service, and it was responsible for a mass surveillance operation that encouraged people to spy on each other and was thus able to amass detailed data files on a huge number of East German citizens. The thinking behind this, and indeed the thinking applied to the usage of personal data everywhere in the age before big data, is that the only way to ‘know’ stuff about a person is to collect as much information about them as possible. The more information you have, the more complete the story and the better your understanding. At the heart of this approach is the concept that there exists, in some form, a data file on an individual which can be interrogated, read or owned.

The ability of a state or an organisation to compile such data files was seen as a bad thing, and our approach to data regulation and privacy has therefore been based on trying to stop this from happening. This is why we have focused on things like anonymity, in the belief that a personal data file without a name attached to it becomes largely useless in terms of its impact on the individual to whom the data relates. Or we have established rights that allow us to see these data files, so that we can check that they don’t contain wrong information, or give us the ability to edit, correct or withdraw information. Alternatively, regulation has sought to establish rights for us to determine how the data in the file is used, or for us to have some sort of ownership or control over that data, wherever it may be held.

But think again about the Google search example. Our anonymity had no material bearing on what Google was able to do. It was able to infer a very great deal about us – in relation to a specific task – without actually knowing anything about us. It did this because it knew a lot about everything, which it had gained from gathering a very small amount of data from a huge number of people (i.e. everyone who had previously entered that same search term). It was analysing data laterally, not vertically. This is what I call Google anonymity, and it is a key part of Google’s privacy defence when it comes to things such as Gmail. If you have a Gmail account, Google ‘reads’ all your emails. If you have the Google keyboard on your mobile, Google ‘knows’ everything that you enter into your mobile (including the passwords to your bank account) – but Google will say that it doesn’t really know this, because algorithmic reading and knowledge is a different sort of thing. We can all swim in a sea of Google anonymity right up until the moment a data fisherman (such as a Google search query) gets us on the hook.

The reason this defence (sort of) stacks up is that the only way Google could really know your bank account password is by analysing your data vertically. The personal data file is a vertical form of data analysis. It requires that you mine downwards and digest all the data in order to derive any range of conclusions about the person to whom that data corresponds. It has its limitations, as the Stasi found out, in that if you collect too much data you suffer from data overload. The bigger each file becomes, the more cumbersome it is to read or digest the information that lies within it. It is a small data approach. Anyone who talks about data overload or data noise is a small data person.
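
The difference between the two shapes of analysis can be shown with a single toy event stream pivoted two ways (all the names and events below are invented). The vertical view rebuilds a Stasi-style dossier per person; the lateral view indexes one behaviour across everyone and never needs to open anybody’s file.

```python
from collections import defaultdict

# Invented event stream: (person, signal) observations.
events = [
    ("alice", "searched:flights to lisbon"),
    ("alice", "bought:hiking boots"),
    ("bob",   "searched:flights to lisbon"),
    ("carol", "bought:hiking boots"),
    ("carol", "searched:tent reviews"),
]

# Vertical: a per-person dossier. To learn anything you must read the
# whole file for that person (the Stasi / personal-data-file model).
dossiers = defaultdict(list)
for person, signal in events:
    dossiers[person].append(signal)

# Lateral: an index per signal across everyone. Questions are answered
# without ever opening an individual's file.
index = defaultdict(set)
for person, signal in events:
    index[signal].add(person)

print(dossiers["alice"])                    # everything held on alice
print(index["searched:flights to lisbon"])  # everyone showing this one behaviour
```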

Now, while it might have been possible to get the Stasi to supply all the information it had on you, the idea that you could place the same requirement on Google is ridiculous. If I think about all the Google services I use and the vast amount of data this generates, there is no way this data could be assembled into a single file, and even if it could, it would have no meaning, because the way it would be structured has no relevance to the way in which Google uses this data. Google already has vastly more data on me than the biggest data file the Stasi ever had on a single individual. But this doesn’t mean that Google actually knows anything about me as an individual. I still have a form of anonymity, but this anonymity is largely useless because it has no bearing on the outcomes that derive from the usage of my data.

The KitKat problem

Algorithms don’t suffer from data overload, not just because of the speed at which they can process information but because they are designed to create shortcuts through the process of correlation and pattern recognition. One of the most revealing nuggets of information within Carole Cadwalladr’s exposé of the Facebook / Cambridge Analytica ‘scandal’ was the fact that a data agency like Cambridge Analytica, working for a state intelligence service, had discovered a correlation between people who self-confess to hating Israel and a tendency to like Nike trainers and KitKats. This exercise, in fact, became known as Operation KitKat. To put it another way, with an algorithm it is possible to infer something very consequential about someone (that they hate Israel) not by a detailed analysis of their data file, but by looking at consumption of chocolate bars. This is an issue I first flagged back in 2012.

I think this is possibly the most important revelation of the whole saga because, as with the Google search example, it cuts right to the heart of the issue and exposes the extent to which our current definition of the problem is so misplaced. We shouldn’t be worrying about the Stasi problem, we should be worried about the KitKat problem. Operation KitKat demonstrates two of the fundamental characteristics of algorithmic analysis (or algorithmic surveillance). First, you can derive something quite significant about a person based on data that has nothing whatsoever to do with what it is you are looking for. Second, algorithms can tell you what to do (discover haters of Israel by looking at chocolate and trainers) without the need to understand why this works. An algorithm cannot tell you why there is a link between haters of Israel and KitKats. There may not even be a reason that makes any sort of sense. Algorithms cannot explain themselves and they leave no audit trail – they are the classic black box.
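
Here is a toy illustration of the KitKat problem in Python. The rows, and the correlation buried in them, are entirely fabricated to show the mechanism: a ‘model’ fitted on proxy features (snacks, trainers) will happily score a sensitive attribute it was never explicitly given, and it carries no explanation of why the proxy works.

```python
# Invented survey rows: proxy features plus the sensitive attribute we
# would never collect directly. Any correlation here is fabricated purely
# to illustrate the mechanism, not a claim about real people.
rows = [
    {"likes_kitkat": 1, "likes_nike": 1, "sensitive_attribute": 1},
    {"likes_kitkat": 1, "likes_nike": 1, "sensitive_attribute": 1},
    {"likes_kitkat": 1, "likes_nike": 0, "sensitive_attribute": 1},
    {"likes_kitkat": 0, "likes_nike": 1, "sensitive_attribute": 0},
    {"likes_kitkat": 0, "likes_nike": 0, "sensitive_attribute": 0},
    {"likes_kitkat": 0, "likes_nike": 0, "sensitive_attribute": 0},
]

def rate(feature: str) -> float:
    """Estimate P(sensitive attribute | likes the proxy) from the toy data."""
    hits = [r for r in rows if r[feature] == 1]
    return sum(r["sensitive_attribute"] for r in hits) / len(hits)

# The 'model' is nothing more than a lookup of these learned rates.
# It can score a brand-new person from their snack and trainer likes
# alone, and it cannot say anything about *why* the proxy works.
print(rate("likes_kitkat"))  # -> 1.0 in this fabricated sample
print(rate("likes_nike"))    # -> roughly 0.67 in this fabricated sample
```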

The reason this is so important is that it drives a coach and horses through any form of regulation that tries to establish a link between any one piece of data and the use to which that data is then put. How could one create a piece of legislation that requires manufacturers or retailers of KitKats to anticipate (and either encourage or prevent) data about their product being used to identify haters of Israel? It also scuppers the idea that any form of protection can be provided through the act of data ownership. You cannot make the consumption of a chocolate bar or the wearing of trainers a private act, the data on which is ‘owned’ by the people concerned.

KitKats and trainers bring us neatly to the Internet of Things. Up until now we have been able to assume that most data is created by, or about, people. This is about to change as the amount of data produced by people is dwarfed by the amount of data produced by things. How do we establish rules about data produced by things, especially when it is data about other things? If your fridge is talking to your lighting about your heating thermostat, who owns that conversation? There is a form of Facebook emerging for objects, and it is going to be much bigger than the Facebook for people.

Within this world the concept of personal data as a discrete category will melt away and instead we will see the emergence of vast new swathes of data, most of which is entirely unregulatable or even unownable.

The digital caste problem

A recent blog post by Doc Searls has made the point that what Facebook has been doing is simply the tip of an iceberg, in that all online publishers and owners of digital platforms are doing the same thing to create targeted digital advertising opportunities. However, targeted digital advertising is itself the tip of a much bigger iceberg. One of Edward Snowden’s exposures concerned something known as PRISM. This was (and probably still is) a programme run by the NSA in the US that involves the ability to hoover up huge swathes of data from all of the world’s biggest internet companies. Snowden also revealed that the UK’s GCHQ is copying huge chunks of the internet by accessing the data cables that carry internet traffic. This expropriation of data is essentially the same as Cambridge Analytica’s usage of the ‘breach’ of Facebook data, except on a vastly greater scale. Cambridge Analytica used their slice of Facebook to create a targeting algorithm to analyse political behaviour or intentions, whereas GCHQ or the NSA can use their slice of the internet to create algorithms that analyse the behaviour or intentions of all of us about pretty much anything. Apparently GCHQ only holds the data it copies for a maximum of 30 days, but once you have built your algorithms and are engaged in a process of real-time sifting, the data that you used to build the algorithm in the first place, or the data that you then sift through it, is of no real value anymore. Retention of data is only an issue if you are still thinking about personal data files and the Stasi problem.
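
The point about 30-day retention can be made in a few lines of Python. This is a sketch with invented numbers, using a trivially simple nearest-centroid model as a stand-in for whatever the real systems use: once the model has been fitted, the raw data can be deleted and the real-time sifting carries on regardless, because the value extracted from the data now lives in the model’s parameters.

```python
# Invented training sample: (feature vector, label) pairs. In a real-time
# sifting system these might be message metadata; here they are made up.
training_data = [
    ([1.0, 0.0], 1),
    ([0.9, 0.1], 1),
    ([0.1, 0.9], 0),
    ([0.0, 1.0], 0),
]

# Fit a deliberately simple model: the per-class mean of each feature
# (a nearest-centroid classifier). The only thing retained afterwards
# is two small vectors of parameters.
def fit(data):
    sums, counts = {0: [0.0, 0.0], 1: [0.0, 0.0]}, {0: 0, 1: 0}
    for features, label in data:
        counts[label] += 1
        for i, value in enumerate(features):
            sums[label][i] += value
    return {label: [s / counts[label] for s in sums[label]] for label in sums}

centroids = fit(training_data)

# Delete the raw data: a retention limit is now irrelevant to the model.
del training_data

def score(features):
    """Classify a new observation using only the retained parameters."""
    def distance(centroid):
        return sum((f - c) ** 2 for f, c in zip(features, centroid))
    return min(centroids, key=lambda label: distance(centroids[label]))

print(score([0.95, 0.05]))  # -> 1
print(score([0.05, 0.95]))  # -> 0
```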

This is all quite concerning on a number of levels, but when it comes to thinking about data regulation it highlights the fact that, provided we wish to maintain the idea that we live in a democracy where governments can’t operate above the law, any form of regulation you might decide to apply to Facebook and any current or future Cambridge Analyticas also has to apply to GCHQ and the NSA. The NSA deserves to be put in front of Congress just as much as Mark Zuckerberg.

Furthermore, it highlights the extent to which this is so much bigger than digital advertising. We are moving towards a society structured along lines defined by a form of digital caste system. We will all be assigned membership of a digital caste. This won’t be fixed, but will be related to specific tasks in the same way that Google search’s understanding of us is related to the specific task of answering a particular search query. These tasks could be as varied as providing us with search results, deciding whether to lend us money, or assessing whether we are a potential terrorist. For some things we may be desirable digital Brahmins, for others we may be digital untouchables, and it will be algorithms that determine our status. And the data the algorithms use to do this could come from KitKats and fridges – not through any detailed analysis of our personal data files. In this world the reality of our lives becomes little more than personal opinion: we are what the algorithm says we are, and the algorithm can’t or won’t tell us why it thinks that. In a strange way, creating a big personal data file and making this available is the only way to provide protection in this world, so that we can ‘prove’ our identity (cue reference to a Blockchain solution which I could devise if I knew more about Blockchains), rather than have an algorithmic identity (or caste) assigned to us. Or to put it another way, the problem we are seeking to avoid could actually be a solution to the real problem we need to solve.

The digital caste problem is the one we really need to be focused on.

The challenge

So – the challenge is how to prevent or manage the emergence of a digital caste system. And how to do this in a way which still allows Google search to operate, doesn’t require that we make consumption of chocolate bars a private act or regulate conversations between household objects (and all the other things on the Internet of Things), and can apply both to the operations of Facebook and to those of the NSA. I don’t see any evidence thus far that the great and the good have any clue that this is what they need to be thinking about. If there is any clue as to the direction of travel, it is that the focus needs to be on the algorithms, not the data they feed on.

We live in a world of a rising tide of data, and trying to control the tides is a futile exercise, as Canute the Great demonstrated in the 11th century. The only difference between then and now is that Canute understood this, and his exercise in placing his seat by the ocean was designed to demonstrate the limits of kingly power. The GDPR is currently dragging its regulatory throne to the water’s edge anticipating an entirely different outcome.

P.S. I am talking about this post on the excellent EchoJunction podcast, hosted by Adam Fraser.