Lies on the line: why MP Lucy Powell’s bill won’t solve the problem of online hate or fake news

Summary: Institutionalised forms of content regulation rest on the realistic expectation that all published content can be monitored and made to conform. If you can’t establish this expectation, this form of regulation becomes instantly redundant. That is why applying the old, publication-based regulatory model to Facebook et al is a distraction that only serves to make politicians feel good about themselves while actually increasing the dangers of online hate and fake news.

In the UK the phrase ‘leaves on the line’ is firmly established within the national conversation as an example of an unacceptable corporate excuse. It is an excuse rail operators use around this time of year to explain delays to trains. The reason it is deemed unacceptable is that leaves fall off trees every year, and have done so for quite some time, so there is a realistic expectation that this is a problem rail operators should have cracked by now.

Which brings me to another problem where an expectation is building of a corporate fix: fake news and all forms of inappropriate online content/behaviour. This has clearly become something of a big issue in recent times, to the extent that governments are under considerable pressure to Do Something. And the Thing they are mostly looking to Do is to turn around and tell Facebook, Google, Twitter et al that they need to Do Something. In essence, what governments are looking for them to do is assume a publisher’s responsibility for the content that appears on their platforms. The most recent example of this is the private member’s bill just introduced by the UK Member of Parliament Lucy Powell (of which more in a moment).

You can see why this is a popular approach. In the first instance, it allows government to deflect responsibility away from itself or, at the very least, create an imagined space where established regulatory approaches can continue to have relevance. It is an approach which finds favour with the traditional media, which has to operate under conventional publication responsibilities and resents the fact that these new players are eating their advertising lunch while avoiding such constraints. To an extent, it even plays to the agenda of Facebook and Google themselves, because they know that in order to attract the advertising shilling, they need to present themselves as a form of media channel, if not a conventional form of publication. Facebook and Google also know that, despite all the regulatory huffing and puffing, governments will not be able to effectively deploy most of the things they are currently threatening to do – because the space they are trying to create is a fantasy space.

The trouble is – this approach will never work. Worse than that, it is dangerous.

It won’t work because it is an approach based on assumptions that no longer hold true. The old assumptions depend on the fact that publication used to be an expensive business. This is why there were relatively few, institutionalised publishers, and the content they produced was similarly restricted. In that environment it is reasonably easy to control content through forms of institutionalised regulation. If someone publishes something deemed inappropriate, this is easy to see, and it is also easy to effect a form of redress against the publisher. Publishers are highly visible, established corporate entities, readily available for prosecution and punishment through established legal processes or through the legal power to withdraw their licence to operate.

Institutionalised content regulation is based on the realistic expectation that publishers can assume responsibility for all the content they publish. But this isn’t a realistic expectation any more, both because online platforms don’t actually produce the content and because of the volume problem. No traditional form of editorial control can cope with the flood of content uploaded to these platforms every second. The response from governments has been to demand a technological (i.e. algorithmic) fix, the logic being that if a platform can use algorithms to filter content and to screen and select users as targets for ads, it should be possible to repurpose those algorithms to screen for inappropriate content or behaviour.

There are problems here, though. First, screening for ad targeting, where you are searching for a relatively limited range of ‘digital cues’, is much easier than the more sophisticated analysis required to assess and interpret the detailed nature of content. There is also less at stake: a poorly targeted ad is not an issue of particular consequence for anyone concerned. But even if the technology were available (and we must assume that at some point it will be), this raises the question of to whom we delegate responsibility. I don’t think it is a good idea to give this responsibility to profit-seeking corporations. These sorts of decisions need to be made by democratically accountable governments (or at least made by algorithms created in a transparent way by such governments).
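To make the asymmetry concrete, here is a minimal, purely illustrative sketch (in Python, using made-up cue lists and placeholder tokens, not any platform’s actual system) of the difference between matching a handful of advertising cues and trying to judge the meaning of a post:

```python
# Illustrative only: hypothetical cue lists and tokens, not a real targeting or moderation system.
AD_CUES = {"running", "trainers", "marathon"}  # assumed interest keywords for an ad segment


def matches_ad_segment(user_terms: set) -> bool:
    """Ad targeting as a cheap set intersection against a small list of known cues."""
    return bool(AD_CUES & user_terms)


def looks_like_inappropriate_content(post: str) -> bool:
    """Naive content screening: flag posts containing words from a placeholder blocklist.

    Real abuse hides behind irony, context and coded language, so a filter
    like this produces both false negatives and false positives at scale.
    """
    blocklist = {"placeholder_slur", "placeholder_threat"}  # stand-in tokens, not a real lexicon
    return any(word in blocklist for word in post.lower().split())


print(matches_ad_segment({"marathon", "yoga"}))                     # True: one cue matched
print(looks_like_inappropriate_content("They know what they did"))  # False: nuance is invisible
```

The ad-matching function is a cheap set intersection; the ‘screening’ function can only catch words on a list, which is precisely why irony, context and coded language slip straight past it.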

But the real danger lies in establishing a precedent. Under the old, institutionalised regulatory model, as soon as you assume responsibility for one thing, you have to assume responsibility for everything. Traditionally, the simple act of publication conferred a form of status on that which was published. It is what I have called the sanctity of publication. But you can’t confer this status selectively. You either do it for everything that comes from a platform (which, as we have seen, is impossible) or you do it for nothing. If you selectively sanctify or sanitise content according to one set of criteria, the status this confers will rub off on everything else.

It is a bit like having a water filter that only filters 10 percent of the water. It is far better to establish a very clear and transparent divide between content which comes with the protection assumed by traditional publication (guaranteed 100 percent filtered water) and that which does not (unfiltered water, drink at your own risk). You must not muddy the waters, and this is why it is dangerous to label the likes of Facebook and Google as publishers. I would suggest this is mostly done by people who have a vested interest in them being seen as such – either those looking to throw a spanner in their business models and lower the competitive threat they pose, or those who have a ready-made set of institutionalised, publication-based regulatory templates they wish to impose in order to be seen to be ‘doing something’.

Institutionalised forms of content regulation rest on the realistic expectation that all the content can be monitored and made to conform. If you can’t establish this expectation, this form of regulation becomes instantly redundant. End of.

This takes us back to Lucy Powell’s private member’s bill. Lucy explains what the bill is about in a piece in the Guardian. The bill focuses specifically on Facebook groups (and by extension all online forums) and looks to stamp out those groups which are used to spread hate and forms of extremism. It is a very reasonable argument and it is hard to disagree with the problems and issues she identifies. However, the sting in the tail is contained in the last paragraph, specifically the statement “by establishing legal accountability for what’s published in large online forums, I believe we can force those who run these echo chambers to stamp out the evil that is currently so prominent.” Problem number one is the idea of legal accountability for publication. As already mentioned, as soon as you start to establish this accountability for one set of Facebook groups, you confer the status and protections it implies across all Facebook groups – where in fact no such status exists.

Problem number two is that this is an example of narrowing the problem to fit a presumed competence or remit. This is an issue I came across some years back when speaking at a conference of national advertising regulators, and it amounts to redefining the boundaries of the problem until it fits within the scope of whatever regulatory tools you have to manage it.

Thus, while you can’t monitor everything out there and ban inappropriate content, and you can’t even monitor all the content on Facebook, there is a realistic expectation that you could identify most of the groups that are deemed inappropriate and then… BAN THEM! To paraphrase the Queen song, “can anybody find me somebody to ban?” We are simply reducing the scale of the problem to the point at which there can exist a realistic expectation that we can make the old rules apply. This might be a sensible approach if it could progressively lead to a solution to the wider problem – but it doesn’t. This approach isn’t scalable: there is nothing within it that might be useful in solving the wider problem. All it does is make us feel better about ourselves and convince us that we are ‘doing something’.

This isn’t to say that we should allow the digital space that operates outside the constraints of traditional publishing to become a lawless free-for-all, or that we should let Google and Facebook off the hook – it is more a case of finding the right hook to hang them from. The law can, and should, still apply, and there are plenty of ways this (and other forms of constraint and regulation) can operate, provided these are based on assumptions that reflect the way the digital space operates.

The first thing to recognise is that a legal or regulatory process can’t (and probably shouldn’t) catch everything. There are plenty of existing examples of laws whose effectiveness doesn’t depend on their ability to catch all transgressors. Think about speeding. We have all broken speeding laws, and probably do so to a limited extent every time we drive a car. But that doesn’t mean we are all reckless drivers, nor does it diminish the collective understanding of the real dangers to life that speeding – especially excessive speeding – poses. We know that speed restrictions exist, and we know about and respect the reasons they exist. Indeed, through the democratic process we will have participated in their creation and maintenance. We also know that there is a possibility that if we speed we will be caught and prosecuted, and that excessive speeding will attract a proportionately severe response.

The law here exists as part of a much broader process of social consent which ensures that driving isn’t a reckless free-for-all. The important word here is process. As I have said many times before, the defining characteristic of the digital age is the shift of trust from institutions into processes. We no longer trust information purely on account of who (the channel) brought it to us; we trust it only insofar as we trust how it came to us. We are only going to solve the problem of inappropriate information through a process that embeds legal regulation within a much wider framework of social consent and collective participation.

Speeding is also an interesting example because speeding regulation was devised in an era when it was not realistic to identify and prosecute every instance of transgression. But now we can do this if we want to. If you have Google Maps activated, Google knows when you speed. Governments could easily ask for this information, but they don’t. Governments realise that you could only operate such a system if driving became totally automated and any element of human intervention or discretion were removed from the process. Driverless cars facilitate the imposition of absolute speeding restrictions – as we shall soon see. However, online hate or content production is never going to become an entirely automated process – even though we know automated hate or news bots already exist.

Wikipedia is a good example of process-based regulation. Wikipedia has strict rules, but they are not rules about content; they are rules about process. And at the heart of this process is transparency. Wikipedia has established a system which ensures that the method by which any individual entry is defined is subject to complete transparency and a form of collective consent. In effect, it has designed a method whereby we can place things in their correct context – we can see where they sit ‘on the line’.

(Update 19/09/18: Jeremy Epstein has just published a piece on how blockchain / crypto is already being used within the publishing process. Blockchain is a form of community-based trust process and presents another example of creating context and using collective processes to see where something sits ‘on the line’ – albeit this is very much in its infancy at the moment).

The line I am talking about here is what I call the probability curve of news. This idea is roughly based on the Big Data concept of n=all, i.e. no longer needing to extract a representative sample from a data set, because we now possess the ability to encompass the entire data set. Traditional publication adopted a small data approach, in that publishers established themselves as institutions which extracted what they believed to be a representative sample of the ‘news’ out there. “All the news that’s fit to print”, as the New York Times famously put it (or all the news that’s profitable to print, as it is in reality). Of course, each institution’s definition of news varied according to forms of political, economic or social bias – but these biases were relatively transparent, so citizens were aware of them and could decide how, or whether, to consume that news. And, of course, beyond these biases there still existed a form of legalised regulatory filter.

Rather than see the volume of digital ‘publication’ as the problem, perhaps we need to see it as an asset, because of the ability we now have to apply a form of n=all and generate a form of crowd (or algorithmic) intelligence. Rather than try to hide things, paring information back to a sanitised and sanctified data set, we can try to position things so that we can see where they sit and how they got there, and so that information cannot misrepresent itself.

Take Holocaust denial. We can’t, and shouldn’t, deny that Holocaust denial exists. In fact, knowing that it is out there, and having the ability to see where it is and whether it is growing, is incredibly important. But what is also important is that we don’t allow Holocaust deniers to create status for their beliefs – either by borrowing from an assumed sanctity of publication or by suggesting that their views sit anywhere on the curve other than at the most extreme end, well beyond the limits of any form of human decency. Holocaust denial becomes most dangerous when it can find those little chinks in the sewers through which it can crawl up and start to insinuate itself into the realm of accepted human society.

By trying to pretend that everything out there on Facebook, Twitter or Google is ‘published’ and thus subject to the regulatory framework (and protections) of publication, we are encouraging the creation of a grey area where Vladimir Putin and Holocaust deniers can flourish. The laws of publication are only effective when certain conditions can be met, primarily the realistic expectation of detecting all instances of transgression, and these conditions do not, and will never, exist in the digital space.

The focus, from a regulatory perspective, needs to be on exposure and transparency, not on trying to create sanitised enclosures. Rather than assume that social media provides the “oxygen of publicity” for extremist views, we should see social media as an opportunity to expose these views to the oxygen of probability. Rather than suggest that profit-seeking corporations should assume the mantle of global police forces, we need to enlist their assistance in the quest for transparency. Fake news spread by the Russians stops being a problem, not when it is banned, but when it is forced to be tagged #fakenewsfromtherussians. Rather than assume the answer is to ban the echo chambers, perhaps it would be a better idea to ensure that the echo chambers have glass walls and spotlights shone upon them – and then see how well they flourish. And if they do flourish, we can then focus our attention on them through established legal processes.

Facebook shouldn’t be made to take responsibility for stamping out online hate, but perhaps it should be made to allow algorithms and processes designed by democratically elected governments to wander through its data sets looking for what needs to be exposed.

We don’t have the necessary tools and processes to do this yet, but it is relatively easy to identify a direction of travel, and there are useful examples out there where this is already working (Wikipedia). This is also an approach with scalable potential. Resuscitating the old rules of institutionalised publication offers none of these things.

 
