Tagged: truth

Truth in Twitterland

Here is a very interesting article by Alan Patrick.  It compares the Google and the Twitter windows on a current news story and proposes that the view through the Twitter window is actually more nuanced and investigative than the rather one-dimensional, or populist, view provided by Google.

This certainly chimes with my own experience.  Some while back I compared the Twitter versus tabloid media view, in relation to the Ryan Giggs / super-injunction fiasco in the UK in 2011.  The conclusion I reached there was that the Twitter view was, again, much more nuanced and far less sensationalist than the view the tabloid press traditionally puts out in these sorts of cases.  Most people were really not that interested in Ryan Giggs’ love life, certainly not to the extent that might justify front-page spreads.  Which is probably why many tabloid journalists are so scornful of ‘the people on Twitter’: Twitter deflates the tabloids’ ability to titillate.

There is a further, more recent example.  Last year the BBC and its Newsnight programme got into a huge amount of hot water over the ‘naming’ of a former Tory politician, Lord McAlpine, as a paedophile at the centre of a child abuse ring.  Lord McAlpine is not a paedophile, and while the BBC did not actually name him, it implied that his was the name heading a list of names ‘circulating on the internet’ – primarily Twitter.  McAlpine himself then went on to instigate legal proceedings against some of those people on Twitter deemed responsible.  This just goes to show how fundamentally untrustworthy and downright evil this whole Twitter-website-internet thing is – one might have thought.

Except – as this story was brewing I went and had a look ‘at Twitter’ to see exactly what was going on.  Now whilst Lord McAlpine’s name certainly came up, along with a whole list of other, frequently ludicrous, suggestions – there was another name which was much more firmly linked to much more specific allegations.  If one had looked at Twitter as a whole, one would not have reached the conclusion that Lord McAlpine was the prime suspect in this case.  I was thus astonished to see the BBC allowing McAlpine’s name to enter the frame on the basis that it was already out there on Twitter, because while some individual tweets may have been suggesting this, a consideration of the collective view of Twitter would have led one to a very different conclusion.  (I shall not name who Twitter saw as the prime suspect, for obvious reasons.)

Thus – the BBC effectively implied that Lord McAlpine was the suspect – and got it wrong.  And evil, untrustworthy Twitter may not have got it right (we shall never know the truth, because the powers that be have dropped the subject like a hot potato), but it didn’t get it as wrong as the BBC did.

The main point of all this is that news in the social digital space cannot be defined in an institutional way any more.  News is becoming a raw material, not a finished product, and the distillation of what is true is shifting from institutions into processes.  You can’t understand Twitter as an institution; you can only understand it as a process.  Twitter (unlike Newsnight) was not purporting to tell me that something was true or not true – it simply provided me with a process that allowed me to reach my own conclusions.  Key to this process working effectively are transparency and the ability to put information in context.  It is what I call the ability to see the whole probability curve of news, and where upon it any individual piece of information sits.

And going back to Alan Patrick’s article, Twitter is much better placed than Google to deliver this – certainly when it comes to news – because it doesn’t attempt to attach a score to a particular piece of information in order to rank it (or define its truthfulness).  Instead it allows you to see the spread of opinion and apply a probability approach.  Google’s strength lies in other areas, where seeing the curve is less important.  Thus Google is good at answering questions such as ‘when to prune raspberries?’, whereas Twitter is better at answering questions such as ‘is this news story really true?’

 

Facts, lies and probability

Politics in the USA has become tainted by lies, or more specifically by the willingness of large sections of the media to manufacture or circulate lies for political ends.  This is because there is no BBC in the USA maintaining a basic standard of rigour in interrogating claims and validating facts, and it is why the BBC is, in my opinion, an institution every British citizen must fight, to their dying breath, to preserve from the assaults of government, media barons and “free” market fundamentalists.

(As an aside: it is no coincidence that the greatest incidence of lying in the British media occurs within the tabloid press, i.e. the area of the media where the BBC doesn’t operate – note the revelations currently tumbling forth from the Leveson Inquiry.)

As a consequence of endemic lying, there is a great deal of focus in the USA on the opportunity for citizens to become involved in fact-checking – note the recent efforts by Jeff Jarvis and Craig Newmark, summarised in this Huffington Post article.  The points that Craig makes are all very good, but I can’t help thinking that the solution he is advocating – a huge database, or network of networks – may prove unworkable, because it represents another form of institution (albeit one managed more collaboratively) to supervise the current institutions that are deemed to be failing.  This seems to swim against the tide of what is happening in social media, where trust is being swept out of institutions and into transparent processes.  Perhaps, therefore, we already have the tools we need – the database already exists; it is the social digital space – and it is more a question of how we design the processes, rather than the technologies, to validate facts.

This brought me back to a slide I presented at a #Phonar workshop at the Coventry University School of Art and Design a couple of weeks ago.  The slide (in all its messy, built-up complexity) is below.

This was an attempt to use the normal distribution curve to explain, or understand, the future of media – or more precisely the future of mediation and fact-checking.  The basic assumption behind it is that the way institutionalised media has worked to date is a reductive process.  It seeks to cut away the facts it sees as not relevant or ‘worthy’ of publication and focus on its own, necessarily restricted, interpretation of what is news or what we need to know.  “All the News That’s Fit to Print”, as the NY Times famously put it – albeit a more accurate rendering might be “all the news that it is profitable to print”.  This is because media space is a precious and expensive resource – there is no room within it to contain everything.  As a result, the institutionalised media focuses on what it perceives to be “the norm”: that which clusters around a median point it has itself set.

But the thing about social media is that it is not restricted – it can contain the entire data set – and the issue therefore is how to create a process that allows us to form a judgement about information in a way that exploits this abundance.  It seems to me that this cannot be a process based on saying “this is right” and “this is wrong”, or on setting an arbitrary median point around which to focus, to the exclusion of everything that falls outside it – that is an institutionalised response.  Rather, it has to be a process that allows us to see where on the curve everything sits – based on how many people support a particular fact or truth (the two are different), and on sufficient transparency to see who those people are.
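As a purely illustrative sketch (the data and claim labels below are hypothetical, not drawn from any real case), the idea of “seeing where on the curve everything sits” can be thought of as tallying how many sources support each competing claim and presenting the whole spread as shares, rather than electing a single winner:

```python
from collections import Counter

def opinion_curve(mentions):
    """Tally competing claims and return each claim's share of all
    mentions, so a reader can see the whole spread of opinion
    rather than a single 'top' answer."""
    counts = Counter(mentions)
    total = sum(counts.values())
    return {claim: count / total for claim, count in counts.items()}

# Hypothetical data: which claim each post links to a story.
posts = ["A", "A", "A", "B", "A", "C", "A", "B"]
curve = opinion_curve(posts)
# The curve shows claim A dominates, but B and C remain visible,
# with transparency back to the underlying posts if one keeps them.
```

The point of the sketch is simply that nothing is discarded: minority claims stay visible on the curve instead of being cut away by an editorial median.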

The example I used to illustrate this, drawn from an earlier part of my presentation, was the recent #superinjunction furore that struck the UK media, in which certain celebrities (e.g. the footballer Ryan Giggs) sought to protect themselves from the intrusive behaviour of the tabloid media via legal injunctions.  A quick examination of Twitter and other social networks revealed that the vast majority of people were actually not interested in Ryan Giggs’ love life.  This, of course, clashed with the agenda of the tabloid media, who wished to splash it in salacious detail across their front pages.  In other words, readers of the tabloid press saw the Ryan Giggs affair as sitting on a very different part of the curve from the tabloid media, who saw it as worthy of acres of newsprint.  This led me to the observation that one of the reasons many tabloid journalists hate social media is that it deflates their ability to titillate.

What we need to focus on, therefore, are the processes that allow us to see where on the curve something sits, rather than classifying it as right or wrong, fact or lie.  We also need to take care about what we call truth.  The comments that follow Craig’s Huffington Post piece demonstrate the tendency for many to equate the opposite of a lie with the truth.  Returning to the original exercise: the opposite of a lie is a fact.  Facts and lies are absolutes, whereas truth is relative.  Democracy is about preserving a world that supports many truths – establishing single truths is the business of fundamentalism.

This insight doesn’t give me the answer – I can’t, as a consequence, design a process that allows us to re-establish trust in information presented to us (by the media or Twitter).  However, I hope it does illustrate the direction of travel.