Tagged: algorithms

Will Big Data kill Vendor Relationship Management?

I have just finished reading Doc Searls’ Intention Economy. And about time too. The book has been out about two years and it is widely recognised as being a Very Important Book. In my defence, I have been following the Vendor Relationship Management (VRM) thing anyway and have even had some marginal contact with the good Doc himself on the issue. So it was more a case of filling in the gaps. For those not already in the know, VRM is positioned as the counterpoint to CRM (Customer Relationship Management). CRM is how brands use data about their customers in order to define the relationship the brand decides it wants to have with the customer; VRM proposes that customers should own and control the data about themselves so that they can define the relationship they want to have with brands.

I can validate that it is indeed a Very Important Book because it not only defines this new and potentially interesting area (VRM) but also because it strays into a wider analysis of the history and operation (and philosophy) of the internet. The issues that it raises here are becoming increasingly important as pressures build to manage, regulate and appropriate the internet in order to make it conform to political or commercial vested interest. In fact, this wider analysis could turn out to be the most important aspect of the book, or perhaps a valid subject for a new book.

The Intention Economy and VRM is something I would very much like to believe in. Trouble is, for me VRM is a bit like God: something I would like to believe in if only I could get the evidence and reality to stack up. There seem to be just too many reasons why VRM (like God) doesn’t or won’t exist. At one level, VRM appears to be overly reliant on a code-based answer. This is probably because Doc Searls himself and many of the current VRM gang come from that world. But the concept that I found most interesting in the book was the idea of the things Doc calls ‘fourth parties’. Fourth parties are organisations that can aggregate customer intentions and thus create leverage and scale efficiencies. This takes us into the realm of community, which rings bells with me since I believe that within a few years almost all relationships between individuals and brands will be mediated by some form of community. In fact, this would be my own take on how the Intention Economy might actually come into being. I think it is the ability to connect individual customers, rather than empower them as individuals, that is likely to present the greatest opportunity to change the rules of the game – as things like TripAdvisor or even Airbnb are starting to demonstrate. However, fourth parties get relatively short shrift in the book, perhaps because they are not a code-based answer.
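To make the fourth-party idea concrete – a purely hypothetical sketch, since the book describes the concept rather than any implementation, and every name and number below is invented – an intention aggregator might pool individual purchase intentions into a single collective offer that vendors could then bid for:

```python
from collections import defaultdict

def aggregate_intentions(intentions):
    """Pool individual purchase intentions by product.

    Each intention is a tuple (customer_id, product, max_price).
    For each product, returns the number of committed buyers and the
    highest price at which every one of them would still buy -- the
    collective leverage a fourth party could take to vendors.
    """
    pools = defaultdict(list)
    for customer, product, max_price in intentions:
        pools[product].append((customer, max_price))

    offers = {}
    for product, bids in pools.items():
        # The group price is the lowest individual ceiling, so that
        # every member of the pool is willing to transact at it.
        group_price = min(price for _, price in bids)
        offers[product] = {"buyers": len(bids), "group_price": group_price}
    return offers

intentions = [
    ("alice", "laptop", 900),
    ("bob", "laptop", 850),
    ("carol", "laptop", 800),
    ("dave", "phone", 400),
]
print(aggregate_intentions(intentions))
```

The interesting design question is exactly the one the book raises: whether leverage comes from the individual’s data (each buyer’s ceiling) or from the connection between buyers (the size of the pool).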

But my greatest area of scepticism, or perhaps fear, for the future of the customer and citizen, stems from the emerging world of Big Data and algorithms. As outlined in my previous post, algorithms suck the power out of the idea of having a personal data repository and make the ownership of this, from a government, brand, customer or citizen perspective largely irrelevant. In the world of the algorithm, your personal data file (i.e. your life) becomes little more than personal opinion. To all intents and purposes your ‘real’ identity is defined by the algorithm and the algorithm’s decision about who you are and how you shall be treated will pay scant attention to any information that is personal to you, other than to use it as a faint, initial signal to acquire ’lock-on’.
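A toy sketch of that ‘lock-on’ dynamic – entirely hypothetical, with made-up signal names and weights – shows how little the declared data can end up mattering once observed behaviour dominates the score:

```python
def algorithmic_identity(declared_profile, observed_signals, weights):
    """Score 'who you are' the way an algorithm might.

    declared_profile: data the individual supplies about themselves
    observed_signals: behaviour the system has recorded about them
    weights:          how much the algorithm trusts each behaviour

    The declared data contributes only a faint initial prior (the
    'lock-on'); recorded behaviour dominates the final score.
    """
    prior = 0.05 * sum(declared_profile.values())  # faint initial signal
    behaviour = sum(weights.get(signal, 0.0) * value
                    for signal, value in observed_signals.items())
    return prior + behaviour

# What you say about yourself...
declared = {"creditworthy": 1.0, "stable_income": 1.0}
# ...versus what the sensors recorded.
observed = {"late_night_purchases": 3.0, "payday_lender_visits": 2.0}
weights = {"late_night_purchases": 0.5, "payday_lender_visits": 2.0}

print(algorithmic_identity(declared, observed, weights))
```

Here the self-declared profile contributes 0.1 to the score while the recorded behaviour contributes 5.5: whatever the individual says about themselves is, to all intents and purposes, noise.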

The problem with algorithms is that (like tanks) they favour governments and corporations. It is hard for a citizen to get a hold of, or be able to use, an algorithmic tank. And if you are standing in front of an algorithmic tank, giving you the rifle and flak-jacket of your own data isn’t much protection. It is why Wall Street is the first place that the world of the algorithm has really taken hold – it could afford the best geeks. And as Wall Street is showing, the world of the algorithm tends towards a very dark and opaque sort of place – about as far removed from the sun-lit commons of open-source code sharing as it is possible to be.

However, create the opportunity to connect a million people with rifles and flak-jackets to confront one algorithmic tank, and the odds get better. You may even be able to form a fourth party which can create its own tank, or at least some effective anti-tank weapons.

So, I guess my message to Doc Searls and the VRM gang would be: don’t lose faith in the idea of VRM and the Intention Economy as a destination, but think again about the route. Build on the idea of fourth parties and focus on community and connection, rather than tools and code, and recognise that CRM is about to be swept away as brands and governments learn how to roll out the algorithmic tanks.

Astonishingly important article by Evgeny Morozov

This is an astonishingly important article by Evgeny Morozov, published yesterday in The Observer. It starts to paint the picture of the world of the algorithm, drawing together the important themes that define what it is we need to be thinking and talking about so that we don’t sleep-walk into this new world – the paradoxical world where an individual’s connectedness (to other individuals and to things) is used as a mechanism of isolation and control.

As I have said previously, the algorithm is the most powerful instrument of social control invented since the sword (and current systems of regulation are powerless against it).

http://www.socialmediatoday.com/content/sword-printing-press-and-algorithm-three-technologies-changed-world

http://richardstacy.com/2014/05/15/algorithms-growth-sensorship/

http://richardstacy.com/2014/06/27/facebook-just-dark-pool/

Is Facebook just a ‘dark pool’?

Wednesday saw an important announcement from the New York Attorney General. He announced that Barclays Bank is to be prosecuted concerning their operation of a ‘dark pool’. A dark pool is basically a private trading area which a bank can operate on behalf of its clients, or anyone else to whom the bank grants access. It is dark because it doesn’t operate to the same level of transparency as conventional exchanges. The accusation is that Barclays allowed high frequency traders into their dark pool and allowed these traders to prey on the trading activity of the other investors within the pool, including Barclays’ own clients.

This is an astonishingly important announcement for two reasons: first, it is important for Wall Street; second, it is important for Facebook, Google, Big Data, data protection, the Internet of Things and thus, quite possibly, the future of humanity itself.

First, Wall Street: What is happening within Barclays’ dark pool is almost certainly similar to what is happening in the dark pools operated by almost all the major banks. It is also pretty similar to what is happening in the ‘light pools’ that constitute the official Wall Street stock exchanges (just read Michael Lewis’s ‘Flash Boys’, published a few weeks ago, if you want validation of this). This will therefore be a test case and, rather than go after one of the Big Beasts, the Attorney General has sensibly chosen to pick off an already wounded juvenile. Barclays is a foreign bank, it is a peripheral player (albeit one with a very large dark pool) and it is already discredited by its actions in rigging inter-bank lending rates. It is therefore easy prey, but bringing it down will provide the ammunition necessary to tackle, or at least discipline, the major players. You can bet that there are a lot of people on Wall Street right now really focused on how this case plays out, even if the mainstream media has yet to really wake up to its significance.

But this isn’t just about Wall Street. What is playing out here are the first attempts to understand and regulate the world of the algorithm. High frequency trading is driven by algorithms and exploits one of an algorithm’s principal characteristics, which is its speed in processing large amounts of data. High frequency trading illustrates the power of algorithms and also their potential for abuse. High frequency trading is not illegal (yet), but it is abusive. It is not illegal only because the law-makers don’t really understand how algorithms work and no-one has worked out a way to stop the people who do understand them from using them in an abusive way. Interestingly, the Attorney General has not tried to establish that high frequency trading is illegal, rather that Barclays misrepresented its dark pool as offering protection from the abusive behaviour of high frequency traders.
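The speed point can be reduced to simple arithmetic. This is not how any real matching engine works – the prices, latencies and one-tick markup below are all invented – but it shows why microseconds translate into profit:

```python
def front_run(order_price, market_price, hft_latency_us, exchange_latency_us):
    """Toy model of latency arbitrage.

    If the fast trader sees an incoming buy order before the exchange
    matches it, it can buy at the current market price and resell to
    the slower buyer one tick higher, pocketing the difference.
    Returns the fast trader's profit per share.
    """
    if hft_latency_us < exchange_latency_us and order_price > market_price:
        hft_buy = market_price
        resale = market_price + 0.01   # one tick worse for the slow buyer
        return round(resale - hft_buy, 2)
    return 0.0

# The slow investor's order takes 500 microseconds to reach the matching
# engine; the co-located algorithm reacts in 50. A cent per share, at
# millions of shares a day, is the whole business model.
print(front_run(order_price=10.05, market_price=10.00,
                hft_latency_us=50, exchange_latency_us=500))  # → 0.01
```

Reverse the latencies and the profit disappears, which is why the competition in this world is for speed rather than for insight.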

Algorithms colonised Wall Street for two reasons: first, Big Data was already there in the form of the vast amount of information flowing through the financial markets; and second, Wall Street could afford to pay top-dollar for the relatively small group of geeks who actually understand algorithms. But this is about to change. The pool of geeks is expanding and pools of data, large enough for complex algorithms to operate within, are now developing in many other places, driven by the growth of Big Data and the Internet of Things.

Which brings us to Facebook. In many ways Facebook is a dark pool, except the data within it isn’t data about financial trading, it is data about human behaviour. Now I don’t want to suggest that Facebook is trading this information or necessarily inviting access to this data for organisations who are going to behave in an abusive or predatory way. In a somewhat ironic sense of role reversal, the PRISM affair has revealed that the regulators (i.e. the NSA and the UK’s GCHQ) are the equivalent of the high frequency traders. They are the people who want to get into Facebook’s dark pool of data so they can feed it through their algorithms, and Facebook has been doing what (little) it can to resist their entry. But of course there is nothing at the moment to really stop Facebook (or for that matter Google or Twitter) from allowing algorithms into their data pools. In fact, we know they are already in there. While there may not be abusive activity taking place at the moment, there is nothing to stop abusive behaviour from taking place, other than the rules of integrity and behaviour that Facebook and Google set for themselves or those that might be set by the people Facebook or Google allow into their pools. Remember also that Facebook needs to generate sufficient revenue to justify a valuation north of $80 billion – and it is not going to do that simply through selling advertising, it is going to do that by selling access to its pool of data. And, of course, the growth of Big Data and the Internet of Things is creating vast data pools that exist in far more shadowy and less obvious places than Google and Facebook. This is a recipe for abusive and predatory behaviour, unless the law-makers and regulators are able to get there first and set out the rules.

Which brings us back to New York versus Barclays. It is not just Wall Street and financial regulators who need to focus on this: this could prove to be the opening skirmish in a battle that will come to define how society will operate in the world we are now entering – the world of the algorithm. I can’t lay claim to understanding how this may play out, or how we are going to regulate the world of algorithms. The only thing I do know is that the abusive use of algorithms flourishes in the dark and the daylight of transparency is their enemy. Trying to drive a regulatory stake through the heart of every abusive algorithm is a near self-defeating exercise – far better is to create an environment where they don’t have a competitive advantage.

 

Algorithms and the growth of sensorship

Here is a quick thought.  As I have previously said, I think we are moving from the age of the printing press into the age of the algorithm.  Printing led to the growth of censorship whereas algorithms are going to lead to the growth of sensorship.

I was prompted to write this today because of the announcement that one of the UK’s largest electronic goods retailers is linking up with one of the UK’s largest mobile phone retailers.  Electronic goods are basically forms of sensor that monitor human behaviour via how they are used (note: there are now even cameras in Barbie dolls).  Mobile phone retailers basically sell connection to the internet and also provide mobile handsets, which are the most comprehensive form of personal sensor currently out there.  I heard the CEO of the new company on Radio 4’s Today programme make no bones about the fact that the underlying logic behind the deal was the growth in The Internet of Things (with electronic things being the most obvious and easy of such things to connect to the internet).

We are just at the start of a form of data detection landgrab – the Scramble for Data, if you like.