Tagged: Google

Google: the United States of Data

A couple of weeks ago I stumbled across something called Google Big Query and it has changed my view on data. Up until that point I had seen data (and Big Data) as something both incredibly important and incredibly remote and inaccessible (at least for an arts graduate). However, when I checked out Google Big Query I suddenly caught a glimpse of a future where an arts graduate can become a data scientist.

Google Big Query is a classic Google play in that it takes something difficult and complicated and rehabilitates it into the world of the everyday. I can’t pretend I really understood how to use Google Big Query, but I got the strong sense that I wasn’t a million miles away from getting that understanding – especially if GBQ itself became a little simpler.

And that presents the opportunity to create a world where the ability to play with data is a competence that is available to everyone. Google Big Query could become a tool as familiar to the business world as PowerPoint or Excel. Data manipulation and interrogation will become a basic business competence, not just a rarefied skill.
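To give a flavour of what that might look like in practice, here is a minimal sketch (my example, not Google’s) of the kind of query a curious non-specialist could run from Python against the public Shakespeare word-count sample table that Google ships with the service. It assumes the google-cloud-bigquery client library and a Google Cloud project with billing set up; the query itself is purely illustrative.

```python
# A minimal sketch of querying Big Query from Python, assuming the
# google-cloud-bigquery client library and default Google Cloud credentials.
from google.cloud import bigquery

client = bigquery.Client()  # picks up your project and credentials

# Standard SQL against the public Shakespeare sample table:
# which words appear most often across all of the plays?
sql = """
    SELECT word, SUM(word_count) AS total
    FROM `bigquery-public-data.samples.shakespeare`
    GROUP BY word
    ORDER BY total DESC
    LIMIT 10
"""

for row in client.query(sql).result():
    print(row.word, row.total)
```

That is roughly the level of difficulty we are talking about: a dozen lines and a bit of SQL, not a computer science degree.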

The catch, of course, is that this opportunity is only available to you once you have surrendered your data to the Google Cloud (i.e. to Google) and paid for an entry visa. As it shall read at the base of the Statue of Googlability that marks the entry point to the US of D:

“Give me your spreadsheets, your files,
Your huddled databases yearning to breathe free,
The wretched data refuse of your teeming shore.
Send these, the officeless, ppt-tossed, to me:
I lift my algorithms beside the (proprietary) golden door.”

And the rest, as they say, shall be history (and a massive future revenue stream).

Truth in Twitterland

Here is a very interesting article by Alan Patrick.  It compares the Google and the Twitter windows on a current news story and proposes that the view through the Twitter window is actually more nuanced and investigative than the rather one-dimensional, or populist, view provided by Google.

This certainly chimes with my own experience.  Some while back I compared the Twitter versus tabloid media view, in relation to the Ryan Giggs / super injunction fiasco in the UK in 2011.  The conclusion I reached here was that the Twitter view was, again, much more nuanced and far less sensationalist than the view the tabloid press traditionally put out in these sorts of cases.  Most people were really not that interested in Ryan Giggs’ love life, certainly not to the extent that might justify front-page spreads.  Which is probably why many tabloid journalists are so scornful of ‘the people on Twitter’, because Twitter deflates the tabloids’ ability to titillate.

There is a further, more recent example.  Last year the BBC and its Newsnight programme got into a huge amount of hot water over the ‘naming’ of a former Tory politician, Lord McAlpine, as a paedophile at the centre of a child abuse ring.  Lord McAlpine is not a paedophile and, while the BBC did not actually name him, it implied that his name was the one heading a list of names that were ‘circulating on the internet’ – primarily on Twitter.  McAlpine himself then went on to instigate legal proceedings against some of those people on Twitter deemed responsible.  This just goes to show how fundamentally untrustworthy and downright evil this whole Twitter-website-internet thing is – or so one might have thought.

Except – as this story was brewing, I went and had a look ‘at Twitter’ to see exactly what was going on.  Now whilst Lord McAlpine’s name certainly came up, along with a whole list of other, frequently ludicrous, suggestions – there was another name which was much more firmly linked to much more specific allegations.  If one had looked at Twitter as a whole, one would not have reached the conclusion that Lord McAlpine was the prime suspect in this case.  I was thus astonished to see the BBC allowing McAlpine’s name to enter the frame on the basis that this was already out there on Twitter, because while some individual tweets may have been suggesting this, a consideration of the collective view of Twitter would have led one to a very different conclusion.  (I shall not name who Twitter saw as the prime suspect, for obvious reasons.)

Thus – the BBC effectively implied that Lord McAlpine was the suspect – and got it wrong.  And evil, untrustworthy Twitter may not have got it right (we shall never know the truth because the powers that be have dropped this subject like a hot potato), but it didn’t get it as wrong as the BBC did.

The main point, from all of this, is that news in the social digital space cannot be defined in an institutional way any more.  News is becoming a raw material, not a finished product, and the distillation of what is true is shifting from institutions into processes.  You can’t understand Twitter as an institution; you can only understand it as a process.  Twitter (unlike Newsnight) was not purporting to tell me that something was true or not true – it simply provided me with a process that allowed me to draw my own conclusions.  And key to this process working effectively is transparency and the ability to put information in context.  It is what I call the ability to see the whole probability curve of news, and where on that curve any individual piece of information sits.

And going back to Alan Patrick’s article, Twitter is much better placed to deliver against this than Google – certainly when it comes to news – because it doesn’t attempt to attach a score to a particular piece of information in order to rank it (or define its truthfulness).  Instead it allows you to see the spread of opinion and apply a probability approach.  Google’s strength is in other areas, where seeing the curve is less important.  Thus Google is good at answering questions such as ‘when to prune raspberries?’ whereas Twitter is better at answering questions such as ‘is this news story really true?’
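To make the ‘probability curve’ idea concrete, here is a rough sketch of the kind of process I mean. The tweets and names below are invented purely for illustration; the point is that instead of trusting any single tweet or any single ranked result, you tally how often each competing claim appears across a sample and look at the spread.

```python
# A rough, illustrative sketch of "seeing the probability curve" of a story:
# tally how often each competing claim (here, each placeholder name) appears
# across a sample of tweets, rather than trusting any single tweet.
# The tweets and names are invented for illustration.
from collections import Counter

sample_tweets = [
    "Everyone is saying it's Lord A #scandal",
    "The name I keep seeing linked to the specific allegations is Mr B",
    "Heard it's Lord A, or maybe Mr C?",
    "Mr B again - and this time with dates and places attached",
    "No idea, but Mr B keeps coming up",
]

names = ["Lord A", "Mr B", "Mr C"]

mentions = Counter()
for tweet in sample_tweets:
    for name in names:
        if name.lower() in tweet.lower():
            mentions[name] += 1

total = sum(mentions.values())
for name, count in mentions.most_common():
    print(f"{name}: {count}/{total} mentions ({count / total:.0%})")
```

Crude as it is, this is a process rather than a verdict: it shows you where on the curve each claim sits, and leaves the conclusion to you.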

 

First TweetDeck, now Google Reader

Google is ditching Google Reader.  This is a new move for Google, because while it has a history of ‘sunsetting’ various initiatives, these have generally been suns that have failed to rise very high in the sky – Wave, Buzz, Sidewiki etc. (Sidewiki? I hear you say – exactly).  Google Reader, on the other hand, was a pretty well-established part of the social media firmament.

This has a couple of implications.  One, which is being much discussed, is the impact such a move has on confidence in Google’s products as a whole.  If things that work well and are popular get killed off, how will this affect enthusiasm to get behind both existing and new products?

The other is that this sends a clear message as to where Google is headed – which is towards the cloud and data harvesting.  The problem, from Google’s perspective, is that Reader was a tool that helped people manage information: it yielded very little information about the people who were using it.

This is all part of what I think is a worrying trend.  The tools that help people create and manage their social media world are being sacrificed on the altar of creating cloud-based mega-worlds within which people are managed (Chrome is Google’s brand name for its version of this world).  This is driven by the need, or expectation, that social media tools or platforms have to deliver a level of revenue per user that way outstrips the cost per user of providing the tool or platform.  Ultimately this view of the social media business model is unsustainable (just go back to David Ricardo’s theory of marginal costs and revenues to work this out) – but people are out to make as much money as possible before this reality kicks in.

Facebook Graph Search: why this could be so important to the future of Big Data

Last week Facebook launched Graph Search.  This is an attempt to turn Facebook into Google – i.e. make it a place where people go to ask questions, but with the supposed added bonus that the information you receive is endorsed by people you know rather than people you don’t.

This is a very important step, not just for Facebook, because it could come to be understood as one of the critical opening skirmishes in the Battle of Big Data.  How it plays-out could have enormous implications for the commercial future of many social media properties, including Google.

This is how the Battle of Big Data squares up.  On the one hand you have platforms, such as Google and Facebook, amassing huge behavioural data sets based on information that users give out through their usage of these infrastructures.  Googlebook then sells access to this data gold mine to whoever wants it.  On the other hand you have the platform users, who, up until this point, have been relatively happy to hand over their gold.  The reason for this is that these users see this information as being largely inconsequential, and have no real understanding of its considerable value or the significant consequences of letting an algorithm know what they had for lunch.  The fisticuffs begin when these users start to understand these consequences – because in most instances, their reaction is to say “stop – give me back control over my data.”

There is an enormous amount riding on this.  If users start to demand to repatriate, or have greater control over, their data – this delivers hammer blows to the commercial viability of Googlebook-type businesses, which are either making huge amounts of money from their existing data goldmine, or have valuations based on the future prospect of creating such goldmines.  It also starts to open up the field for new platforms that make data privacy and control a fundamental part of their proposition.

Initial reports from the field are not encouraging (for Facebook).  Immediate concerns were raised about privacy implications, which Facebook had to address (see this Mashable piece), along with significant negative comment from the user community – as reported in this Marketing Week article.  See also this further analysis from Gary Marshall at TechRadar.  It will be very interesting to see how this plays out.

From another perspective, I think this announcement illustrates what Facebook believes is its advantage over Google – i.e. its sociability and the fact that it can deliver information that is endorsed by people that you know.  The interesting thing about this is that the power of social media lies in its ability to create the processes that allow you to trust strangers.  The value of the information can therefore be based on the relevance or expertise of the source – not the fact that they are a friend.  Google is the master of this in a largely unstructured way, and services such as Amazon or even TripAdvisor can deliver this via a more structured process.  Facebook can’t really do this, because it neither has Google-level access to enough broad-spectrum data, nor does it have processes relevant to specific tasks (TripAdvisor for travel – Amazon for product purchase).

Is Google favouring Google Groups in Discussion search?

I am doing some research for a client, identifying conversations and communities, using Google’s search-by-Discussions option as just one of the tools.  I was finding that Google Groups seemed to rank very highly here and wondered if this was just a quirk of the (relatively specialist) subject matter I was investigating – and so an interesting insight to report to the client.  But then, switching topics, I found the same thing – a high prevalence of Google Groups in Google Discussion search.

Is Google therefore artificially boosting the rank of content in Google Groups, over and above other forums or platforms, in order to increase the attractiveness of Google Groups as a platform for group or community formation?  Or is what I have found just a chance effect?  It would be worth taking a closer look at this – which is why I have flagged it for Eli Pariser, author of the excellent The Filter Bubble and expert Google watcher.

Google+ – a solution in search of a problem

First I must say that I am desperate to like Google+.  I really want it to succeed because I like Google. I find many of their products fantastically useful (Gmail, Maps, Android, Calendar, Docs). I also trust Google (within the limits imposed by the fact that it is a listed corporation).  I also believe that the world of the social media citizen is desperate for a breakthrough tool that can start to impose some order on the management of your social media world – and Google seems to be the company best placed to do this.

And – the good news is I do like Google+.  In the same way that Apple have worked out how to do ‘beautiful straight from the box’ for devices, Google have created something which has that same appeal in terms of a platform (something that will probably only get better as they iron out the wrinkles).  Google+ has that sort of playability that makes you want to use it.

But here is where the doubts start to creep in.  You want to use it, but to do what exactly?

Google versus Facebook: a battle for social consent

The recent launch of Google+ has prompted much commentary on the battle between Google and Facebook and the need for Google to establish a foothold in the social space where people, rather than algorithms, do the work.  Google+ has still not gone on general release, but the consensus seems to be that it is a good product which stands a better chance of success than Buzz or Wave – Google’s previous ‘Big Social Thingies’.  The smart money is saying that it might not kill Facebook, but it could kill Twitter.  Ultimately though, it doesn’t matter what the digerati think: a social tool only becomes relevant when it secures mass adoption, or, as Clay Shirky has put it – tools only become socially interesting when they become technically boring.

All this speculation has prompted me to think again about the whole Google versus Facebook battle and conclude that we are missing a trick here.  This isn’t simply that the business model for both companies is based on the assumption that both are forms of media and thus advertising platforms, when in reality Facebook in particular is more akin to an infrastructure (as previously blogged here and here in relation to LinkedIn).  It extends to the fact that society as a whole has not developed a form of social consensus around the business models of Facebook and Google (et al).  Basically Google and Facebook have not yet acquired a social licence to operate and, potentially, may not be able to secure such a licence.

This may sound a pretty abstract concern and I can bet  considerations of social consensus have not worried the awfully clever chaps at Goldman Sachs when they have been devising their models for valuing Facebook.  But I think they should be worried about it, and this is why.

There is a form of social consensus that has developed around the business model of the traditional media.  This is based around the recognition that being a conventional media business involves a lot of cost – not just in making the content, but because distributing the content is expensive.  This high cost of distribution is what actually creates the high cost of producing the content – it has to be high quality / mass interest in order to make it worthwhile putting into expensive distribution channels.  Therefore if we want to receive the content the mass media produces, we have to give something in return – we either pay for it, or we allow our consumption of it to be interrupted by advertising.  This basic social contract is hardwired into our understanding and behaviour.  Even if individual citizens don’t connect all the dots, society as a whole has worked this one out and thus this business model has gained social consent.  It is a balanced relationship – what we give is reflected in what we get: advertising revenues or subscriptions cover the production and distribution costs – with a modest margin on top which represents the media’s profits.

How much money is enough?  Why Craig Newmark is smarter than Sir Martin Sorrell

A few years back, when Craigslist was eating up the classified advertising lunch of regional newspapers in the US, Craigslist and founder Craig Newmark drew the ire of the likes of Sir Martin Sorrell, boss of WPP, the world’s largest advertising and media network.  He accused Craigslist of destroying value, in the sense that here was a market that was worth billions and Craigslist was taking it apart and not replacing it with a model that yielded similar billions.  For a chap like Martin Sorrell this just seems inconceivable – why waste an opportunity to make billions for yourself?  Craig Newmark himself was often asked why he wasn’t making more money from his idea.  His response to this was incredibly revealing in more ways than one.  He simply said “I don’t need that much money”.  Now while Craig, unassuming chap that he is, may have meant that he, personally, didn’t need or want billions of dollars, his reply actually reveals a much more profound truth.  Craigslist, quite literally, didn’t need that much money.  It replaced something that cost a great deal of money to organise (regional newspapers and classified advertising) with something that largely organised itself, requiring only some software, some rules of participation and some server space.

The rules of basic economics dictate that in a functioning market economy, in the medium to long term you cannot make super-profits. A couple of hundred years ago David Ricardo proved why this is the case, based on the theory that marginal revenues can never significantly exceed marginal costs.  If you apply Ricardo’s theory to Craigslist, Craig may have been able to charge millions for his service initially, but because the costs of being Craigslist and entering this market are so minimal, it was always going to be easy for a competitor to set up an offering at a lower price, and relatively soon the market would stabilise at a point where the price of using the service (the value opportunity) sits pretty close to the cost of providing the service.  And because the cost is minimal the value opportunity will also be minimal.  Craig Newmark was not destroying value; he was actually liberating capital to be employed more efficiently elsewhere.  Mr Sorrell, arch defender of capitalism and the free markets that he is, really should have known that Craigslist is capitalism and free markets in action, even if Craig himself eschews the behaviours of a traditional capitalist.
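For those who like to see the logic written down, here is a stylised sketch of the point, in my own notation rather than Ricardo’s: with free entry, price gets competed down towards marginal cost, so when the marginal cost of providing a service is close to zero, the revenue opportunity collapses along with it.

```latex
% A stylised sketch of the argument, in my notation (not Ricardo's):
% with free entry, price is competed down towards marginal cost,
% so per-unit profit shrinks along with that cost.
\[
  \pi \;=\; (P - MC)\,Q ,
  \qquad
  \text{free entry} \;\Rightarrow\; P \to MC
  \;\Rightarrow\; \pi \to 0 .
\]
% If MC is close to zero (some software, some rules of participation,
% some server space), the sustainable price P is close to zero too,
% and the "value opportunity" collapses along with the cost.
```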

Why is this relevant to social consent, Facebook and Google? Neither of these organisations is making super-profits.  Google is certainly making healthy profits, almost all of which come from advertising around its core search product.  Facebook had estimated revenues in 2010 of $2 billion, and while we don’t know how this translated into profits, this figure seems very low when compared to a valuation of $41 billion.  Well, as with Craig Newmark you have to ask the question “how much money do they really need?” – what does it actually cost to deliver Google or Facebook?  And the reason this question is important is that it is this – the true marginal cost – that will ultimately determine what people will be prepared to give them in return. Of course we think that we don’t give Google or Facebook anything, because they don’t charge us to use their services.  However, in reality we do give them something.  We give them information about ourselves.  This is where the problem starts, because we are only just beginning to understand the implications of giving away vast amounts of personalised information, not just through our usage of Google and Facebook but actually via our participation in the social digital space.  It is not just a question of the commercial value of this information; it is also a question of understanding a whole host of implications – both positive and negative – that stem from giving away this data.  In all probability no-one, not even Google or Facebook, really has a thorough grasp of the implications of a world where so much personalised data is being generated and has the potential to be used by the platforms themselves, sold to interested parties, or otherwise obtained by them.  One thing is for certain: the individual users of Google and Facebook have no idea about the consequences of giving away so much knowledge about their lives.

Understand the true cost of Google and Facebook

If you want to start to get a handle on exactly what these implications are, a very good place to start is Eli Pariser’s book The Filter Bubble.  The Bubble in question is unique to each one of us, individually crafted from our actions and choices as tracked through our digital activity.  This bubble not only controls what information comes in to us, it also reflects an image of us to the outside world – albeit an image that has huge potential for distortion, manipulation or misuse.  Pariser does a very good job of highlighting the dangers of creating highly personalised worlds, showing how this can isolate us from the experiences and serendipitous encounters that are necessary both to develop a balanced world-view and also to generate creativity and innovation.  He also touches on the dark side – the potential for personalised digital information to be used either in ways which are far removed from the intention or usage we had in mind when we decided to share it in the first place, or to create identities of ourselves which are either misleading or far more revealing than we would wish (or believe we have given consent for).  One example that caused me to thumb-mark a particular page was the fact that banks can use, or are using, social data to determine creditworthiness.  And this is not just derived from data you may have shared about yourself, but data derived from your network of friends or contacts.  Thus, if you have friends who have not paid their bills, this will have a negative impact on your own credit score.  This one single fact should be enough to make anyone think twice about their participation in social networks.

The other, rather scary thought, is that there is no one single ‘digital file’ held on all of us.  Neither Facebook nor Google can easily pull their own files on a particular user – largely because that file is so big and distributed.  Instead information is pulled out through windows shaped by the questions that are asked – you can isolate specific characteristics, but you can’t get the whole picture.  At the moment, in the case of Google and Facebook, these questions are framed by advertisers and the need to become more targeted in the selling of products and services.  Advertisers don’t really need to know the whole picture, they only need to know the bits of it relevant to what it is they are selling.

Where this becomes more worrying is in areas where it is in someone’s interest to draw broader or more significant conclusions about an individual.  Take the example of a government intelligence agency.  All of us share a very great deal in common with your average terrorist or member of an organised crime syndicate.  We go to the same shops, buy similar clothes, eat the same food, listen to similar music, etc.  In fact, the vast majority of what we do makes us look exactly the same.  It is, of course, a very few but highly significant differences that mark us apart.  The problem is that the instinct and abilities of most intelligence agencies are not geared towards finding evidence to rule someone out as a terrorist or criminal – they are geared towards looking for similarities in patterns of behaviour and then searching for more evidence to confirm these initial suspicions.  This means that it is theoretically possible for you or me to very easily end up on a list of terrorist suspects, despite the fact that there may be huge amounts of available evidence to disprove that conclusion, were anyone actually looking for it.  But because we don’t know we are on the list in the first place, neither we nor anyone else really understands exactly what activities got us onto that list, and no one is looking for the easily available evidence to get us off it – so on the list we remain.  We, and to an extent even our interrogators, are powerless because no-one really knows what ‘the internet’ thinks of us; we can only derive an imperfect picture of ourselves based on the questions we ask it.
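To see how easily that can happen, here is a back-of-envelope sketch – all of the numbers are invented purely for illustration – of what happens when even a fairly accurate behavioural pattern is run across an entire population.

```python
# Back-of-envelope illustration (all numbers invented): why behavioural
# pattern-matching across a whole population sweeps up innocent people.
population = 60_000_000        # roughly a UK-sized population
true_suspects = 2_000          # assume a tiny number of genuine targets
flag_rate_if_guilty = 0.99     # the pattern catches almost all of them
flag_rate_if_innocent = 0.001  # and "only" 0.1% of everyone else

true_flags = true_suspects * flag_rate_if_guilty
false_flags = (population - true_suspects) * flag_rate_if_innocent

print(f"genuine suspects flagged:  {true_flags:,.0f}")
print(f"innocent people flagged:   {false_flags:,.0f}")
print(f"chance a flagged person is a genuine suspect: "
      f"{true_flags / (true_flags + false_flags):.1%}")
```

On these invented numbers, the overwhelming majority of the people flagged are innocent – which is precisely why the instinct to confirm suspicions, rather than look for the evidence that disproves them, matters so much.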

We give more than we receive: re-negotiating a social contract with social media

I could go much further into this whole issue – but the important conclusion from all of this is that the implications and value of what we are giving away are far greater than we imagine them to be.  To take us back to the issue of social consent and the bargain we have struck when using ‘free’ social media services – we are giving away much more than we are receiving in return.  This fact is already implicit in the business models or valuations of both Facebook and Google.  In the instance of Facebook it is reflected in the fact that its valuation is many times greater than its current earnings would suggest.  In the instance of Google it is the fact that Google is probably making super-profits – but they are hidden.  Taken in isolation, its core search product is hugely profitable – but the organisation is using these profits to sustain investment in acquisitions and in developing a range of ancillary products that are loss-making, but which it believes are necessary for creating a sense of lock-in to the Google world, or which have the potential to generate more data and thus improve the proposition to advertisers.

The question therefore is: what will happen as society as a whole develops a greater awareness of the value and implications of giving away information?  What will happen as we collectively come to negotiate, or re-negotiate, a social contract with social media?  What is unlikely to happen is that Google or Facebook will have to increase their offer.  People are not going to say that the best way to even up the relationship is for Facebook and Google to give us more.  Instead it is far more likely that people will start to demand that Google and Facebook take less – and how much less they have to take brings us back to Craig’s question and how much money Google and Facebook actually need in order to provide their services.

In Facebook’s case, the answer is probably “not a lot”.  What it took to create Facebook was a clever geek, a couple of good insights and some server space.  Outside of the actual costs of running the servers and developing the product – Facebook has added more cost, but this is related largely to the ability to generate revenue via advertising and these costs are essentially discretionary or are derived from the business model Facebook has decided to pursue, rather than the business model it has to pursue (albeit in reality the business model it has to pursue has now been set via the value it has placed on itself via the selling of shares in the business).  In essence, Facebook now needs to generate a lot of cash to fulfil its valuation expectations – but it doesn’t need a lot of cash to actually ‘be’ Facebook in the same way that Craigslist didn’t need a lot of cash to ‘be’ Craigslist.

In Google’s case the answer is more complicated.  Google is technologically a far more sophisticated set-up than Facebook.  Its core search algorithm and its associated technical processes have a great deal more intellectual property within them.  However, Google is not the only search engine, but it is the most popular search engine.  At one level you could point to the existence of several much less popular alternatives as evidence to demonstrate that in order to secure the profits Google makes from search, it is necessary to carry the costs of providing the rest of Google World, thus creating the necessary lock-in or search loyalty.  It is difficult to make this call, and in many ways, the difficulty in getting a real handle on the business models of Google and Facebook stems from the absence of a real competitor.  Maybe it is not so much the absence of a competitor, but the absence of a genuine market within which the rules of competition have become established.  And implicit in the creation of a genuine market is the necessary element of social consent – or perhaps, in light of everything discussed thus far, informed social consent.

Taking less – why this may be the key to competitive advantage

Both Facebook and Google have grown up, and shaped their business models, in a digital world characterised by the absence of rules.  The social digital space in particular is new, it’s very different, and we are only starting to work out how to deal with it.  It is therefore quite likely that as we start to develop the necessary rules and as genuine markets start to emerge, this will be an environment that is increasingly hostile to the business models of the pioneers that opened up the space in the first place.  For example, as society starts to recognise the implications associated with giving away data, rather than demand more in return, it will instead demand far more restrictions around how personal data is used.  This may manifest itself in demands for greater privacy or control from the likes of Google and Facebook – demands that these companies may not be able to satisfy and still continue to operate to their current business models.  But what is more likely to happen is that competitors will emerge – but these won’t be like Google or Facebook.  The main reason they won’t be like Google or Facebook is that they won’t have to generate the quantity of cash Google and Facebook need – either to sustain an artificial valuation or to preserve an enormous corporate empire, 90 per cent of which is loss-making.  They will be able to approach consumers and say “we can do everything that Facebook does (because it is not difficult or expensive to do this) but what we don’t need to do is sell your data”.  And because of the emerging sensitivity about giving away data, this benefit will have a high perceived value.  In effect, these new players will be the ones that have the competitive advantage needed to negotiate the necessary social licence to operate.

Predicting the demise of Google and Facebook may seem a little far-fetched, especially as they currently are the masters of the digital universe.  However, it is worth taking note of a couple of things.  First, in relation to this idea of social consent, we have recently seen some tumultuous events within the traditional media – the phone hacking scandal that has engulfed News Corporation.  Amongst other things this has caused News Corporation to close the News of the World, the UK’s largest-circulation Sunday newspaper.  The reason it was forced to do this was that the News of the World had lost its social permission to operate.  It also abandoned its bid to acquire all the shares in Sky TV – again not because it was legally compelled to, but because it became clear that it had failed to create the necessary social consent.

Second, we are starting to see the signs that the whole issue of the usage of personal data is emerging out of the shadows.  Facebook has obviously had problems with privacy in the past – the introduction of Beacon being the best example of where it has already had its fingers burnt.  Google has escaped significant attention, probably because its products are less social and therefore seen as less intimate or revealing about the nature of their users.  However, take a look at this ‘viral’ video, presumably released by Microsoft, characterising the Gmail Man as the postman who reads all your mail and then tries to sell you products.  This is a clear indication that privacy has been identified as Google’s Achilles’ heel and is an issue worthy of competitive exploitation.  We can expect more of this.

I might also suggest that Google has already been weakened by its desire to collect data in that, pre-Google+, it has not scored too many successes with its recent product launches.  My suspicion as to the reason for this is that its product development has not been sufficiently driven by a fundamental understanding of what ‘social media citizens’ really want, but more by Google’s desire for them to use certain types of products – in large part, products which will yield useful data.  A good example of this is the now largely forgotten Sidewiki.  As I identified at the time, the sting in the tail of this particular product was the fact that it required you to give Google access to your browsing history.

Does this mean that Google and Facebook are wilfully manipulating us?

Probably not.  Google in particular has been very transparent about its ambitions to create a personalised web.  At worst you could accuse Google of a level of naivety in not fully understanding the social consequences of what it is doing.  This is not surprising since Google is at heart a technology company run by geeks – and geeks are notoriously bad at understanding social consequences.  Google is also popular and has a range of fantastic products.  It can therefore take some comfort that it is ‘doing the right thing’.  Facebook is much less transparent, but then again it offers much less than Google and has much more to gain – because it isn’t sitting on top of a super-profitable search algorithm.  Facebook is also much easier to replace than Google, and we are already starting to see Facebook alternatives that are explicitly making a play about data protection and privacy.  In both instances the driving commercial imperative is around capturing as much attention and usage as possible – thus making themselves harder to replace.  In essence it is the Wild West 2.0 – staking out as much territory as possible before civilisation and its attendant wagon train of rules and regulations catches up.  This isn’t wrong or evil, it is just competition and capitalism.

Should we be worried?

We should probably be vigilant, rather than worried.  When revolutions happen, they tend to generate good stuff and bad stuff in equal measure.  The trick is to be alert so the bad stuff can be managed or controlled and the good stuff enhanced.  At the end of the day, it is going to come down to the construction of some new rules and regulations – either ones that are legislated or ones which derive from social permission (as in fact most rules of society do – we don’t refrain from doing things simply because they are illegal, unless, that is, you are a hedge fund manager).  We need to play a part in shaping those rules.

On the other hand, it is probably Facebook and Google which should be most worried.  The Wild West is not going to be wild for much longer, and they need to start developing a business model which is not simply based on squeezing out the competition but is based on a world where people attach a much greater value to the data they decide to share.  And this doesn’t mean demanding more bang for their byte – it means either sharing fewer bytes or demanding much greater control over where those bytes end up.

New Google thingy – connecting all the Google dots

I haven’t had a chance yet to really take a look at the new Google thingy – Google Plus (or Google+) – but here is a good analysis from Jay Baer.  It is good because it doesn’t get all geeky and excited about the technical features but looks instead at the broader picture behind why Google is doing this.

It appears that Google+ is really an exercise in connecting all the Google dots – and thus harnessing all the Google power.  However, it is still a long way off being “The One Place” that will really become the social media killer app – i.e. the one and only application you need as a social media citizen to drive all of your social activity.

Let’s see if this one flies in a way that the last Google thingies didn’t (Buzz, Wave, Sidewiki – remember Sidewiki?)

Google’s Sidewiki has a sting in the tail

A couple of days ago Google announced something very interesting – Sidewiki. This creates an overlay on any website/URL, allowing a form of commenting and rating.  Because this is linked to the browser, the site owners themselves have no say here – they can’t opt in or opt out.  At one level this could be a move which forces every website into the social media space – whether they like it or not.

Powerful stuff – so I signed up and at that point realised the sting in the tail.  For it to work, your browser has to send Google details of your browsing.  This gives Google the information it has been craving for a long time, largely without success thus far – identifiable data about individuals’ behaviour, not just anonymous links that come into a website.

As I understand it – Google’s strategy is based around accumulating as much data as possible about individuals in order to, in Google’s words “improve the quality of service we can offer”.  What this actually means is improve the quality of the data Google can offer advertisers.  Ultimately Google is looking to push this away from just computers into any digital device that individuals use – thus building up a complete picture of their digital life.

The flaw in this strategy is not a technical one – it is a social one.  People were happy with Google search because the results were based on collective behaviour, but each contribution was anonymous.  A shift to output based on people’s identifiable behaviour as individuals, not their anonymous behaviour within a group, will not be seen as socially acceptable.  People will not trust Google enough to feel comfortable with it having this level of knowledge.  The key to making this strategy work therefore is to construct a big sugar coating around this particular pill – hence Sidewiki.  Perhaps a better name for it would be Big Sidebrother.

This is a shame – because attractive as this sugar coating is, the pill is still too bitter to swallow (that said, I haven’t disabled Sidewiki yet!)

Google v Facebook is a battle for today’s internet, not the internet of the future

Wired has just published an excellent article on the battle between Facebook and Google.  It covers the key issues concisely and is well worth a read.

However, I think both companies (and possibly Wired) are wrong to think that this is a battle for the future of the internet.  Instead it is a battle for today’s internet.  In my view neither Google nor Facebook will win the battle for the future of the internet because both are fighting in the wrong space.  Both organisations are basing their strategies on the assumption that the future lies in an ad-driven, data-capture, real-estate model of the internet – and this is a 1.0, traditional institutionalised communications model.

Advertising is a creation of the world of traditional institutionalised information.  No one is suggesting that advertising is not still incredibly important – but it is a pot that is shrinking as distribution-based communication itself shrinks.  And while some of it is moving on-line, the on-line opportunity is never going to be as big as the current total pot and ultimately will disappear altogether.

Here’s why.