Tagged: algorithms

A politician who understands the world of the algorithm

Thanks to Jeremy Epstein (go-to for all things blockchain) for drawing my attention to this Wired interview with Emmanuel Macron. Here is a man who understands the world of the algorithm. You can tell this for three reasons. First: he doesn’t talk about trying to lock up access to data – he talks about making data open (with conditions attached – primarily transparency). Second: from a regulatory perspective he focuses on the importance of transparency and shows he understands the dangers of a world where responsibility is delegated to algorithms. Third: he talks about the need for social consent, and how the lack of it is a danger both to society and to the legitimacy (and thus ability to operate) of the commercial operators in the space (I was 7 years ahead of you here, Emmanuel).

As an example, he is opening access to public data on the condition that any algorithms that feed on this data are also made open. This is an issue that I believe could be absolutely critical. As I have said before, algorithms are the genes of a datafied society. In much the same way that some commercial organisations tried (and fortunately failed) to privatise pieces of our genetic code, there is a danger that our social algorithmic code could similarly be removed from the public realm. This isn’t to say that all algorithms should become public property, but they should be open to public inspection. It is the usage of algorithms that requires regulatory focus, not the usage of data.

This is a man who understands the role of government in unlocking the opportunities of AI, but also recognises the problems government has a duty to manage. It is such a shame that there are so few others (especially in the UK, where the government response is child-like, facile and utterly dismissive of the idea that government has any role to play other than to let ‘the market’ run its course whilst making token gestures of ‘getting tough’).

 

Cambridge Analytica, Facebook and data: was it illegal, does that matter?

For the last year Carole Cadwalladr at the Observer has been doing a very important job exposing the activities of Cambridge Analytica and its role in creating targeted political advertising using data sourced, primarily, from Facebook. It is only now, with the latest revelations published in this week’s Observer, that her work is starting to gain political traction.

This is an exercise in shining a light where a light needs to be shone. The problem, however, is in illuminating something that is actually illegal. Currently the focus is on the way in which CA obtained the data it then used to create its targeting algorithm, and whether this happened with either the consent or knowledge of Facebook or the individuals concerned. But this misses the real issue. The problem with algorithms is not the data that they feed on. The problem is that an algorithm, by its very nature, drives a horse and cart through all of the regulatory frameworks we have in place, mostly because these regulations have data as their starting point. This is one of the reasons why the new set of European data regulations – the GDPR – which come into force in a couple of months, are unlikely to be much use in preventing US billionaires from influencing elections.

If we look at what CA appear to have been doing, laying aside the data acquisition issue, it is hard to see what is actually illegal. Facebook has been criticised for not being sufficiently rigorous in how it policed the usage of its data but, I would speculate, the reason for this is that CA was not doing anything particularly different from what Facebook itself does with its own algorithms within the privacy of its own algorithmic workshops. The only difference is that Facebook does this to target brand messages (because that is where the money is), whereas CA does it to target political messages. Since the output of the CA activity was still Facebook ads (and thus Facebook revenue), from Facebook’s perspective their work appeared to be little more than a form of outsourcing and thus didn’t initially set off any major alarm bells.

This is the problem if you make ownership or control of the data the issue – it makes it very difficult to discriminate between what we have come to accept as ‘normal’ brand activities and cyber warfare. Data is data: the issue is not who owns it, it is what you do with it.

We are creating a datafied society, whether we like it or not. Data is becoming ubiquitous, and seeking to control data will soon be recognised as a futile exercise. Algorithms are the genes of a datafied society: the billions of pieces of code that at one level all have their specific, isolated purpose but which, collectively, can come to shape the operation of the entire organism (that organism being society itself). Currently, the only people with the resources or incentive to write society’s algorithmic code are large corporations or very wealthy individuals, and they are doing this, to a large extent, outside of the view, scope or interest of either governments or the public. This is the problem. The regulatory starting point therefore needs to be the algorithms, not the data – creating both transparency and control over their ownership and purpose.

Carole’s work is so important because it brings this activity into view. We should not be distracted or deterred by the desire to detect illegality, because this ultimately plays to the agenda of the billionaires. What is happening is very dangerous and that danger cannot be contained by the current legal cage, but it can be constrained by transparency.

Gaming democracy

Not far short of three years ago I published a piece on the Huffington Post which suggested that humans had moved from the age of the sword into the age of the printing press and were about to move into the age of the algorithm. The reason, I suggested, why a particular form of technology came to shape an age was that each technology conferred an advantage upon an elite or institutionalised group, or at least facilitated the emergence of such a group, which could control these technologies in order to achieve dominance.

This is why the algorithm will have its age. Algorithms are extraordinarily powerful but they are difficult things to create. They require highly paid geeks and therefore their competitive advantage will be conferred on those with the greatest personal or institutionalised resource – billionaires, the Russians, billionaire Russians, billionaire Presidents (Russian or otherwise). There is also a seductive attraction between algorithms and subterfuge: they work most effectively when they are invisible. Continue reading

Google: the United States of Data

A couple of weeks ago I stumbled across something called Google Big Query and it has changed my view on data. Up until that point I had seen data (and Big Data) as something both incredibly important and incredibly remote and inaccessible (at least for an arts graduate). However, when I checked out Google Big Query I suddenly caught a glimpse of a future where an arts graduate can become a data scientist.

Google Big Query is a classic Google play in that it takes something difficult and complicated and rehabilitates it into the world of the everyday. I can’t pretend I really understood how to use Google Big Query, but I got the strong sense that I wasn’t a million miles away from getting that understanding – especially if GBQ itself became a little simpler.

And that presents the opportunity to create a world where the ability to play with data is a competence that is available to everyone. Google Big Query could become a tool as familiar to the business world as PowerPoint or Excel. Data manipulation and interrogation will become a basic business competence, not just a rarefied skill.

The catch, of course, is that this opportunity is only available to you once you have surrendered your data to the Google Cloud (i.e. to Google) and paid for an entry visa. As it shall read at the base of the Statue of Googlability that marks the entry point to the US of D:

“Give me your spreadsheets, your files,
Your huddled databases yearning to breathe free,
The wretched data refuse of your teeming shore.
Send these, the officeless, ppt-tossed, to me:
I lift my algorithms beside the (proprietary) golden door.”

And the rest, as they say, shall be history (and a massive future revenue stream).

The three ages of the algorithm: a new vision of artificial intelligence

Last week the BBC looked at artificial intelligence and robotics. You could barely move through any part of the BBC schedule on any of its platforms without encountering an AI mention or feature. A good idea I think – both an innovative way of using ‘the whole BBC’ and an important topic. That said, I failed to come across any piece which adequately addressed what I believe is the real issue of AI and how it is likely to play out and influence humanity.

True to subject form, in the BBC reporting there was a great deal of attention on ‘the machine’ and ‘the robot’, and the idea that intelligence has to be defined in a human way and therefore artificial intelligence can be said to be here, or to pose a threat, when some machine has arrived which is a more intelligent version of a human. This probably all stems from the famous Turing test, together with the fact that most of the thinkers in the AI space are machine (i.e. computer) obsessives: artificial intelligence and ‘the machine’ are therefore seen to go hand in hand. But AI is not going to arrive via some sort of machine; in fact it will be characterised by the absence of any visible manifestations, because AI is all about algorithms. Not algorithms that are contained within or defined by individual machines or systems, but algorithms unconstrained by any individual machine and where the only system is humanity itself. Here is how it will play out. Continue reading

Marketing technology: it is confusing but it is going to be big

This post is a marker. It is a post-it note that says “remember to watch this space and try and get your head around it because this is going to be big”. It is also an excuse to log what I think is a very useful, if slightly mind-bending, post by Scott Brinker.

My current mantra for marketing folk is that the future of brands involves getting your head around three things: the shift from the audience to the individual, the fact that community is becoming the new media, and the emergence of the world of the algorithm (i.e. Big Data). I also continually bang on about social media being a process, and of course, one of the things we use technology to do is manage process.

To a large extent, everything that Scott is talking about in his post plays into these issues. To manage relationships with individuals at any sort of scale will require a process supported by technology. Scott also talks about tag management – which (as I have already written about) will become the foundational process for the operation of communities. Likewise, it is clear that the algorithm will become the tool that makes sense of the data that could be seen to live within the marketing cloud. And, as Scott points out, Amazon is already starting to offer algorithmic products to do just that.

Scott also observes that things are currently very complicated and confused. Or, as I flagged in my previous post, this stuff is ‘legitimately difficult’. I definitely do not know enough about it – but from what I can see, I think I know enough to say that this is the future. Technology is going to play a huge role in the management of the relationship between brands and consumers – because technology facilitates process, and this future relationship is going to be defined by process (behaviour identification and response), not by channel and message.

I think I can also predict that the key to really embracing this future is to shed the snakeskin of the past. Big data is totally different to small data, to the extent that you can’t build your way to a big data future from a small data starting point or mindset. Likewise, current marketing technology deals with stuff like CRM, but the only way you will be able to deal with the new marketing technology is to free yourself from a CRM mindset (and possibly your CRM people). If you look at this new stuff through the lens of the old stuff, you will probably fail to see or understand its potential.

Is the bulk interception of data actually worse than mass surveillance?

Where does bulk interception of data stop and mass surveillance start? And in the world of Big Data and algorithmic surveillance, is it even relevant to make such a distinction?

It emerged last week that these are important questions, following a ruling by the UK’s Investigatory Powers Tribunal and subsequent response by the UK government and its electronic spying outfit, GCHQ (see the details in this Guardian report).  This response proposes that mass surveillance doesn’t really happen (even if it may look a bit like it does), because all that is really going on is bulk interception of data and this is OK (and thus can be allowed to happen).

One of the most disturbing revelations flowing from Edward Snowden’s exposure of the Prism and Upstream digital surveillance operations is the extent to which the US and UK governments have been capturing and storing vast amounts of information, not just on possible terrorists or criminals, but on everyone. This happened in secret, and its exposure has eventually prompted a response from government: to assert that this collection and storage doesn’t constitute mass surveillance; instead it is “the bulk interception of data which is necessary to carry out targeted searches of data in pursuit of terrorist or criminal activity.”

This is the needle in the haystack argument – i.e. we need to process a certain amount of everyone’s hay in order to find the terrorist needles that are hidden within it. This seems like a reasonable justification because it implies that the hay (i.e. the information about all of us) is a disposable asset, something to be got rid of in order to expose the needles. This is basically the way that surveillance has always operated. To introduce another analogy, it is a trawling operation that is not interested in the water that passes through the net only the fish that it contains.

However, this justification falls down because this is not the way that algorithmic surveillance works. Algorithmic surveillance works by Continue reading

Artificial intelligence: Stephen Hawking is wrong (sort of)

The revered physicist Stephen Hawking issued some warnings last week about artificial intelligence. His analysis follows the lines taken by most techy types, or at least was represented as such by the technology journalists that reported on it. This story basically predicts the creation of some all-powerful machine that will be smarter than a human, able to replicate itself and thus able to out-compete humans or relegate us to the status of slaves. A super-smart version of a human, in fact. The reason we have such a machine-based vision is because the people who talk about artificial intelligence also like building machines. They therefore believe that the future of artificial intelligence will be machine-based (HAL from 2001: A Space Odyssey etc.).

But this is not how artificial intelligence is going to arrive. In fact it has already arrived.

The selfish algorithm

Forget the technologists’ machine-based vision. We should be looking to the biologists and geneticists. The future of artificial intelligence will be built genetically – indeed in the same way we humans are built. The genetic code for artificial intelligence will be written in algorithms. Billions upon billions of algorithms, each one of which will be responsible for determining some micro-function of society. And given that the key feature of algorithms is that they learn and can thus evolve, adapt and react, it is quite possible they will start to shape the society they contribute to in a way which will ensure their own continuation – in the same way that genes shape the behaviour of the organism of which they are a part in order to ensure the survival of the genes (rather than the organism). The selfish algorithm, in fact.
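The selection dynamic sketched above can be made concrete with a toy simulation. What follows is purely illustrative – every name and number is invented for this sketch – but it shows replicator dynamics at work: each “algorithm” is deployed in proportion to the engagement it captures, so even a modest edge compounds until one algorithm dominates, regardless of whether it serves its host well.

```python
# Toy replicator dynamics, purely illustrative: every name and number
# below is invented for this sketch. Each "algorithm" is deployed in
# proportion to the engagement it captures, and its share of deployment
# grows or shrinks accordingly -- selection acting on the code itself.

def replicate(shares, fitness, generations=20):
    """Per generation: share_i <- share_i * fitness_i / average_fitness."""
    for _ in range(generations):
        avg = sum(s * f for s, f in zip(shares, fitness))
        shares = [s * f / avg for s, f in zip(shares, fitness)]
    return shares

# Three hypothetical algorithms with slightly different engagement pull.
# Note that "fitness" here is engagement captured, not benefit to the host.
shares = [1 / 3, 1 / 3, 1 / 3]
fitness = [1.0, 1.1, 1.3]
final = replicate(shares, fitness)
print([round(s, 3) for s in final])  # -> [0.005, 0.034, 0.961]
```

After twenty generations the algorithm with the 1.3 engagement pull holds over 96% of deployment: survival of the fittest, operating on the code rather than the organism.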

This isn’t so much about intelligence – the algorithms won’t necessarily possess intelligence or even come together in a way that might produce an entity we would recognise as being intelligent. Instead it is more a question of replacing or supplanting human intelligence, decision making and thus control. Algorithms will replace what it is we used human intelligence to do, and human society will be relegated to the status of being a host for algorithms, in the same way in which the human body is really just a host for genes.

This is basically the nightmare scenario and there is not some super all-controlling machine, or ridiculous robot, at the heart of it.  And it is a nightmare that is stealing upon us.   There are already millions of algorithms out there which are starting to shape our world.   The introduction of Big Data and the internet of things is only going to add exponentially to their number.  Within a few years there will be almost nothing which happens that isn’t based upon something an algorithm has determined for us.  And if the algorithms start to take control, we will not be able to see it, in the same way that until recently we have not been able to see the way our genetic code controls our own destiny.  In fact we will probably not be looking for it, because we will be looking instead for the emergence of ‘the machine’.

Professor Mark Bishop has challenged Hawking’s analysis, but this critique is based on the idea that AI plus human will always be better than AI on its own. This may be true, but algorithms essentially hollow out the idea of human intelligence: they replace the human requirement to understand why things are happening in order to understand or control what is happening. This suggests that the triumph of human plus algorithm over the algorithm on its own might be a Pyrrhic victory – at least for humanity as a whole, if not necessarily for the individual humans who believe themselves to be in control of the algorithms. Who (or what) is controlling whom being the key question.

The only means of control we will have is to make transparent the algorithmic code, in the same way that we have now made transparent our genetic code. Algorithms behave best in the open; when they operate in the dark they have a tendency to get up to bad things (see high-frequency trading for a good example). We underestimate the power of algorithms at our peril, and we have to build transparency into the model from the start: build our own social algorithmic genome project as we go along, rather than try and uncover it in retrospect.

In a datafied world, algorithms become the genes of society

Here is an interesting and slightly scary thought. What is currently going on (in the world of Big Data) is a process of datafication (as distinct from digitisation). The secret to using Big Data is first constructing a datafied map of the world you operate within. A datafied map is a bit like a geological map, in that it is composed of many layers, each one of which is a relevant dataset. Algorithms are what you then use to create the connections between the layers of this map and thus understand, or shape, the topography of your world. (This is basically Big Data in a nutshell.)
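Crudely, the layered map and its connecting algorithm can be sketched as follows – a purely illustrative toy in which every dataset, location and figure is invented:

```python
# A toy "datafied map", purely illustrative: both layers and all the
# figures are invented. Each layer is a dataset keyed by the same
# locations; the algorithm is the rule that connects the layers.

footfall = {"highstreet": 1200, "retailpark": 800, "suburb": 150}        # layer 1: visits
spend_per_visit = {"highstreet": 4.0, "retailpark": 9.5, "suburb": 2.0}  # layer 2: spend

def revenue_map(visits, spend):
    """The connecting 'algorithm': overlay the two layers to reveal the
    topography of this little world -- estimated revenue by location."""
    return {loc: visits[loc] * spend[loc] for loc in visits}

print(revenue_map(footfall, spend_per_visit))
# -> {'highstreet': 4800.0, 'retailpark': 7600.0, 'suburb': 300.0}
```

Each dict is one layer of the map; the function is the hidden bit of code that overlays them and thereby shapes how the territory is read. Scale this up to thousands of layers and millions of connecting rules and you have the datafied world.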

In this respect, algorithms are a bit like genes. They are the little, hidden bits of code which nonetheless play a fundamental role in shaping the overall organism – be that organism ‘brand world’, ‘consumer world’, ‘citizen world’ or ‘The Actual World’ (i.e. society) – whatever world it is that has been datafied in the first place. This is slightly scary, given that we are engaged in a sort of reverse human genome project at the moment: instead of trying to discover and expose these algorithmic genes and highlight their effects, the people making them are doing their best to hide them and cover their traces. I have a theory that none of the people who really understand Big Data are actually talking about it – because they are afraid that if they did, someone would tell them to stop. The only people giving the presentations on Big Data at the moment are small data people sensing a Big Business Opportunity.

But what gets more scary is if you marry this analogy (OK, it is only an analogy) to the work of Richard Dawkins. It would be a secular marriage, obviously. Dawkins’ most important work in the field of evolutionary biology was defining the concept of the selfish gene. This idea proposed (in fact proved, I believe) that Darwin (or Darwinism) was not quite right in focusing on the concept of survival of the fittest, in that the real battle for survival was not really occurring between individual organisms, but between the genes contained within those organisms. The fate of the organism was largely a secondary consequence of this conflict.

Apply this idea to a datafied society and you end up in a place where everything that happens in our world becomes a secondary consequence of a hidden struggle for survival between algorithms.  Cue Hollywood movie.

On a more immediate / practical level, this is a further reason why the exposure of algorithms and transparency must become a critical component of any regulatory framework for the world of Big Data (the world of the algorithm).

 

October engagements: Shel Holtz (#smwisoc) and Golden Drums

I have a couple of engagements in October I would like to flag.

First, social media guru Shel Holtz (@shelholtz) is going to be in the UK from 27-31 October for the week-long strategic digital engagement seminar organised by ISOC. Since the poor man can’t be expected to provide an entire week’s worth of seminars, some others (Paul Marsden, Janet Murray and myself) have been hired as support acts. I am going to be responsible for the future, as in Social Media and the Next Big Things: the Forces that will Shape the Social Digital Space in the Next Few Years. It will focus on Big Data and the world of the algorithm in the morning, and the rise of community and why community may become the new media in the afternoon.

Should be fun.

Places on this one are pretty limited and also have a £2,200 price tag attached, so if you are interested please sign up here.

Second, although first chronologically speaking, I will be speaking on October 10 at the Golden Drums in Slovenia. The organisers have allowed me to run a session (actually pitched as an EACA masterclass) called “An alternative look at content”, which I am going to use as an opportunity to expose those guilty of filling the social digital space with Brandfill and reveal why they are doing it.

I think it unlikely you will travel to Slovenia just to listen to me, but if you do happen to be going anyway, my session is at 14.00 on Friday – and you will have the choice between me and Johan Jervøe, Group Chief Marketing Officer of UBS AG, who will be answering the question “Branded content: has social media changed the world of creative excellence?”. Not that I want to influence anyone, but I will also be answering that question. In fact I can give you the answer now: yes, branded content and social media have changed the world of creative excellence, but only insofar as they are causing us to forget what creative excellence really is. This is because most branded content is simply tediously long-form advertising, with all of the things that made advertising effective taken out of it.

I will also be giving away T-shirts.