Saturday, 12 October 2013

INVENT-10N: THOSE PESKY ALGORITHMS

In researching my book Invent-10n it quickly became apparent that it wasn’t the surveillance side of State intervention in our lives – the use of cameras and digital-communication intercepts to collect data about us – that we should be worried about, but the use that is made of that data. And here we enter the decidedly creepy world of the algorithm.

I guess (and it is just a guess) that most people, if they think about surveillance at all, see the spread of CCTV cameras and the like as really quite benign – a sign that someone is looking out for them. This is reflected in the slogan used by the fictional National Protection Agency in Invent-10n – ‘watching out for the good guys by watching out for the bad guys’ – my idea being that they would sneakily emphasise the ‘watching’ aspect of surveillance. But the ‘watching’ isn’t the most important aspect … the ‘collection, storage and analysis’ part is … especially the ‘analysis’ bit, and this is where algorithms come in.

Algorithms are basically enormously complex decision trees which can be used to solve breathtakingly difficult problems by breaking them down into a long string of binary choices. They operate much like the neurons powering our brains, which is an apt analogy given that they have become so damned sophisticated that they can now imitate thought processes.
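
To make the ‘string of binary choices’ idea concrete, here is a deliberately toy sketch – the questions, the thresholds and the transaction itself are all invented for illustration, and a real system would have thousands of such branches rather than three:

```python
# A toy "string of binary choices": a hand-built decision tree that sorts
# a fictional bank transaction into one of three outcomes.

def classify(transaction):
    """Walk a tiny decision tree of yes/no questions."""
    if transaction["amount"] > 10000:        # binary choice 1
        if transaction["overseas"]:          # binary choice 2
            return "flag for review"
        return "log and allow"
    if transaction["night_time"]:            # binary choice 3
        return "log and allow"
    return "allow"

print(classify({"amount": 25000, "overseas": True, "night_time": False}))
# -> flag for review
```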

Algorithms have been with us a while now, their use being especially prevalent in the world of banking and finance, where their ability to crunch through amazing amounts of data in real time makes them trend-spotters par excellence, able to make buy/sell decisions faster than any human trader. In the US the ‘high-frequency trading’ firms utilising algorithms account for at least half of equity trading volume, but it isn’t just in finance where they’re making their presence felt. They’re being increasingly used as diagnostic helpmates in medicine, in the interview process (remember that funny on-line test your company made you take?), in traffic management, in the optimising of the deliveries Tesco et al make to their supermarkets and in the area of law-enforcement.
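
By way of illustration only – real high-frequency systems are vastly more sophisticated, and the prices here are made up – the textbook toy version of a trend-following rule looks something like this:

```python
# A naive moving-average "trend" rule: buy when the short-term average
# climbs above the longer-term one, sell otherwise. Illustrative only.

prices = [100, 101, 103, 102, 105, 107, 106, 109, 111, 110]

def moving_average(series, n):
    return sum(series[-n:]) / n

for t in range(5, len(prices) + 1):
    window = prices[:t]
    fast = moving_average(window, 3)   # short-term trend
    slow = moving_average(window, 5)   # longer-term trend
    signal = "BUY" if fast > slow else "SELL"
    print(f"t={t:2d} price={window[-1]:3d} fast={fast:6.2f} slow={slow:6.2f} -> {signal}")
```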

This latter area I find particularly interesting as it gives (I believe) an indication of the shape of things to come. In a terrific article (‘Penal Code’, New Scientist, 7th September 2013) Katia Moskvitch opines that automated, algorithm-directed policing ‘could lead to a world akin to Kafka’s novel The Trial, in which a man stands accused but has no opportunity to defend himself’. But in my view it’s even worse than Ms Moskvitch envisages. My belief is that we’re entering a world akin to that envisaged by Philip K. Dick in The Minority Report, a world beset by predictive law-enforcement.

Prediction is the pot of gold at the end of the surveillance rainbow, but to accurately predict human actions surveillance systems have to practise pan-surveillance: they have to know everything.

I am always amazed by the disingenuous way politicians talk about surveillance systems being so very selective about the information they collect, looking so very aghast at any suggestion that they might be trawling up stuff which isn’t related to terrorism or evil-doing (the prime example being the denial made by Sir Andrew Parker, Head of MI5: see http://www.theguardian.com/uk-news/2013/oct/08/gchq-surveillance-new-mi5-chief). This, of course, is a crock. The whole point of data-mining is to make connections – to identify seemingly unimportant behavioural relationships between two or more variables – and the only way to do this is for the computer driving the algorithm to have access to ALL the data. Which is why algorithms (and the computers that platform them) have such a voracious appetite for trivial information (and why the surveillance systems just keep on growing).
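
To see why ALL the data matters, consider a toy version of the connection-making: two streams of apparently trivial information that only become ‘interesting’ when tested together. The numbers below are invented; a real system would test millions of such pairings:

```python
# Pearson correlation between two seemingly trivial variables.
# A strong association is what makes the pairing worth an analyst's time.

def pearson(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

late_night_calls = [2, 0, 5, 1, 7, 3, 8]   # trivial on its own
one_way_tickets  = [1, 0, 2, 0, 3, 1, 4]   # trivial on its own
print(f"correlation: {pearson(late_night_calls, one_way_tickets):.2f}")
```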

And grow they have. Surveillance captures simply HUGE amounts of data, and security agencies are spending a fortune to construct computer facilities that can handle it. The Utah Data Center built for the National Security Agency in the US has been dubbed ‘the second Manhattan Project’ which gives some idea as to both its importance and its cost.

Now why, you might ask, are the NSA – along with Britain’s GCHQ, Russia’s FSB, China’s MSS and other security agencies around the world – going to all this trouble and expense to process, store and analyse the chaff of human existence? The answer lies in their ambition to predict our actions.

Security services aren’t interested in what people did (that’s history, and the reactive stuff they leave to the police) but in what people will be doing, as that offers a chance of interdicting the bad guys before they get dangerous. À la Gottfried Leibniz, those designing and operating the computers that run the security-orientated data-mining systems – and the algorithms that direct them – believe that how human beings act can be predicted by the forensic examination of the minutiae of our lives. Knowing (and analysing) a person’s DNA, the details of their upbringing, what they say, what they read and listen to, how they think, who they talk with … all this makes it easy to predict exactly what they’ll be up to in the future … and how those they interact with will act.
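
A deliberately crude sketch of what ‘prediction from minutiae’ might look like: many small behavioural features combined into a single score. The feature names, weights and threshold here are all invented for illustration – a real predictive system would learn them from the data it hoovers up:

```python
# Combine weighted behavioural features and squash the result to a
# 0..1 "probability" with a logistic function. Entirely illustrative.
import math

def risk_score(features, weights):
    z = sum(weights[name] * value for name, value in features.items())
    return 1 / (1 + math.exp(-z))

weights = {"forum_posts": 1.2, "odd_purchases": 2.0,
           "travel_flags": 0.8, "bias": -3.0}

person = {"forum_posts": 1, "odd_purchases": 0,
          "travel_flags": 2, "bias": 1}

p = risk_score(person, weights)
print(f"predicted 'risk': {p:.2f}")  # an analyst might review anyone above, say, 0.5
```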

I have to admit that I found it difficult to get my head around a machine predicting what I would do in any given circumstance, having always believed that human actions, being so whimsical and emotion-driven, were impossible to predict. But non-linear or not, given enough data (hence the growth of the surveillance culture) it can be done. And don’t think this is something for the future: Bruce Bueno de Mesquita uses algorithmic analysis combined with game theory to predict political events (the development/non-development of the Iranian nuclear bomb being the example most usually cited) and, apparently, he’s been so successful that he now advises the US government on international policy.
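
For flavour, here is a much-simplified echo of that kind of model – invented actors, each with a policy position, an influence level and a salience (how much they care), with the ‘forecast’ taken as the influence- and salience-weighted average position. Real models, Bueno de Mesquita’s included, are far more elaborate:

```python
# A crude game-theoretic forecast: the power-weighted average of the
# actors' positions on a 0-100 policy scale. All numbers are invented.

actors = [
    # (position, influence, salience)
    (10, 0.9, 0.8),   # hard-liner
    (60, 0.6, 0.9),   # pragmatist
    (90, 0.4, 0.5),   # reformer
]

weight_sum = sum(inf * sal for _, inf, sal in actors)
forecast = sum(pos * inf * sal for pos, inf, sal in actors) / weight_sum

print(f"forecast policy position: {forecast:.1f}")
```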

There’s a great line in Ian Ayres’ book ‘Super Crunchers’ (highly recommended, a fascinating read) where he says ‘We may have free will, but data mining can let business emulate a kind of aggregate omniscience’. I would only add that for ‘business’ we should now read ‘security services’.

As Jenni-Fur says in Invent-10n: ‘Then all of us will be reduced to remote-controlled puppets, and there will be no chance of being able to zig when they say zag, or to beep when they say bop. Post-Patriot we will not be able to think, to act, to speak or to move without the spirit-sapping realisation that the ChumBots know everything’.
