Algorithms ate your pension!

Posted on: 22 August 2011 by Alexander Hay

Our over-reliance on algorithms may well be our undoing, or at least, cost us our savings and our economy...

[Image: Too much algorithm = too much fail]

Algorithms are all around. Do a search on Google and you're already using one. The problem is that, increasingly, algorithms are being used not to assist personal decisions but to do away with them altogether - we are letting the machines do the thinking, which is problematic not least because computers don't 'think', they calculate.

American programmer Kevin Slavin explains the main problem with our algorithmic world:

...The pernicious thing about algorithms is that they have the mathematical quality of truth - you have the sense that they are neutral - and yet, of course, they have authorship. For example, Google's search engine is composed entirely of fancy mathematics, but its algorithms, like everybody's, are all based on an ideology - in this case that a page is more valuable if other pages think it's valuable. Each algorithm has a point of view, and yet we have no sense of what algorithms are, or even that they exist...
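The "ideology" Slavin describes is the idea behind PageRank: a page is valuable if other (valuable) pages link to it. The toy sketch below illustrates that point of view at work. It is a textbook simplification rather than Google's actual system, and the link graph, damping factor and iteration count are assumptions chosen purely for illustration.

```python
# Toy illustration of the "a page is valuable if valuable pages link to it"
# idea. This is a simplified, illustrative ranking - not Google's real
# algorithm. The link graph and damping factor below are assumptions.

links = {
    "A": ["B", "C"],   # page A links to B and C
    "B": ["C"],
    "C": ["A"],
    "D": ["C"],
}

damping = 0.85
pages = list(links)
rank = {p: 1.0 / len(pages) for p in pages}

for _ in range(50):  # iterate until the scores settle
    new_rank = {}
    for p in pages:
        # share of rank flowing in from every page that links to p
        incoming = sum(rank[q] / len(links[q]) for q in pages if p in links[q])
        new_rank[p] = (1 - damping) / len(pages) + damping * incoming
    rank = new_rank

print(rank)  # pages with more, and better-ranked, inbound links score higher
```

Notice that the ranking encodes a value judgement - links are treated as votes of confidence - and a page with no inbound links (D above) is deemed barely worth showing, regardless of what is actually on it.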

After all, algorithms don't in and of themselves 'know' what they're doing. More insidiously, they also choose the search results we see, based on an interpretation of our behaviour combined with the abstract motives of the programmer. In other words, we are being given an illusion of choice:

...If you know that machine control is part of the picture, you might behave differently. Once you are aware that most of what you are renting from Netflix is based on a very specific model of the human brain that might not correspond to reality, maybe you would start asking your friends what they recommend - which is what we used to do...

This over-reliance on algorithms has already led to financial disaster several times. The 'Black Monday' crash of 1987, for example, was the result of automated program trading that kept selling stocks long after a human trader would have realised what was happening and stopped. Similarly, the 2008 crash rested on faulty assumptions drawn from algorithmic projections of the economy. Needless to say, that crash wiped out the pension funds and retirement plans of thousands in the UK. Sadly, we seem to be relying on numbers to make the value judgements only a human being should make. As Slavin observes:

...Here's an example. A postdoc [researcher] wanted to buy a copy of a developmental biology textbook called The Making of a Fly on Amazon. There were 17 copies for sale, starting at $40, but two copies were priced at $1.7 million. When he checked later, the price was $27 million. He tried to work out what was going on. Basically, two pricing algorithms had got caught in a loop, multiplying the existing price by 1.3 and offering it again. Because algorithms have the logic to raise the price but not the common sense to recognise the value, they just kept escalating...
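To see how this kind of runaway feedback works, here is a minimal Python sketch of two repricing rules chasing each other upwards. The 1.3 multiplier comes from Slavin's account above; the starting price, the number of rounds and the function name are purely illustrative assumptions.

```python
# Illustrative sketch of two naive repricing algorithms feeding off each other.
# The 1.3 multiplier is taken from Slavin's account; the $40 starting price and
# 30 rounds are arbitrary assumptions for demonstration only.

def reprice_against_rival(rival_price, multiplier=1.3):
    """Set our price as a fixed multiple of the rival's current price."""
    return rival_price * multiplier

seller_a = 40.00   # starting price in dollars
seller_b = 40.00

for day in range(1, 31):
    seller_a = reprice_against_rival(seller_b)  # A reacts to B
    seller_b = reprice_against_rival(seller_a)  # B reacts to A
    print(f"Day {day:2d}: A = ${seller_a:,.2f}, B = ${seller_b:,.2f}")

# Neither rule contains any notion of what the textbook is actually worth,
# so the prices escalate without limit - exactly the failure Slavin describes.
```

Each rule is perfectly rational on its own terms; it is only the absence of any external check on value that lets the loop spiral into millions of dollars.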

We may well be venturing into the territory of dystopian science fiction here, and Slavin certainly admits that algorithms have "immense value" when used in the right context, but he argues that we need to have a better understanding of what algorithms are and, most importantly, what they are being used for and the risks this poses.

But if algorithms are problematic, it is because they reflect the limitations and short-sightedness of those who design and operate them. An algorithmic process built exclusively for one purpose - such as playing the markets under specific circumstances - is by definition unable to adjust to circumstances outside its remit, and may either malfunction or, indeed, carry on working only too well.

The real issue is our relationship with technology, which remains fundamentally dysfunctional. At its most extreme, this unhealthy relationship is underpinned either by technophobia or by blind, credulous faith in technology - never by the recognition that software and hardware are given meaning only by the human agency they are so often assumed to replace. An algorithm still needs a human to press the 'on/off' button; our mistake is assuming that this is all we need to do.

[SOURCE: New Scientist]
