Power

GARRY KASPAROV KNEW EXACTLY HOW to intimidate his rivals. At 34, he was the greatest chess player the world had ever seen, with a reputation fearsome enough to put any opponent on edge. Even so, there was one unnerving trick in particular that his competitors had come to dread. As they sat, sweating through what was probably the most difficult game of their life, the Russian would casually pick up his watch from where it had been lying beside the chessboard, and return it to his wrist. This was a signal that everybody recognized – it meant that Kasparov was bored with toying with his opponent. The watch was an instruction that it was time for his rival to resign the game. They could refuse, but either way, Kasparov’s victory was soon inevitable.1

But when IBM’s Deep Blue faced Kasparov in the famous match of May 1997, the machine was immune to such tactics. The outcome of the match is well known, but the story behind how Deep Blue secured its win is less widely appreciated. That symbolic victory, of machine over man, which in many ways marked the start of the algorithmic age, was down to far more than sheer raw computing power. In order to beat Kasparov, Deep Blue had to understand him not simply as a highly efficient processor of brilliant chess moves, but as a human being.

For a start, the IBM engineers made the brilliant decision to design Deep Blue to appear more uncertain than it was. During their infamous six-game match, the machine would occasionally hold off from declaring its move once a calculation had finished, sometimes for several minutes. From Kasparov’s end of the table, the delays made it look as if the machine was struggling, churning through more and more calculations. It seemed to confirm what Kasparov thought he knew: that he’d successfully dragged the game into a position where the number of possibilities was so mind-bogglingly large that Deep Blue couldn’t make a sensible decision.2 In reality, however, it was sitting idly by, knowing exactly what to play, just letting the clock tick down. It was a mean trick, but it worked. Even in the first game of the match, Kasparov started to become distracted by second-guessing how capable the machine might be.3

Although Kasparov won the first game, it was in game two that Deep Blue really got into his head. Kasparov tried to lure the computer into a trap, tempting it to come in and capture some pieces, while at the same time setting himself up – several moves ahead – to release his queen and launch an attack.4 Every watching chess expert expected the computer to take the bait, as did Kasparov himself. But somehow, Deep Blue smelt a rat. To Kasparov’s amazement, the computer had realized what the grandmaster was planning and moved to block his queen, killing any chance of a human victory.5

Kasparov was visibly horrified. His misjudgement about what the computer could do had thrown him. In an interview a few days after the match he described Deep Blue as having ‘suddenly played like a god for one moment’.6 Many years later, reflecting on how he had felt at the time, he would write that he had ‘made the mistake of assuming that moves that were surprising for a computer to make were also objectively strong moves’.7 Either way, the genius of the algorithm had triumphed. Its understanding of the human mind, and human fallibility, was attacking and defeating the all-too-human genius. Disheartened, Kasparov resigned the second game rather than fighting for the draw. From there his confidence began to unravel.
Games three, four and five ended in draws. By game six, Kasparov was broken. The match ended Deep Blue 3½ to Kasparov’s 2½. It was a strange defeat. Kasparov was more than capable of working his way out of those positions on the board, but he had underestimated the ability of the algorithm and then allowed himself to be intimidated by it. ‘I had been so impressed by Deep Blue’s play,’ he wrote in 2017, reflecting on the match. ‘I became so concerned with what it might be capable of that I was oblivious to how my problems were more due to how badly I was playing than how well it was playing.’8

As we’ll see time and time again in this book, expectations are important. The story of Deep Blue defeating the great grandmaster demonstrates that the power of an algorithm isn’t limited to what is contained within its lines of code. Understanding our own flaws and weaknesses – as well as those of the machine – is the key to remaining in control. But if someone like Kasparov failed to grasp this, what hope is there for the rest of us? Within these pages, we’ll see how algorithms have crept into virtually every aspect of modern life – from health and crime to transport and politics. Along the way, we have somehow managed to be simultaneously dismissive of them, intimidated by them and in awe of their capabilities. The end result is that we have no idea quite how much power we’re ceding, or if we’ve let things go too far.

Back to basics

Before we get to all that, perhaps it’s worth pausing briefly to question what ‘algorithm’ actually means. It’s a term that, although used frequently, routinely fails to convey much actual information. This is partly because the word itself is quite vague. Officially, it is defined as follows:9

algorithm (noun): A step-by-step procedure for solving a problem or accomplishing some end especially by a computer.

That’s it. An algorithm is simply a series of logical instructions that show, from start to finish, how to accomplish a task. By this broad definition, a cake recipe counts as an algorithm. So does a list of directions you might give to a lost stranger. IKEA manuals, YouTube troubleshooting videos, even self-help books – in theory, any self-contained list of instructions for achieving a specific, defined objective could be described as an algorithm.

But that’s not quite how the term is used. Usually, algorithms refer to something a little more specific. They still boil down to a list of step-by-step instructions, but these algorithms are almost always mathematical objects. They take a sequence of mathematical operations – using equations, arithmetic, algebra, calculus, logic and probability – and translate them into computer code. They are fed with data from the real world, given an objective and set to work crunching through the calculations to achieve their aim. They are what makes computer science an actual science, and in the process have fuelled many of the most miraculous modern achievements made by machines.

There’s an almost uncountable number of different algorithms. Each has its own goals, its own idiosyncrasies, its clever quirks and drawbacks, and there is no consensus on how best to group them. But broadly speaking, it can be useful to think of the real-world tasks they perform in four main categories:10

Prioritization: making an ordered list

Google Search predicts the page you’re looking for by ranking one result over another. Netflix suggests which films you might like to watch next. Your TomTom selects your fastest route.
All use a mathematical process to order the vast array of possible choices. Deep Blue was also essentially a prioritization algorithm, reviewing all the possible moves on the chessboard and calculating which would give the best chance of victory.

Classification: picking a category

As soon as I hit my late twenties, I was bombarded by adverts for diamond rings on Facebook. And once I eventually got married, adverts for pregnancy tests followed me around the internet. For these mild irritations, I had classification algorithms to thank. These algorithms, loved by advertisers, run behind the scenes and classify you as someone interested in those things on the basis of your characteristics. (They might be right, too, but it’s still annoying when adverts for fertility kits pop up on your laptop in the middle of a meeting.) There are algorithms that can automatically classify and remove inappropriate content on YouTube, algorithms that will label your holiday photos for you, and algorithms that can scan your handwriting and classify each mark on the page as a letter of the alphabet.

Association: finding links

Association is all about finding and marking relationships between things. Dating algorithms such as OKCupid have association at their core, looking for connections between members and suggesting matches based on the findings. Amazon’s recommendation engine uses a similar idea, connecting your interests to those of past customers. It’s what led to the intriguing shopping suggestion that confronted Reddit user Kerbobotat after buying a baseball bat on Amazon: ‘Perhaps you’ll be interested in this balaclava?’11

Filtering: isolating what’s important

Algorithms often need to remove some information to focus on what’s important, to separate the signal from the noise. Sometimes they do this literally: speech recognition algorithms, like those running inside Siri, Alexa and Cortana, first need to filter out your voice from the background noise before they can get to work on deciphering what you’re saying. Sometimes they do it figuratively: Facebook and Twitter filter stories that relate to your known interests to design your own personalized feed.

The vast majority of algorithms will be built to perform a combination of the above. Take UberPool, for instance, which matches prospective passengers with others heading in the same direction. Given your start point and end point, it has to filter through the possible routes that could get you home, look for connections with other users headed in the same direction, and pick one group to assign you to – all while prioritizing routes with the fewest turns for the driver, to make the ride as efficient as possible.12

So, that’s what algorithms can do. Now, how do they manage to do it? Well, again, while the possibilities are practically endless, there is a way to distil things. You can think of the approaches taken by algorithms as broadly fitting into two key paradigms, both of which we’ll meet in this book.

Rule-based algorithms

The first type are rule-based. Their instructions are constructed by a human and are direct and unambiguous. You can imagine these algorithms as following the logic of a cake recipe. Step one: do this. Step two: if this, then that. That’s not to imply that these algorithms are simple – there’s plenty of room to build powerful programs within this paradigm.
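To make that idea concrete, here is a minimal, purely illustrative sketch (not an example from the book) of what a rule-based algorithm looks like in code: a handful of unambiguous, human-written "if this, then that" steps, deciding in this case whether a hypothetical cake should come out of the oven. The function name and thresholds are invented for illustration.

```python
# An illustrative rule-based "algorithm": every step is an explicit,
# human-written instruction, just like a recipe. All thresholds are made up.

def cake_is_ready(minutes_in_oven, core_temperature_c, skewer_comes_out_clean):
    """Return True if the (hypothetical) cake should come out of the oven."""
    # Step one: has it baked for long enough?
    if minutes_in_oven < 40:
        return False
    # Step two: if the middle is hot enough and the skewer test passes, it's done.
    if core_temperature_c >= 95 and skewer_comes_out_clean:
        return True
    # Otherwise, keep baking.
    return False

print(cake_is_ready(45, 97, True))   # True
print(cake_is_ready(30, 97, True))   # False: not enough time in the oven yet
```

The point is not the baking, but that a human can read every line and see exactly why the program reached its answer.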
Machine-learning algorithms

The second type are inspired by how living creatures learn. To give you an analogy, think about how you might teach a dog to give you a high five. You don’t need to produce a precise list of instructions and communicate them to the dog. As a trainer, all you need is a clear objective in your mind of what you want the dog to do and some way of rewarding her when she does the right thing. It’s simply about reinforcing good behaviour, ignoring bad, and giving her enough practice to work out what to do for herself. The algorithmic equivalent is known as a machine-learning algorithm, which comes under the broader umbrella of artificial intelligence or AI. You give the machine data, a goal and feedback when it’s on the right track – and leave it to work out the best way of achieving the end.
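By way of contrast with the rule-based sketch above, here is an equally stripped-down, invented illustration of that learning idea. Nowhere in the code is "do a high five" written down as a rule; the program only gets its own attempts as data, a goal held by the trainer, and feedback when it happens to be right, and it works out the rest through trial and error. The actions, the reward function and the numbers are all hypothetical.

```python
import random

# An illustrative machine-learning sketch (not from the book): the "dog" is never
# told which trick is wanted. It tries actions, gets a treat (feedback) when it
# happens to do the right thing, and gradually favours whatever has been rewarded.

actions = ["sit", "roll_over", "high_five"]
reward_counts = {action: 0 for action in actions}

def trainer_gives_treat(action):
    # The goal that only the trainer knows: reward high fives.
    return action == "high_five"

for attempt in range(200):
    if random.random() < 0.2:
        # Sometimes try something at random (practice / exploration).
        choice = random.choice(actions)
    else:
        # Otherwise repeat whatever has earned the most treats so far.
        choice = max(actions, key=lambda a: reward_counts[a])
    if trainer_gives_treat(choice):
        reward_counts[choice] += 1

print(reward_counts)   # "high_five" ends up dominating, without ever being programmed in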
Both types have their pros and cons. Because rule-based algorithms have instructions written by humans, they’re easy to comprehend. In theory, anyone can open them up and follow the logic of what’s happening inside.13 But their blessing is also their curse. Rule-based algorithms will only work for the problems for which humans know how to write instructions.

Machine-learning algorithms, by contrast, have recently proved to be remarkably good at tackling problems where writing a list of instructions won’t work. They can recognize objects in pictures, understand words as we speak them and translate from one language to another – something rule-based algorithms have always struggled with. The downside is that if you let a machine figure out the solution for itself, the route it takes to get there often won’t make a lot of sense to a human observer. The insides can be a mystery, even to the smartest of living programmers.

Take, for instance, the job of image recognition. A group of Japanese researchers recently demonstrated how strange an algorithm’s way of looking at the world can seem to a human. You might have come across the optical illusion where you can’t quite tell if you’re looking at a picture of a vase or of two faces (if not, there’s an example in the notes at the back of the book).14 Here’s the computer equivalent. The team showed that changing a single pixel on the front wheel of the image overleaf was enough to cause a machine-learning algorithm to change its mind from thinking this is a photo of a car to thinking it is a photo of a dog.15

For some, the idea of an algorithm working without explicit instructions is a recipe for disaster. How can we control something we don’t understand? What if the capabilities of sentient, super-intelligent machines transcend those of their makers? How will we ensure that an AI we don’t understand and can’t control isn’t working against us?

These are all interesting hypothetical questions, and there is no shortage of books dedicated to the impending threat of an AI apocalypse. Apologies if that was what you were hoping for, but this book isn’t one of them. Although AI has come on in leaps and bounds of late, it is still only ‘intelligent’ in the narrowest sense of the word. It would probably be more useful to think of what we’ve been through as a revolution in computational statistics than a revolution in intelligence. I know that makes it sound a lot less sexy (unless you’re really into statistics), but it’s a far more accurate description of how things currently stand.

For the time being, worrying about evil AI is a bit like worrying about overcrowding on Mars.* Maybe one day we’ll get to the point where computer intelligence surpasses human intelligence, but we’re nowhere near it yet. Frankly, we’re still quite a long way away from creating hedgehog-level intelligence. So far, no one’s even managed to get past worm.†

Besides, all the hype over AI is a distraction from much more pressing concerns and – I think – much more interesting stories. Forget about omnipotent artificially intelligent machines for a moment and turn your thoughts from the far distant future to the here and now – because there are already algorithms with free rein to act as autonomous decision-makers. To decide prison terms, treatments for cancer patients and what to do in a car crash. They’re already making life-changing choices on our behalf at every turn. The question is, if we’re handing over all that power – are they deserving of our trust?

Blind faith

Sunday, 22 March 2009 wasn’t a good day for Robert Jones. He had just visited some friends and was driving back through the pretty town of Todmorden in West Yorkshire, England, when he noticed the fuel light on his BMW. He had just 7 miles to find a petrol station before he ran out, which was cutting things rather fine. Thankfully his GPS seemed to have found him a short cut – sending him on a narrow winding path up the side of the valley. Robert followed the machine’s instructions, but as he drove, the road got steeper and narrower. After a couple of miles, it turned into a dirt track that barely seemed designed to accommodate horses, let alone cars. But Robert wasn’t fazed. He drove five thousand miles a week for a living and knew how to handle himself behind the wheel. Plus, he thought, he had ‘no reason not to trust the TomTom’.16

Just a short while later, anyone who happened to be looking up from the valley below would have seen the nose of Robert’s BMW appearing over the brink of the cliff above, saved from the hundred-foot drop only by the flimsy wooden fence at the edge he’d just crashed into. It would eventually take a tractor and three quad bikes to recover Robert’s car from where he abandoned it. Later that year, when he appeared in court on charges of reckless driving, he admitted that he didn’t think to over-rule the machine’s instructions. ‘It kept insisting the path was a road,’ he told a newspaper after the incident. ‘So I just trusted it. You don’t expect to be taken nearly over a cliff.’17

No, Robert. I guess you don’t. There’s a moral somewhere in this story. Although he probably felt a little foolish at the time, in ignoring the information in front of his eyes (like seeing a sheer drop out of the car window) and attributing greater intelligence to an algorithm than it deserved, Jones was in good company. After all, Kasparov had fallen into the same trap some twelve years earlier. And, in much quieter but no less profound ways, it’s a mistake almost all of us are guilty of making, perhaps without even realizing.

Back in 2015 scientists set out to examine how search engines like Google have the power to alter our view of the world.18 They wanted to find out if we have healthy limits in the faith we place in their results, or if we would happily follow them over the edge of a metaphorical cliff. The experiment focused around an upcoming election in India. The researchers, led by psychologist Robert Epstein, recruited 2,150 undecided voters from around the country and gave them access to a specially made search engine, called ‘Kadoodle’, to help them learn more about the candidates before deciding who they would vote for. Kadoodle was rigged.
Unbeknown to the participants, they had been split into groups, each of which was shown a slightly different version of the search engine results, biased towards one candidate or another. When members of one group visited the website, all the links at the top of the page would favour one candidate in particular, meaning they’d have to scroll right down through link after link before finally finding a single page that was favourable to anyone else. Different groups were nudged towards different candidates.

It will come as no surprise that the participants spent most of their time reading the websites flagged up at the top of the first page – as that old internet joke says, the best place to hide a dead body is on the second page of Google search results. Hardly anyone in the experiment paid much attention to the links that appeared well down the list. But still, the degree to which the ordering influenced the volunteers’ opinions shocked even Epstein. After only a few minutes of looking at the search engine’s biased results, when asked who they would vote for, participants were a staggering 12 per cent more likely to pick the candidate Kadoodle had favoured.

In an interview with Science in 2015,19 Epstein explained what was going on: ‘We expect the search engine to be making wise choices. What they’re saying is, “Well yes, I see the bias and that’s telling me . . . the search engine is doing its job.”’ Perhaps more ominous, given how much of our information we now get from algorithms like search engines, is how much agency people believed they had in their own opinions: ‘When people are unaware they are being manipulated, they tend to believe they have adopted their new thinking voluntarily,’ Epstein wrote in the original paper.20

Kadoodle, of course, is not the only algorithm to have been accused of subtly manipulating people’s political opinions. We’ll come on to that more in the ‘Data’ chapter, but for now it’s worth noting how the experiment suggests we feel about algorithms that are right most of the time. We end up believing that they always have superior judgement.21 After a while, we’re no longer even aware of our own bias towards them.

All around us, algorithms provide a kind of convenient source of authority. An easy way to delegate responsibility; a short cut that we take without thinking. Who is really going to click through to the second page of Google every time and think critically about every result? Or go to every airline to check if Skyscanner is listing the cheapest deals? Or get out a ruler and a road map to confirm that their GPS is offering the shortest route? Not me, that’s for sure.

But there’s a distinction that needs making here. Because trusting a usually reliable algorithm is one thing. Trusting one without any firm understanding of its quality is quite another.

Artificial intelligence meets natural stupidity

In 2012, a number of disabled people in Idaho were informed that their Medicaid assistance was being cut.22 Although they all qualified for benefits, the state was slashing their financial support – without warning – by as much as 30 per cent,23 leaving them struggling to pay for their care. This wasn’t a political decision; it was the result of a new ‘budget tool’ that had been adopted by the Idaho Department of Health and Welfare – a piece of software that automatically calculated the level of support that each person should receive.24 The problem was, the budget tool’s decisions didn’t seem to make much sense.
As far as anyone could tell from the outside, the numbers it came up with were essentially arbitrary. Some people were given more money than in previous years, while others found their budgets reduced by tens of thousands of dollars, putting them at risk of having to leave their homes to be cared for in an institution.25 Unable to understand why their benefits had been reduced, or to effectively challenge the reduction, the residents turned to the American Civil Liberties Union (ACLU) for help. Their case was taken on by Richard Eppink, legal director of the Idaho division,26 who had this to say in a blog post in 2017: ‘I thought the case would be a simple matter of saying to the state: Okay, tell us why these dollar figures dropped by so much?’27 In fact, it would take four years, four thousand plaintiffs and a class action lawsuit to get to the bottom of what had happened.28

Eppink and his team began by asking for details on how the algorithm worked, but the Medicaid team refused to explain their calculations. They argued that the software that assessed the cases was a ‘trade secret’ and couldn’t be shared.29 Fortunately, the judge presiding over the case disagreed. The budget tool that wielded so much power over the residents was then handed over, and revealed to be – not some sophisticated AI, not some beautifully crafted mathematical model, but an Excel spreadsheet.30

Within the spreadsheet, the calculations were supposedly based on historical cases, but the data was so badly riddled with bugs and errors that it was, for the most part, entirely useless.31 Worse, once the ACLU team managed to unpick the equations, they discovered ‘fundamental statistical flaws in the way that the formula itself was structured’. The budget tool had effectively been producing random results for a huge number of people. The algorithm – if you can call it that – was of such poor quality that the court would eventually rule it unconstitutional.32

There are two parallel threads of human error here. First, someone wrote this garbage spreadsheet; second, others naïvely trusted it. The ‘algorithm’ was in fact just shoddy human work wrapped up in code. So why were the people who worked for the state so eager to defend something so terrible? Here are Eppink’s thoughts on the matter:

It’s just this bias we all have for computerized results – we don’t question them. When a computer generates something – when you have a statistician, who looks at some data, and comes up with a formula – we just trust that formula, without asking ‘hey wait a second, how is this actually working?’33

Now, I know that picking mathematical formulae apart to see how they work isn’t everyone’s favourite pastime (even if it is mine). But Eppink none the less raises an incredibly important point about our human willingness to take algorithms at face value without wondering what’s going on behind the scenes. In my years working as a mathematician with data and algorithms, I’ve come to believe that the only way to objectively judge whether an algorithm is trustworthy is by getting to the bottom of how it works.

In my experience, algorithms are a lot like magical illusions. At first they appear to be nothing short of actual wizardry, but as soon as you know how the trick is done, the mystery evaporates. Often there’s something laughably simple (or worryingly reckless) hiding behind the façade. So, in the chapters that follow, and the algorithms we’ll explore, I’ll try to give you a flavour of what’s going on behind the scenes where I can.
Enough to see how the tricks are done – even if not quite enough to perform them yourself. But even for the most diehard math fans, there are still going to be occasions where algorithms demand you take a blind leap of faith. Perhaps because, as with Skyscanner or Google’s search results, double-checking their working isn’t feasible. Or maybe, like the Idaho budget tool and others we’ll meet, the algorithm is considered a ‘trade secret’. Or perhaps, as in some machine-learning techniques, following the logical process inside the algorithm just isn’t possible.

There will be times when we have to hand over control to the unknown, even while knowing that the algorithm is capable of making mistakes. Times when we are forced to weigh up our own judgement against that of the machine. When, if we decide to trust our instincts instead of its calculations, we’re going to need rather a lot of courage in our convictions.

When to over-rule

Stanislav Petrov was a Russian military officer in charge of monitoring the nuclear early warning system protecting Soviet airspace. His job was to alert his superiors immediately if the computer indicated any sign of an American attack.34

Petrov was on duty on 26 September 1983 when, shortly after midnight, the sirens began to howl. This was the alert that everyone dreaded. Soviet satellites had detected an enemy missile headed for Russian territory. This was the depths of the Cold War, so a strike was certainly plausible, but something gave Petrov pause. He wasn’t sure he trusted the algorithm. It had only detected five missiles, which seemed like an illogically small opening salvo for an American attack.35

Petrov froze in his chair. It was down to him: report the alert, and send the world into almost certain nuclear war; or wait, ignoring protocol, knowing that with every second that passed his country’s leaders had less time to launch a counter-strike.

Fortunately for all of us, Petrov chose the latter. He had no way of knowing for sure that the alarm had sounded in error, but after 23 minutes – which must have felt like an eternity at the time – when it was clear that no nuclear missiles had landed on Russian soil, he finally knew that he had been correct. The algorithm had made a mistake.

If the system had been acting entirely autonomously, without a human like Petrov to act as the final arbiter, history would undoubtedly have played out rather differently. Russia would almost certainly have launched what it believed to be retaliatory action and triggered a full-blown nuclear war in the process. If there’s anything we can learn from this story, it’s that the human element does seem to be a critical part of the process: that having a person with the power of veto in a position to review the suggestions of an algorithm before a decision is made is the only sensible way to avoid mistakes.

After all, only humans will feel the weight of responsibility for their decisions. An algorithm tasked with communicating up to the Kremlin wouldn’t have thought for a second about the potential ramifications of such a decision. But Petrov, on the other hand? ‘I knew perfectly well that nobody would be able to correct my mistake if I had made one.’36

The only problem with this conclusion is that humans aren’t always that reliable either. Sometimes, like Petrov, they’ll be right to over-rule an algorithm. But often our instincts are best ignored.
To give you another example from the world of safety, where stories of humans incorrectly over-ruling an algorithm are mercifully rare, that is none the less precisely what happened during an infamous crash on the Smiler rollercoaster at Alton Towers, the UK’s biggest theme park.37

Back in June 2015, two engineers were called to attend a fault on a rollercoaster. After fixing the issue, they sent an empty carriage around to test everything was working – but failed to notice it never made it back. For whatever reason, the spare carriage rolled backwards down an incline and came to a halt in the middle of the track. Meanwhile, unbeknown to the engineers, the ride staff added an extra carriage to deal with the lengthening queues. Once they got the all-clear from the control room, they started loading up the carriages with cheerful passengers, strapping them in and sending the first car off around the track, completely unaware of the empty, stranded carriage sent out by the engineers sitting directly in its path.

Luckily, the rollercoaster designers had planned for a situation like this, and their safety algorithms worked exactly as planned. To avoid a certain collision, the packed train was brought to a halt at the top of the first climb, setting off an alarm in the control room. But the engineers – confident that they’d just fixed the ride – concluded the automatic warning system was at fault. Over-ruling the algorithm wasn’t easy: they both had to agree and simultaneously press a button to restart the rollercoaster. Doing so sent the train full of people over the drop to crash straight into the stranded extra carriage. The result was horrendous. Several people suffered devastating injuries and two teenage girls lost their legs.

Both of these life-or-death scenarios, Alton Towers and Petrov’s alarm, serve as dramatic illustrations of a much deeper dilemma. In the balance of power between human and algorithm, who – or what – should have the final say?

Power struggle

This is a debate with a long history. In 1954, Paul Meehl, a professor of clinical psychology at the University of Minnesota, annoyed an entire generation of humans when he published Clinical versus Statistical Prediction, coming down firmly on one side of the argument.38

In his book, Meehl systematically compared the performance of humans and algorithms on a whole variety of subjects – predicting everything from students’ grades to patients’ mental health outcomes – and concluded that mathematical algorithms, no matter how simple, will almost always make better predictions than people. Countless other studies in the half-century since have confirmed Meehl’s findings. If your task involves any kind of calculation, put your money on the algorithm every time: in making medical diagnoses or sales forecasts, predicting suicide attempts or career satisfaction, and assessing everything from fitness for military service to projected academic performance.39 The machine won’t be perfect, but giving a human a veto over the algorithm would just add more error.‡

Perhaps this shouldn’t come as a surprise. We’re not built to compute. We don’t go to the supermarket to find a row of cashiers eyeballing our shopping to gauge how much it should cost. We get an (incredibly simple) algorithm to calculate it for us instead. And most of the time, we’d be better off leaving the machine to it. It’s like the saying among airline pilots that the best flying team has three components: a pilot, a computer and a dog.
The computer is there to fly the plane, the pilot is there to feed the dog. And the dog is there to bite the human if it tries to touch the computer.

But there’s a paradox in our relationship with machines. While we have a tendency to over-trust anything we don’t understand, as soon as we know an algorithm can make mistakes, we also have a rather annoying habit of overreacting and dismissing it completely, reverting instead to our own flawed judgement. It’s known to researchers as algorithm aversion. People are less tolerant of an algorithm’s mistakes than of their own – even if their own mistakes are bigger. It’s a phenomenon that has been demonstrated time and time again in experiments,40 and to some extent, you might recognize it in yourself.

Whenever Citymapper says my journey will take longer than I expect it to, I always think I know better (even if most of the time it means I end up arriving late). We’ve all called Siri an idiot at least once, somehow in the process forgetting the staggering technological accomplishment it has taken to build a talking assistant you can hold in your hand. And in the early days of using the mobile GPS app Waze I’d find myself sitting in a traffic jam, having been convinced that taking the back roads would be faster than the route shown. (It almost always wasn’t.) Now I’ve come to trust it and – like Robert Jones and his BMW – I’ll blindly follow it wherever it leads me (although I still think I’d draw the line at going over a cliff).

This tendency of ours to view things in black and white – seeing algorithms as either omnipotent masters or a useless pile of junk – presents quite a problem in our high-tech age. If we’re going to get the most out of technology, we’re going to need to work out a way to be a bit more objective. We need to learn from Kasparov’s mistake and acknowledge our own flaws, question our gut reactions and be a bit more aware of our feelings towards the algorithms around us. On the flip side, we should take algorithms off their pedestal, examine them a bit more carefully and ask if they’re really capable of doing what they claim. That’s the only way to decide if they deserve the power they’ve been given.

Unfortunately, all this is often much easier said than done. Oftentimes, we’ll have little say over the power and reach of the algorithms that surround us, even when it comes to those that affect us directly. This is particularly true for the algorithms that trade in the most fundamental modern commodity: data. The algorithms that silently follow us around the internet, the ones that are harvesting our personal information, invading our privacy and inferring our character with free rein to subtly influence our behaviour. In that perfect storm of misplaced trust and power and influence, the consequences have the potential to fundamentally alter our society.

* This is paraphrased from a comment made by the computer scientist and machine-learning pioneer Andrew Ng in a talk he gave in 2015. See Tech Events, ‘GPU Technology Conference 2015 day 3: What’s Next in Deep Learning’, YouTube, 20 Nov. 2015, https://www.youtube.com/watch?v=qP9TOX8T-kI.

† Simulating the brain of a worm is precisely the goal of the international science project OpenWorm. They’re hoping to artificially reproduce the network of 302 neurons found within the brain of the C. elegans worm. To put that into perspective, we humans have around 100,000,000,000 neurons. See OpenWorm website: http://openworm.org/.
‡ Intriguingly, a rare exception to the superiority of algorithmic performance comes from a selection of studies conducted in the late 1950s and 1960s into the ‘diagnosis’ (their words, not mine) of homosexuality. In those examples, the human judgement made far better predictions, outperforming anything the algorithm could manage – suggesting there are some things so intrinsically human that data and mathematical formulae will always struggle to describe them.