

Summary

This document analyzes the limitations of forecasting by experts, highlighting how experts often attribute success to skill and failure to external factors. It discusses a bias in our perception of random events, and how we perceive ourselves to be better than we actually are. It also examines the limitations of complex mathematical methods in real-world situations.

Full Transcript


yet, the forecasters’ errors were significantly larger than the average difference between individual forecasts, which indicates herding. Normally, forecasts should be as far from one another as they are from the predicted number. But to understand how they manage to stay in business, and why they don’t develop severe nervous breakdowns (with weight loss, erratic behavior, or acute alcoholism), we must look at the work of the psychologist Philip Tetlock.
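Before moving on, here is a minimal sketch of that herding diagnostic. All numbers (the realized value, the error sizes, the size of the shared bias) are illustrative assumptions, not Tetlock’s or anyone’s data; the point is only that independent forecasters show pairwise gaps at least as large as their errors, while herded forecasters cluster tightly around a collectively wrong number:

    import numpy as np

    rng = np.random.default_rng(7)
    truth = 100.0
    n = 50  # number of forecasters

    def mean_abs_error(forecasts):
        # average distance from the realized number
        return np.abs(forecasts - truth).mean()

    def mean_pairwise_gap(forecasts):
        # average distance between any two forecasters
        diffs = np.abs(forecasts[:, None] - forecasts[None, :])
        return diffs[np.triu_indices(len(forecasts), k=1)].mean()

    # Independent forecasters: unbiased, with private errors.
    independent = truth + rng.normal(0, 10, n)

    # Herded forecasters: everyone hugs a shared view that is itself off
    # (the offset of 15 is a hypothetical consensus bias).
    herded = (truth + 15.0) + rng.normal(0, 1, n)

    for name, f in [("independent", independent), ("herded", herded)]:
        print(f"{name:11s}: error {mean_abs_error(f):5.1f}, "
              f"pairwise gap {mean_pairwise_gap(f):5.1f}")

The herding signature is the second line of output: errors several times larger than the gaps between forecasts, exactly the pattern described above.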
I Was “Almost” Right

Tetlock studied the business of political and economic “experts.” He asked various specialists to judge the likelihood of a number of political, economic, and military events occurring within a specified time frame (about five years ahead). The outcomes represented a total number of around twenty-seven thousand predictions, involving close to three hundred specialists. Economists represented about a quarter of his sample. The study revealed that experts’ error rates were clearly many times what they had estimated. His study exposed an expert problem: there was no difference in results whether one had a PhD or an undergraduate degree. Well-published professors had no advantage over journalists. The only regularity Tetlock found was the negative effect of reputation on prediction: those who had a big reputation were worse predictors than those who had none.

But Tetlock’s focus was not so much to show the real competence of experts (although the study was quite convincing with respect to that) as to investigate why the experts did not realize that they were not so good at their own business, in other words, how they spun their stories. There seemed to be a logic to such incompetence, mostly in the form of belief defense, or the protection of self-esteem. He therefore dug further into the mechanisms by which his subjects generated ex post explanations. I will leave aside how one’s ideological commitments influence one’s perception and address the more general aspects of this blind spot toward one’s own predictions.

You tell yourself that you were playing a different game. Let’s say you failed to predict the weakening and precipitous fall of the Soviet Union (which no social scientist saw coming). It is easy to claim that you were excellent at understanding the political workings of the Soviet Union, but that these Russians, being exceedingly Russian, were skilled at hiding from you crucial economic elements. Had you been in possession of such economic intelligence, you would certainly have been able to predict the demise of the Soviet regime. It is not your skills that are to blame. The same might apply to you if you had forecast the landslide victory for Al Gore over George W. Bush. You were not aware that the economy was in such dire straits; indeed, this fact seemed to be concealed from everyone. Hey, you are not an economist, and the game turned out to be about economics.

You invoke the outlier. Something happened that was outside the system, outside the scope of your science. Given that it was not predictable, you are not to blame. It was a Black Swan and you are not supposed to predict Black Swans. Black Swans, NNT tells us, are fundamentally unpredictable (but then I think that NNT would ask you, Why rely on predictions?). Such events are “exogenous,” coming from outside your science. Or maybe it was an event of very, very low probability, a thousand-year flood, and we were unlucky to be exposed to it. But next time, it will not happen.

This focus on the narrow game and linking one’s performance to a given script is how the nerds explain the failures of mathematical methods in society. The model was right, it worked well, but the game turned out to be a different one than anticipated.

The “almost right” defense. Retrospectively, with the benefit of a revision of values and an informational framework, it is easy to feel that it was a close call. Tetlock writes, “Observers of the former Soviet Union who, in 1988, thought the Communist Party could not be driven from power by 1993 or 1998 were especially likely to believe that Kremlin hardliners almost overthrew Gorbachev in the 1991 coup attempt, and they would have if the conspirators had been more resolute and less inebriated, or if key military officers had obeyed orders to kill civilians challenging martial law or if Yeltsin had not acted so bravely.”

I will go now into more general defects uncovered by this example. These “experts” were lopsided: on the occasions when they were right, they attributed it to their own depth of understanding and expertise; when wrong, it was either the situation that was to blame, since it was unusual, or, worse, they did not recognize that they were wrong and spun stories around it. They found it difficult to accept that their grasp was a little short. But this attribute is universal to all our activities: there is something in us designed to protect our self-esteem.

We humans are the victims of an asymmetry in the perception of random events. We attribute our successes to our skills, and our failures to external events outside our control, namely to randomness. We feel responsible for the good stuff, but not for the bad. This causes us to think that we are better than others at whatever we do for a living. Ninety-four percent of Swedes believe that their driving skills put them in the top 50 percent of Swedish drivers; 84 percent of Frenchmen feel that their lovemaking abilities put them in the top half of French lovers.

The other effect of this asymmetry is that we feel a little unique, unlike others, for whom we do not perceive such an asymmetry. I have mentioned the unrealistic expectations about the future on the part of people in the process of tying the knot. Also consider the number of families who tunnel on their future, locking themselves into hard-to-flip real estate thinking they are going to live there permanently, not realizing that the general track record for sedentary living is dire. Don’t they see those well-dressed real-estate agents driving around in fancy two-door German cars? We are very nomadic, far more than we plan to be, and forcibly so. Consider how many people who have abruptly lost their job deemed it likely to occur, even a few days before. Or consider how many drug addicts entered the game willing to stay in it so long.

There is another lesson from Tetlock’s experiment. He found what I mentioned earlier, that many university stars, or “contributors to top journals,” are no better than the average New York Times reader or journalist in detecting changes in the world around them. These sometimes overspecialized experts failed tests in their own specialties.

The hedgehog and the fox. Tetlock distinguishes between two types of predictors, the hedgehog and the fox, according to a distinction promoted by the essayist Isaiah Berlin. As in the ancient line attributed to the poet Archilochus, the hedgehog knows one thing, the fox knows many things—foxes are the adaptable types you need in daily life.
Many of the prediction failures come from hedgehogs who are mentally married to a single big Black Swan event, a big bet that is not likely to play out. The hedgehog is someone focusing on a single, improbable, and consequential event, falling for the narrative fallacy that makes us so blinded by one single outcome that we cannot imagine others. Hedgehogs, because of the narrative fallacy, are easier for us to understand—their ideas work in sound bites. Their category is overrepresented among famous people; ergo famous people are on average worse at forecasting than the rest of the predictors.

I have avoided the press for a long time because whenever journalists hear my Black Swan story, they ask me to give them a list of future impacting events. They want me to be predictive of these Black Swans. Strangely, my book Fooled by Randomness, published a week before September 11, 2001, had a discussion of the possibility of a plane crashing into my office building. So I was naturally asked to show “how I predicted the event.” I didn’t predict it—it was a chance occurrence. I am not playing oracle! I even recently got an e-mail asking me to list the next ten Black Swans. Most fail to get my point about the error of specificity, the narrative fallacy, and the idea of prediction. Contrary to what people might expect, I am not recommending that anyone become a hedgehog—rather, be a fox with an open mind. I know that history is going to be dominated by an improbable event, I just don’t know what that event will be.

Reality? What For?

I found no formal, Tetlock-like comprehensive study in economics journals. But, suspiciously, I found no paper trumpeting economists’ ability to produce reliable projections. So I reviewed what articles and working papers in economics I could find. They collectively show no convincing evidence that economists as a community have an ability to predict, and, if they have some ability, their predictions are at best just slightly better than random ones—not good enough to help with serious decisions.

The most interesting test of how academic methods fare in the real world was run by Spyros Makridakis, who spent part of his career managing competitions between forecasters who practice a “scientific method” called econometrics—an approach that combines economic theory with statistical measurements. Simply put, he made people forecast in real life and then he judged their accuracy. This led to the series of “M-Competitions” he ran, with assistance from Michele Hibon, of which M3 was the third and most recent one, completed in 1999. Makridakis and Hibon reached the sad conclusion that “statistically sophisticated or complex methods do not necessarily provide more accurate forecasts than simpler ones.”

I had an identical experience in my quant days—the foreign scientist with the throaty accent spending his nights on a computer doing complicated mathematics rarely fares better than a cabdriver using the simplest methods within his reach. The problem is that we focus on the rare occasion when these methods work and almost never on their far more numerous failures.
I kept begging anyone who would listen to me: “Hey, I am an uncomplicated, no-nonsense fellow from Amioun, Lebanon, and have trouble understanding why something is considered valuable if it requires running computers overnight but does not enable me to predict better than any other guy from Amioun.” The only reactions I got from these colleagues were related to the geography and history of Amioun rather than a no-nonsense explanation of their business. Here again, you see the narrative fallacy at work, except that in place of journalistic stories you have the more dire situation of the “scientist” with a Russian accent looking in the rearview mirror, narrating with equations, and refusing to look ahead because he may get too dizzy.

The econometrician Robert Engle, an otherwise charming gentleman, invented a very complicated statistical method called GARCH and got a Nobel for it. No one tested it to see if it has any validity in real life. Simpler, less sexy methods fare exceedingly better, but they do not take you to Stockholm. You have an expert problem in Stockholm, and I will discuss it in Chapter 17.

This unfitness of complicated methods seems to apply to all methods. Another study effectively tested practitioners of something called game theory, in which the most notorious player is John Nash, the schizophrenic mathematician made famous by the film A Beautiful Mind. Sadly, for all the intellectual appeal of these methods and all the media attention, its practitioners are no better at predicting than university students.

There is another problem, and it is a little more worrisome. Makridakis and Hibon were to find out that the strong empirical evidence of their studies has been ignored by theoretical statisticians. Furthermore, they encountered shocking hostility toward their empirical verifications. “Instead [statisticians] have concentrated their efforts in building more sophisticated models without regard to the ability of such models to more accurately predict real-life data,” Makridakis and Hibon write.

Someone may counter with the following argument: Perhaps economists’ forecasts create feedback that cancels their effect (this is called the Lucas critique, after the economist Robert Lucas). Let’s say economists predict inflation; in response to these expectations the Federal Reserve acts and lowers inflation. So you cannot judge the forecast accuracy in economics as you would with other events. I agree with this point, but I do not believe that it is the cause of the economists’ failure to predict. The world is far too complicated for their discipline.

When an economist fails to predict outliers he often invokes the issue of earthquakes or revolutions, claiming that he is not into geodesy, atmospheric sciences, or political science, instead of incorporating these fields into his studies and accepting that his field does not exist in isolation. Economics is the most insular of fields; it is the one that quotes least from outside itself! Economics is perhaps the subject that currently has the highest number of philistine scholars—scholarship without erudition and natural curiosity can close your mind and lead to the fragmentation of disciplines.

“OTHER THAN THAT,” IT WAS OKAY

We have used the story of the Sydney Opera House as a springboard for our discussion of prediction. We will now address another constant in human nature: a systematic error made by project planners, coming from a mixture of human nature, the complexity of the world, or the structure of organizations.
In order to survive, institutions may need to give themselves and others the appearance of having a “vision.”

Plans fail because of what we have called tunneling, the neglect of sources of uncertainty outside the plan itself. The typical scenario is as follows. Joe, a nonfiction writer, gets a book contract with a set final date for delivery two years from now. The topic is relatively easy: the authorized biography of the writer Salman Rushdie, for which Joe has compiled ample data. He has even tracked down Rushdie’s former girlfriends and is thrilled at the prospect of pleasant interviews. Two years later, minus, say, three months, he calls to explain to the publisher that he will be a little delayed. The publisher has seen this coming; he is used to authors being late. The publishing house now has cold feet because the subject has unexpectedly faded from public attention—the firm projected that interest in Rushdie would remain high, but attention has faded, seemingly because the Iranians, for some reason, lost interest in killing him.

Let’s look at the source of the biographer’s underestimation of the time for completion. He projected his own schedule, but he tunneled, as he did not forecast that some “external” events would emerge to slow him down. Among these external events were the disasters on September 11, 2001, which set him back several months; trips to Minnesota to assist his ailing mother (who eventually recovered); and many more, like a broken engagement (though not with Rushdie’s ex-girlfriend). “Other than that,” it was all within his plan; his own work did not stray the least from schedule. He does not feel responsible for his failure.*

The unexpected has a one-sided effect with projects. Consider the track records of builders, paper writers, and contractors. The unexpected almost always pushes in a single direction: higher costs and a longer time to completion. On very rare occasions, as with the Empire State Building, you get the opposite: shorter completion and lower costs—these occasions are becoming truly exceptional nowadays.

We can run experiments and test for repeatability to verify if such errors in projection are part of human nature. Researchers have tested how students estimate the time needed to complete their projects. In one representative test, they broke a group into two varieties, optimistic and pessimistic. Optimistic students promised twenty-six days; the pessimistic ones forty-seven days. The average actual time to completion turned out to be fifty-six days.

The example of Joe the writer is not acute. I selected it because it concerns a repeatable, routine task—for such tasks our planning errors are milder. With projects of great novelty, such as a military invasion, an all-out war, or something entirely new, errors explode upward. In fact, the more routine the task, the better you learn to forecast. But there is always something nonroutine in our modern environment.

There may be incentives for people to promise shorter completion dates—in order to win the book contract or in order for the builder to get your down payment and use it for his upcoming trip to Antigua. But the planning problem exists even where there is no incentive to underestimate the duration (or the costs) of the task.
As I said earlier, we are too narrow-minded a species to consider the possibility of events straying from our mental projections, but furthermore, we are too focused on matters internal to the project to take into account external uncertainty, the “unknown unknown,” so to speak, the contents of the unread books.

There is also the nerd effect, which stems from the mental elimination of off-model risks, or focusing on what you know. You view the world from within a model. Consider that most delays and cost overruns arise from unexpected elements that did not enter into the plan—that is, they lay outside the model at hand—such as strikes, electricity shortages, accidents, bad weather, or rumors of Martian invasions. These small Black Swans that threaten to hamper our projects do not seem to be taken into account. They are too abstract—we don’t know how they look and cannot talk about them intelligently.

We cannot truly plan, because we do not understand the future—but this is not necessarily bad news. We could plan while bearing in mind such limitations. It just takes guts.

The Beauty of Technology: Excel Spreadsheets

In the not too distant past, say the precomputer days, projections remained vague and qualitative, one had to make a mental effort to keep track of them, and it was a strain to push scenarios into the future. It took pencils, erasers, reams of paper, and huge wastebaskets to engage in the activity. Add to that an accountant’s love for tedious, slow work. The activity of projecting, in short, was effortful, undesirable, and marred with self-doubt.

But things changed with the intrusion of the spreadsheet. When you put an Excel spreadsheet into computer-literate hands you get a “sales projection” effortlessly extending ad infinitum! Once on a page or on a computer screen, or, worse, in a PowerPoint presentation, the projection takes on a life of its own, losing its vagueness and abstraction and becoming what philosophers call reified, invested with concreteness; it takes on a new life as a tangible object.

My friend Brian Hinchcliffe suggested the following idea when we were both sweating at the local gym. Perhaps the ease with which one can project into the future by dragging cells in these spreadsheet programs is responsible for the armies of forecasters confidently producing longer-term forecasts (all the while tunneling on their assumptions). We have become worse planners than the Soviet Russians thanks to these potent computer programs given to those who are incapable of handling their knowledge. Like most commodity traders, Brian is a man of incisive and sometimes brutally painful realism.

A classical mental mechanism, called anchoring, seems to be at work here. You lower your anxiety about uncertainty by producing a number, then you “anchor” on it, like an object to hold on to in the middle of a vacuum. This anchoring mechanism was discovered by the fathers of the psychology of uncertainty, Danny Kahneman and Amos Tversky, early in their heuristics and biases project. It operates as follows. Kahneman and Tversky had their subjects spin a wheel of fortune. The subjects first looked at the number on the wheel, which they knew was random, then they were asked to estimate the number of African countries in the United Nations. Those who had a low number on the wheel estimated a low number of African nations; those with a high number produced a higher estimate. Similarly, ask someone to provide you with the last four digits of his social security number.
Then ask him to estimate the number of dentists in Manhattan. You will find that by making him aware of the four-digit number, you elicit an estimate that is correlated with it. We use reference points in our heads, say sales projections, and start building beliefs around them because less mental effort is needed to compare an idea to a reference point than to evaluate it in the absolute (System 1 at work!). We cannot work without a point of reference. So the introduction of a reference point in the forecaster’s mind will work wonders. This is no different from a starting point in a bargaining episode: you open with a high number (“I want a million for this house”); the bidder will answer “only eight-fifty”—the discussion will be determined by that initial level.

The Character of Prediction Errors

Like many biological variables, life expectancy is from Mediocristan, that is, it is subjected to mild randomness. It is not scalable, since the older we get, the less likely we are to live. In a developed country a newborn female is expected to die at around 79, according to insurance tables. When she reaches her 79th birthday, her life expectancy, assuming that she is in typical health, is another 10 years. At the age of 90, she should have another 4.7 years to go. At the age of 100, 2.5 years. At the age of 119, if she miraculously lives that long, she should have about nine months left. As she lives beyond the expected date of death, the number of additional years to go decreases. This illustrates the major property of random variables related to the bell curve. The conditional expectation of additional life drops as a person gets older.

With human projects and ventures we have another story. These are often scalable, as I said in Chapter 3. With scalable variables, the ones from Extremistan, you will witness the exact opposite effect. Let’s say a project is expected to terminate in 79 days, the same expectation in days as the newborn female has in years. On the 79th day, if the project is not finished, it will be expected to take another 25 days to complete. But on the 90th day, if the project is still not completed, it should have about 58 days to go. On the 100th, it should have 89 days to go. On the 119th, it should have an extra 149 days. On day 600, if the project is not done, you will be expected to need an extra 1,590 days. As you see, the longer you wait, the longer you will be expected to wait.

Let’s say you are a refugee waiting for the return to your homeland. Each day that passes you are getting farther from, not closer to, the day of triumphal return. The same applies to the completion date of your next opera house. If it was expected to take two years, and three years later you are asking questions, do not expect the project to be completed any time soon. If wars last on average six months, and your conflict has been going on for two years, expect another few years of problems. The Arab-Israeli conflict is sixty years old, and counting—yet it was considered “a simple problem” sixty years ago. (Always remember that, in a modern environment, wars last longer and kill more people than is typically planned.)

Another example: Say that you send your favorite author a letter, knowing that he is busy and has a two-week turnaround. If three weeks later your mailbox is still empty, do not expect the letter to come tomorrow—it will take on average another three weeks. If three months later you still have nothing, you will have to expect to wait another year. Each day will bring you closer to your death but further from the receipt of the letter. This subtle but extremely consequential property of scalable randomness is unusually counterintuitive. We misunderstand the logic of large deviations from the norm. I will get deeper into these properties of scalable randomness in Part Three. But let us say for now that they are central to our misunderstanding of the business of prediction.
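A small simulation makes the contrast between the two regimes concrete. The distributions and parameters below are my own illustrative assumptions (a Gaussian for lifetimes, a Pareto for project durations, both with unconditional mean near 79), not the insurance tables or project figures cited above:

    import numpy as np

    rng = np.random.default_rng(42)

    # Mediocristan: lifetimes clustered around a mean (Gaussian, in years).
    lifetimes = rng.normal(79, 10, size=1_000_000)
    lifetimes = lifetimes[lifetimes > 0]

    # Extremistan: fat-tailed project durations (Pareto, in days), with the
    # scale chosen so the unconditional mean is also 79.
    alpha = 1.5                       # tail exponent (hypothetical)
    xm = 79 * (alpha - 1) / alpha     # minimum value giving mean 79
    durations = xm * (1 + rng.pareto(alpha, size=1_000_000))

    def expected_remaining(samples, t):
        # E[X - t | X > t]: expected additional wait given survival past t
        survivors = samples[samples > t]
        return survivors.mean() - t

    for t in [79, 90, 100, 119]:
        print(f"past {t:3d}: life {expected_remaining(lifetimes, t):5.1f} "
              f"more years | project {expected_remaining(durations, t):7.1f} more days")

The first column shrinks as the threshold rises; the second grows roughly in proportion to it. That is the whole difference between the bell curve and scalable randomness.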
DON’T CROSS A RIVER IF IT IS (ON AVERAGE) FOUR FEET DEEP

Corporate and government projections have an additional easy-to-spot flaw: they do not attach a possible error rate to their scenarios. Even in the absence of Black Swans this omission would be a mistake.

I once gave a talk to policy wonks at the Woodrow Wilson Center in Washington, D.C., challenging them to be aware of our weaknesses in seeing ahead. The attendees were tame and silent. What I was telling them was against everything they believed and stood for; I had gotten carried away with my aggressive message, but they looked thoughtful, compared to the testosterone-charged characters one encounters in business. I felt guilty for my aggressive stance. Few asked questions. The person who organized the talk and invited me must have been pulling a joke on his colleagues. I was like an aggressive atheist making his case in front of a synod of cardinals, while dispensing with the usual formulaic euphemisms.

Yet some members of the audience were sympathetic to the message. One anonymous person (he is employed by a governmental agency) explained to me privately after the talk that in January 2004 his department was forecasting the price of oil for twenty-five years later at $27 a barrel, slightly higher than what it was at the time. Six months later, around June 2004, after oil doubled in price, they had to revise their estimate to $54 (the price of oil is currently, as I am writing these lines, close to $79 a barrel). It did not dawn on them that, given that their forecast was off so early and so markedly, it was ludicrous to forecast a second time, and that this business of forecasting had to be somehow questioned. And they were looking twenty-five years ahead! Nor did it hit them that there was something called an error rate to take into account.*

Forecasting without incorporating an error rate uncovers three fallacies, all arising from the same misconception about the nature of uncertainty.

The first fallacy: variability matters. The first error lies in taking a projection too seriously, without heeding its accuracy. Yet, for planning purposes, the accuracy in your forecast matters far more than the forecast itself. I will explain it as follows: don’t cross a river if it is four feet deep on average. You would take a different set of clothes on your trip to some remote destination if I told you that the temperature was expected to be seventy degrees Fahrenheit, with an expected error rate of forty degrees, than if I told you that my margin of error was only five degrees. The policies we need to make decisions on should depend far more on the range of possible outcomes than on the expected final number.
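The river maxim can be put in numbers. A toy sketch, with made-up depth distributions (both rivers average four feet; only the spread differs):

    import numpy as np

    rng = np.random.default_rng(0)
    N = 1_000_000

    # Hypothetical depth profiles: same mean, different margins of error.
    calm = np.clip(rng.normal(4.0, 0.3, N), 0, None)  # off by inches
    wild = np.clip(rng.normal(4.0, 2.5, N), 0, None)  # wildly variable

    for name, depth in [("calm", calm), ("wild", wild)]:
        print(f"{name}: mean {depth.mean():.1f} ft, "
              f"P(depth > 6 ft) = {(depth > 6).mean():.1%}")

Both rivers are four feet deep on average; only one of them has a material chance of being over your head. The decision-relevant quantity is the probability of the bad outcome, not the mean.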
I have seen, while working for a bank, how people project cash flows for companies without wrapping them in the thinnest layer of uncertainty. Go to the stockbroker and check on what method they use to forecast sales ten years ahead to “calibrate” their valuation models. Go find out how analysts forecast government deficits. Go to a bank or security-analysis training program and see how they teach trainees to make assumptions; they do not teach you to build an error rate around those assumptions—but their error rate is so large that it is far more significant than the projection itself!

The second fallacy lies in failing to take into account forecast degradation as the projected period lengthens. We do not realize the full extent of the difference between near and far futures. Yet the degradation in such forecasting through time becomes evident through simple introspective examination—without even recourse to scientific papers, which on this topic are suspiciously rare. Consider forecasts, whether economic or technological, made in 1905 for the following quarter of a century. How close to the projections did 1925 turn out to be? For a convincing experience, go read George Orwell’s 1984. Or look at more recent forecasts made in 1975 about the prospects for the new millennium. Many events have taken place and new technologies have appeared that lay outside the forecasters’ imaginations; many more that were expected to take place or appear did not do so. Our forecast errors have traditionally been enormous, and there may be no reasons for us to believe that we are suddenly in a more privileged position to see into the future compared to our blind predecessors. Forecasting by bureaucrats tends to be used for anxiety relief rather than for adequate policy making.

The third fallacy, and perhaps the gravest, concerns a misunderstanding of the random character of the variables being forecast. Owing to the Black Swan, these variables can accommodate far more optimistic—or far more pessimistic—scenarios than are currently expected. Recall from my experiment with Dan Goldstein testing the domain-specificity of our intuitions, how we tend to make no mistakes in Mediocristan, but make large ones in Extremistan as we do not realize the consequences of the rare event.

What is the implication here? Even if you agree with a given forecast, you have to worry about the real possibility of significant divergence from it. These divergences may be welcomed by a speculator who does not depend on steady income; a retiree, however, with set risk attributes cannot afford such gyrations. I would go even further and, using the argument about the depth of the river, state that it is the lower bound of estimates (i.e., the worst case) that matters when engaging in a policy—the worst case is far more consequential than the forecast itself. This is particularly true if the bad scenario is not acceptable. Yet the current phraseology makes no allowance for that. None.

It is often said that “he is wise who can see things coming.” Perhaps the wise one is the one who knows that he cannot see things far away.
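The second fallacy can also be sketched in a few lines. Assume, purely for illustration, that the forecast variable follows a random walk (not a claim about any real series); then the error of a “no change” forecast grows with the square root of the horizon:

    import numpy as np

    rng = np.random.default_rng(1)

    # 100,000 simulated futures, 25 periods ahead, unit-variance steps.
    steps = rng.normal(0, 1, size=(100_000, 25))
    paths = steps.cumsum(axis=1)   # cumulative deviation from today's value

    for horizon in [1, 5, 10, 25]:
        rmse = np.sqrt((paths[:, horizon - 1] ** 2).mean())
        print(f"horizon {horizon:2d}: RMSE of today's forecast = {rmse:4.2f}")

Under this charitable, thin-tailed assumption the twenty-five-period forecast is already five times less accurate than the one-period forecast; with Black Swans in the picture the degradation is worse and harder to bound.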
Get Another Job

The two typical replies I face when I question forecasters’ business are: “What should he do? Do you have a better way for us to predict?” and “If you’re so smart, show me your own prediction.” In fact, the latter question, usually boastfully presented, aims to show the superiority of the practitioner and “doer” over the philosopher, and mostly comes from people who do not know that I was a trader. If there is one advantage of having been in the daily practice of uncertainty, it is that one does not have to take any crap from bureaucrats. One of my clients asked for my predictions. When I told him I had none, he was offended and decided to dispense with my services.

There is in fact a routine, unintrospective habit of making businesses answer questionnaires and fill out paragraphs showing their “outlooks.” I have never had an outlook and have never made professional predictions—but at least I know that I cannot forecast, and a small number of people (those I care about) take that as an asset.

There are those people who produce forecasts uncritically. When asked why they forecast, they answer, “Well, that’s what we’re paid to do here.” My suggestion: get another job. This suggestion is not too demanding: unless you are a slave, I assume you have some amount of control over your job selection. Otherwise this becomes a problem of ethics, and a grave one at that. People who are trapped in their jobs who forecast simply because “that’s my job,” knowing pretty well that their forecast is ineffectual, are not what I would call ethical. What they do is no different from repeating lies simply because “it’s my job.” Anyone who causes harm by forecasting should be treated as either a fool or a liar. Some forecasters cause more damage to society than criminals. Please, don’t drive a school bus blindfolded.

At JFK

At New York’s JFK airport you can find gigantic newsstands with walls full of magazines. They are usually manned by a very polite family from the Indian subcontinent (just the parents; the children are in medical school). These walls present you with the entire corpus of what an “informed” person needs in order “to know what’s going on.” I wonder how long it would take to read every single one of these magazines, excluding the fishing and motorcycle periodicals (but including the gossip magazines—you might as well have some fun). Half a lifetime? An entire lifetime?

[Illustration: Caravaggio’s The Fortune-Teller. We have always been suckers for those who tell us about the future. In this picture the fortune-teller is stealing the victim’s ring.]

Sadly, all this knowledge would not help the reader to forecast what is to happen tomorrow. Actually, it might decrease his ability to forecast. There is another aspect to the problem of prediction: its inherent limitations, those that have little to do with human nature, but instead arise from the very nature of information itself. I have said that the Black Swan has three attributes: unpredictability, consequences, and retrospective explainability. Let us examine this unpredictability business.*

* The book you have in your hands is approximately and “unexpectedly” fifteen months late.

* While forecast errors have always been entertaining, commodity prices have been a great trap for suckers. Consider this 1970 forecast by U.S. officials (signed by the U.S. Secretaries of the Treasury, State, Interior, and Defense): “the standard price of foreign crude oil by 1980 may well decline and will in any event not experience a substantial increase.” Oil prices went up tenfold by 1980. I just wonder if current forecasters lack in intellectual curiosity or if they are intentionally ignoring forecast errors. Also note this additional aberration: since high oil prices are marking up their inventories, oil companies are making record bucks and oil executives are getting huge bonuses because “they did a good job”—as if they brought profits by causing the rise of oil prices.

* I owe the reader an answer concerning Catherine’s lover count. She had only twelve.
Chapter Eleven

HOW TO LOOK FOR BIRD POOP

Popper’s prediction about the predictors—Poincaré plays with billiard balls—Von Hayek is allowed to be irreverent—Anticipation machines—Paul Samuelson wants you to be rational—Beware the philosopher—Demand some certainties.

We’ve seen that a) we tend to both tunnel and think “narrowly” (epistemic arrogance), and b) our prediction record is highly overestimated—many people who think they can predict actually can’t. We will now go deeper into the unadvertised structural limitations on our ability to predict. These limitations may arise not from us but from the nature of the activity itself—too complicated, not just for us, but for any tools we have or can conceivably obtain. Some Black Swans will remain elusive, enough to kill our forecasts.

HOW TO LOOK FOR BIRD POOP

In the summer of 1998 I worked at a European-owned financial institution. It wanted to distinguish itself by being rigorous and farsighted. The unit involved in trading had five managers, all serious-looking (always in dark blue suits, even on dress-down Fridays), who had to meet throughout the summer in order “to formulate the five-year plan.” This was supposed to be a meaty document, a sort of user’s manual for the firm. A five-year plan? To a fellow deeply skeptical of the central planner, the notion was ludicrous; growth within the firm had been organic and unpredictable, bottom-up not top-down. It was well known that the firm’s most lucrative department was the product of a chance call from a customer asking for a specific but strange financial transaction. The firm accidentally realized that they could build a unit just to handle these transactions, since they were profitable, and it rapidly grew to dominate their activities.

The managers flew across the world in order to meet: Barcelona, Hong Kong, et cetera. A lot of miles for a lot of verbiage. Needless to say they were usually sleep-deprived. Being an executive does not require very developed frontal lobes, but rather a combination of charisma, a capacity to sustain boredom, and the ability to shallowly perform on harrying schedules. Add to these tasks the “duty” of attending opera performances.

The managers sat down to brainstorm during these meetings, about, of course, the medium-term future—they wanted to have “vision.” But then an event occurred that was not in the previous five-year plan: the Black Swan of the Russian financial default of 1998 and the accompanying meltdown of the values of Latin American debt markets. It had such an effect on the firm that, although the institution had a sticky employment policy of retaining managers, none of the five was still employed there a month after the sketch of the 1998 five-year plan. Yet I am confident that today their replacements are still meeting to work on the next “five-year plan.” We never learn.

Inadvertent Discoveries

The discovery of human epistemic arrogance, as we saw in the previous chapter, was allegedly inadvertent. But so were many other discoveries as well. Many more than we think. The classical model of discovery is as follows: you search for what you know (say, a new way to reach India) and find something you didn’t know was there (America). If you think that the inventions we see around us came from someone sitting in a cubicle and concocting them according to a timetable, think again: almost everything of the moment is the product of serendipity.
The term serendipity was coined in a letter by the writer Horace Walpole, who derived it from a fairy tale, “The Three Princes of Serendip.” These princes “were always making discoveries by accident or sagacity, of things which they were not in quest of.” In other words, you find something you are not looking for and it changes the world, while wondering after its discovery why it “took so long” to arrive at something so obvious. No journalist was present when the wheel was invented, but I am ready to bet that people did not just embark on the project of inventing the wheel (that main engine of growth) and then complete it according to a timetable. Likewise with most inventions.

Sir Francis Bacon commented that the most important advances are the least predictable ones, those “lying out of the path of the imagination.” Bacon was not the last intellectual to point this out. The idea keeps popping up, yet then rapidly dying out. Almost half a century ago, the bestselling novelist Arthur Koestler wrote an entire book about it, aptly called The Sleepwalkers. It describes discoverers as sleepwalkers stumbling upon results and not realizing what they have in their hands. We think that the import of Copernicus’s discoveries concerning planetary motions was obvious to him and to others in his day; he had been dead seventy-five years before the authorities started getting offended. Likewise we think that Galileo was a victim in the name of science; in fact, the church didn’t take him too seriously. It seems, rather, that Galileo caused the uproar himself by ruffling a few feathers. At the end of the year in which Darwin and Wallace presented their papers on evolution by natural selection that changed the way we view the world, the president of the Linnean Society, where the papers were presented, announced that the society saw “no striking discovery,” nothing in particular that could revolutionize science.

We forget about unpredictability when it is our turn to predict. This is why people can read this chapter and similar accounts, agree entirely with them, yet fail to heed their arguments when thinking about the future.

Take this dramatic example of a serendipitous discovery. Alexander Fleming was cleaning up his laboratory when he found that penicillium mold had contaminated one of his old experiments. He thus happened upon the antibacterial properties of penicillin, the reason many of us are alive today (including, as I said in Chapter 8, myself, for typhoid fever is often fatal when untreated). True, Fleming was looking for “something,” but the actual discovery was simply serendipitous. Furthermore, while in hindsight the discovery appears momentous, it took a very long time for health officials to realize the importance of what they had on their hands. Even Fleming lost faith in the idea before it was subsequently revived.

In 1965 two radio astronomers at Bell Labs in New Jersey who were mounting a large antenna were bothered by a background noise, a hiss, like the static that you hear when you have bad reception. The noise could not be eradicated—even after they cleaned the bird excrement out of the dish, since they were convinced that bird poop was behind the noise. It took a while for them to figure out that what they were hearing was the trace of the birth of the universe, the cosmic background microwave radiation. This discovery revived the big bang theory, a languishing idea that was posited by earlier researchers.
I found the following comments on Bell Labs’ website on how this “discovery” was one of the century’s greatest advances:

Dan Stanzione, then Bell Labs president and Lucent’s chief operating officer when Penzias [one of the radio astronomers involved in the discovery] retired, said Penzias “embodies the creativity and technical excellence that are the hallmarks of Bell Labs.” He called him a Renaissance figure who “extended our fragile understanding of creation, and advanced the frontiers of science in many important areas.”

Renaissance shmenaissance. The two fellows were looking for bird poop! Not only were they not looking for anything remotely like the evidence of the big bang but, as usual in these cases, they did not immediately see the importance of their find. Sadly, the physicist Ralph Alpher, the person who initially conceived of the idea, in a paper coauthored with heavyweights George Gamow and Hans Bethe, was surprised to read about the discovery in The New York Times. In fact, in the languishing papers positing the birth of the universe, scientists were doubtful whether such radiation could ever be measured. As happens so often in discovery, those looking for evidence did not find it; those not looking for it found it and were hailed as discoverers.

We have a paradox. Not only have forecasters generally failed dismally to foresee the drastic changes brought about by unpredictable discoveries, but incremental change has turned out to be generally slower than forecasters expected. When a new technology emerges, we either grossly underestimate or severely overestimate its importance. Thomas Watson, the longtime head of IBM, once predicted that there would be no need for more than just a handful of computers.

That the reader of this book is probably reading these lines not on a screen but in the pages of that anachronistic device, the book, would seem quite an aberration to certain pundits of the “digital revolution.” That you are reading them in archaic, messy, and inconsistent English, French, or Swahili, instead of in Esperanto, defies the predictions of half a century ago that the world would soon be communicating in a logical, unambiguous, and Platonically designed lingua franca. Likewise, we are not spending long weekends in space stations as was universally predicted three decades ago. In an example of corporate arrogance, after the first moon landing the now-defunct airline Pan Am took advance bookings for round-trips between earth and the moon. Nice prediction, except that the company failed to foresee that it would be out of business not long after.

A Solution Waiting for a Problem

Engineers tend to develop tools for the pleasure of developing tools, not to induce nature to yield its secrets. It so happens that some of these tools bring us more knowledge; because of the silent evidence effect, we forget to consider tools that accomplished nothing but keeping engineers off the streets. Tools lead to unexpected discoveries, which themselves lead to other unexpected discoveries. But rarely do our tools seem to work as intended; it is only the engineer’s gusto and love for the building of toys and machines that contribute to the augmentation of our knowledge. Knowledge does not progress from tools designed to verify or help theories, but rather the opposite. The computer was not built to allow us to develop new, visual, geometric mathematics, but for some other purpose. It happened to allow us to discover mathematical objects that few cared to look for.
Nor was the computer invented to let you chat with your friends in Siberia, but it has caused some long-distance relationships to bloom. As an essayist, I can attest that the Internet has helped me to spread my ideas by bypassing journalists. But this was not the stated purpose of its military designer.

The laser is a prime illustration of a tool made for a given purpose (actually no real purpose) that then found applications that were not even dreamed of at the time. It was a typical “solution looking for a problem.” Among the early applications was the surgical stitching of detached retinas. Half a century later, The Economist asked Charles Townes, the alleged inventor of the laser, if he had had retinas on his mind. He had not. He was satisfying his desire to split light beams, and that was that. In fact, Townes’s colleagues teased him quite a bit about the irrelevance of his discovery. Yet just consider the effects of the laser in the world around you: compact disks, eyesight corrections, microsurgery, data storage and retrieval—all unforeseen applications of the technology.*

We build toys. Some of those toys change the world.

Keep Searching

In the summer of 2005 I was the guest of a biotech company in California that had found inordinate success. I was greeted with T-shirts and pins showing a bell-curve buster and the announcement of the formation of the Fat Tails Club (“fat tails” is a technical term for Black Swans). This was my first encounter with a firm that lived off Black Swans of the positive kind. I was told that a scientist managed the company and that he had the instinct, as a scientist, to just let scientists look wherever their instinct took them. Commercialization came later. My hosts, scientists at heart, understood that research involves a large element of serendipity, which can pay off big as long as one knows how serendipitous the business can be and structures it around that fact. Viagra, which changed the mental outlook and social mores of retired men, was meant to be a hypertension drug. Another hypertension drug led to a hair-growth medication. My friend Bruce Goldberg, who understands randomness, calls these unintended side applications “corners.” While many worry about unintended consequences, technology adventurers thrive on them.

The biotech company seemed to follow implicitly, though not explicitly, Louis Pasteur’s adage about creating luck by sheer exposure. “Luck favors the prepared,” Pasteur said, and, like all great discoverers, he knew something about accidental discoveries. The best way to get maximal exposure is to keep researching. Collect opportunities—on that, later.

To predict the spread of a technology implies predicting a large element of fads and social contagion, which lie outside the objective utility of the technology itself (assuming there is such an animal as objective utility). How many wonderfully useful ideas have ended up in the cemetery, such as the Segway, an electric scooter that, it was prophesied, would change the morphology of cities, and many others. As I was mentally writing these lines I saw a Time magazine cover at an airport stand announcing the “meaningful inventions” of the year. These inventions seemed to be meaningful as of the issue date, or perhaps for a couple of weeks after. Journalists can teach us how to not learn.

HOW TO PREDICT YOUR PREDICTIONS!

This brings us to Sir Doktor Professor Karl Raimund Popper’s attack on historicism. As I said in Chapter 5, this was his most significant insight, but it remains his least known.
People who do not really know his work tend to focus on Popperian falsification, which addresses the verification or nonverification of claims. This focus obscures his central idea: he made skepticism a method, he made of a skeptic someone constructive. Just as Karl Marx wrote, in great irritation, a diatribe called The Misery of Philosophy in response to Proudhon’s The Philosophy of Misery, Popper, irritated by some of the philosophers of his time who believed in the scientific understanding of history, wrote, as a pun, The Misery of Historicism (which has been translated as The Poverty of Historicism).*

Popper’s insight concerns the limitations in forecasting historical events and the need to downgrade “soft” areas such as history and social science to a level slightly above aesthetics and entertainment, like butterfly or coin collecting. (Popper, having received a classical Viennese education, didn’t go quite that far; I do. I am from Amioun.) What we call here soft historical sciences are narrative dependent studies.

Popper’s central argument is that in order to predict historical events you need to predict technological innovation, itself fundamentally unpredictable. “Fundamentally” unpredictable? I will explain what he means using a modern framework. Consider the following property of knowledge: If you expect that you will know tomorrow with certainty that your boyfriend has been cheating on you all this time, then you know today with certainty that your boyfriend is cheating on you and will take action today, say, by grabbing a pair of scissors and angrily cutting all his Ferragamo ties in half. You won’t tell yourself, This is what I will figure out tomorrow, but today is different so I will ignore the information and have a pleasant dinner. This point can be generalized to all forms of knowledge. There is actually a law in statistics called the law of iterated expectations, which I outline here in its strong form: if I expect to expect something at some date in the future, then I already expect that something at present.
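In standard probability notation (my formalization, not the book’s), with information sets available at dates s and t, the law reads:

    % Law of iterated expectations (the "tower property").
    % F_s and F_t are the information available at dates s <= t,
    % so F_s is contained in F_t.
    \[
      \mathbb{E}\bigl[\,\mathbb{E}[X \mid \mathcal{F}_t]\,\bigm|\,\mathcal{F}_s\bigr]
        \;=\; \mathbb{E}[X \mid \mathcal{F}_s], \qquad s \le t .
    \]

In words: your expectation today of your expectation tomorrow is already your expectation today. You cannot expect to revise a forecast in a direction you can currently foresee; if you could, you would have revised it already.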
Consider the wheel again. If you are a Stone Age historical thinker called on to predict the future in a comprehensive report for your chief tribal planner, you must project the invention of the wheel or you will miss pretty much all of the action. Now, if you can prophesy the invention of the wheel, you already know what a wheel looks like, and thus you already know how to build a wheel, so you are already on your way. The Black Swan needs to be predicted!

But there is a weaker form of this law of iterated knowledge. It can be phrased as follows: to understand the future to the point of being able to predict it, you need to incorporate elements from this future itself. If you know about the discovery you are about to make in the future, then you have almost made it. Assume that you are a special scholar in Medieval University’s Forecasting Department specializing in the projection of future history (for our purposes, the remote twentieth century). You would need to hit upon the inventions of the steam machine, electricity, the atomic bomb, and the Internet, as well as the institution of the airplane onboard massage and that strange activity called the business meeting, in which well-fed, but sedentary, men voluntarily restrict their blood circulation with an expensive device called a necktie.

This incapacity is not trivial. The mere knowledge that something has been invented often leads to a series of inventions of a similar nature, even though not a single detail of this invention has been disseminated—there is no need to find the spies and hang them publicly. In mathematics, once a proof of an arcane theorem has been announced, we frequently witness the proliferation of similar proofs coming out of nowhere, with occasional accusations of leakage and plagiarism. There may be no plagiarism: the information that the solution exists is itself a big piece of the solution. By the same logic, we are not easily able to conceive of future inventions (if we were, they would have already been invented). On the day when we are able to foresee inventions we will be living in a state where everything conceivable has been invented. Our own condition brings to mind the apocryphal story from 1899 when the head of the U.S. patent office resigned because he deemed that there was nothing left to discover—except that on that day the resignation would be justified.*

Popper was not the first to go after the limits to our knowledge. In Germany, in the late nineteenth century, Emil du Bois-Reymond claimed that ignoramus et ignorabimus—we are ignorant and will remain so. Somehow his ideas went into oblivion. But not before causing a reaction: the mathematician David Hilbert set out to defy him by drawing up a list of problems that mathematicians would need to solve over the next century.

Even du Bois-Reymond was wrong. We are not even good at understanding the unknowable. Consider the statements we make about things that we will never come to know—we confidently underestimate what knowledge we may acquire in the future. Auguste Comte, the founder of the school of positivism, which is (unfairly) accused of aiming at the scientization of everything in sight, declared that mankind would forever remain ignorant of the chemical composition of the fixed stars. But, as Charles Sanders Peirce reported, “The ink was scarcely dry upon the printed page before the spectroscope was discovered and that which he had deemed absolutely unknowable was well on the way of getting ascertained.” Ironically, Comte’s other projections, concerning what we would come to learn about the workings of society, were grossly—and dangerously—overstated. He assumed that society was like a clock that would yield its secrets to us.

I’ll summarize my argument here: Prediction requires knowing about technologies that will be discovered in the future. But that very knowledge would almost automatically allow us to start developing those technologies right away. Ergo, we do not know what we will know. Some might say that the argument, as phrased, seems obvious, that we always think that we have reached definitive knowledge but don’t notice that those past societies we laugh at also thought the same way. My argument is trivial, so why don’t we take it into account? The answer lies in a pathology of human nature. Remember the psychological discussions on asymmetries in the perception of skills in the previous chapter? We see flaws in others and not in ourselves. Once again we seem to be wonderful self-deceit machines.

[Illustration: Monsieur le professeur Henri Poincaré. Somehow they stopped making this kind of thinker. Courtesy of Université Nancy-2.]

THE NTH BILLIARD BALL

Henri Poincaré, in spite of his fame, is regularly considered to be an undervalued scientific thinker, given that it took close to a century for some of his ideas to be appreciated.
He was perhaps the last great thinking mathematician (or possibly the reverse, a mathematical thinker). Every time I see a T-shirt bearing the picture of the modern icon Albert Einstein, I cannot help thinking of Poincaré—Einstein is worthy of our reverence, but he has displaced many others. There is so little room in our consciousness; it is winner-take-all up there.

Third Republic–Style Decorum

Again, Poincaré is in a class by himself. I recall my father recommending Poincaré’s essays, not just for their scientific content, but for the quality of his French prose. The grand master wrote these wonders as serialized articles and composed them like extemporaneous speeches. As in every masterpiece, you see a mixture of repetitions, digressions, everything a “me too” editor with a prepackaged mind would condemn—but these make his text even more readable owing to an iron consistency of thought.

Poincaré became a prolific essayist in his thirties. He seemed in a hurry and died prematurely, at fifty-eight; he was in such a rush that he did not bother correcting typos and grammatical errors in his text, even after spotting them, since he found doing so a gross misuse of his time. They no longer make geniuses like that—or they no longer let them write in their own way.

Poincaré’s reputation as a thinker waned rapidly after his death. His idea that concerns us took almost a century to resurface, but in another form. It was indeed a great mistake that I did not carefully read his essays as a child, for in his magisterial La Science et l’Hypothèse, I discovered later, he angrily disparages the use of the bell curve.

I will repeat that Poincaré was the true kind of philosopher of science: his philosophizing came from his witnessing the limits of the subject itself, which is what true philosophy is all about. I love to tick off French literary intellectuals by naming Poincaré as my favorite French philosopher. “Him a philosophe? What do you mean, monsieur?” It is always frustrating to explain to people that the thinkers they put on pedestals, such as Henri Bergson or Jean-Paul Sartre, are largely the result of fashion production and can’t come close to Poincaré in terms of sheer influence that will continue for centuries to come. In fact, there is a scandal of prediction going on here, since it is the French Ministry of National Education that decides who is a philosopher and which philosophers need to be studied.

I am looking at Poincaré’s picture. He was a bearded, portly and imposing, well-educated patrician gentleman of the French Third Republic, a man who lived and breathed general science, looked deep into his subject, and had an astonishing breadth of knowledge. He was part of the class of mandarins that gained respectability in the late nineteenth century: upper middle class, powerful, but not exceedingly rich. His father was a doctor and professor of medicine, his uncle was a prominent scientist and administrator, and his cousin Raymond became a president of the republic of France. These were the days when the grandchildren of businessmen and wealthy landowners headed for the intellectual professions. However, I can hardly imagine him on a T-shirt, or sticking out his tongue like in that famous picture of Einstein. There is something non-playful about him, a Third Republic style of dignity.
In his day, Poincaré was thought to be the king of mathematics and science, except of course by a few narrow-minded mathematicians like Charles Hermite who considered him too intuitive, too intellectual, or too "hand-waving." When mathematicians say "hand-waving," disparagingly, about someone's work, it means that the person has: a) insight, b) realism, c) something to say, and it means that d) he is right because that's what critics say when they can't find anything more negative. A nod from Poincaré made or broke a career. Many claim that Poincaré figured out relativity before Einstein—and that Einstein got the idea from him—but that he did not make a big deal out of it. These claims are naturally made by the French, but there seems to be some validation from Einstein's friend and biographer Abraham Pais. Poincaré was too aristocratic in both background and demeanor to complain about the ownership of a result.

Poincaré is central to this chapter because he lived in an age when we had made extremely rapid intellectual progress in the fields of prediction—think of celestial mechanics. The scientific revolution made us feel that we were in possession of tools that would allow us to grasp the future. Uncertainty was gone. The universe was like a clock and, by studying the movements of the pieces, we could project into the future. It was only a matter of writing down the right models and having the engineers do the calculations. The future was a mere extension of our technological certainties.

The Three Body Problem

Poincaré was the first known big-gun mathematician to understand and explain that there are fundamental limits to our equations. He introduced nonlinearities, small effects that can lead to severe consequences, an idea that later became popular, perhaps a bit too popular, as chaos theory. What is so poisonous about this popularity? Poincaré's entire point is about the limits that nonlinearities put on forecasting; they are not an invitation to use mathematical techniques to make extended forecasts. Mathematics can show us its own limits rather clearly.

There is (as usual) an element of the unexpected in this story. Poincaré initially responded to a competition organized by the mathematician Gösta Mittag-Leffler to celebrate the sixtieth birthday of King Oscar of Sweden. Poincaré's memoir, which was about the stability of the solar system, won the prize that was then the highest scientific honor (as these were the happy days before the Nobel Prize). A problem arose, however, when a mathematical editor checking the memoir before publication realized that there was a calculation error, and that, after consideration, it led to the opposite conclusion—unpredictability, or, more technically, nonintegrability. The memoir was discreetly pulled and reissued about a year later.

Poincaré's reasoning was simple: as you project into the future you may need an increasing amount of precision about the dynamics of the process that you are modeling, since your error rate grows very rapidly. The problem is that near precision is not possible since the degradation of your forecast compounds abruptly—you would eventually need to figure out the past with infinite precision. Poincaré showed this in a very simple case, famously known as the "three body problem." If you have only two planets in a solar-style system, with nothing else affecting their course, then you may be able to indefinitely predict the behavior of these planets, no sweat. But add a third body, say a comet, ever so small, between the planets.
Initially the third body will cause no drift, no impact; later, with time, its effects on the two other bodies may become explosive. Small differences in where this tiny body is located will eventually dictate the future of the behemoth planets.

FIGURE 2: PRECISION AND FORECASTING. One of the readers of a draft of this book, David Cowan, gracefully drew this picture of scattering, which shows how, at the second bounce, variations in the initial conditions can lead to extremely divergent results. As the initial imprecision in the angle is multiplied, every additional bounce will be further magnified. This causes a severe multiplicative effect where the error grows disproportionately.

Explosive forecasting difficulty comes from complicating the mechanics, ever so slightly. Our world, unfortunately, is far more complicated than the three body problem; it contains far more than three objects. We are dealing with what is now called a dynamical system—and the world, we will see, is a little too much of a dynamical system. Think of the difficulty in forecasting in terms of branches growing out of a tree; at every fork we have a multiplication of new branches.

To see how weak our intuitions are about these nonlinear multiplicative effects, consider this story about the chessboard. The inventor of the chessboard requested the following compensation: one grain of rice for the first square, two for the second, four for the third, eight, then sixteen, and so on, doubling every time, sixty-four times. The king granted this request, thinking that the inventor was asking for a pittance—but he soon realized that he had been outsmarted. The amount of rice exceeded all possible grain reserves!

This multiplicative difficulty leading to the need for greater and greater precision in assumptions can be illustrated with the following simple exercise concerning the prediction of the movements of billiard balls on a table. I use the example as computed by the mathematician Michael Berry. If you know a set of basic parameters concerning the ball at rest, can compute the resistance of the table (quite elementary), and can gauge the strength of the impact, then it is rather easy to predict what would happen at the first hit. The second impact becomes more complicated, but possible; you need to be more careful about your knowledge of the initial states, and more precision is called for. The problem is that to correctly compute the ninth impact, you need to take into account the gravitational pull of someone standing next to the table (modestly, Berry's computations use a weight of less than 150 pounds). And to compute the fifty-sixth impact, every single elementary particle of the universe needs to be present in your assumptions! An electron at the edge of the universe, separated from us by 10 billion light-years, must figure in the calculations, since it exerts a meaningful effect on the outcome.

Now, consider the additional burden of having to incorporate predictions about where these variables will be in the future. Forecasting the motion of a billiard ball on a pool table requires knowledge of the dynamics of the entire universe, down to every single atom! We can easily predict the movements of large objects like planets (though not too far into the future), but the smaller entities can be difficult to figure out—and there are so many more of them. Note that this billiard-ball story assumes a plain and simple world; it does not even take into account these crazy social matters possibly endowed with free will.
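Both stories reduce to the same brutal arithmetic of compounding, which a few lines of Python make concrete. This is only an illustrative sketch: the tenfold error amplification per bounce is an assumed number chosen to show the shape of the growth, not Berry's actual figure; the chessboard sum, however, is the exact doubling the king overlooked.

```python
# A toy sketch of multiplicative error growth in the billiard example.
# The amplification factor is an assumption for illustration, not a
# measured or computed physical value.

AMPLIFICATION = 10.0   # assumed: each bounce magnifies the angular error tenfold
error = 1e-12          # initial imprecision in the strike angle, in radians

for bounce in range(1, 13):
    error *= AMPLIFICATION
    print(f"bounce {bounce:2d}: angular error ~ {error:.0e} radians")
# By the twelfth bounce the "error" is of order 1 radian: the forecast
# no longer says anything about where the ball is heading.

# The chessboard is the same compounding run forward sixty-four times:
grains = sum(2**square for square in range(64))
print(f"total grains of rice: {grains:,}")  # 18,446,744,073,709,551,615
```

The point of the sketch is only the shape of the growth: whatever the true amplification factor, compounding it per bounce buys you an exponential loss of precision.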
Billiard balls do not have a mind of their own. Nor does our example take into account relativity and quantum effects. Nor did we use the notion (often invoked by phonies) called the "uncertainty principle." We are not concerned with the limitations of the precision in measurements done at the subatomic level. We are just dealing with billiard balls!

In a dynamical system, where you are considering more than a ball on its own, where trajectories in a way depend on one another, the ability to project into the future is not just reduced, but is subjected to a fundamental limitation. Poincaré proposed that we can only work with qualitative matters—some properties of systems can be discussed, but not computed. You can think rigorously, but you cannot use numbers. Poincaré even invented a field for this, analysis in situ, now part of topology. Prediction and forecasting are a more complicated business than is commonly accepted, but it takes someone who knows mathematics to understand that. To accept it takes both understanding and courage.

In the 1960s the MIT meteorologist Edward Lorenz rediscovered Poincaré's results on his own—once again, by accident. He was producing a computer model of weather dynamics, and he ran a simulation that projected a weather system a few days ahead. Later he tried to repeat the same simulation with the exact same model and what he thought were the same input parameters, but he got wildly different results. He initially attributed these differences to a computer bug or a calculation error. Computers then were heavier and slower machines that bore no resemblance to what we have today, so users were severely constrained by time. Lorenz subsequently realized that the consequential divergence in his results arose not from error, but from a small rounding in the input parameters. This became known as the butterfly effect, since a butterfly moving its wings in India could cause a hurricane in New York, two years later. Lorenz's findings generated interest in the field of chaos theory. Naturally researchers found predecessors to Lorenz's discovery, not only in the work of Poincaré, but also in that of the insightful and intuitive Jacques Hadamard, who thought of the same point around 1898, and then went on to live for almost seven more decades—he died at the age of ninety-eight.*

They Still Ignore Hayek

Popper and Poincaré's findings limit our ability to see into the future, making it a very complicated reflection of the past—if it is a reflection of the past at all. A potent application in the social world comes from a friend of Sir Karl, the intuitive economist Friedrich Hayek. Hayek is one of the rare celebrated members of his "profession" (along with J. M. Keynes and G.L.S. Shackle) to focus on true uncertainty, on the limitations of knowledge, on the unread books in Eco's library. In 1974 he received the Bank of Sweden Prize in Economic Sciences in Memory of Alfred Nobel, but if you read his acceptance speech you will be in for a bit of a surprise. It was eloquently called "The Pretense of Knowledge," and he mostly railed against other economists and against the idea of the planner. He argued against the use of the tools of hard science in the social sciences, and he did so, depressingly, right before the big boom for these methods in economics. Subsequently, the prevalent use of complicated equations made the environment for true empirical thinkers worse than it was before Hayek wrote his speech.
Every year a paper or a book appears, bemoaning the fate of economics and complaining about its attempts to ape physics. The latest I've seen is about how economists should shoot for the role of lowly philosophers rather than that of high priests. Yet, in one ear and out the other.

For Hayek, a true forecast is done organically by a system, not by fiat. One single institution, say, the central planner, cannot aggregate knowledge; many important pieces of information will be missing. But society as a whole will be able to integrate into its functioning these multiple pieces of information. Society as a whole thinks outside the box. Hayek attacked socialism and managed economies as a product of what I have called nerd knowledge, or Platonicity—owing to the growth of scientific knowledge, we overestimate our ability to understand the subtle changes that constitute the world, and what weight needs to be imparted to each such change. He aptly called this "scientism."

This disease is severely ingrained in our institutions. It is why I fear governments and large corporations—it is hard to distinguish between them. Governments make forecasts; companies produce projections; every year various forecasters project the level of mortgage rates and the stock market at the end of the following year. Corporations survive not because they have made good forecasts, but because, like the CEOs visiting Wharton I mentioned earlier, they may have been the lucky ones. And, like a restaurant owner, they may be hurting themselves, not us—perhaps helping us and subsidizing our consumption by giving us goods in the process, like cheap telephone calls to the rest of the world funded by the overinvestment during the dot-com era. We consumers can let them forecast all they want if that's what is necessary for them to get into business. Let them go hang themselves if they wish. As a matter of fact, as I mentioned in Chapter 8, we New Yorkers are all benefiting from the quixotic overconfidence of corporations and restaurant entrepreneurs. This is the benefit of capitalism that people discuss the least.

But corporations can go bust as often as they like, thus subsidizing us consumers by transferring their wealth into our pockets—the more bankruptcies, the better it is for us—unless they are "too big to fail" and require subsidies, which is an argument in favor of letting companies go bust early. Government is a more serious business and we need to make sure we do not pay the price for its folly. As individuals we should love free markets because operators in them can be as incompetent as they wish.

The only criticism one might have of Hayek is that he makes a hard and qualitative distinction between social sciences and physics. He shows that the methods of physics do not translate to its social science siblings, and he blames the engineering-oriented mentality for this. But he was writing at a time when physics, the queen of science, seemed to zoom ahead in our world. It turns out that even the natural sciences are far more complicated than that. He was right about the social sciences, he is certainly right in trusting hard scientists more than social theorizers, but what he said about the weaknesses of social knowledge applies to all knowledge. All knowledge. Why? Because of the confirmation problem, one can argue that we know very little about our natural world; we advertise the read books and forget about the unread ones.
Physics has been successful, but it is a narrow field of hard science in which we have been successful, and people tend to generalize that success to all science. It would be preferable if we were better at understanding cancer or the (highly nonlinear) weather than the origin of the universe.

How Not to Be a Nerd

Let us dig deeper into the problem of knowledge and continue the comparison of Fat Tony and Dr. John in Chapter 9. Do nerds tunnel, meaning, do they focus on crisp categories and miss sources of uncertainty? Remember from the Prologue my presentation of Platonification as a top-down focus on a world composed of these crisp categories.*

Think of a bookworm picking up a new language. He will learn, say, Serbo-Croatian or !Kung by reading a grammar book cover to cover, and memorizing the rules. He will have the impression that some higher grammatical authority set the linguistic regulations so that nonlearned ordinary people could subsequently speak the language. In reality, languages grow organically; grammar is something people without anything more exciting to do in their lives codify into a book. While the scholastic-minded will memorize declensions, the a-Platonic nonnerd will acquire, say, Serbo-Croatian by picking up potential girlfriends in bars on the outskirts of Sarajevo, or talking to cabdrivers, then fitting (if needed) grammatical rules to the knowledge he already possesses.

Consider again the central planner. As with language, there is no grammatical authority codifying social and economic events; but try to convince a bureaucrat or social scientist that the world might not want to follow his "scientific" equations. In fact, thinkers of the Austrian school, to which Hayek belonged, used the designations tacit or implicit precisely for that part of knowledge that cannot be written down, but that we should avoid repressing. They made the distinction we saw earlier between "know-how" and "know-what"—the latter being more elusive and more prone to nerdification. To clarify, Platonic is top-down, formulaic, closed-minded, self-serving, and commoditized; a-Platonic is bottom-up, open-minded, skeptical, and empirical.

The reason for my singling out the great Plato becomes apparent with the following example of the master's thinking: Plato believed that we should use both hands with equal dexterity. It would not "make sense" otherwise. He considered favoring one limb over the other a deformation caused by the "folly of mothers and nurses." Asymmetry bothered him, and he projected his ideas of elegance onto reality. We had to wait until Louis Pasteur to figure out that chemical molecules were either left- or right-handed and that this mattered considerably.

One can find similar ideas among several disconnected branches of thinking. The earliest were (as usual) the empirics, whose bottom-up, theory-free, "evidence-based" medical approach was mostly associated with Philinus of Cos, Serapion of Alexandria, and Glaucias of Tarentum, later made skeptical by Menodotus of Nicomedia, and currently well known through its vocal practitioner, our friend the great skeptical philosopher Sextus Empiricus. Sextus, as we saw earlier, was perhaps the first to discuss the Black Swan. The empirics practiced the "medical art" without relying on reasoning; they wanted to benefit from chance observations by making guesses, and experimented and tinkered until they found something that worked. They did minimal theorizing.
Their methods are being revived today as evidence-based medicine, after two millennia of persuasion. Consider that before we knew of bacteria, and their role in diseases, doctors rejected the practice of hand washing because it made no sense to them, despite the evidence of a meaningful decrease in hospital deaths. Ignaz Semmelweis, the mid-nineteenth-century doctor who promoted the idea of hand washing, wasn't vindicated until decades after his death. Similarly it may not "make sense" that acupuncture works, but if pushing a needle in someone's toe systematically produces relief from pain (in properly conducted empirical tests), then it could be that there are functions too complicated for us to understand, so let's go with it for now while keeping our minds open.

Academic Libertarianism

To borrow from Warren Buffett, don't ask the barber if you need a haircut—and don't ask an academic if what he does is relevant. So I'll end this discussion of Hayek's libertarianism with the following observation. As I've said, the problem with organized knowledge is that there is an occasional divergence of interests between academic guilds and knowledge itself. So I cannot for the life of me understand why today's libertarians do not go after tenured faculty (except perhaps because many libertarians are academics). We saw that companies can go bust, while governments remain. But while governments remain, civil servants can be demoted and congressmen and senators can be eventually voted out of office. In academia, tenured faculty members are permanent—the business of knowledge has permanent "owners." Simply, the charlatan is more the product of control than the result of freedom and lack of structure.

Prediction and Free Will

If you know all possible conditions of a physical system you can, in theory (though not, as we saw, in practice), project its behavior into the future. But this only concerns inanimate objects. We hit a stumbling block when social matters are involved. It is another matter to project a future when humans are involved, if you consider them living beings endowed with free will.

If I can predict all of your actions, under given circumstances, then you may not be as free as you think you are. You are an automaton responding to environmental stimuli. You are a slave of destiny. And the illusion of free will could be reduced to an equation that describes the result of interactions among molecules. It would be like studying the mechanics of a clock: a genius with extensive knowledge of the initial conditions and the causal chains would be able to extend his knowledge to the future of your actions. Wouldn't that be stifling?

However, if you believe in free will you can't truly believe in social science and economic projection. You cannot predict how people will act. Except, of course, if there is a trick, and that trick is the cord on which neoclassical economics is suspended. You simply assume that individuals will be rational in the future and thus act predictably. There is a strong link between rationality, predictability, and mathematical tractability. A rational individual will perform a unique set of actions in specified circumstances. There is one and only one answer to the question of how "rational" people satisfying their best interests would act. Rational actors must be coherent: they cannot prefer apples to oranges, oranges to pears, then pears to apples. If they did, then it would be difficult to generalize their behavior. It would also be difficult to project their behavior in time.
In orthodox economics, rationality became a straitjacket. Platonified economists ignored the fact that people might prefer to do something other than maximize their economic interests. This led to mathematical techniques such as "maximization," or "optimization," on which Paul Samuelson built much of his work. Optimization consists in finding the mathematically optimal policy that an economic agent could pursue. For instance, what is the "optimal" quantity you should allocate to stocks? It involves complicated mathematics and thus raises a barrier to entry by non-mathematically trained scholars. I would not be the first to say that this optimization set back social science by reducing it from the intellectual and reflective discipline that it was becoming to an attempt at an "exact science." By "exact science," I mean a second-rate engineering problem for those who want to pretend that they are in the physics department—so-called physics envy. In other words, an intellectual fraud.

Optimization is a case of sterile modeling that we will discuss further in Chapter 17. It had no practical (or even theoretical) use, and so it became principally a competition for academic positions, a way to make people compete with mathematical muscle. It kept Platonified economists out of the bars, solving equations at night. The tragedy is that Paul Samuelson, a quick mind, is said to be one of the most intelligent scholars of his generation. This was clearly a case of very badly invested intelligence. Characteristically, Samuelson intimidated those who questioned his techniques with the statement "Those who can, do science; others do methodology." If you knew math, you could "do science." This is reminiscent of psychoanalysts who silence their critics by accusing them of having trouble with their fathers. Alas, it turns out that it was Samuelson and most of his followers who did not know much math, or did not know how to use what math they knew, how to apply it to reality. They only knew enough math to be blinded by it.

Tragically, before the proliferation of empirically blind idiot savants, interesting work had been begun by true thinkers, the likes of J. M. Keynes, Friedrich Hayek, and the great Benoît Mandelbrot, all of whom were displaced because they moved economics away from the precision of second-rate physics. Very sad. One great underestimated thinker is G.L.S. Shackle, now almost completely obscure, who introduced the notion of "unknowledge," that is, the unread books in Umberto Eco's library. It is unusual to see Shackle's work mentioned at all, and I had to buy his books from secondhand dealers in London.

Legions of empirical psychologists of the heuristics and biases school have shown that the model of rational behavior under uncertainty is not just grossly inaccurate but plain wrong as a description of reality. Their results also bother Platonified economists because they reveal that there are several ways to be irrational. Tolstoy said that happy families were all alike, while each unhappy one is unhappy in its own way. People have been shown to make errors equivalent to preferring apples to oranges, oranges to pears, and pears to apples, depending on how the relevant questions are presented to them. The sequence matters! Also, as we have seen with the anchoring example, subjects' estimates of the number of dentists in Manhattan are influenced by which random number they have just been presented with—the anchor. Given the randomness of the anchor, we will have randomness in the estimates.
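To see concretely why such cyclical preferences are fatal to any "maximization," here is a minimal sketch; the observed choices are hypothetical and the helper function is invented for illustration. It checks every possible strict ranking of three goods and finds that none reproduces the cycle, so no assignment of utility numbers could either.

```python
# A toy check of why a preference cycle defeats optimization: no utility
# function can assign numbers such that u(apples) > u(oranges) > u(pears)
# > u(apples). The "observed" choices below are made up for illustration.

from itertools import permutations

# Hypothetical pairwise choices: the chooser prefers the first item to the second.
observed = [("apples", "oranges"), ("oranges", "pears"), ("pears", "apples")]
items = ["apples", "oranges", "pears"]

def consistent_with_some_ranking(prefs):
    """Return True if at least one strict ranking of the items reproduces
    every observed pairwise preference (illustrative helper, not a real API)."""
    for ranking in permutations(items):
        position = {item: i for i, item in enumerate(ranking)}
        if all(position[a] < position[b] for a, b in prefs):
            return True
    return False

print(consistent_with_some_ranking(observed))      # False: the cycle fits no ranking
print(consistent_with_some_ranking(observed[:2]))  # True once the cycle is broken
```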
So if people make inconsistent choices and decisions, the central core of economic optimization fails. You can no longer produce a "general theory," and without one you cannot predict. You have to learn to live without a general theory, for Pluto's sake!

THE GRUENESS OF EMERALD

Recall the turkey problem. You look at the past and derive some rule about the future. Well, the problems in projecting from the past can be even worse than what we have already learned, because the same past data can confirm a theory and also its exact opposite! If you survive until tomorrow, it could mean that either a) you are more likely to be immortal or b) that you are closer to death. Both conclusions rely on the exact same data. If you are a turkey being fed for a long period of time, you can either naïvely assume that feeding confirms your safety or be shrewd and consider that it confirms the danger of being turned into supper. An acquaintance's unctuous past behavior may indicate his genuine affection for me and his concern for my welfare; it may also confirm his mercenary and calculating desire to get my business one day.

FIGURE 3: A series of a seemingly growing bacterial population (or of sales records, or of any variable observed through time—such as the total feeding of the turkey in Chapter 4).

FIGURE 4: Easy to fit the trend—there is one and only one linear model that fits the data. You can project a continuation into the future.

FIGURE 5: We look at a broader scale. Hey, other models also fit it rather well.

FIGURE 6: And the real "generating process" is extremely simple but it had nothing to do with a linear model! Some parts of it appear to be linear and we are fooled by extrapolating in a direct line.*

So not only can the past be misleading, but there are also many degrees of freedom in our interpretation of past events. For the technical version of this idea, consider a series of dots on a page representing a number through time—the graph would resemble Figure 1, showing the first thousand days in Chapter 4. Let's say your high school teacher asks you to extend the series of dots. With a linear model, that is, using a ruler, you can run only a straight line, a single straight line from the past to the future. The linear model is unique. There is one and only one straight line that can project from a series of points. But it can get trickier. If you do not limit yourself to a straight line, you find that there is a huge family of curves that can do the job of connecting the dots. If you project from the past in a linear way, you continue a trend. But possible future deviations from the course of the past are infinite.

This is what the philosopher Nelson Goodman called the riddle of induction: We project a straight line only because we have a linear model in our head—the fact that a number has risen for 1,000 days straight should make you more confident that it will rise in the future. But if you have a nonlinear model in your head, it might confirm that the number should decline on day 1,001. Let's say that you observe an emerald. It was green yesterday and the day before yesterday. It is green again today. Normally this would confirm the "green" property: we can assume that the emerald will be green tomorrow. But to Goodman, the emerald's color history could equally confirm the "grue" property. What is this grue property? The emerald's grue property is to be green until some specified date, say, December 31, 2006, and then blue thereafter.
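A minimal numerical rendering of the point in Figures 3 through 6, with invented data: fit the same near-linear observations with a straight line and with a cubic; both explain the past about equally well, then part company once you leave it.

```python
# A toy illustration, with invented data, of how many models fit the same
# past: a straight line and a cubic both track ten near-linear observations,
# then give very different forecasts for a distant "day."

import numpy as np

rng = np.random.default_rng(0)
days = np.arange(1.0, 11.0)                                 # days 1..10
values = 2.0 * days + 1.0 + rng.normal(0, 0.1, days.size)   # near-linear past

linear = np.polyfit(days, values, deg=1)   # the one straight line
cubic = np.polyfit(days, values, deg=3)    # one of a huge family of curves

# Both models "explain" the observed past almost equally well...
for model in (linear, cubic):
    in_sample = abs(values - np.polyval(model, days)).max()
    print(f"degree {len(model) - 1}: max in-sample error {in_sample:.3f}")

# ...yet they can disagree badly once projected beyond the data.
future_day = 100.0
print("linear forecast:", round(float(np.polyval(linear, future_day)), 1))
print("cubic forecast: ", round(float(np.polyval(cubic, future_day)), 1))
```

The cubic's tiny spurious curvature, invisible in the sample, gets multiplied by the cube of the horizon: the farther out you project, the more the choice of model, not the data, drives the forecast.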
The riddle of induction is another version of the narrative fallacy—you face an infinity of "stories" that explain what you have seen. The severity of Goodman's riddle of induction is as follows: if there is no longer even a single unique way to "generalize" from what you see, to make an inference about the unknown, then how should you operate? The answer, clearly, will be that you should employ "common sense," but your common sense may not be so well developed with respect to some Extremistan variables.

THAT GREAT ANTICIPATION MACHINE

The reader is entitled to wonder, So, NNT, why on earth do we plan? Some people do it for monetary gain, others because it's "their job." But we also do it without such intentions—spontaneously. Why? The answer has to do with human nature. Planning may come with the package of what makes us human, namely, our consciousness.

There is supposed to be an evolutionary dimension to our need to project matters into the future, which I will rapidly summarize here, since it can be an excellent candidate explanation, an excellent conjecture, though, since it is linked to evolution, I would be cautious. The idea, as promoted by the philosopher Daniel Dennett, is as follows: What is the most potent use of our brain? It is precisely the ability to project conjectures into the future and play the counterfactual game—"If I punch him in the nose, then he will punch me back right away, or, worse, call his lawyer in New York." One of the advantages of doing so is that we can let our conjectures die in our stead. Used correctly and in place of more visceral reactions, the ability to project effectively frees us from immediate, first-order natural selection—as opposed to more primitive organisms that were vulnerable to death and only grew by the improvement in the gene pool through the selection of the best. In a way, projecting allows us to cheat evolution: it now takes place in our head, as a series of projections and counterfactual scenarios.

This ability to mentally play with conjectures, even if it frees us from the laws of evolution, is itself supposed to be the product of evolution—it is as if evolution has put us on a long leash whereas other animals live on the very short leash of immediate dependence on their environment. For Dennett, our brains are "anticipation machines"; for him, the human mind and consciousness are emerging properties, those properties necessary for our accelerated development.

Why do we listen to experts and their forecasts? A candidate explanation is that society reposes on specialization, effectively the division of knowledge. You do not go to medical school the minute you encounter a big health problem; it is less taxing (and certainly safer) for you to consult someone who has already done so. Doctors listen to car mechanics (not for health matters, just when it comes to problems with their cars); car mechanics listen to doctors. We have a natural tendency to listen to the expert, even in fields where there may be no experts.

* Most of the debate between creationists and evolutionary theorists (in which I do not partake) lies in the following: creationists believe that the world comes from some form of design while evolutionary theorists see the world as a result of random changes by an aimless process. But it is hard to look at a computer or a car and consider them the result of an aimless process. Yet they are.

* Recall from Chapter 4 how Algazel and Averroës traded insults through book titles.
Perhaps one day I will be lucky enough to read an attack on this book in a diatribe called The White Swan.

* Such claims are not uncommon. For instance, the physicist Albert Michelson imagined, toward the end of the nineteenth century, that what was left for us to discover in the sciences of nature was no more than fine-tuning our precisions by a few decimal places.

* There are more limits I haven't even attempted to discuss here. I am not even bringing up the class of incomputability people call NP-completeness.

* This idea pops up here and there in history, under different names. Alfred North Whitehead called it the "fallacy of misplaced concreteness," e.g., the mistake of confusing a model with the physical entity that it means to describe.

* These graphs also illustrate a statistical version of the narrative fallacy—you find a model that fits the past. "Linear regression" or "R-square" can ultimately fool you beyond measure, to the point where it is no longer funny. You can fit the linear part of the curve and claim a high R-square, meaning that your model fits the data very well and has high predictive powers. All of that is hot air: you only fit the linear segment of the series. Always remember that "R-square" is unfit for Extremistan; it is only good for academic promotion.

Chapter Twelve

EPISTEMOCRACY, A DREAM

This is only an essay—Children and philosophers vs. adults and nonphilosophers—Science as an autistic enterprise—The past too has a past—Mispredict and live a long, happy life (if you survive)

Someone with a low degree of epistemic arrogance is not too visible, like a shy person at a cocktail party. We are not predisposed to respect humble people, those who try to suspend judgment. Now contemplate epistemic humility. Think of someone heavily introspective, tortured by the awareness of his own ignorance. He lacks the courage of the idiot, yet has the rare guts to say "I don't know." He does not mind looking like a fool or, worse, an ignoramus. He hesitates, he will not commit, and he agonizes over the consequences of being wrong. He introspects, introspects, and introspects until he reaches physical and nervous exhaustion.

This does not necessarily mean that he lacks confidence, only that he holds his own knowledge to be suspect. I will call such a person an epistemocrat; the province where the laws are structured with this kind of human fallibility in mind I will call an epistemocracy. The major modern epistemocrat is Montaigne.

Monsieur de Montaigne, Epistemocrat

At the age of thirty-eight, Michel Eyquem de Montaigne retired to his estate, in the countryside of southwestern France. Montaigne, which means mountain in Old French, was the name of the estate. The area is known today for the Bordeaux wines, but in Montaigne's time not many people invested their mental energy and sophistication in wine. Montaigne had stoic tendencies and would not have been strongly drawn to such pursuits anyway. His idea was to write a modest collection of "attempts," that is, essays. The very word essay conveys the tentative, the speculative, and the nondefinitive. Montaigne was well grounded in the classics and wanted to meditate on life, death, education, knowledge, and some not uninteresting biological aspects of human nature (he wondered, for example, whether cripples had more vigorous libidos owing to the richer circulation of blood in their sexual organs).

The tower that became his study was inscribed with Greek and Latin sayings, almost all referring to the vulnerability of human knowledge.
Its windows offered a wide vista of the surrounding hills. Montaigne's subject, officially, was himself, but this was mostly as a means to facilitate the discussion; he was not like those corporate executives who write biographies to make a boastful display of their honors and accomplishments. He was mainly interested in discovering things about himself, making us discover things about himself, and presenting matters that could be generalized—generalized to the entire human race. Among the inscriptions in his study was a remark by the Latin poet Terence: Homo sum, humani a me nil alienum puto—I am a man, and nothing human is foreign to me.

Montaigne is quite refreshing to read after the strains of a modern education since he fully accepted human weaknesses and understood that no philosophy could be effective unless it took into account our deeply ingrained imperfections, the limitations of our rationality, the flaws that make us human. It is not that he was ahead of his time; it would be better said that later scholars (advocating rationality) were backward. He was a thinking, ruminating fellow, and his ideas did not spring up in his tranquil study, but while on horseback. He went on long rides and came back with ideas.

Montaigne was neither one of the academics of the Sorbonne nor a professional man of letters, and he was not these things on two planes. First, he was a doer; he had been a magistrate, a businessman, and the mayor of Bordeaux before he retired to mull over his life and, mostly, his own knowledge. Second, he was an antidogmatist: he was a skeptic with charm, a fallible, noncommittal, personal, introspective writer, and, primarily, someone who, in the great classical tradition, wanted to be a man. Had he been in a different period, he would have been an empirical skeptic—he had skeptical tendencies of the Pyrrhonian variety, the antidogmatic kind like Sextus Empiricus, particularly in his awareness of the need to suspend judgment.

Epistemocracy

Everyone has an idea of utopia. For many it means equality, universal justice, freedom from oppression, freedom from work (for some it may be the more modest, though no more attainable, society with commuter trains free of lawyers on cell phones). To me utopia is an epistemocracy, a society in which anyone of rank is an epistemocrat, and where epistemocrats manage to be elected. It would be a society governed from the basis of the awareness of ignorance, not knowledge.

Alas, one cannot assert authority by accepting one's own fallibility. Simply, people need to be blinded by knowledge—we are made to follow leaders who can gather people together because the advantages of being in groups trump the disadvantages of being alone. It has been more profitable for us to bind together in the wrong direction than to be alone in the right one. Those who have followed the assertive idiot rather than the introspective wise person have passed us some of their genes. This is apparent from a social pathology: psychopaths rally followers.

Once in a while you encounter members of the human species with so much intellectual superiority that they can change their minds effortlessly.

Note here the following Black Swan asymmetry. I believe that you can be dead certain about some things, and ought to be so. You can be more confident about disconfirmation than confirmation.
Karl Popper was accused of promoting self-doubt while writing in an aggressive and confident tone (an accusation that is occasionally addressed to this author by people who don't follow my logic of skeptical empiricism). Fortunately, we have learned a lot since Montaigne about how to carry on the skeptical-empirical enterprise. The Black Swan asymmetry allows you to be confident about what is wrong, not about what you believe is right. Karl Popper was once asked whether one "could falsify falsification" (in other words, if one could be skeptical about skepticism). His answer was that he threw students out of his lectures for asking far more intelligent questions than that one. Quite tough, Sir Karl was.

THE PAST'S PAST, AND THE PAST'S FUTURE

Some truths only hit children—adults and nonphilosophers get sucked into the minutiae of practical life and need to worry about "serious matters," so they abandon these insights for seemingly more relevant questions. One of these truths concerns the larger difference in texture and quality between the past and the future. Thanks to my studying this distinction all my life, I understand