The Black Swan, Part 2 - A Look at Risk and Knowledge

Document Details

Uploaded by scrollinondubs

Author: Nassim Nicholas Taleb

Tags

black swans, risk management, financial markets, decision making

Summary

This document delves into the concept of Black Swans: rare, unpredictable events with outsized impact. It draws on historical examples, such as financial crises and sudden technological shifts, to show how these events defeat traditional risk-assessment models, and it traces the philosophical history of the problem through figures such as Sextus Empiricus, Algazel, and Hume.

Full Transcript


Captain Smith’s ship sank in 1912 in what became the most talked-about shipwreck in history.*

Trained to Be Dull

Similarly, think of a bank chairman whose institution makes steady profits over a long time, only to lose everything in a single reversal of fortune. Traditionally, bankers of the lending variety have been pear-shaped, clean-shaven, and dress in possibly the most comforting and boring manner, in dark suits, white shirts, and red ties. Indeed, for their lending business, banks hire dull people and train them to be even more dull. But this is for show. If they look conservative, it is because their loans only go bust on rare, very rare, occasions. There is no way to gauge the effectiveness of their lending activity by observing it over a day, a week, a month, or … even a century!

In the summer of 1982, large American banks lost close to all their past earnings (cumulatively), about everything they ever made in the history of American banking—everything. They had been lending to South and Central American countries that all defaulted at the same time—“an event of an exceptional nature.” So it took just one summer to figure out that this was a sucker’s business and that all their earnings came from a very risky game. All that while the bankers led everyone, especially themselves, into believing that they were “conservative.” They are not conservative; just phenomenally skilled at self-deception by burying the possibility of a large, devastating loss under the rug. In fact, the travesty repeated itself a decade later, with the “risk-conscious” large banks once again under financial strain, many of them near-bankrupt, after the real-estate collapse of the early 1990s in which the now defunct savings and loan industry required a taxpayer-funded bailout of more than half a trillion dollars. The Federal Reserve bank protected them at our expense: when “conservative” bankers make profits, they get the benefits; when they are hurt, we pay the costs.

After graduating from Wharton, I initially went to work for Bankers Trust (now defunct). There, the chairman’s office, rapidly forgetting about the story of 1982, broadcast the results of every quarter with an announcement explaining how smart, profitable, conservative (and good looking) they were. It was obvious that their profits were simply cash borrowed from destiny with some random payback time. I have no problem with risk taking, just please, please, do not call yourself conservative and act superior to other businesses who are not as vulnerable to Black Swans.

Another recent event is the almost-instant bankruptcy, in 1998, of a financial investment company (hedge fund) called Long-Term Capital Management (LTCM), which used the methods and risk expertise of two “Nobel economists,” who were called “geniuses” but were in fact using phony, bell curve–style mathematics while managing to convince themselves that it was great science and thus turning the entire financial establishment into suckers. One of the largest trading losses ever in history took place in almost the blink of an eye, with no warning signal (more, much more on that in Chapter 17).*

A Black Swan Is Relative to Knowledge

From the standpoint of the turkey, the nonfeeding of the one thousand and first day is a Black Swan. For the butcher, it is not, since its occurrence is not unexpected. So you can see here that the Black Swan is a sucker’s problem. In other words, it occurs relative to your expectation.
You realize that you can eliminate a Black Swan by science (if you’re able), or by keeping an open mind. Of course, like the LTCM people, you can create Black Swans with science, by giving people confidence that the Black Swan cannot happen—this is when science turns normal citizens into suckers.

Note that these events do not have to be instantaneous surprises. Some of the historical fractures I mention in Chapter 1 have lasted a few decades, like, say, the computer that brought consequential effects on society without its invasion of our lives being noticeable from day to day. Some Black Swans can come from the slow building up of incremental changes in the same direction, as with books that sell large amounts over years, never showing up on the bestseller lists, or from technologies that creep up on us slowly, but surely. Likewise, the growth of Nasdaq stocks in the late 1990s took a few years—but the growth would seem sharper if you were to plot it on a long historical line. Matters should be seen on some relative, not absolute, timescale: earthquakes last minutes, 9/11 lasted hours, but historical changes and technological implementations are Black Swans that can take decades. In general, positive Black Swans take time to show their effect while negative ones happen very quickly—it is much easier and much faster to destroy than to build. (During the Lebanese war, my parents’ house in Amioun and my grandfather’s house in a nearby village were destroyed in just a few hours, dynamited by my grandfather’s enemies who controlled the area. It took seven thousand times longer—two years—to rebuild them. This asymmetry in timescales explains the difficulty in reversing time.)

A BRIEF HISTORY OF THE BLACK SWAN PROBLEM

This turkey problem (a.k.a. the problem of induction) is a very old one, but for some reason it is likely to be called “Hume’s problem” by your local philosophy professor. People imagine us skeptics and empiricists to be morose, paranoid, and tortured in our private lives, which may be the exact opposite of what history (and my private experience) reports. Like many of the skeptics I hang around with, Hume was jovial and a bon vivant, eager for literary fame, salon company, and pleasant conversation. His life was not devoid of anecdotes. He once fell into a swamp near the house he was building in Edinburgh. Owing to his reputation among the locals as an atheist, a woman refused to pull him out of it until he recited the Lord’s Prayer and the Belief, which, being practical-minded, he did. But not before he argued with her about whether Christians were obligated to help their enemies. Hume looked unprepossessing. “He exhibited that preoccupied stare of the thoughtful scholar that so commonly impresses the undiscerning as imbecile,” writes a biographer.

Strangely, Hume during his day was not mainly known for the works that generated his current reputation—he became rich and famous through writing a bestselling history of England. Ironically, when Hume was alive, his philosophical works, to which we now attach his fame, “fell deadborn off the presses,” while the works for which he was famous at the time are now harder to find. Hume wrote with such clarity that he puts to shame almost all current thinkers, and certainly the entire German graduate curriculum. Unlike Kant, Fichte, Schopenhauer, and Hegel, Hume is the kind of thinker who is sometimes read by the person mentioning his work.
I often hear “Hume’s problem” mentioned in connection with the problem of induction, but the problem is old, older than the interesting Scotsman, perhaps as old as philosophy itself, maybe as old as olive-grove conversations. Let us go back into the past, as it was formulated with no less precision by the ancients.

Sextus the (Alas) Empirical

The violently antiacademic writer, and antidogma activist, Sextus Empiricus operated close to a millennium and a half before Hume, and formulated the turkey problem with great precision. We know very little about him; we do not know whether he was a philosopher or more of a copyist of philosophical texts by authors obscure to us today. We surmise that he lived in Alexandria in the second century of our era. He belonged to a school of medicine called “empirical,” since its practitioners doubted theories and causality and relied on past experience as guidance in their treatment, though not putting much trust in it. Furthermore, they did not trust that anatomy revealed function too obviously. The most famous proponent of the empirical school, Menodotus of Nicomedia, who merged empiricism and philosophical skepticism, was said to keep medicine an art, not a “science,” and insulate its practice from the problems of dogmatic science. The practice of medicine explains the addition of empiricus (“the empirical”) to Sextus’s name.

Sextus represented and jotted down the ideas of the school of the Pyrrhonian skeptics, who were after some form of intellectual therapy resulting from the suspension of belief. Do you face the possibility of an adverse event? Don’t worry. Who knows, it may turn out to be good for you. Doubting the consequences of an outcome will allow you to remain imperturbable. The Pyrrhonian skeptics were docile citizens who followed customs and traditions whenever possible, but taught themselves to systematically doubt everything, and thus attain a level of serenity. But while conservative in their habits, they were rabid in their fight against dogma.

Among the surviving works of Sextus’s is a diatribe with the beautiful title Adversus Mathematicos, sometimes translated as Against the Professors. Much of it could have been written last Wednesday night! Where Sextus is mostly interesting for my ideas is in his rare mixing of philosophy and decision making in his practice. He was a doer, hence classical scholars don’t say nice things about him. The methods of empirical medicine, relying on seemingly purposeless trial and error, will be central to my ideas on planning and prediction, on how to benefit from the Black Swan.

In 1998, when I went out on my own, I called my research laboratory and trading firm Empirica, not for the same antidogmatist reasons, but on account of the far more depressing reminder that it took at least another fourteen centuries after the works of the school of empirical medicine before medicine changed and finally became adogmatic, suspicious of theorizing, profoundly skeptical, and evidence-based! Lesson? That awareness of a problem does not mean much—particularly when you have special interests and self-serving institutions in play.

Algazel

The third major thinker who dealt with the problem was the eleventh-century Arabic-language skeptic Al-Ghazali, known in Latin as Algazel.
His name for a class of dogmatic scholars was ghabi, literally “the imbeciles,” an Arabic form that is funnier than “moron” and more expressive than “obscurantist.” Algazel wrote his own Against the Professors, a diatribe called Tahafut al falasifah, which I translate as “The Incompetence of Philosophers.” It was directed at members of the school called falasifah—the Arabic intellectual establishment was the direct heir of the classical philosophy of the academy, which they managed to reconcile with Islam through rational argument.

Algazel’s attack on “scientific” knowledge started a debate with Averroës, the medieval philosopher who ended up having the most profound influence of any medieval thinker (on Jews and Christians, though not on Moslems). The debate between Algazel and Averroës was finally, but sadly, won by both. In its aftermath, many Arab religious thinkers integrated and exaggerated Algazel’s skepticism of the scientific method, preferring to leave causal considerations to God (in fact it was a stretch of his idea). The West embraced Averroës’s rationalism, built upon Aristotle’s, which survived through Aquinas and the Jewish philosophers who called themselves Averroan for a long time. Many thinkers blame the Arabs’ later abandonment of scientific method on Algazel’s huge influence—though apparently this took place a few centuries later. He ended up fueling Sufi mysticism, in which the worshipper attempts to enter into communion with God, severing all connections with earthly matters. All of this came from the Black Swan problem.

The Skeptic, Friend of Religion

While the ancient skeptics advocated learned ignorance as the first step in honest inquiries toward truth, later medieval skeptics, both Moslems and Christians, used skepticism as a tool to avoid accepting what today we call science. Belief in the importance of the Black Swan problem, worries about induction, and skepticism can make some religious arguments more appealing, though in stripped-down, anticlerical, theistic form. This idea of relying on faith, not reason, was known as fideism. So there is a tradition of Black Swan skeptics who found solace in religion, best represented by Pierre Bayle, a French-speaking Protestant erudite, philosopher, and theologian, who, exiled in Holland, built an extensive philosophical architecture related to the Pyrrhonian skeptics. Bayle’s writings exerted some considerable influence on Hume, introducing him to ancient skepticism—to the point where Hume took ideas wholesale from Bayle. Bayle’s Dictionnaire historique et critique was the most read piece of scholarship of the eighteenth century, but like many of my French heroes (such as Frédéric Bastiat), Bayle does not seem to be part of the French curriculum and is nearly impossible to find in the original French language. Nor is the fourteenth-century Algazelist Nicolas of Autrecourt.

Indeed, it is not a well-known fact that the most complete exposition of the ideas of skepticism, until recently, remains the work of a powerful Catholic bishop who was an august member of the French Academy. Pierre-Daniel Huet wrote his Philosophical Treatise on the Weaknesses of the Human Mind in 1690, a remarkable book that tears through dogmas and questions human perception. Huet presents arguments against causality that are quite potent—he states, for instance, that any event can have an infinity of possible causes. Both Huet and Bayle were erudites and spent their lives reading.
Huet, who lived into his nineties, had a servant follow him with a book to read aloud to him during meals and breaks and thus avoid lost time. He was deemed the most read person in his day. Let me insist that erudition is important to me. It signals genuine intellectual curiosity. It accompanies an open mind and the desire to probe the ideas of others. Above all, an erudite can be dissatisfied with his own knowledge, and such dissatisfaction is a wonderful shield against Platonicity, the simplifications of the five-minute manager, or the philistinism of the overspecialized scholar. Indeed, scholarship without erudition can lead to disasters.

I Don’t Want to Be a Turkey

But promoting philosophical skepticism is not quite the mission of this book. If awareness of the Black Swan problem can lead us into withdrawal and extreme skepticism, I take here the exact opposite direction. I am interested in deeds and true empiricism. So, this book was not written by a Sufi mystic, or even by a skeptic in the ancient or medieval sense, or even (we will see) in a philosophical sense, but by a practitioner whose principal aim is to not be a sucker in things that matter, period. Hume was radically skeptical in the philosophical cabinet, but abandoned such ideas when it came to daily life, since he could not handle them. I am doing here the exact opposite: I am skeptical in matters that have implications for daily life. In a way, all I care about is making a decision without being the turkey. Many middlebrows have asked me over the past twenty years, “How do you, Taleb, cross the street given your extreme risk consciousness?” or have stated the more foolish “You are asking us to take no risks.” Of course I am not advocating total risk phobia (we will see that I favor an aggressive type of risk taking): all I will be showing you in this book is how to avoid crossing the street blindfolded.

They Want to Live in Mediocristan

I have just presented the Black Swan problem in its historical form: the central difficulty of generalizing from available information, or of learning from the past, the known, and the seen. I have also presented the list of those who, I believe, are the most relevant historical figures. You can see that it is extremely convenient for us to assume that we live in Mediocristan. Why? Because it allows you to rule out these Black Swan surprises! The Black Swan problem either does not exist or is of small consequence if you live in Mediocristan! Such an assumption magically drives away the problem of induction, which since Sextus Empiricus has been plaguing the history of thinking. The statistician can do away with epistemology. Wishful thinking! We do not live in Mediocristan, so the Black Swan needs a different mentality. As we cannot push the problem under the rug, we will have to dig deeper into it. This is not a terminal difficulty—and we can even benefit from it.

Now, there are other themes arising from our blindness to the Black Swan:

1. We focus on preselected segments of the seen and generalize from it to the unseen: the error of confirmation.

2. We fool ourselves with stories that cater to our Platonic thirst for distinct patterns: the narrative fallacy.

3. We behave as if the Black Swan does not exist: human nature is not programmed for Black Swans.

4. What we see is not necessarily all that is there. History hides Black Swans from us and gives us a mistaken idea about the odds of these events: this is the distortion of silent evidence.

5. We “tunnel”: that is, we focus on a few well-defined sources of uncertainty, on too specific a list of Black Swans (at the expense of the others that do not easily come to mind).
I will discuss each of the points in the next five chapters. Then, in the conclusion of Part One, I will show how, in effect, they are the same topic.

* I am safe since I never wear ties (except at funerals).

* Since Russell’s original example used a chicken, this is the enhanced North American adaptation.

* Statements like those of Captain Smith are so common that it is not even funny. In September 2006, a fund called Amaranth, ironically named after a flower that “never dies,” had to shut down after it lost close to $7 billion in a few days, the most impressive loss in trading history (another irony: I shared office space with the traders). A few days prior to the event, the company made a statement to the effect that investors should not worry because they had twelve risk managers—people who use models of the past to produce risk measures on the odds of such an event. Even if they had one hundred and twelve risk managers, there would be no meaningful difference; they still would have blown up. Clearly you cannot manufacture more information than the past can deliver; if you buy one hundred copies of The New York Times, I am not too certain that it would help you gain incremental knowledge of the future. We just don’t know how much information there is in the past.

* The main tragedy of the high impact-low probability event comes from the mismatch between the time taken to compensate someone and the time one needs to be comfortable that he is not making a bet against the rare event. People have an incentive to bet against it, or to game the system, since they can be paid a bonus reflecting their yearly performance when in fact all they are doing is producing illusory profits that they will lose back one day. Indeed, the tragedy of capitalism is that since the quality of the returns is not observable from past data, owners of companies, namely shareholders, can be taken for a ride by the managers who show returns and cosmetic profitability but in fact might be taking hidden risks.

Chapter Five

CONFIRMATION SHMONFIRMATION!

I have so much evidence—Can Zoogles be (sometimes) Boogles?—Corroboration shmorroboration—Popper’s idea

As much as it is ingrained in our habits and conventional wisdom, confirmation can be a dangerous error. Assume I told you that I had evidence that the football player O. J. Simpson (who was accused of killing his wife in the 1990s) was not a criminal. Look, the other day I had breakfast with him and he didn’t kill anybody. I am serious, I did not see him kill a single person. Wouldn’t that confirm his innocence? If I said such a thing you would certainly call a shrink, an ambulance, or perhaps even the police, since you might think that I spent too much time in trading rooms or in cafés thinking about this Black Swan topic, and that my logic may represent such an immediate danger to society that I myself need to be locked up immediately. You would have the same reaction if I told you that I took a nap the other day on the railroad track in New Rochelle, New York, and was not killed. Hey, look at me, I am alive, I would say, and that is evidence that lying on train tracks is risk-free. Yet consider the following.
Look again at Figure 1 in Chapter 4; someone who observed the turkey’s first thousand days (but not the shock of the thousand and first) would tell you, and rightly so, that there is no evidence of the possibility of large events, i.e., Black Swans. You are likely to confuse that statement, however, particularly if you do not pay close attention, with the statement that there is evidence of no possible Black Swans. Even though it is in fact vast, the logical distance between the two assertions will seem very narrow in your mind, so that one can be easily substituted for the other. Ten days from now, if you manage to remember the first statement at all, you will be likely to retain the second, inaccurate version—that there is proof of no Black Swans. I call this confusion the round-trip fallacy, since these statements are not interchangeable.

Such confusion of the two statements partakes of a trivial, very trivial (but crucial), logical error—but we are not immune to trivial, logical errors, nor are professors and thinkers particularly immune to them (complicated equations do not tend to cohabit happily with clarity of mind). Unless we concentrate very hard, we are likely to unwittingly simplify the problem because our minds routinely do so without our knowing it. It is worth a deeper examination here.

Many people confuse the statement “almost all terrorists are Moslems” with “almost all Moslems are terrorists.” Assume that the first statement is true, that 99 percent of terrorists are Moslems. This would mean that only about .001 percent of Moslems are terrorists, since there are more than one billion Moslems and only, say, ten thousand terrorists, one in a hundred thousand. So the logical mistake makes you (unconsciously) overestimate the odds of a randomly drawn individual Moslem person (between the ages of, say, fifteen and fifty) being a terrorist by close to fifty thousand times!
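To make the arithmetic concrete, here is a minimal sketch in Python using the illustrative counts from the text; the halving for the fifteen-to-fifty age band is an assumption about how the text arrives at “close to fifty thousand,” not a figure it states.

    # Round-trip fallacy: P(Moslem | terrorist) is not P(terrorist | Moslem).
    # Counts are the text's illustrative figures, not real data.
    terrorists = 10_000
    moslems = 1_000_000_000
    p_moslem_given_terrorist = 0.99

    moslem_terrorists = p_moslem_given_terrorist * terrorists   # 9,900
    p_terrorist_given_moslem = moslem_terrorists / moslems      # 9.9e-06, about .001 percent

    # Confusing the two conditionals overstates the risk by this factor:
    print(p_moslem_given_terrorist / p_terrorist_given_moslem)  # 100,000.0
    # Restricting the base population to roughly half of it (say, ages
    # fifteen to fifty) halves the factor, toward "close to fifty thousand."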
The reader might see in this round-trip fallacy the unfairness of stereotypes—minorities in urban areas in the United States have suffered from the same confusion: even if most criminals come from their ethnic subgroup, most of their ethnic subgroup are not criminals, but they still suffer from discrimination by people who should know better. “I never meant to say that the Conservatives are generally stupid. I meant to say that stupid people are generally Conservative,” John Stuart Mill once complained. This problem is chronic: if you tell people that the key to success is not always skills, they think that you are telling them that it is never skills, always luck.

Our inferential machinery, that which we use in daily life, is not made for a complicated environment in which a statement changes markedly when its wording is slightly modified. Consider that in a primitive environment there is no consequential difference between the statements most killers are wild animals and most wild animals are killers. There is an error here, but it is almost inconsequential. Our statistical intuitions have not evolved for a habitat in which these subtleties can make a big difference.

Zoogles Are Not All Boogles

All zoogles are boogles. You saw a boogle. Is it a zoogle? Not necessarily, since not all boogles are zoogles; adolescents who make a mistake in answering this kind of question on their SAT test might not make it to college. Yet another person can get very high scores on the SATs and still feel a chill of fear when someone from the wrong side of town steps into the elevator. This inability to automatically transfer knowledge and sophistication from one situation to another, or from theory to practice, is a quite disturbing attribute of human nature. Let us call it the domain specificity of our reactions. By domain-specific I mean that our reactions, our mode of thinking, our intuitions, depend on the context in which the matter is presented, what evolutionary psychologists call the “domain” of the object or the event. The classroom is a domain; real life is another. We react to a piece of information not on its logical merit, but on the basis of which framework surrounds it, and how it registers with our social-emotional system. Logical problems approached one way in the classroom might be treated differently in daily life. Indeed they are treated differently in daily life.

Knowledge, even when it is exact, does not often lead to appropriate actions because we tend to forget what we know, or forget how to process it properly if we do not pay attention, even when we are experts. Statisticians, it has been shown, tend to leave their brains in the classroom and engage in the most trivial inferential errors once they are let out on the streets. In 1971, the psychologists Danny Kahneman and Amos Tversky plied professors of statistics with statistical questions not phrased as statistical questions. One was similar to the following (changing the example for clarity): Assume that you live in a town with two hospitals—one large, the other small. On a given day 60 percent of those born in one of the two hospitals are boys. Which hospital is it likely to be? Many statisticians made the equivalent of the mistake (during a casual conversation) of choosing the larger hospital, when in fact the very basis of statistics is that large samples are more stable and should fluctuate less from the long-term average—here, 50 percent for each of the sexes—than smaller samples. These statisticians would have flunked their own exams. During my days as a quant I counted hundreds of such severe inferential mistakes made by statisticians who forgot that they were statisticians.
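A quick way to see why the smaller hospital is the right answer is to compute the exact binomial probabilities. A minimal sketch; the daily birth counts (15 and 45) are illustrative assumptions, not figures from the text.

    # Chance that at least 60 percent of a day's births are boys, for a small
    # and a large hospital, assuming each birth is a boy with probability 0.5.
    from math import ceil, comb

    def p_share_boys_at_least(n_births, share=0.6, p_boy=0.5):
        k_min = ceil(share * n_births)
        return sum(comb(n_births, k) * p_boy**k * (1 - p_boy)**(n_births - k)
                   for k in range(k_min, n_births + 1))

    print(p_share_boys_at_least(15))  # ~0.30: the small hospital strays from 50% often
    print(p_share_boys_at_least(45))  # ~0.12: the large sample hugs the long-term average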
For another illustration of the way we can be ludicrously domain-specific in daily life, go to the luxury Reebok Sports Club in New York City, and look at the number of people who, after riding the escalator for a couple of floors, head directly to the StairMasters.

This domain specificity of our inferences and reactions works both ways: some problems we can understand in their applications but not in textbooks; others we are better at capturing in the textbook than in the practical application. People can manage to effortlessly solve a problem in a social situation but struggle when it is presented as an abstract logical problem. We tend to use different mental machinery—so-called modules—in different situations: our brain lacks a central all-purpose computer that starts with logical rules and applies them equally to all possible situations. And as I’ve said, we can commit a logical mistake in reality but not in the classroom. This asymmetry is best visible in cancer detection. Take doctors examining a patient for signs of cancer; tests are typically done on patients who want to know if they are cured or if there is “recurrence.” (In fact, recurrence is a misnomer; it simply means that the treatment did not kill all the cancerous cells and that these undetected malignant cells have started to multiply out of control.) It is not feasible, in the present state of technology, to examine every single one of the patient’s cells to see if all of them are nonmalignant, so the doctor takes a sample by scanning the body with as much precision as possible. Then she makes an assumption about what she did not see. I was once taken aback when a doctor told me after a routine cancer checkup, “Stop worrying, we have evidence of cure.” “Why?” I asked. “There is evidence of no cancer” was the reply. “How do you know?” I asked. He replied, “The scan is negative.” Yet he went around calling himself doctor!

An acronym used in the medical literature is NED, which stands for No Evidence of Disease. There is no such thing as END, Evidence of No Disease. Yet my experience discussing this matter with plenty of doctors, even those who publish papers on their results, is that many slip into the round-trip fallacy during conversation. Doctors in the midst of the scientific arrogance of the 1960s looked down at mothers’ milk as something primitive, as if it could be replicated by their laboratories—not realizing that mothers’ milk might include useful components that could have eluded their scientific understanding—a simple confusion of absence of evidence of the benefits of mothers’ milk with evidence of absence of the benefits (another case of Platonicity, as “it did not make sense” to breast-feed when we could simply use bottles). Many people paid the price for this naïve inference: those who were not breast-fed as infants turned out to be at an increased risk of a collection of health problems, including a higher likelihood of developing certain types of cancer—there had to be in mothers’ milk some necessary nutrients that still elude us. Furthermore, benefits to mothers who breast-feed were also neglected, such as a reduction in the risk of breast cancer.

Likewise with tonsils: the removal of tonsils may lead to a higher incidence of throat cancer, but for decades doctors never suspected that this “useless” tissue might actually have a use that escaped their detection. The same with the dietary fiber found in fruits and vegetables: doctors in the 1960s found it useless because they saw no immediate evidence of its necessity, and so they created a malnourished generation. Fiber, it turns out, acts to slow down the absorption of sugars in the blood and scrapes the intestinal tract of precancerous cells. Indeed medicine has caused plenty of damage throughout history, owing to this simple kind of inferential confusion. I am not saying here that doctors should not have beliefs, only that some kinds of definitive, closed beliefs need to be avoided—this is what Menodotus and his school seemed to be advocating with their brand of skeptical-empirical medicine that avoided theorizing. Medicine has gotten better—but many kinds of knowledge have not.

Evidence

By a mental mechanism I call naïve empiricism, we have a natural tendency to look for instances that confirm our story and our vision of the world—these instances are always easy to find. Alas, with tools, and fools, anything can be easy to find. You take past instances that corroborate your theories and you treat them as evidence. For instance, a diplomat will show you his “accomplishments,” not what he failed to do.
Mathematicians will try to convince you that their science is useful to society by pointing out instances where it proved helpful, not those where it was a waste of time, or, worse, those numerous mathematical applications that inflicted a severe cost on society owing to the highly unempirical nature of elegant mathematical theories. Even in testing a hypothesis, we tend to look for instances where the hypothesis proved true. Of course we can easily find confirmation; all we have to do is look, or have a researcher do it for us. I can find confirmation for just about anything, the way a skilled London cabbie can find traffic to increase the fare, even on a holiday. Some people go further and give me examples of events that we have been able to foresee with some success—indeed there are a few, like landing a man on the moon and the economic growth of the twenty-first century. One can find plenty of “counterevidence” to the points in this book, the best being that newspapers are excellent at predicting movie and theater schedules. Look, I predicted yesterday that the sun would rise today, and it did!

NEGATIVE EMPIRICISM

The good news is that there is a way around this naïve empiricism. I am saying that a series of corroborative facts is not necessarily evidence. Seeing white swans does not confirm the nonexistence of black swans. There is an exception, however: I know what statement is wrong, but not necessarily what statement is correct. If I see a black swan I can certify that all swans are not white! If I see someone kill, then I can be practically certain that he is a criminal. If I don’t see him kill, I cannot be certain that he is innocent. The same applies to cancer detection: the finding of a single malignant tumor proves that you have cancer, but the absence of such a finding cannot allow you to say with certainty that you are cancer-free.

We can get closer to the truth by negative instances, not by verification! It is misleading to build a general rule from observed facts. Contrary to conventional wisdom, our body of knowledge does not increase from a series of confirmatory observations, like the turkey’s. But there are some things I can remain skeptical about, and others I can safely consider certain. This makes the consequences of observations one-sided. It is not much more difficult than that. This asymmetry is immensely practical. It tells us that we do not have to be complete skeptics, just semiskeptics. The subtlety of real life over the books is that, in your decision making, you need be interested only in one side of the story: if you seek certainty about whether the patient has cancer, not certainty about whether he is healthy, then you might be satisfied with negative inference, since it will supply you the certainty you seek. So we can learn a lot from data—but not as much as we expect. Sometimes a lot of data can be meaningless; at other times one single piece of information can be very meaningful. It is true that a thousand days cannot prove you right, but one day can prove you to be wrong.
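A minimal sketch of this one-sidedness, with invented observations: corroborating instances leave the universal claim open, while a single counterexample settles it.

    # "All swans are white": a thousand white swans cannot prove it,
    # one black swan refutes it. Data are invented for illustration.
    def refuted(observations):
        return any(color != "white" for color in observations)

    history = ["white"] * 1000
    print(refuted(history))   # False: a thousand corroborations, claim still open
    history.append("black")
    print(refuted(history))   # True: one disconfirming observation closes the question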
The person who is credited with the promotion of this idea of one-sided semiskepticism is Sir Doktor Professor Karl Raimund Popper, who may be the only philosopher of science who is actually read and discussed by actors in the real world (though not as enthusiastically by professional philosophers). As I am writing these lines, a black-and-white picture of him is hanging on the wall of my study. It was a gift I got in Munich from the essayist Jochen Wegner, who, like me, considers Popper to be about all “we’ve got” among modern thinkers—well, almost. He writes to us, not to other philosophers. “We” are the empirical decision makers who hold that uncertainty is our discipline, and that understanding how to act under conditions of incomplete information is the highest and most urgent human pursuit.

Popper generated a large-scale theory around this asymmetry, based on a technique called “falsification” (to falsify is to prove wrong) meant to distinguish between science and nonscience, and people immediately started splitting hairs about its technicalities, even though it is not the most interesting, or the most original, of Popper’s ideas. This idea about the asymmetry of knowledge is so liked by practitioners because it is obvious to them; it is the way they run their business. The philosopher maudit Charles Sanders Peirce, who, like an artist, got only posthumous respect, also came up with a version of this Black Swan solution when Popper was wearing diapers—some people even called it the Peirce-Popper approach. Popper’s far more powerful and original idea is the “open” society, one that relies on skepticism as a modus operandi, refusing and resisting definitive truths. He accused Plato of closing our minds, according to the arguments I described in the Prologue. But Popper’s biggest idea was his insight concerning the fundamental, severe, and incurable unpredictability of the world, and that I will leave for the chapter on prediction.*

Of course, it is not so easy to “falsify,” i.e., to state that something is wrong with full certainty. Imperfections in your testing method may yield a mistaken “no.” The doctor discovering cancer cells might have faulty equipment causing optical illusions; or he could be a bell-curve-using economist disguised as a doctor. An eyewitness to a crime might be drunk. But it remains the case that you know what is wrong with a lot more confidence than you know what is right. All pieces of information are not equal in importance.

Popper introduced the mechanism of conjectures and refutations, which works as follows: you formulate a (bold) conjecture and you start looking for the observation that would prove you wrong. This is the alternative to our search for confirmatory instances. If you think the task is easy, you will be disappointed—few humans have a natural ability to do this. I confess that I am not one of them; it does not come naturally to me.*

Counting to Three

Cognitive scientists have studied our natural tendency to look only for corroboration; they call this vulnerability to the corroboration error the confirmation bias. There are some experiments showing that people focus only on the books read in Umberto Eco’s library. You can test a given rule either directly, by looking at instances where it works, or indirectly, by focusing on where it does not work. As we saw earlier, disconfirming instances are far more powerful in establishing truth. Yet we tend to not be aware of this property.

The first experiment I know of concerning this phenomenon was done by the psychologist P. C. Wason. He presented subjects with the three-number sequence 2, 4, 6, and asked them to try to guess the rule generating it. Their method of guessing was to produce other three-number sequences, to which the experimenter would respond “yes” or “no” depending on whether the new sequences were consistent with the rule.
Once confident with their answers, the subjects would formulate the rule. (Note the similarity of this experiment to the discussion in Chapter 1 of the way history presents itself to us: assuming history is generated according to some logic, we see only the events, never the rules, but need to guess how it works.) The correct rule was “numbers in ascending order,” nothing more. Very few subjects discovered it because in order to do so they had to offer a series in descending order (that the experimenter would say “no” to). Wason noticed that the subjects had a rule in mind, but gave him examples aimed at confirming it instead of trying to supply series that were inconsistent with their hypothesis. Subjects tenaciously kept trying to confirm the rules that they had made up.

This experiment inspired a collection of similar tests, of which another example: Subjects were asked which questions to ask to find out whether a person was extroverted or not, purportedly for another type of experiment. It was established that subjects supplied mostly questions for which a “yes” answer would support the hypothesis.

But there are exceptions. Among them figure chess grand masters, who, it has been shown, actually do focus on where a speculative move might be weak; rookies, by comparison, look for confirmatory instances instead of falsifying ones. But don’t play chess to practice skepticism. Scientists believe that it is the search for their own weaknesses that makes them good chess players, not the practice of chess that turns them into skeptics. Similarly, the speculator George Soros, when making a financial bet, keeps looking for instances that would prove his initial theory wrong. This, perhaps, is true self-confidence: the ability to look at the world without the need to find signs that stroke one’s ego.*

Sadly, the notion of corroboration is rooted in our intellectual habits and discourse. Consider this comment by the writer and critic John Updike: “When Julian Jaynes … speculates that until late in the second millennium B.C. men had no consciousness but were automatically obeying the voices of gods, we are astounded but compelled to follow this remarkable thesis through all the corroborative evidence.” Jaynes’s thesis may be right, but, Mr. Updike, the central problem of knowledge (and the point of this chapter) is that there is no such animal as corroborative evidence.

Saw Another Red Mini!

The following point further illustrates the absurdity of confirmation. If you believe that witnessing an additional white swan will bring confirmation that there are no black swans, then you should also accept the statement, on purely logical grounds, that the sighting of a red Mini Cooper should confirm that there are no black swans. Why? Just consider that the statement “all swans are white” is equivalent to “all nonwhite objects are not swans.” What confirms the latter statement should confirm the former. Therefore, a mind with a confirmation bent would infer that the sighting of a nonwhite object that is not a swan should bring such confirmation. This argument, known as Hempel’s raven paradox, was rediscovered by my friend the (thinking) mathematician Bruno Dupire during one of our intense meditating walks in London—one of those intense walk-discussions, intense to the point of our not noticing the rain. He pointed to a red Mini and shouted, “Look, Nassim, look! No Black Swan!”
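In standard first-order notation, the equivalence behind the paradox is just contraposition (a sketch of the logic, not notation from the text):

    \forall x\,\bigl(\mathrm{Swan}(x) \rightarrow \mathrm{White}(x)\bigr)
    \;\Longleftrightarrow\;
    \forall x\,\bigl(\lnot\mathrm{White}(x) \rightarrow \lnot\mathrm{Swan}(x)\bigr)

A red Mini Cooper is a nonwhite object that is not a swan, so it instantiates the right-hand side; any account on which instances confirm a generalization must therefore let it “confirm” the left-hand side too.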
Not Everything

We are not naïve enough to believe that someone will be immortal because we have never seen him die, or that someone is innocent of murder because we have never seen him kill. The problem of naïve generalization does not plague us everywhere. But such smart pockets of inductive skepticism tend to involve events that we have encountered in our natural environment, matters from which we have learned to avoid foolish generalization. For instance, when children are presented with the picture of a single member of a group and are asked to guess the properties of other unseen members, they are capable of selecting which attributes to generalize. Show a child a photograph of someone overweight, tell her that he is a member of a tribe, and ask her to describe the rest of the population: she will (most likely) not jump to the conclusion that all the members of the tribe are weight-challenged. But she would respond differently to generalizations involving skin color. If you show her people of dark complexion and ask her to describe their co-tribesmen, she will assume that they too have dark skin.

So it seems that we are endowed with specific and elaborate inductive instincts showing us the way. Contrary to the opinion held by the great David Hume, and that of the British empiricist tradition, that belief arises from custom, as they assumed that we learn generalizations solely from experience and empirical observations, it was shown from studies of infant behavior that we come equipped with mental machinery that causes us to selectively generalize from experiences (i.e., to selectively acquire inductive learning in some domains but remain skeptical in others). By doing so, we are not learning from a mere thousand days, but benefiting, thanks to evolution, from the learning of our ancestors—which found its way into our biology.

Back to Mediocristan

And we may have learned things wrong from our ancestors. I speculate here that we probably inherited the instincts adequate for survival in the East African Great Lakes region where we presumably hail from, but these instincts are certainly not well adapted to the present, post-alphabet, intensely informational, and statistically complex environment. Indeed our environment is a bit more complex than we (and our institutions) seem to realize. How? The modern world, being Extremistan, is dominated by rare—very rare—events. It can deliver a Black Swan after thousands and thousands of white ones, so we need to withhold judgment for longer than we are inclined to. As I said in Chapter 3, it is impossible—biologically impossible—to run into a human several hundred miles tall, so our intuitions rule these events out. But the sales of a book or the magnitude of social events do not follow such strictures. It takes a lot more than a thousand days to accept that a writer is ungifted, a market will not crash, a war will not happen, a project is hopeless, a country is “our ally,” a company will not go bust, a brokerage-house security analyst is not a charlatan, or a neighbor will not attack us. In the distant past, humans could make inferences far more accurately and quickly.

Furthermore, the sources of Black Swans today have multiplied beyond measurability.* In the primitive environment they were limited to newly encountered wild animals, new enemies, and abrupt weather changes. These events were repeatable enough for us to have built an innate fear of them.
This instinct to make inferences rather quickly, and to “tunnel” (i.e., focus on a small number of sources of uncertainty, or causes of known Black Swans), remains rather ingrained in us. This instinct, in a word, is our predicament.

* Neither Peirce nor Popper was the first to come up with this asymmetry. The philosopher Victor Brochard mentioned the importance of negative empiricism in 1878, as if it were a matter held by the empiricists to be the sound way to do business—ancients understood it implicitly. Out-of-print books deliver many surprises.

* As I said in the Prologue, the likely not happening is also a Black Swan. So disconfirming the likely is equivalent to confirming the unlikely.

* This confirmation problem pervades our modern life, since most conflicts have at their root the following mental bias: when Arabs and Israelis watch news reports they see different stories in the same succession of events. Likewise, Democrats and Republicans look at different parts of the same data and never converge to the same opinions. Once your mind is inhabited with a certain view of the world, you will tend to only consider instances proving you to be right. Paradoxically, the more information you have, the more justified you will feel in your views.

* Clearly, weather-related and geodesic events (such as tornadoes and earthquakes) have not changed much over the past millennium, but what have changed are the socioeconomic consequences of such occurrences. Today, an earthquake or hurricane commands more and more severe economic consequences than it did in the past because of the interlocking relationships between economic entities and the intensification of the “network effects” that we will discuss in Part Three. Matters that used to have mild effects now command a high impact. Tokyo’s 1923 earthquake caused a drop of about a third in Japan’s GNP. Extrapolating from the tragedy of Kobe in 1995, we can easily infer that the consequences of another such earthquake in Tokyo would be far costlier than that of its predecessor.

Chapter Six

THE NARRATIVE FALLACY

The cause of the because—How to split a brain—Effective methods of pointing at the ceiling—Dopamine will help you win—I will stop riding motorcycles (but not today)—Both empirical and psychologist? Since when?

ON THE CAUSES OF MY REJECTION OF CAUSES

During the fall of 2004, I attended a conference on aesthetics and science in Rome, perhaps the best possible location for such a meeting, since aesthetics permeates everything there, down to one’s personal behavior and tone of voice. At lunch, a prominent professor from a university in southern Italy greeted me with extreme enthusiasm. I had listened earlier that morning to his impassioned presentation; he was so charismatic, so convinced, and so convincing that, although I could not understand much of what he said, I found myself fully agreeing with everything. I could only make out a sentence here and there, since my knowledge of Italian worked better in cocktail parties than in intellectual and scholarly venues. At some point during his speech, he turned all red with anger—thus convincing me (and the audience) that he was definitely right. He assailed me during lunch to congratulate me for showing the effects of those causal links that are more prevalent in the human mind than in reality. The conversation got so animated that we stood together near the buffet table, blocking the other delegates from getting close to the food.
He was speaking accented French (with his hands), I was answering in primitive Italian (with my hands), and we were so vivacious that the other guests were afraid to interrupt a conversation of such importance and animation. He was emphatic about my previous book on randomness, a sort of angry trader’s reaction against blindness to luck in life and in the markets, which had been published there under the musical title Giocati dal caso. I had been lucky to have a translator who knew almost more about the topic than I did, and the book found a small following among Italian intellectuals. “I am a huge fan of your ideas, but I feel slighted. These are truly mine too, and you wrote the book that I (almost) planned to write,” he said. “You are a lucky man; you presented in such a comprehensive way the effect of chance on society and the overestimation of cause and effect. You show how stupid we are to systematically try to explain skills.” He stopped, then added, in a calmer tone: “But, mon cher ami, let me tell you quelque chose [uttered very slowly, with his thumb hitting his index and middle fingers]: had you grown up in a Protestant society where people are told that efforts are linked to rewards and individual responsibility is emphasized, you would never have seen the world in such a manner. You were able to see luck and separate cause and effect because of your Eastern Orthodox Mediterranean heritage.” He was using the French à cause. And he was so convincing that, for a minute, I agreed with his interpretation.

We like stories, we like to summarize, and we like to simplify, i.e., to reduce the dimension of matters. The first of the problems of human nature that we examine in this section, the one just illustrated above, is what I call the narrative fallacy. (It is actually a fraud, but, to be more polite, I will call it a fallacy.) The fallacy is associated with our vulnerability to overinterpretation and our predilection for compact stories over raw truths. It severely distorts our mental representation of the world; it is particularly acute when it comes to the rare event.

Notice how my thoughtful Italian fellow traveler shared my militancy against overinterpretation and against the overestimation of cause, yet was unable to see me and my work without a reason, a cause, tagged to both, as anything other than part of a story. He had to invent a cause. Furthermore, he was not aware of his having fallen into the causation trap, nor was I immediately aware of it myself.

The narrative fallacy addresses our limited ability to look at sequences of facts without weaving an explanation into them, or, equivalently, forcing a logical link, an arrow of relationship, upon them. Explanations bind facts together. They make them all the more easily remembered; they help them make more sense. Where this propensity can go wrong is when it increases our impression of understanding.

This chapter will cover, just like the preceding one, a single problem, but seemingly in different disciplines. The problem of narrativity, although extensively studied in one of its versions by psychologists, is not so “psychological”: something about the way disciplines are designed masks the point that it is more generally a problem of information. While narrativity comes from an ingrained biological need to reduce dimensionality, robots would be prone to the same process of reduction. Information wants to be reduced.
To help the reader locate himself: in studying the problem of induction in the previous chapter, we examined what could be inferred about the unseen, what lies outside our information set. Here, we look at the seen, what lies within the information set, and we examine the distortions in the act of processing it. There is plenty to say on this topic, but the angle I take concerns narrativity’s simplification of the world around us and its effects on our perception of the Black Swan and wild uncertainty.

SPLITTING BRAINS

Ferreting out antilogics is an exhilarating activity. For a few months, you experience the titillating sensation that you’ve just entered a new world. After that, the novelty fades, and your thinking returns to business as usual. The world is dull again until you find another subject to be excited about (or manage to put another hotshot in a state of total rage). For me, one such antilogic came with the discovery—thanks to the literature on cognition—that, counter to what everyone believes, not theorizing is an act—that theorizing can correspond to the absence of willed activity, the “default” option. It takes considerable effort to see facts (and remember them) while withholding judgment and resisting explanations. And this theorizing disease is rarely under our control: it is largely anatomical, part of our biology, so fighting it requires fighting one’s own self. So the ancient skeptics’ precepts to withhold judgment go against our nature. Talk is cheap, a problem with advice-giving philosophy we will see in Chapter 13.

Try to be a true skeptic with respect to your interpretations and you will be worn out in no time. You will also be humiliated for resisting theorizing. (There are tricks to achieving true skepticism; but you have to go through the back door rather than engage in a frontal attack on yourself.) Even from an anatomical perspective, it is impossible for our brain to see anything in raw form without some interpretation. We may not even always be conscious of it.

Post hoc rationalization. In an experiment, psychologists asked women to select from among twelve pairs of nylon stockings the ones they preferred. The researchers then asked the women their reasons for their choices. Texture, “feel,” and color featured among the selected reasons. All the pairs of stockings were, in fact, identical. The women supplied backfit, post hoc explanations. Does this suggest that we are better at explaining than at understanding? Let us see.

A series of famous experiments on split-brain patients gives us convincing physical—that is, biological—evidence of the automatic aspect of the act of interpretation. There appears to be a sense-making organ in us—though it may not be easy to zoom in on it with any precision. Let us see how it is detected. Split-brain patients have no connection between the left and the right sides of their brains, which prevents information from being shared between the two cerebral hemispheres. These patients are jewels, rare and invaluable for researchers. You literally have two different persons, and you can communicate with each one of them separately; the differences between the two individuals give you some indication about the specialization of each of the hemispheres. This splitting is usually the result of surgery to remedy more serious conditions like severe epilepsy; no, scientists in Western countries (and most Eastern ones) are no longer allowed to cut human brains in half, even if it is for the pursuit of knowledge and wisdom.
Now, say that you induced such a person to perform an act—raise his finger, laugh, or grab a shovel—in order to ascertain how he ascribes a reason to his act (when in fact you know that there is no reason for it other than your inducing it). If you ask the right hemisphere, here isolated from the left side, to perform the action, then ask the other hemisphere for an explanation, the patient will invariably offer some interpretation: “I was pointing at the ceiling in order to …,” “I saw something interesting on the wall,” or, if you ask this author, I will offer my usual “because I am originally from the Greek Orthodox village of Amioun, northern Lebanon,” et cetera. Now, if you do the opposite, namely instruct the isolated left hemisphere of a right-handed person to perform an act and ask the right hemisphere for the reasons, you will be plainly told, “I don’t know.” Note that the left hemisphere is where language and deduction generally reside. I warn the reader hungry for “science” against attempts to build a neural map: all I’m trying to show is the biological basis of this tendency toward causality, not its precise location. There are reasons for us to be suspicious of these “right brain/left brain” distinctions and subsequent pop-science generalizations about personality. Indeed, the idea that the left brain controls language may not be so accurate: the left brain seems more precisely to be where pattern interpretation resides, and it may control language only insofar as language has a pattern-interpretation attribute. Another difference between the hemispheres is that the right brain deals with novelty. It tends to see the gestalt (the general, or the forest), in a parallel mode, while the left brain is concerned with the trees, in a serial mode.

To see an illustration of our biological dependence on a story, consider the following experiment. First, read this:

A BIRD IN THE
THE HAND IS WORTH
TWO IN THE BUSH

Do you see anything unusual? Try again.*

The Sydney-based brain scientist Alan Snyder (who has a Philadelphia accent) made the following discovery. If you inhibit the left hemisphere of a right-handed person (more technically, by directing low-frequency magnetic pulses into the left frontotemporal lobes), you lower his rate of error in reading the above caption. Our propensity to impose meaning and concepts blocks our awareness of the details making up the concept. However, if you zap people’s left hemispheres, they become more realistic—they can draw better and with more verisimilitude. Their minds become better at seeing the objects themselves, cleared of theories, narratives, and prejudice.

Why is it hard to avoid interpretation? It is key that, as we saw with the vignette of the Italian scholar, brain functions often operate outside our awareness. You interpret pretty much as you perform other activities deemed automatic and outside your control, like breathing. What makes nontheorizing cost you so much more energy than theorizing? First, there is the impenetrability of the activity. I said that much of it takes place outside of our awareness: if you don’t know that you are making the inference, how can you stop yourself unless you stay in a continuous state of alert? And if you need to be continuously on the watch, doesn’t that cause fatigue? Try it for an afternoon and see.
A Little More Dopamine

In addition to the story of the left-brain interpreter, we have more physiological evidence of our ingrained pattern seeking, thanks to our growing knowledge of the role of neurotransmitters, the chemicals that are assumed to transport signals between different parts of the brain. It appears that pattern perception increases along with the concentration in the brain of the chemical dopamine. Dopamine also regulates moods and supplies an internal reward system in the brain (not surprisingly, it is found in slightly higher concentrations in the left side of the brains of right-handed persons than on the right side). A higher concentration of dopamine appears to lower skepticism and result in greater vulnerability to pattern detection; an injection of L-dopa, a substance used to treat patients with Parkinson's disease, seems to increase such activity and lowers one's suspension of belief. The person becomes vulnerable to all manner of fads, such as astrology, superstitions, economics, and tarot-card reading.

Actually, as I am writing this, there is news of a pending lawsuit by a patient going after his doctor for more than $200,000—an amount he allegedly lost while gambling. The patient claims that the treatment of his Parkinson's disease caused him to go on wild betting sprees in casinos. It turns out that one of the side effects of L-dopa is that a small but significant minority of patients become compulsive gamblers. Since such gambling is associated with their seeing what they believe to be clear patterns in random numbers, this illustrates the relation between knowledge and randomness. It also shows that some aspects of what we call "knowledge" (and what I call narrative) are an ailment.

Once again, I warn the reader that I am not focusing on dopamine as the reason for our overinterpreting; rather, my point is that there is a physical and neural correlate to such operation and that our minds are largely victims of our physical embodiment. Our minds are like inmates, captive to our biology, unless we manage a cunning escape. It is the lack of our control of such inferences that I am stressing. Tomorrow, someone may discover another chemical or organic basis for our perception of patterns, or counter what I said about the left-brain interpreter by showing the role of a more complex structure; but it would not negate the idea that perception of causation has a biological foundation.

Andrey Nikolayevich's Rule

There is another, even deeper reason for our inclination to narrate, and it is not psychological. It has to do with the effect of order on information storage and retrieval in any system, and it's worth explaining here because of what I consider the central problems of probability and information theory.

The first problem is that information is costly to obtain. The second problem is that information is also costly to store—like real estate in New York. The more orderly, less random, patterned, and narratized a series of words or symbols, the easier it is to store that series in one's mind or jot it down in a book so your grandchildren can read it someday. Finally, information is costly to manipulate and retrieve.

With so many brain cells—one hundred billion (and counting)—the attic is quite large, so the difficulties probably do not arise from storage-capacity limitations, but may be just indexing problems. Your conscious, or working, memory, the one you are using to read these lines and make sense of their meaning, is considerably smaller than the attic.
Consider that your working memory has difficulty holding a mere phone number longer than seven digits. Change metaphors slightly and imagine that your consciousness is a desk in the Library of Congress: no matter how many books the library holds, and makes available for retrieval, the size of your desk sets some processing limitations. Compression is vital to the performance of conscious work.

Consider a collection of words glued together to constitute a 500-page book. If the words are purely random, picked up from the dictionary in a totally unpredictable way, you will not be able to summarize, transfer, or reduce the dimensions of that book without losing something significant from it. You need 100,000 words to carry the exact message of a random 100,000 words with you on your next trip to Siberia. Now consider the opposite: a book filled with the repetition of the following sentence: "The chairman of [insert here your company name] is a lucky fellow who happened to be in the right place at the right time and claims credit for the company's success, without making a single allowance for luck," running ten times per page for 500 pages. The entire book can be accurately compressed, as I have just done, into 34 words (out of 100,000); you could reproduce it with total fidelity out of such a kernel. By finding the pattern, the logic of the series, you no longer need to memorize it all. You just store the pattern. And, as we can see here, a pattern is obviously more compact than raw information. You looked into the book and found a rule. It is along these lines that the great probabilist Andrey Nikolayevich Kolmogorov defined the degree of randomness; it is called "Kolmogorov complexity."
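The compression point can be made concrete with a small sketch in Python. This is my own illustration, not anything from the literature cited here: true Kolmogorov complexity is uncomputable, so an off-the-shelf compressor (zlib) serves as a crude stand-in for "length of the shortest description," and the company name "Acme" and all sizes below are placeholders of my choosing.

```python
import random
import string
import zlib

def compressed_size(text: str) -> int:
    # Length after general-purpose compression: an upper bound on,
    # and rough practical proxy for, Kolmogorov complexity.
    return len(zlib.compress(text.encode("utf-8"), level=9))

random.seed(42)

# A "book" of 100,000 purely random seven-letter words: no pattern to find.
random_book = " ".join(
    "".join(random.choices(string.ascii_lowercase, k=7)) for _ in range(100_000)
)

# A "book" that repeats one sentence over and over: pure pattern.
# "Acme" stands in for "[insert here your company name]".
sentence = ("The chairman of Acme is a lucky fellow who happened to be in the "
            "right place at the right time and claims credit for the company's "
            "success, without making a single allowance for luck. ")
repetitive_book = sentence * (len(random_book) // len(sentence))

print(f"raw sizes:        {len(random_book):>9,} vs {len(repetitive_book):>9,} chars")
print(f"compressed sizes: {compressed_size(random_book):>9,} vs "
      f"{compressed_size(repetitive_book):>9,} bytes")
# The random book cannot shrink below the entropy of its letters, so it stays
# a large fraction of its raw size; the repetitive book shrinks by roughly two
# orders of magnitude, to little more than the kernel sentence plus "repeat it."
```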
We, members of the human variety of primates, have a hunger for rules because we need to reduce the dimension of matters so they can get into our heads. Or, rather, sadly, so we can squeeze them into our heads. The more random information is, the greater the dimensionality, and thus the more difficult to summarize. The more you summarize, the more order you put in, the less randomness. Hence the same condition that makes us simplify pushes us to think that the world is less random than it actually is.

And the Black Swan is what we leave out of simplification.

Both the artistic and scientific enterprises are the product of our need to reduce dimensions and inflict some order on things. Think of the world around you, laden with trillions of details. Try to describe it and you will find yourself tempted to weave a thread into what you are saying. A novel, a story, a myth, or a tale, all have the same function: they spare us from the complexity of the world and shield us from its randomness. Myths impart order to the disorder of human perception and the perceived "chaos of human experience."* Indeed, many severe psychological disorders accompany the feeling of loss of control of—being able to "make sense" of—one's environment.

Platonicity affects us here once again. The very same desire for order, interestingly, applies to scientific pursuits—it is just that, unlike art, the (stated) purpose of science is to get to the truth, not to give you a feeling of organization or make you feel better. We tend to use knowledge as therapy.

A Better Way to Die

To view the potency of narrative, consider the following statement: "The king died and the queen died." Compare it to "The king died, and then the queen died of grief." This exercise, presented by the novelist E. M. Forster, shows the distinction between mere succession of information and a plot. But notice the hitch here: although we added information to the second statement, we effectively reduced the dimension of the total. The second sentence is, in a way, much lighter to carry and easier to remember; we now have one single piece of information in place of two. As we can remember it with less effort, we can also sell it to others, that is, market it better as a packaged idea. This, in a nutshell, is the definition and function of a narrative.

To see how the narrative can lead to a mistake in the assessment of the odds, do the following experiment. Give someone a well-written detective story—say, an Agatha Christie novel with a handful of characters who can all be plausibly deemed guilty. Now question your subject about the probabilities of each character's being the murderer. Unless she writes down the percentages to keep an exact tally of them, they should add up to well over 100 percent (even well over 200 percent for a good novel). The better the detective writer, the higher that number.

REMEMBRANCE OF THINGS NOT QUITE PAST

Our tendency to perceive—to impose—narrativity and causality is a symptom of the same disease—dimension reduction. Moreover, like causality, narrativity has a chronological dimension and leads to the perception of the flow of time. Causality makes time flow in a single direction, and so does narrativity.

But memory and the arrow of time can get mixed up. Narrativity can viciously affect the remembrance of past events as follows: we will tend to more easily remember those facts from our past that fit a narrative, while we tend to neglect others that do not appear to play a causal role in that narrative. Consider that we recall events in our memory all the while knowing the answer of what happened subsequently. It is literally impossible to ignore posterior information when solving a problem. This simple inability to remember not the true sequence of events but a reconstructed one will make history appear in hindsight to be far more explainable than it actually was—or is.

Conventional wisdom holds that memory works like a serial recording device, such as a computer diskette. In reality, memory is dynamic—not static—like a paper on which new texts (or new versions of the same text) will be continuously recorded, thanks to the power of posterior information. (In a remarkable insight, the nineteenth-century Parisian poet Charles Baudelaire compared our memory to a palimpsest, a type of parchment on which old texts can be erased and new ones written over them.) Memory is more of a self-serving dynamic revision machine: you remember the last time you remembered the event and, without realizing it, change the story at every subsequent remembrance.

So we pull memories along causative lines, revising them involuntarily and unconsciously. We continuously renarrate past events in the light of what appears to make what we think of as logical sense after these events occur. By a process called reverberation, a memory corresponds to the strengthening of connections from an increase of brain activity in a given sector of the brain—the more activity, the stronger the memory. While we believe that the memory is fixed, constant, and connected, all this is very far from truth. What makes sense according to information obtained subsequently will be remembered more vividly.
We invent some of our memories—a sore point in courts of law, since it has been shown that plenty of people have invented child-abuse stories by dint of listening to theories.

The Madman's Narrative

We have far too many possible ways to interpret past events for our own good. Consider the behavior of paranoid people. I have had the privilege to work with colleagues who have hidden paranoid disorders that come to the surface on occasion. When the person is highly intelligent, he can astonish you with the most far-fetched, yet completely plausible interpretations of the most innocuous remark. If I say to them, "I am afraid that …," in reference to an undesirable state of the world, they may interpret it literally, that I am experiencing actual fright, and it triggers an episode of fear on the part of the paranoid person. Someone hit with such a disorder can muster the most insignificant of details and construct an elaborate and coherent theory of why there is a conspiracy against him. And if you gather, say, ten paranoid people, all in the same state of episodic delusion, the ten of them will provide ten distinct, yet coherent, interpretations of events.

When I was about seven, my schoolteacher showed us a painting of an assembly of impecunious Frenchmen in the Middle Ages at a banquet held by one of their benefactors, some benevolent king, as I recall. They were holding the soup bowls to their lips. The schoolteacher asked me why they had their noses in the bowls and I answered, "Because they were not taught manners." She replied, "Wrong. The reason is that they are hungry." I felt stupid at not having thought of this, but I could not understand what made one explanation more likely than the other, or why we weren't both wrong (there was no, or little, silverware at the time, which seems the most likely explanation).

Beyond our perceptional distortions, there is a problem with logic itself. How can someone have no clue yet be able to hold a set of perfectly sound and coherent viewpoints that match the observations and abide by every single possible rule of logic? Consider that two people can hold incompatible beliefs based on the exact same data. Does this mean that there are possible families of explanations and that each of these can be equally perfect and sound? Certainly not. One may have a million ways to explain things, but the true explanation is unique, whether or not it is within our reach.

In a famous argument, the logician W. V. Quine showed that there exist families of logically consistent interpretations and theories that can match a given series of facts. Such insight should warn us that mere absence of nonsense may not be sufficient to make something true. Quine's problem is related to his finding difficulty in translating statements between languages, simply because one could interpret any sentence in an infinity of ways. (Note here that someone splitting hairs could find a self-canceling aspect to Quine's own writing. I wonder how he expects us to understand this very point in a noninfinity of ways.)

This does not mean that we cannot talk about causes; there are ways to escape the narrative fallacy. How? By making conjectures and running experiments, or as we will see in Part Two (alas), by making testable predictions.* The psychology experiments I am discussing here do so: they select a population and run a test. The results should hold in Tennessee, in China, even in France.
Narrative and Therapy

If narrativity causes us to see past events as more predictable, more expected, and less random than they actually were, then we should be able to make it work for us as therapy against some of the stings of randomness.

Say some unpleasant event, such as a car accident for which you feel indirectly responsible, leaves you with a bad lingering aftertaste. You are tortured by the thought that you caused injuries to your passengers; you are continuously aware that you could have avoided the accident. Your mind keeps playing alternative scenarios branching out of a main tree: if you had not woken up three minutes later than usual, you would have avoided the car accident. It was not your intention to injure your passengers, yet your mind is inhabited with remorse and guilt. People in professions with high randomness (such as in the markets) can suffer more than their share of the toxic effect of look-back stings: I should have sold my portfolio at the top; I could have bought that stock years ago for pennies and I would now be driving a pink convertible; et cetera. If you are a professional, you can feel that you "made a mistake," or, worse, that "mistakes were made," when you failed to do the equivalent of buying the winning lottery ticket for your investors, and feel the need to apologize for your "reckless" investment strategy (that is, what seems reckless in retrospect).

How can you get rid of such a persistent throb? Don't try to willingly avoid thinking about it: this will almost surely backfire. A more appropriate solution is to make the event appear more unavoidable. Hey, it was bound to take place, and it seems futile to agonize over it. How can you do so? Well, with a narrative. Patients who spend fifteen minutes every day writing an account of their daily troubles feel indeed better about what has befallen them. You feel less guilty for not having avoided certain events; you feel less responsible for them. Things appear as if they were bound to happen.

If you work in a randomness-laden profession, as we see, you are likely to suffer burnout effects from that constant second-guessing of your past actions in terms of what played out subsequently. Keeping a diary is the least you can do in these circumstances.

TO BE WRONG WITH INFINITE PRECISION

We harbor a crippling dislike for the abstract.

One day in December 2003, when Saddam Hussein was captured, Bloomberg News flashed the following headline at 13:01: U.S. TREASURIES RISE; HUSSEIN CAPTURE MAY NOT CURB TERRORISM. Whenever there is a market move, the news media feel obligated to give the "reason." Half an hour later, they had to issue a new headline. As these U.S. Treasury bonds fell in price (they fluctuate all day long, so there was nothing special about that), Bloomberg News had a new reason for the fall: Saddam's capture (the same Saddam). At 13:31 they issued the next bulletin: U.S. TREASURIES FALL; HUSSEIN CAPTURE BOOSTS ALLURE OF RISKY ASSETS. So it was the same capture (the cause) explaining one event and its exact opposite. Clearly, this can't be; these two facts cannot be linked.

Do media journalists repair to the nurse's office every morning to get their daily dopamine injection so that they can narrate better? (Note the irony that the word dope, used to designate the illegal drugs athletes take to improve performance, has the same root as dopamine.) It happens all the time: a cause is proposed to make you swallow the news and make matters more concrete.
After a candidate's defeat in an election, you will be supplied with the "cause" of the voters' disgruntlement. Any conceivable cause can do. The media, however, go to great lengths to make the process "thorough" with their armies of fact-checkers. It is as if they wanted to be wrong with infinite precision (instead of accepting being approximately right, like a fable writer).

Note that in the absence of any other information about a person you encounter, you tend to fall back on her nationality and background as a salient attribute (as the Italian scholar did with me). How do I know that this attribution to the background is bogus? I did my own empirical test by checking how many traders with my background who experienced the same war became skeptical empiricists, and found none out of twenty-six. This nationality business helps you make a great story and satisfies your hunger for ascription of causes. It seems to be the dump site where all explanations go until one can ferret out a more obvious one (such as, say, some evolutionary argument that "makes sense"). Indeed, people tend to fool themselves with their self-narrative of "national identity," which, in a breakthrough paper in Science by sixty-five authors, was shown to be a total fiction. ("National traits" might be great for movies, they might help a lot with war, but they are Platonic notions that carry no empirical validity—yet, for example, both the English and the non-English erroneously believe in an English "national temperament.") Empirically, sex, social class, and profession seem to be better predictors of someone's behavior than nationality (a male from Sweden resembles a male from Togo more than a female from Sweden; a philosopher from Peru resembles a philosopher from Scotland more than a janitor from Peru; and so on).

The problem of overcausation does not lie with the journalist, but with the public. Nobody would pay one dollar to buy a series of abstract statistics reminiscent of a boring college lecture. We want to be told stories, and there is nothing wrong with that—except that we should check more thoroughly whether the story provides consequential distortions of reality. Could it be that fiction reveals truth while nonfiction is a harbor for the liar? Could it be that fables and stories are closer to the truth than is the thoroughly fact-checked ABC News? Just consider that the newspapers try to get impeccable facts, but weave them into a narrative in such a way as to convey the impression of causality (and knowledge). There are fact-checkers, not intellect-checkers. Alas.

But there is no reason to single out journalists. Academics in narrative disciplines do the same thing, but dress it up in a formal language—we will catch up to them in Chapter 10, on prediction. Besides narrative and causality, journalists and public intellectuals of the sound-bite variety do not make the world simpler. Instead, they almost invariably make it look far more complicated than it is. The next time you are asked to discuss world events, plead ignorance, and give the arguments I offered in this chapter casting doubt on the visibility of the immediate cause. You will be told that "you overanalyze," or that "you are too complicated." All you will be saying is that you don't know!

Dispassionate Science

Now, if you think that science is an abstract subject free of sensationalism and distortions, I have some sobering news.
Empirical researchers have found evidence that scientists too are vulnerable to narratives, emphasizing titles and "sexy" attention-grabbing punch lines over more substantive matters. They too are human and get their attention from sensational matters. The way to remedy this is through meta-analyses of scientific studies, in which an überresearcher peruses the entire literature, which includes the less-advertised articles, and produces a synthesis.

THE SENSATIONAL AND THE BLACK SWAN

Let us see how narrativity affects our understanding of the Black Swan. Narrative, as well as its associated mechanism of salience of the sensational fact, can mess up our projection of the odds. Take the following experiment conducted by Kahneman and Tversky, the pair introduced in the previous chapter: the subjects were forecasting professionals who were asked to imagine the following scenarios and estimate their odds.

1. A massive flood somewhere in America in which more than a thousand people die.
2. An earthquake in California, causing massive flooding, in which more than a thousand people die.

Respondents estimated the first event to be less likely than the second. An earthquake in California, however, is a readily imaginable cause, which greatly increases the mental availability—hence the assessed probability—of the flood scenario.

Likewise, if I asked you how many cases of lung cancer are likely to take place in the country, you would supply some number, say half a million. Now, if instead I asked you how many cases of lung cancer are likely to take place because of smoking, odds are that you would give me a much higher number (I would guess more than twice as high). Adding the because makes these matters far more plausible, and far more likely. Cancer from smoking seems more likely than cancer without a cause attached to it—an unspecified cause means no cause at all.

I return to the example of E. M. Forster's plot from earlier in this chapter, but seen from the standpoint of probability. Which of these two statements seems more likely?

Joey seemed happily married. He killed his wife.

Joey seemed happily married. He killed his wife to get her inheritance.

Clearly the second statement seems more likely at first blush, which is a pure mistake of logic, since the first, being broader, can accommodate more causes, such as he killed his wife because he went mad, because she cheated with both the postman and the ski instructor, because he entered a state of delusion and mistook her for a financial forecaster.
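The "pure mistake of logic" has a one-line formal statement. This is the standard conjunction rule of elementary probability, textbook material rather than anything specific to the experiments above: adding a detail, however vivid, can only remove probability, never add it.

```latex
% Conjunction rule: for any events A (he killed his wife) and
% B (the motive was her inheritance),
\[
P(A \cap B) \;=\; P(A)\, P(B \mid A) \;\le\; P(A),
\qquad \text{since } 0 \le P(B \mid A) \le 1 .
\]
```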
All this can lead to pathologies in our decision making. How? Just imagine that, as shown by Paul Slovic and his collaborators, people are more likely to pay for terrorism insurance than for plain insurance (which covers, among other things, terrorism).

The Black Swans we imagine, discuss, and worry about do not resemble those likely to be Black Swans. We worry about the wrong "improbable" events, as we will see next.

Black Swan Blindness

The first question about the paradox of the perception of Black Swans is as follows: How is it that some Black Swans are overblown in our minds when the topic of this book is that we mainly neglect Black Swans? The answer is that there are two varieties of rare events: a) the narrated Black Swans, those that are present in the current discourse and that you are likely to hear about on television, and b) those nobody talks about, since they escape models—those that you would feel ashamed discussing in public because they do not seem plausible. I can safely say that it is entirely compatible with human nature that the incidences of Black Swans would be overestimated in the first case, but severely underestimated in the second one.

Indeed, lottery buyers overestimate their chances of winning because they visualize such a potent payoff—in fact, they are so blind to the odds that they treat odds of one in a thousand and one in a million almost in the same way.

Much of the empirical research agrees with this pattern of overestimation and underestimation of Black Swans. Kahneman and Tversky initially showed that people overreact to low-probability outcomes when you discuss the event with them, when you make them aware of it. If you ask someone, "What is the probability of death from a plane crash?" for instance, they will raise it. However, Slovic and his colleagues found, in insurance patterns, neglect of these highly improbable events in people's insurance purchases. They call it the "preference for insuring against probable small losses"—at the expense of the less probable but larger-impact ones.

Finally, after years of searching for empirical tests of our scorn of the abstract, I found researchers in Israel who ran the experiments I had been waiting for. Greg Barron and Ido Erev provide experimental evidence that agents underweight small probabilities when they engage in sequential experiments in which they derive the probabilities themselves, when they are not supplied with the odds. If you draw from an urn with a very small number of red balls and a high number of black ones, and if you do not have a clue about the relative proportions, you are likely to underestimate the number of red balls. It is only when you are supplied with their frequency—say, by telling you that 3 percent of the balls are red—that you overestimate it in your betting decision.
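A minimal simulation shows one mechanism behind this "learning from experience" effect. The parameters below (3 percent red balls, twenty draws per person, ten thousand simulated subjects) are my own illustrative choices, not Barron and Erev's actual design: with a rare event, the typical small personal sample contains no red balls at all, so most samplers never experience the event they are supposed to weigh.

```python
import random

random.seed(7)

TRUE_P_RED = 0.03    # true share of red balls (the rare event)
DRAWS      = 20      # small personal sample, as in sequential experiments
PEOPLE     = 10_000  # simulated subjects, each learning only from their own draws

never_saw_red = 0
experienced_freqs = []
for _ in range(PEOPLE):
    reds = sum(random.random() < TRUE_P_RED for _ in range(DRAWS))
    experienced_freqs.append(reds / DRAWS)
    if reds == 0:
        never_saw_red += 1

print(f"true probability of red:               {TRUE_P_RED:.3f}")
print(f"mean experienced frequency:            {sum(experienced_freqs) / PEOPLE:.3f}")
print(f"share who never saw a single red ball: {never_saw_red / PEOPLE:.1%}")
# With p = 0.03 and 20 draws, (1 - 0.03)**20 is about 0.54: more than half of
# the subjects experience a frequency of exactly zero. The average across
# subjects is unbiased, but the typical sampler underestimates the rare event.
```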
I've spent a lot of time wondering how we can be so myopic and short-termist yet survive in an environment that is not entirely from Mediocristan. One day, looking at the gray beard that makes me look ten years older than I am and thinking about the pleasure I derive from exhibiting it, I realized the following. Respect for elders in many societies might be a kind of compensation for our short-term memory. The word senate comes from senatus, "aged" in Latin; sheikh in Arabic means both a member of the ruling elite and "elder." Elders are repositories of complicated inductive learning that includes information about rare events. Elders can scare us with stories—which is why we become overexcited when we think of a specific Black Swan. I was excited to find out that this also holds true in the animal kingdom: a paper in Science showed that elephant matriarchs play the role of superadvisers on rare events.

We learn from repetition—at the expense of events that have not happened before. Events that are nonrepeatable are ignored before their occurrence, and overestimated after (for a while). After a Black Swan, such as September 11, 2001, people expect it to recur when in fact the odds of that happening have arguably been lowered. We like to think about specific and known Black Swans when in fact the very nature of randomness lies in its abstraction. As I said in the Prologue, it is the wrong definition of a god.

The economist Hyman Minsky sees the cycles of risk taking in the economy as following a pattern: stability and absence of crises encourage risk taking, complacency, and lowered awareness of the possibility of problems. Then a crisis occurs, resulting in people being shell-shocked and scared of investing their resources. Strangely, both Minsky and his school, dubbed Post-Keynesian, and his opponents, the libertarian "Austrian" economists, have the same analysis, except that the first group recommends governmental intervention to smooth out the cycle, while the second believes that civil servants should not be trusted to deal with such matters. While both schools of thought seem to fight each other, they both emphasize fundamental uncertainty and stand outside the mainstream economic departments (though they have large followings among businessmen and nonacademics). No doubt this emphasis on fundamental uncertainty bothers the Platonifiers.

All the tests of probability I discussed in this section are important; they show how we are fooled by the rarity of Black Swans but not by the role they play in the aggregate, their impact. In a preliminary study, the psychologist Dan Goldstein and I subjected students at the London Business School to examples from two domains, Mediocristan and Extremistan. We selected height, weight, and Internet hits per website. The subjects were good at guessing the role of rare events in Mediocristan-style environments. But their intuitions failed when it came to variables outside Mediocristan, showing that we are effectively not skilled at intuitively gauging the impact of the improbable, such as the contribution of a blockbuster to total book sales. In one experiment they underestimated the effect of a rare event by a factor of thirty-three.

Next, let us see how this lack of understanding of abstract matters affects us.

The Pull of the Sensational

Indeed, abstract statistical information does not sway us as much as the anecdote—no matter how sophisticated the person. I will give a few instances.

The Italian Toddler. In the late 1970s, a toddler fell into a well in Italy. The rescue team could not pull him out of the hole and the child stayed at the bottom of the well, helplessly crying. Understandably, the whole of Italy was concerned with his fate; the entire country hung on the frequent news updates. The child's cries produced acute pains of guilt in the powerless rescuers and reporters. His picture was prominently displayed on magazines and newspapers, and you could hardly walk in the center of Milan without being reminded of his plight.

Meanwhile, the civil war was raging in Lebanon, with an occasional hiatus in the conflict. While in the midst of their mess, the Lebanese were also absorbed in the fate of that child. The Italian child. Five miles away, people were dying from the war, citizens were threatened with car bombs, but the fate of the Italian child ranked high among the interests of the population in the Christian quarter of Beirut. "Look how cute that poor thing is," I was told. And the entire town expressed relief upon his eventual rescue.

As Stalin, who knew something about the business of mortality, supposedly said, "One death is a tragedy; a million is a statistic." Statistics stay silent in us.

Terrorism kills, but the biggest killer remains the environment, responsible for close to 13 million deaths annually. But terrorism causes outrage, which makes us overestimate the likelihood of a potential terrorist attack—and react more violently to one when it happens. We feel the sting of man-made damage far more than that caused by nature.

Central Park. You are on a plane on your way to spend a long (bibulous) weekend in New York City.
You are sitting next to an insurance salesman who, being a salesman, cannot stop talking. For him, not talking is the effortful activity. He tells you that his cousin (with whom he will celebrate the holidays) worked in a law office with someone whose brother-in-law's business partner's twin brother was mugged and killed in Central Park. Indeed, Central Park in glorious New York City. That was in 1989, if he remembers it well (the year is now 2007). The poor victim was only thirty-eight and had a wife and three children, one of whom had a birth defect and needed special care at Cornell Medical Center. Three children, one of whom needed special care, lost their father because of his foolish visit to Central Park.

Well, you are likely to avoid Central Park during your stay. You know you can get crime statistics from the Web or from any brochure, rather than anecdotal information from a verbally incontinent salesman. But you can't help it. For a while, the name Central Park will conjure up the image of that poor, undeserving man lying on the polluted grass. It will take a lot of statistical information to override your hesitation.

Motorcycle Riding. Likewise, the death of a relative in a motorcycle accident is far more likely to influence your attitude toward motorcycles than volumes of statistical analyses. You can effortlessly look up accident statistics on the Web, but they do not easily come to mind. Note that I ride my red Vespa around town, since no one in my immediate environment has recently suffered an accident—although I am aware of this problem in logic, I am incapable of acting on it.

Now, I do not disagree with those recommending the use of a narrative to get attention. Indeed, our consciousness may be linked to our ability to concoct some form of story about ourselves. It is just that narrative can be lethal when used in the wrong places.

THE SHORTCUTS

Next I will go beyond narrative to discuss the more general attributes of thinking and reasoning behind our crippling shallowness. These defects in reasoning have been cataloged and investigated by a powerful research tradition represented by a school called the Society of Judgment and Decision Making (the only academic and professional society of which I am a member, and proudly so; its gatherings are the only ones where I do not have tension in my shoulders or anger fits). It is associated with the school of research started by Daniel Kahneman, Amos Tversky, and their friends, such as Robyn Dawes and Paul Slovic. It is mostly composed of empirical psychologists and cognitive scientists whose methodology hews strictly to running very precise, controlled experiments (physics-style) on humans and making catalogs of how people react, with minimal theorizing. They look for regularities. Note that empirical psychologists use the bell curve to gauge errors in their testing methods, but as we will see more technically in Chapter 15, this is one of the rare adequate applications of the bell curve in social science, owing to the nature of the experiments. We have seen such types of experiments earlier in this chapter with the flood in California, and with the identification of the confirmation bias in Chapter 5.

These researchers have mapped our activities into (roughly) a dual mode of thinking, which they separate as "System 1" and "System 2," or the experiential and the cogitative. The distinction is straightforward.
System 1, the experiential one, is effortless, automatic, fast, opaque (we do not know that we are using it), parallel-processed, and can lend itself to errors. It is what we call "intuition," and performs these quick acts of prowess that became popular under the name blink, after the title of Malcolm Gladwell's bestselling book. System 1 is highly emotional, precisely because it is quick. It produces shortcuts, called "heuristics," that allow us to function rapidly and effectively. Dan Goldstein calls these heuristics "fast and frugal." Others prefer to call them "quick and dirty." Now, these shortcuts are certainly virtuous, since they are rapid, but, at times, they can lead us into some severe mistakes. This main idea generated an entire school of research called the heuristics and biases approach (heuristics corresponds to the study of shortcuts, biases to the study of mistakes).

System 2, the cogitative one, is what we normally call thinking. It is what you use in a classroom, as it is effortful (even for Frenchmen), reasoned, slow, logical, serial, progressive, and self-aware (you can follow the steps in your reasoning). It makes fewer mistakes than the experiential system, and, since you know how you derived your result, you can retrace your steps and correct them in an adaptive manner.

Most of our mistakes in reasoning come from using System 1 when we are in fact thinking that we are using System 2. How? Since we react without thinking and introspection, the main property of System 1 is our lack of awareness of using it!

Recall the round-trip error, our tendency to confuse "no evidence of Black Swans" with "evidence of no Black Swans"; it shows System 1 at work. You have to make an effort (System 2) to override your first reaction. Clearly Mother Nature makes you use the fast System 1 to get out of trouble, so that you do not sit down and cogitate whether there is truly a tiger attacking you or if it is an optical illusion. You run immediately, before you become "conscious" of the presence of the tiger.

Emotions are assumed to be the weapon System 1 uses to direct us and force us to act quickly. Emotion mediates risk avoidance far more effectively than our cognitive system. Indeed, neurobiologists who have studied the emotional system show how it often reacts to the presence of danger long before we are consciously aware of it—we experience fear and start reacting a few milliseconds before we realize that we are facing a snake.

Much of the trouble with human nature resides in our inability to use much of System 2, or to use it in a prolonged way without having to take a long beach vacation. In addition, we often just forget to use it.

Beware the Brain

Note that neurobiologists make, roughly, a similar distinction to that between System 1 and System 2, except that they operate along anatomical lines. Their distinction differentiates between parts of the brain: the cortical part, which we are supposed to use for thinking, and which distinguishes us from other animals, and the fast-reacting limbic brain, which is the center of emotions, and which we share with other mammals.

As a skeptical empiricist, I do not want to be the turkey, so I do not want to focus solely on specific organs in the brain, since we do not observe brain functions very well. Some people try to identify what are called the neural correlates of, say, decision making, or more aggressively the neural "substrates" of, say, memory.
The brain might be more complicated machinery than we think; its anatomy has fooled us repeatedly in the past. We can, however, assess regularities by running precise and thorough experiments on how people react under certain conditions, and keep a tally of what we see.

For an example that justifies skepticism about unconditional reliance on neurobiology, and vindicates the ideas of the empirical school of medicine to which Sextus belonged, let's consider the intelligence of birds. I kept reading in various texts that the cortex is where animals do their "thinking," and that the creatures with the largest cortex have the highest intelligence—we humans have the largest cortex, followed by bank executives, dolphins, and our cousins the apes. Well, it turns out that some birds, such as parrots, have a high level of intelligence, equivalent to that of dolphins, but that the intelligence of birds correlates with the size of another part of the brain, called the hyperstriatum. So neurobiology with its attribute of "hard science" can sometimes (though not always) fool you into a Platonified, reductive statement. I am amazed that the "empirics," skeptical about links between anatomy and function, had such insight—no wonder their school played a very small part in intellectual history. As a skeptical empiricist I prefer t
