Summary

This text discusses strategies for limiting exposure to the negative consequences of Black Swan events. It cautions against conventional risk-management methods, which are not suited to extreme situations, and argues that positive advice about how to succeed is often misguided: robustness comes mainly from avoiding losses.

Full Transcript

VII WHAT TO DO WITH THE FOURTH QUADRANT

NOT USING THE WRONG MAP: THE NOTION OF IATROGENICS

So for now I can produce phronetic rules (in the Aristotelian sense of phronesis, decision-making wisdom). Perhaps the story of my life lies in the following dilemma. To paraphrase Danny Kahneman, for psychological comfort some people would rather use a map of the Pyrénées while lost in the Alps than use nothing at all. They do not do so explicitly, but they actually do worse than that while dealing with the future and using risk measures. They would prefer a defective forecast to nothing. So providing a sucker with a probabilistic measure does a wonderful job of making him take more risks. I was planning to do a test with Dan Goldstein (as part of our general research programs to understand the intuitions of humans in Extremistan). Danny (he is great to walk with, but he does not do aimless strolling, “flâner”) insisted that doing our own experiments was not necessary. There is plenty of research on anchoring that proves the toxicity of giving someone a wrong numerical estimate of risk. Numerous experiments provide evidence that professionals are significantly influenced by numbers that they know to be irrelevant to their decision, like writing down the last four digits of one’s social security number before making a numerical estimate of potential market moves. German judges, very respectable people, who rolled dice before sentencing issued sentences 50 percent longer when the dice showed a high number, without being conscious of it.

Negative Advice

Simply, don’t get yourself into the Fourth Quadrant, the Black Swan Domain. But it is hard to heed this sound advice. Psychologists distinguish between acts of commission (what we do) and acts of omission. Although these are economically equivalent for the bottom line (a dollar not lost is a dollar earned), they are not treated equally in our minds. However, as I said, recommendations of the style “Do not do” are more robust empirically. How do you live long? By avoiding death. Yet people do not realize that success consists mainly in avoiding losses, not in trying to derive profits. Positive advice is usually the province of the charlatan. Bookstores are full of books on how someone became successful; there are almost no books with the title What I Learned Going Bust, or Ten Mistakes to Avoid in Life.

Linked to this need for positive advice is the preference we have to do something rather than nothing, even in cases when doing something is harmful. I was recently on TV and some empty-suit type kept bugging me for precise advice on how to pull out of the crisis. It was impossible to communicate my “what not to do” advice, or to point out that my field is error avoidance, not emergency room surgery, and that it could be a stand-alone discipline, just as worthy. Indeed, I spent twelve years trying to explain that in many instances it was better—and wiser—to have no models than to have the mathematical acrobatics we had.

Unfortunately such lack of rigor pervades the place where we expect it the least: institutional science. Science, particularly its academic version, has never liked negative results, let alone the statement and advertising of its own limits. The reward system is not set up for it. You get respect for doing funambulism or spectator sports—following the right steps to become “the Einstein of Economics” or “the next Darwin” rather than give society something real by debunking myths or by cataloguing where our knowledge stops.
Let me return to Gödel’s limit. In some instances we accept the limits of knowledge, trumpeting, say, Gödel’s “breakthrough” mathematical limit because it shows elegance in formulation and mathematical prowess—though the importance of this limit is dwarfed by our practical limits in forecasting climate changes, crises, social turmoil, or the fate of the endowment funds that will finance research into such future “elegant” limits. This is why I claim that my Fourth Quadrant solution is the most applied of such limits.

Iatrogenics and the Nihilism Label

Let’s consider medicine (that sister of philosophy), which only started saving lives less than a century ago (I am generous), and to a lesser extent than initially advertised in the popular literature, as the drops in mortality seem to arise much more from awareness of sanitation and the (random) discovery of antibiotics rather than from therapeutic contributions. Doctors, driven by the beastly illusion of control, spent a long time killing patients, not considering that “doing nothing” could be a valid option (it was “nihilistic”)—and research compiled by Spyros Makridakis shows that they still do to some extent, particularly in the overdiagnoses of some diseases.

The nihilism label has always been used to harm. Practitioners who were conservative and considered the possibility of letting nature do its job, or who stated the limits of our medical understanding, were until the 1960s accused of “therapeutic nihilism.” It was deemed “unscientific” to avoid embarking on a course of action based on an incomplete understanding of the human body—to say, “This is the limit; this is where my body of knowledge stops.” It has been used against this author by intellectual fraudsters trying to sell products.

The very term iatrogenics, i.e., the study of the harm caused by the healer, is not widespread—I have never seen it used outside medicine. In spite of my lifelong obsession with what is called type 1 error, or the false positive, I was only introduced to the concept of iatrogenic harm very recently, thanks to a conversation with the essayist Bryan Appleyard. How can such a major idea remain hidden from our consciousness? Even in medicine, that is, modern medicine, the ancient concept “Do no harm” sneaked in very late. The philosopher of science Georges Canguilhem wondered why it was not until the 1950s that the idea came to us. This, to me, is a mystery: how professionals can cause harm for such a long time in the name of knowledge and get away with it.

Sadly, further investigation shows that these iatrogenics were mere rediscoveries after science grew too arrogant by the Enlightenment. Alas, once again, the elders knew better—Greeks, Romans, Byzantines, and Arabs had a built-in respect for the limits of knowledge. There is a treatise by the medieval Arab philosopher and doctor Al-Ruhawi which betrays the familiarity of these Mediterranean cultures with iatrogenics. I have also in the past speculated that religion saved lives by taking the patient away from the doctor. You could satisfy your illusion of control by going to the Temple of Apollo rather than seeing the doctor. What is interesting is that the ancient Mediterraneans may have understood the trade-off very well and may have accepted religion partly as a tool to tame the illusion of control. You cannot do anything with knowledge unless you know where it stops, and the costs of using it.
Post-Enlightenment science, and its daughter superstar science, were lucky to have done well in (linear) physics, chemistry, and engineering. But at some point we need to give up on elegance to focus on something that was given short shrift for a very long time: the maps showing what current knowledge and current methods do not do for us; and a rigorous study of generalized scientific iatrogenics, what harm can be caused by science (or, better, an exposition of what harm has been done by science). I find it the most respectable of pursuits.

Iatrogenics of Regulators

Alas, the call for more (unconditional) regulation of economic activity appears to be a normal response. My worst nightmares have been the results of regulators. It was they who promoted the reliance on ratings by credit agencies and the “risk measurement” that fragilized the system as bankers used them to build positions that turned sour. Yet every time there is a problem, we do the Soviet-Harvard thing of more regulation, which makes investment bankers, lawyers, and former-regulators-turned-Wall-Street-advisers rich. They also serve the interest of other groups.

PHRONETIC RULES: WHAT IS WISE TO DO (OR NOT DO) IN REAL LIFE TO MITIGATE THE FOURTH QUADRANT IF YOU CAN’T BARBELL?

The most obvious way to exit the Fourth Quadrant is by “truncating,” cutting certain exposures by purchasing insurance, when available, putting oneself in the “barbell” situation described in Chapter 13. But if you are not able to barbell, and cannot avoid the exposure, as with, say, climate notions, exposure to epidemics, and similar items from the previous table, then we can subscribe to the following rules of “wisdom” to increase robustness.

1. Have respect for time and nondemonstrative knowledge. Recall my respect for Mother Earth—simply because of its age. It takes much, much longer for a series of data in the Fourth Quadrant to reveal its properties. I had been railing that compensation for bank executives, who are squarely in the Fourth Quadrant, is done on a short-term window, say yearly, for things that blow up every five, ten, or fifteen years, causing a mismatch between observation window and window of a sufficient length to reveal the properties (see the first simulation sketch after these rules). Bankers get rich in spite of long-term negative returns. Things that have worked for a long time are preferable—they are more likely to have reached their ergodic states. At the worst, we don’t know how long they’ll last.* Remember that the burden of proof lies on someone disturbing a complex system, not on the person protecting the status quo.

2. Avoid optimization; learn to love redundancy. I’ve discussed redundancy and optimization in Section I. A few more things to say. Redundancy (in terms of having savings and cash under the mattress) is the opposite of debt. Psychologists tell us that getting rich does not bring happiness—if you spend your savings. But if you hide it under the mattress, you are less vulnerable to a Black Swan. Also, for example, one can buy insurance, or construct it, to robustify a portfolio. Overspecialization also is not a great idea. Consider what can happen to you if your job disappears completely. Someone who is a Wall Street analyst (of the forecasting kind) moonlighting as a belly dancer will do a lot better in a financial crisis than someone who is just an analyst.

3. Avoid prediction of small-probability payoffs—though not necessarily of ordinary ones. Obviously, payoffs from remote events are more difficult to predict.
4. Beware the “atypicality” of remote events. There are suckers’ methods called “scenario analysis” and “stress testing”—usually based on the past (or on some “make sense” theory). Yet (I showed earlier how) past shortfalls do not predict subsequent shortfalls, so we do not know what exactly to stress-test for. Likewise, “prediction markets” do not function here, since bets do not protect an open-ended exposure. They might work for a binary election, but not in the Fourth Quadrant.

5. Beware moral hazard with bonus payments. It’s optimal to make a series of bonuses by betting on hidden risks in the Fourth Quadrant, then blow up and write a thank-you letter. This is called the moral hazard argument. Bankers are always rich because of this bonus mismatch. In fact, society ends up paying for it. The same applies to company executives.

6. Avoid some risk metrics. Conventional metrics, based on Mediocristan, adjusted for large deviations, don’t work. This is where suckers fall in the trap—one far more extensive than just assuming something other than the Gaussian bell curve. Words like “standard deviation” are not stable and do not measure anything in the Fourth Quadrant (see the second simulation sketch after these rules). Neither do “linear regression” (the errors are in the Fourth Quadrant), “Sharpe ratio,” Markowitz optimal portfolio, ANOVA shmanova, Least square, and literally anything mechanistically pulled out of a statistics textbook. My problem has been that people can accept the role of rare events, agree with me, and still use these metrics, which leads me to wonder if this is a psychological disorder.

7. Positive or negative Black Swan? Clearly the Fourth Quadrant can present positive or negative exposures to the Black Swan; if the exposure is negative, the true mean is more likely to be underestimated by measurement of past realizations, and the total potential is likewise poorly gauged. Life expectancy of humans is not as long as we suspect (under globalization) because the data are missing something central: the big epidemic (which far outweighs the gains from cures). The same, as we saw, with the return on risky investments. On the other hand, research ventures show a less rosy past history. A biotech company (usually) faces positive uncertainty, while a bank faces almost exclusively negative shocks. Model errors benefit those exposed to positive Black Swans. In my new research, I call that being “concave” or “convex” to model error.

8. Do not confuse absence of volatility with absence of risk. Conventional metrics using volatility as an indicator of stability fool us, because the evolution into Extremistan is marked by a lowering of volatility—and a greater risk of big jumps. This has fooled a chairman of the Federal Reserve called Ben Bernanke—as well as the entire banking system. It will fool again.

9. Beware presentations of risk numbers. I presented earlier the results showing how risk perception is subjected to framing issues that are acute in the Fourth Quadrant. They are much more benign elsewhere.

* Most of the smear campaign I mentioned earlier revolves around misrepresentation of the insurance-style properties and performance of the hedging strategies for the barbell and “portfolio robustification” associated with Black Swan ideas, a misrepresentation perhaps made credible by the fact that when one observes returns on a short-term basis, one sees nothing relevant except shallow frequent variations (mainly losses). People just forget to cumulate properly and remember frequency rather than total. The real returns, according to the press, were around 60 percent in 2000 and more than 100 percent in 2008, with relatively shallow losses and profits otherwise, so it would be child’s play to infer that returns would be in the triple digits over the past decade (all you need is one good jump). The Standard and Poor’s 500 was down 23 percent over the same ten-year period.
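First simulation sketch (an illustration added here, not from the text), on rule 1: a hypothetical Fourth Quadrant exposure with small steady gains and a rare large loss. The payoff sizes and the one-in-ten blowup frequency are assumptions chosen purely to show the mismatch between yearly windows and the window needed to see the property.

```python
import random

random.seed(7)

def yearly_pnl() -> float:
    # Hypothetical exposure: a small steady gain in most years,
    # a large loss in roughly one year out of ten (both sizes are illustrative).
    return -5.0 if random.random() < 0.10 else 0.3

years = 20
pnl = [yearly_pnl() for _ in range(years)]

bonus_years = sum(1 for x in pnl if x > 0)   # yearly windows that look profitable
cumulative = sum(pnl)                        # the long-window result

print(f"profitable-looking years: {bonus_years} out of {years}")
print(f"cumulative result: {cumulative:+.1f}")
# Per-year expectation is 0.9 * 0.3 - 0.1 * 5.0 = -0.23: most yearly windows
# show a gain, yet a window long enough to contain a blowup shows the loss.
```

Second simulation sketch (again an illustration, not from the text), on rule 6: why the sample standard deviation is unstable under fat tails. It compares a Gaussian sample with a Pareto sample whose tail exponent of 1.5, used here only as a stand-in for a Fourth Quadrant variable, implies an infinite theoretical variance.

```python
import random
import statistics

random.seed(42)

def pareto(alpha: float) -> float:
    # Inverse-CDF sampling: survival P[X > x] = x**(-alpha) for x >= 1.
    return (1.0 - random.random()) ** (-1.0 / alpha)

n = 10_000
gaussian = [random.gauss(0.0, 1.0) for _ in range(n)]
fat_tailed = [pareto(1.5) for _ in range(n)]   # tail exponent 1.5: infinite variance

for name, xs in (("gaussian", gaussian), ("pareto(1.5)", fat_tailed)):
    half_sd = statistics.pstdev(xs[: n // 2])
    full_sd = statistics.pstdev(xs)
    print(f"{name:12s} sd(half)={half_sd:10.3f} sd(all)={full_sd:10.3f} max={max(xs):12.2f}")
# The Gaussian estimate barely moves between the half sample and the full sample;
# the Pareto estimate is dominated by its largest observations, so it does not settle.
```

Rerunning the second sketch with different seeds will typically move the Pareto estimate by large amounts while the Gaussian one barely changes; that instability is the point of rule 6.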
VIII THE TEN PRINCIPLES FOR A BLACK-SWAN-ROBUST SOCIETY*

I wrote the following “ten principles” mostly for economic life to cope with the Fourth Quadrant, in the aftermath of the crisis.

1. What is fragile should break early, while it’s still small. Nothing should ever become too big to fail. Evolution in economic life helps those with the maximum amount of hidden risks become the biggest.

2. No socialization of losses and privatization of gains. Whatever may need to be bailed out should be nationalized; whatever does not need a bailout should be free, small, and risk-bearing. We got ourselves into the worst of capitalism and socialism. In France, in the 1980s, the socialists took over the banks. In the United States in the 2000s, the banks took over the government. This is surreal.

3. People who were driving a school bus blindfolded (and crashed it) should never be given a new bus. The economics establishment (universities, regulators, central bankers, government officials, various organizations staffed with economists) lost its legitimacy with the failure of the system in 2008. It is irresponsible and foolish to put our trust in their ability to get us out of this mess. It is also irresponsible to listen to advice from the “risk experts” and business school academia still promoting their measurements, which failed us (such as Value-at-Risk). Find the smart people whose hands are clean.

4. Don’t let someone making an “incentive” bonus manage a nuclear plant—or your financial risks. Odds are he would cut every corner on safety to show “profits” from these savings while claiming to be “conservative.” Bonuses don’t accommodate the hidden risks of blowups. It is the asymmetry of the bonus system that got us here. No incentives without disincentives: capitalism is about rewards and punishments, not just rewards.

5. Compensate complexity with simplicity. Complexity from globalization and highly networked economic life needs to be countered by simplicity in financial products. The complex economy is already a form of leverage. It’s the leverage of efficiency. Adding debt to that system produces wild and dangerous gyrations and provides no room for error. Complex systems survive thanks to slack and redundancy, not debt and optimization. Capitalism cannot avoid fads and bubbles. Equity bubbles (as in 2000) have proved to be mild; debt bubbles are vicious.

6. Do not give children dynamite sticks, even if they come with a warning label. Complex financial products need to be banned because nobody understands them, and few are rational enough to know it. We need to protect citizens from themselves, from bankers selling them “hedging” products, and from gullible regulators who listen to economic theorists.

7. Only Ponzi schemes should depend on confidence. Governments should never need to “restore confidence.” In a Ponzi scheme (the most famous being the one perpetrated by Bernard Madoff), a person borrows or takes funds from a new investor to repay an existing investor trying to exit the investment.
Cascading rumors are a product of complex systems. Governments cannot stop the rumors. Simply, we need to be in a position to shrug off rumors, be robust to them.

8. Do not give an addict more drugs if he has withdrawal pains. Using leverage to cure the problems of too much leverage is not homeopathy, it’s denial. The debt crisis is not a temporary problem, it’s a structural one. We need rehab.

9. Citizens should not depend on financial assets as a repository of value and should not rely on fallible “expert” advice for their retirement. Economic life should be definancialized. We should learn not to use markets as warehouses of value: they do not harbor the certainties that normal citizens can require, in spite of “expert” opinions. Investments should be for entertainment. Citizens should experience anxiety from their own businesses (which they control), not from their investments (which they do not control).

10. Make an omelet with the broken eggs. Finally, the crisis of 2008 was not a problem to fix with makeshift repairs, any more than a boat with a rotten hull can be fixed with ad hoc patches. We need to rebuild the new hull with new (stronger) material; we will have to remake the system before it does so itself. Let us move voluntarily into a robust economy by helping what needs to be broken break on its own, converting debt into equity, marginalizing the economics and business school establishments, shutting down the “Nobel” in economics, banning leveraged buyouts, putting bankers where they belong, clawing back the bonuses of those who got us here (by claiming restitution of the funds paid to, say, Robert Rubin or banksters whose wealth has been subsidized by taxpaying schoolteachers), and teaching people to navigate a world with fewer certainties. Then we will see an economic life closer to our biological environment: smaller firms, a richer ecology, no speculative leverage—a world in which entrepreneurs, not bankers, take the risks, and in which companies are born and die every day without making the news.

After this foray into business economics, let us now move to something less vulgar.

* This passage was published as an editorial in 2009 in the Financial Times. Some editor—who no doubt had not read The Black Swan—changed my “Black-Swan-robust” into “Black-Swan-proof.” There is no such thing as Black Swan proof, but robust is good enough.

IX AMOR FATI: HOW TO BECOME INDESTRUCTIBLE

And now, reader, time to part again. I am in Amioun, the village of my ancestors. Sixteen out of sixteen great-great-grandparents, eight out of eight great-grandparents, and four out of four grandparents are buried in the area, almost all within a four-mile radius. Not counting the great-uncles, cousins, and other relatives. They are all resting in cemeteries in the middle of groves of olive trees in the Koura valley at the base of Mount Lebanon, which rises so dramatically that you can see the snow above you only twenty miles away. Today, at dusk, I went to St. Sergius, locally called Mar Sarkis, from the Aramaic, the cemetery of my side of the family, to say hello to my father and my uncle Dédé, who so much disliked my sloppy dressing during my rioting days. I am sure Dédé is still offended with me; the last time he saw me in Paris he calmly dropped that I was dressed like an Australian: so the real reason for my visit to the cemetery was more self-serving. I wanted to prepare myself for where I will go next. This is my plan B. I kept looking at the position of my own grave.
A Black Swan cannot so easily destroy a man who has an idea of his final destination. I felt robust.

* * *

I am carrying Seneca on all my travels, in the original, as I relearned Latin—reading him in English, that language desecrated by economists and the bureaucrats of the Federal Reserve Bank of the United States, did not feel right. Not on this occasion. It would be equivalent to reading Yeats in Swahili. Seneca was the great teacher and practitioner of Stoicism, who transformed Greek-Phoenician Stoicism from metaphysical and theoretical discourse into a practical and moral program of living, a way to reach the summum bonum, an untranslatable expression depicting a life of supreme moral qualities, as perceived by the Romans. But, even apart from this unreachable aim, he has practical advice, perhaps the only advice I can see transfer from words to practice. Seneca is the one who (with some help from Cicero) taught Montaigne that to philosophize is to learn how to die. Seneca is the one who taught Nietzsche the amor fati, “love fate,” which prompted Nietzsche to just shrug and ignore adversity, mistreatment by his critics, and his disease, to the point of being bored by them.

For Seneca, Stoicism is about dealing with loss, and finding ways to overcome our loss aversion—how to become less dependent on what you have. Recall the “prospect theory” of Danny Kahneman and his colleagues: if I gave you a nice house and a Lamborghini, put a million dollars in your bank account, and provided you with a social network, then, a few months later, took everything away, you would be much worse off than if nothing had happened in the first place.

Seneca’s credibility as a moral philosopher (to me) came from the fact that, unlike other philosophers, he did not denigrate the value of wealth, ownership, and property because he was poor. Seneca was said to be one of the wealthiest men of his day. He just made himself ready to lose everything every day. Every day. Although his detractors claim that in real life he was not the Stoic sage he claimed to be, mainly on account of his habit of seducing married women (with non-Stoic husbands), he came quite close to it. A powerful man, he just had many detractors—and, if he fell short of his Stoic ideal, he came much closer to it than his contemporaries. And, just as it is harder to have good qualities when one is rich than when one is poor, it is harder to be a Stoic when one is wealthy, powerful, and respected than when one is destitute, miserable, and lonely.

Nihil Perditi

In Seneca’s Epistle IX, Stilbo’s country was captured by Demetrius, called the Sacker of Cities. Stilbo’s children and his wife were killed. Stilbo was asked what his losses were. Nihil perditi, I have lost nothing, he answered. Omnia mea mecum sunt! My goods are all with me. The man had reached the Stoic self-sufficiency, the robustness to adverse events, called apatheia in Stoic jargon. In other words, nothing that might be taken from him did he consider to be a good. Which includes one’s own life. Seneca’s readiness to lose everything extended to his own life. Suspected of partaking in a conspiracy, he was asked by the emperor Nero to commit suicide. The record says that he executed his own suicide in an exemplary way, unperturbed, as if he had prepared for it every day.

Seneca ended his essays (written in the epistolary form) with vale, often mistranslated as “farewell.” It has the same root as “value” and “valor” and means both “be strong (i.e., robust)” and “be worthy.” Vale.
NOTES

BEHIND THE CURTAIN: ADDITIONAL NOTES, TECHNICAL COMMENTS, REFERENCES, AND READING RECOMMENDATIONS

I separate topics thematically; so general references will mostly be found in the chapter in which they first occur. I prefer to use a logical sequence here rather than stick to chapter division.

PROLOGUE and CHAPTER 1

Bell curve: When I write bell curve I mean the Gaussian bell curve, a.k.a. normal distribution. All curves look like bells, so this is a nickname. Also, when I write the Gaussian basin I mean all distributions that are similar and for which the improbable is inconsequential and of low impact (more technically, nonscalable—all moments are finite). Note that the visual presentation of the bell curve in histogram form masks the contribution of the remote event, as such an event will be a point to the far right or far left of the center.

Diamonds: See Eco (2002).

Platonicity: I’m simply referring to incurring the risk of using a wrong form—not that forms don’t exist. I am not against essentialisms; I am often skeptical of our reverse engineering and identification of the right form. It is an inverse problem!

Empiricist: If I call myself an empiricist, or an empirical philosopher, it is because I am just suspicious of confirmatory generalizations and hasty theorizing. Do not confuse this with the British empiricist tradition. Also, many statisticians, as we will see with the Makridakis competition, call themselves “empirical” researchers, but are in fact just the opposite—they fit theories to the past.

Mention of Christ: See Flavius Josephus’s The Jewish War.

Great War and prediction: Ferguson (2006b).

Hindsight bias (retrospective distortion): See Fischhoff (1982b).

Historical fractures: Braudel (1985), p. 169, quotes a little known passage from Gautier. He writes, “‘This long history,’ wrote Emile-Félix Gautier, ‘lasted a dozen centuries, longer than the entire history of France. Encountering the first Arab sword, the Greek language and thought, all that heritage went up in smoke, as if it never happened.’” For discussions of discontinuity, see also Gurvitch (1957), Braudel (1953), Harris (2004).

Religions spread as bestsellers: Veyne (1971). See also Veyne (2005).

Clustering in political opinions: Pinker (2002).

Categories: Rosch (1973, 1978). See also Umberto Eco’s Kant and the Platypus.

Historiography and philosophy of history: Bloch (1953), Carr (1961), Gaddis (2002), Braudel (1969, 1990), Bourdé and Martin (1989), Certeau (1975), Muqaddamat Ibn Khaldoun illustrate the search for causation, which we see already present in Herodotus. For philosophy of history, Aron (1961), Fukuyama (1992). For postmodern views, see Jenkins (1991). I show in Part Two how historiographers are unaware of the epistemological difference between forward and backward processes (i.e., between projection and reverse engineering).

Information and markets: See Shiller (1981, 1989), DeLong et al. (1991), and Cutler et al. (1989). The bulk of market moves does not have a “reason,” just a contrived explanation.

Of descriptive value for crashes: See Galbraith (1997), Shiller (2000), and Kindleberger (2001).

CHAPTER 3

Movies: See De Vany (2002). See also Salganik et al. (2006) for the contagion in music buying.

Religion and domains of contagion: See Boyer (2001).

Wisdom (madness) of crowds: Collectively, we can both get wiser or far more foolish.
We may collectively have intuitions for Mediocristan-related matters, such as the weight of an ox (see Surowiecki, 2004), but my conjecture is that we fail in more complicated predictions (economic variables for which crowds incur pathologies—two heads are worse than one). For decision errors and groups, see Sniezek and Buckley (1993). Classic: Charles Mackay’s Extraordinary Popular Delusions and the Madness of Crowds.

Increase in the severity of events: Zajdenweber (2000).

Modern life: The nineteenth-century novelist Émile Zola welcomed the arrival of the market for culture in the late 1800s, of which he seemed to be one of the first beneficiaries. He predicted that the writers’ and artists’ ability to exploit the commercial system freed them from a dependence on patrons’ whims. Alas, this was accompanied with more severe concentration—very few people benefited from the system. Lahire (2006) shows how most writers, throughout history, have starved. Remarkably, we have ample data from France about the literary tradition.

CHAPTER 4

Titanic: The quote is from Dave Ingram’s presentation at the Enterprise Risk Management Symposium in Chicago on May 2, 2005. For more on LTCM, see Lowenstein (2000), Dunbar (1999).

Hume’s exposition: Hume (1748, 2000).

Sextus Empiricus: “It is easy, I think, to reject the method of induction ( ). For since by way of it they want to make universals convincing on the basis of particulars, they will do this surveying all the particulars or some of them. But if some, the induction will be infirm, it being that some of the particulars omitted in the induction should be contrary to the universal; and if all, they will labor at an impossible task, since the particulars are infinite and indeterminate. Thus in either case it results, I think, that induction totters.” Outline of Pyrrhonism, Book II, p. 204.

Bayle: The Dictionnaire historique et critique is long (twelve volumes, close to 6,000 pages) and heavy (40 pounds), yet it was an intellectual bestseller in its day, before being supplanted by the philosophes. It can be downloaded from the French Bibliothèque Nationale at www.bn.fr.

Hume’s inspiration from Bayle: See Popkin (1951, 1955). Any reading of Bishop Huet (further down) would reveal the similarities with Hume.

Pre-Bayle thinkers: Dissertation sur la recherche de la vérité, Simon Foucher, from around 1673. It is a delight to read. It makes the heuristics and biases tradition look like the continuation of the pre-Enlightenment prescientific revolution atmosphere.

Bishop Huet and the problem of induction: “Things cannot be known with perfect certainty because their causes are infinite,” wrote Pierre-Daniel Huet in his Philosophical Treatise on the Weaknesses of the Human Mind. Huet, former bishop of Avranches, wrote this under the name Théocrite de Pluvignac, Seigneur de la Roche, Gentilhomme de Périgord. The chapter has another exact presentation of what became later known as “Hume’s problem.” That was in 1690, when the future David Home (later Hume) was minus twenty-two, so of no possible influence on Monseigneur Huet.

Brochard’s work: I first encountered the mention of Brochard’s work (1888) in Nietzsche’s Ecce Homo, in a comment where he also describes the skeptics as straight talkers. “An excellent study by Victor Brochard, Les sceptiques grecs, in which my Laertiana are also employed. The skeptics!
the only honourable type among the two and five fold ambiguous philosopher crowd!” More trivia: Brochard taught Proust (see Kristeva, 1998). Brochard seems to have understood Popper’s problem (a few decades before Popper’s birth). He presents the views of the negative empiricism of Menodotus of Nicomedia in similar terms to what we would call today “Popperian” empiricism. I wonder if Popper knew anything about Menodotus. He does not seem to quote him anywhere. Brochard published his doctoral thesis, De l’erreur, in 1878 at the University of Paris, on the subject of error—wonderfully modern.

Epilogism: We know very little about Menodotus except for attacks on his beliefs by his detractor Galen in the extant Latin version of the Outline of Empiricism (Subfiguratio empirica), hard to translate: Memoriam et sensum et vocans epilogismum hoc tertium, multotiens autem et preter memoriam nihil aliud ponens quam epilogismum. (In addition to perception and recollection, the third method is epilogism, as the practitioner has, besides memory, nothing other than epilogism; Perilli’s correction.) But there is hope. Perilli (2004) reports that, according to a letter by the translator Is-haq Bin Hunain, there may be a “transcription” of Menodotus’s work in Arabic somewhere for a scholar to find.

Pascal: Pascal too had an idea of the confirmation problem and the asymmetry of inference. In his preface to the Traité du vide, Pascal writes (and I translate): In the judgment they made that nature did not tolerate a vacuum, they only meant nature in the state in which they knew it, since, to claim so in general, it would not be sufficient to witness it in a hundred different encounters, nor in a thousand, nor in any other number no matter how large, since it would be a single case that would deny the general definition, and if one was contrary, a single one …

Hume’s biographer: Mossner (1970). For a history of skepticism, Victor Cousin’s lectures Leçons d’histoire de la philosophie à la Sorbonne (1828) and Hippolyte Taine’s Les philosophes classiques, 9th edition (1868, 1905). Popkin (2003) is a modern account. Also see Heckman (2003) and Bevan (1913). I have seen nothing in the modern philosophy of probability linking it to skeptical inquiry.

Sextus: See Popkin (2003), Sextus, House (1980), Bayle, Huet, Annas and Barnes (1985), and Julia Annas and Barnes’s introduction in Sextus Empiricus (2000). Favier (1906) is hard to find; the only copy I located, thanks to Gur Huberman’s efforts, was rotten—it seems that it has not been consulted in the past hundred years.

Menodotus of Nicomedia and the marriage between empiricism and skepticism: According to Brochard (1887), Menodotus is responsible for the mixing of empiricism and Pyrrhonism. See also Favier (1906). See skepticism about this idea in Dye (2004), and Perilli (2004).

Function not structure; empirical tripod: There are three sources, and three only, for experience to rely upon: observation, history (i.e., recorded observation), and judgment by analogy.

Algazel: See his Tahafut al falasifah, which is rebutted by Averroës, a.k.a. Ibn-Rushd, in Tahafut Attahafut.

Religious skeptics: There is also a medieval Jewish tradition, with the Arabic-speaking poet Yehuda Halevi. See Floridi (2002).
Algazel and the ultimate/proximate causation: “… their determining, from the sole observation, of the nature of the necessary relationship between the cause and the effect, as if one could not witness the effect without the attributed cause or the cause without the same effect.” (Tahafut) At the core of Algazel’s idea is the notion that if you drink because you are thirsty, thirst should not be seen as a direct cause. There may be a greater scheme being played out; in fact, there is, but it can only be understood by those familiar with evolutionary thinking. See Tinbergen (1963, 1968) for a modern account of the proximate. In a way, Algazel builds on Aristotle to attack him. In his Physics, Aristotle had already seen the distinction between the different layers of cause (formal, efficient, final, and material).

Modern discussions on causality: See Reichenbach (1938), Granger (1999), and Pearl (2000).

Children and natural induction: See Gelman and Coley (1990), Gelman and Hirschfeld (1999), and Sloman (1993).

Natural induction: See Hespos (2006), Clark and Boyer (2006), Inagaki and Hatano (2006), Reboul (2006). See summary of earlier works in Plotkin (1998).

CHAPTERS 5–7

“Economists”: What I mean by “economists” are most members of the mainstream, neoclassical economics and finance establishment in universities—not fringe groups such as the Austrian or the Post-Keynesian schools.

Small numbers: Tversky and Kahneman (1971), Rabin (2000).

Domain specificity: Williams and Connolly (2006). We can see it in the usually overinterpreted Wason Selection Test: Wason (1960, 1968). See also Shaklee and Fischhoff (1982), Barron, Beaty, and Hearshly (1988). Kahneman’s “They knew better” in Gilovich et al. (2002).

Updike: The blurb is from Jaynes (1976).

Brain hemispheric specialization: Gazzaniga and LeDoux (1978), Gazzaniga et al. (2005). Furthermore, Wolford, Miller, and Gazzaniga (2000) show probability matching by the left brain. When you supply the right brain with, say, a lever that produces desirable goods 60% of the time, and another lever 40%, the right brain will correctly push the first lever as the optimal policy. If, on the other hand, you supply the left brain with the same options, it will push the first lever 60 percent of the time and the other one 40—it will refuse to accept randomness (a short expected-payoff calculation appears at the end of this group of notes). Goldberg (2005) argues that the specialty is along different lines: left-brain damage does not bear severe effects in children, unlike right-brain lesions, while this is the reverse for the elderly. I thank Elkhonon Goldberg for referring me to Snyder’s work; Snyder (2001). The experiment is from Snyder et al. (2003).

Sock selection and retrofit explanation: The experiment of the socks is presented in Carter (1999); the original paper appears to be Nisbett and Wilson (1977). See also Montier (2007).

Astebro: Astebro (2003). See “Searching for the Invisible Man,” The Economist, March 9, 2006. To see how the overconfidence of entrepreneurs can explain the high failure rate, see Camerer (1995).

Dopamine: Brugger and Graves (1997), among many other papers. See also Mohr et al. (2003) on dopamine asymmetry.

Entropy and information: I am purposely avoiding the notion of entropy because the way it is conventionally phrased makes it ill-adapted to the type of randomness we experience in real life. Tsallis entropy works better with fat tails.

Notes on George Perec: Eco (1994).
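The probability-matching result in the note on brain hemispheric specialization above can be checked with a two-line expected-value calculation; this small sketch is an addition for illustration, not part of the original notes, and uses only the 60/40 frequencies quoted there.

```python
# The two levers from the note: the first pays off 60% of the time, the second 40%.
p_best = 0.60

# "Maximizing" (the right-brain policy described in the note): always pull the better lever.
maximizing_success = p_best                                        # 0.60

# "Probability matching" (the left-brain policy): pull each lever in proportion
# to how often it pays off.
matching_success = p_best * p_best + (1 - p_best) * (1 - p_best)   # 0.36 + 0.16 = 0.52

print(f"always the better lever: {maximizing_success:.2f}")
print(f"probability matching   : {matching_success:.2f}")
# Matching the frequencies gives up about eight percentage points of success.
```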
Narrativity and illusion of understanding: Wilson, Gilbert, and Centerbar (2003): “Helplessness theory has demonstrated that if people feel that they cannot control or predict their environments, they are at risk for severe motivational and cognitive deficits, such as depression.” For the writing down of a diary, see Wilson (2002) or Wegner (2002).

E. M. Forster’s example: reference in Margalit (2002).

National character: Terracciano et al. (2005) and Robins (2005) for the extent of individual variations. The illusion of nationality trait, which I usually call the “nationality heuristic,” does connect to the halo effect: see Rosenzweig (2006) and Cialdini (2001). See Anderson (1983) for the ontology of nationality.

Consistency bias: What psychologists call the consistency bias is the effect of revising memories in such a way to make sense with respect to subsequent information. See Schacter (2001).

Memory not like storage on a computer: Rose (2003), Nader and LeDoux (1999).

The myth of repressed memory: Loftus and Ketcham (2004).

Chess players and disconfirmation: Cowley and Byrne (2004).

Quine’s problem: Davidson (1983) argues in favor of local, but against total, skepticism.

Narrativity: Note that my discussion is not existential here, but merely practical, so my idea is to look at narrativity as an informational compression, nothing more involved philosophically (like whether a self is sequential or not). There is a literature on the “narrative self”—Bruner (2002) or whether it is necessary—see Strawson (1994) and his attack in Strawson (2004). The debate: Schechtman (1997), Taylor (1999), Phelan (2005). Synthesis in Turner (1996).

“Postmodernists” and the desirability of narratives: See McCloskey (1990) and Frankfurter and McGoun (1996).

Narrativity of sayings and proverbs: Psychologists have long examined the gullibility of people in social settings when faced with well-sounding proverbs. For instance, experiments have been made since the 1960s where people are asked whether they believe that a proverb is right, while another cohort is presented with the opposite meaning. For a presentation of the hilarious results, see Myers (2002).

Science as a narrative: Indeed scientific papers can succeed by the same narrativity bias that “makes a story.” You need to get attention. Bushman and Wells (2001).

Discovering probabilities: Barron and Erev (2003) show how probabilities are underestimated when they are not explicitly presented. Also personal communication with Barron.

Risk and probability: See Slovic, Fischhoff, and Lichtenstein (1976), Slovic et al. (1977), and Slovic (1987). For risk as analysis and risk as feeling theory, see Slovic et al. (2002, 2003), and Taleb (2004c). See Bar-Hillel and Wagenaar (1991).

Link between narrative fallacy and clinical knowledge: Dawes (1999) has a message for economists: see here his work on interviews and the concoction of a narrative. See also Dawes (2001) on the retrospective effect.

Two systems of reasoning: See Sloman (1996, 2002), and the summary in Kahneman and Frederick (2002). Kahneman’s Nobel lecture sums it all up; it can be found at www.nobel.se. See also Stanovich and West (2000).
Risk and emotions: Given the growing recent interest in the emotional role in behavior, there has been a growing literature on the role of emotions in both risk bearing and risk avoidance: the “risk as feeling” theory. See Loewenstein et al. (2001) and Slovic et al. (2003a). For a survey see Slovic et al. (2003b) and see also Slovic (1987). For a discussion of the “affect heuristic,” see Finucane et al. (2000). For modularity, see Bates (1994).

Emotions and cognition: For the effect of emotions on cognition, see LeDoux (2002). For risk, see Bechara et al. (1994).

Availability heuristic (how easily things come to mind): See Tversky and Kahneman (1973).

Real incidence of catastrophes: For an insightful discussion, see Albouy (2002), Zajdenweber (2000), or Sunstein (2002).

Terrorism exploitation of the sensational: See the essay in Taleb (2004c).

General books on psychology of decision making (heuristics and biases): Baron (2000) is simply the most comprehensive on the subject. Kunda (1999) is a summary from the standpoint of social psychology (sadly, the author died prematurely); shorter: Plous (1993). Also Dawes (1988) and Dawes (2001). Note that a chunk of the original papers are happily compiled in Kahneman et al. (1982), Kahneman and Tversky (2000), Gilovich et al. (2002), and Slovic (2001a and 2001b). See also Myers (2002) for an account on intuition and Gigerenzer et al. (2000) for an ecological presentation of the subject. The most complete account in economics and finance is Montier (2007), where his beautiful summary pieces that fed me for the last four years are compiled—not being an academic, he gets straight to the point. See also Camerer, Loewenstein, and Rabin (2004) for a selection of technical papers. A recommended review article on clinical “expert” knowledge is Dawes (2001).

More general psychology of decision presentations: Klein (1998) proposes an alternative model of intuition. See Cialdini (2001) for social manipulation. A more specialized work, Camerer (2003), focuses on game theory.

General review essays and comprehensive books in cognitive science: Newell and Simon (1972), Varela (1988), Fodor (1983), Marr (1982), Eysenck and Keane (2000), Lakoff and Johnson (1980). The MIT Encyclopedia of Cognitive Science has review articles by main thinkers.

Evolutionary theory and domains of adaptation: See the original Wilson (2000), Kreps and Davies (1993), and Burnham (1997, 2003). Very readable: Burnham and Phelan (2000). The compilation of Robert Trivers’s work is in Trivers (2002). See also Wrangham (1999) on wars.

Politics: “The Political Brain: A Recent Brain-imaging Study Shows That Our Political Predilections Are a Product of Unconscious Confirmation Bias,” by Michael Shermer, Scientific American, September 26, 2006.

Neurobiology of decision making: For a general understanding of our knowledge about the brain’s architecture: Gazzaniga et al. (2002). Gazzaniga (2005) provides literary summaries of some of the topics. More popular: Carter (1999). Also recommended: Ratey (2001), Ramachandran (2003), Ramachandran and Blakeslee (1998), Carter (1999, 2002), Conlan (1999), the very readable Lewis, Amini, and Lannon (2000), and Goleman (1995). See Glimcher (2002) for probability and the brain. For the emotional brain, the three books by Damasio (1994, 2000, 2003), in addition to LeDoux (1998) and the more detailed LeDoux (2002), are the classics.
See also the shorter Evans (2002). For the role of vision in aesthetics, but also in interpretation, Zeki (1999).

General works on memory: In psychology, Schacter (2001) is a review work of the memory biases with links to the hindsight effects. In neurobiology, see Rose (2003) and Squire and Kandel (2000). A general textbook on memory (in empirical psychology) is Baddeley (1997).

Intellectual colonies and social life: See the account in Collins (1998) of the “lineages” of philosophers (although I don’t think he was aware enough of the Casanova problem to take into account the bias making the works of solo philosophers less likely to survive). For an illustration of the aggressiveness of groups, see Uglow (2003).

Hyman Minsky’s work: Minsky (1982).

Asymmetry: Prospect theory (Kahneman and Tversky and Tversky and Kahneman) accounts for the asymmetry between bad and good random events, but it also shows that the negative domain is convex while the positive domain is concave, meaning that a loss of 100 is less painful than 100 losses of 1 but that a gain of 100 is also far less pleasurable than 100 times a gain of 1 (a small numerical sketch appears a few notes below).

Neural correlates of the asymmetry: See Davidson’s work in Goleman (2003), Lane et al. (1997), and Gehring and Willoughby (2002). Csikszentmihalyi (1993, 1998) further explains the attractiveness of steady payoffs with his theory of “flow.”

Deferred rewards and its neural correlates: McLure et al. (2004) show the brain activation in the cortex upon making a decision to defer, providing insight on the limbic impulse behind immediacy and the cortical activity in delaying. See also Loewenstein et al. (1992), Elster (1998), Berridge (2005). For the neurology of preferences in Capuchin monkeys, Chen et al. (2005).

Bleed or blowup: Gladwell (2002) and Taleb (2004c). Why bleed is painful can be explained by dull stress; Sapolsky et al. (2003) and Sapolsky (1998). For how companies like steady returns, Degeorge and Zeckhauser (1999).

Poetics of hope: Mihailescu (2006).

Discontinuities and jumps: Classified by René Thom as constituting seven classes; Thom (1980).
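As an illustration of the asymmetry note above (an addition, not part of the original notes): the standard Tversky-Kahneman value function, with the published 1992 parameter estimates alpha = 0.88 and lambda = 2.25 used here purely for illustration, reproduces both halves of the claim.

```python
ALPHA = 0.88   # curvature: concave over gains, convex over losses
LAM = 2.25     # loss-aversion coefficient

def value(x: float) -> float:
    # Tversky-Kahneman (1992) value function over gains and losses.
    return x ** ALPHA if x >= 0 else -LAM * (-x) ** ALPHA

print(value(100.0), 100 * value(1.0))     # ~57.5 vs 100.0: one gain of 100 is valued less than 100 gains of 1
print(value(-100.0), 100 * value(-1.0))   # ~-129.5 vs -225.0: one loss of 100 hurts less than 100 losses of 1
```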
Evolution and small probabilities: Consider also the naïve evolutionary thinking positing the “optimality” of selection. The founder of sociobiology, the great E. O. Wilson, does not agree with such optimality when it comes to rare events. In Wilson (2002), he writes: The human brain evidently evolved to commit itself emotionally only to a small piece of geography, a limited band of kinsmen, and two or three generations into the future. To look neither far ahead nor far afield is elemental in a Darwinian sense. We are innately inclined to ignore any distant possibility not yet requiring examination. It is, people say, just good common sense. Why do they think in this shortsighted way? The reason is simple: it is a hardwired part of our Paleolithic heritage. For hundreds of millennia, those who worked for short-term gain within a small circle of relatives and friends lived longer and left more offspring—even when their collective striving caused their chiefdoms and empires to crumble around them. The long view that might have saved their distant descendants required a vision and extended altruism instinctively difficult to marshal. See also Miller (2000): “Evolution has no foresight. It lacks the long-term vision of drug company management. A species can’t raise venture capital to pay its bills while its research team … This makes it hard to explain innovations.” Note that neither author considered my age argument.

CHAPTER 8

Silent evidence bears the name wrong reference class in the nasty field of philosophy of probability, anthropic bias in physics, and survivorship bias in statistics (economists present the interesting attribute of having rediscovered it a few times while being severely fooled by it).

Confirmation: Bacon says in On Truth, “No pleasure is comparable to the standing upon the vantage ground of truth (a hill not to be commanded and where the air is always clear and serene), and to see the errors, and wanderings, and mists, and tempests, in the vale below.” This easily shows how great intentions can lead to the confirmation fallacy.

Bacon did not understand the empiricists: He was looking for the golden mean. Again, from On Truth: There are three sources of error and three species of false philosophy; the sophistic, the empiric and the superstitious. … Aristotle affords the most eminent instance of the first; for he corrupted natural philosophy by logic—thus he formed the world of categories. … Nor is much stress to be laid on his frequent recourse to experiment in his books on animals, his problems and other treatises, for he had already decided, without having properly consulted experience as the basis of his decisions and axioms. … The empiric school produces dogmas of a more deformed and monstrous nature than the sophistic or theoretic school; not being founded in the light of common notions (which however poor and superstitious, is yet in a manner universal and of general tendency), but in the confined obscurity of a few experiments. Bacon’s misconception may be the reason it took us a while to understand that they treated history (and experiments) as mere and vague “guidance,” i.e., epilogy.

Publishing: Allen (2005), Klebanoff (2002), Epstein (2001), de Bellaigue (2004), and Blake (1999). For a funny list of rejections, see Bernard (2002) and White (1982). Michael Korda’s memoir, Korda (2000), adds some color to the business. These books are anecdotal, but we will see later that books follow steep scale-invariant structures with the implication of a severe role for randomness.

Anthropic bias: See the wonderful and comprehensive discussion in Bostrom (2002). In physics, see Barrow and Tipler (1986) and Rees (2004). Sornette (2004) has Gott’s derivation of survival as a power law. In finance, Sullivan et al. (1999) discuss survivorship bias. See also Taleb (2004a). Studies that ignore the bias and state inappropriate conclusions: Stanley and Danko (1996) and the more foolish Stanley (2000).

Manuscripts and the Phoenicians: For survival and science, see Cisne (2005). Note that the article takes into account physical survival (like fossil), not cultural, which implies a selection bias. Courtesy Peter Bevelin.

Stigler’s law of eponymy: Stigler (2002).

French book statistics: Lire, April 2005.

Why dispersion matters: More technically, the distribution of the extremum (i.e., the maximum or minimum) of a random variable depends more on the variance of the process than on its mean. Someone whose weight tends to fluctuate a lot is more likely to show you a picture of himself very thin than someone else whose weight is on average lower but remains constant. The mean (read skills) sometimes plays a very, very small role.
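A minimal simulation sketch of the dispersion point just above, an addition rather than part of the original notes; the specific means and standard deviations are illustrative assumptions.

```python
import random

random.seed(1)

def lightest_photo(mean_kg: float, sd_kg: float, n_obs: int = 200) -> float:
    # The thinnest moment captured over n_obs observations of a fluctuating weight.
    return min(random.gauss(mean_kg, sd_kg) for _ in range(n_obs))

trials = 2_000
# A: higher average weight but large fluctuations; B: lower average, nearly constant.
a_thinner = sum(lightest_photo(80.0, 10.0) < lightest_photo(75.0, 1.0) for _ in range(trials))
print(f"A provides the thinner picture in {a_thinner / trials:.0%} of trials")
# Despite a higher mean, the high-dispersion person supplies the more extreme
# observation almost every time: the extremum is governed by the variance.
```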
Fossil record: I thank the reader Frederick Colbourne for his comments on this subject. The literature calls it the “pull of the recent,” but has difficulty estimating the effects, owing to disagreements. See Jablonski et al. (2003).

Undiscovered public knowledge: Here is another manifestation of silent evidence: you can actually do lab work sitting in an armchair, just by linking bits and pieces of research by people who labor apart from one another and miss on connections. Using bibliographic analysis, it is possible to find links between published information that had not been known previously by researchers. I “discovered” the vindication of the armchair in Fuller (2005). For other interesting discoveries, see Spasser (1997) and Swanson (1986a, 1986b, 1987).

Crime: The definition of economic “crime” is something that comes in hindsight. Regulations, once enacted, do not run retrospectively, so many activities causing excess are never sanctioned (e.g., bribery).

Bastiat: See Bastiat (1862–1864).

Casanova: I thank the reader Milo Jones for pointing out to me the exact number of volumes. See Masters (1969).

Reference point problem: Taking into account background information requires a form of thinking in conditional terms that, oddly, many scientists (especially the better ones) are incapable of handling. The difference between the two odds is called, simply, conditional probability. We are computing the probability of surviving conditional on our being in the sample itself. Simply put, you cannot compute probabilities if your survival is part of the condition of the realization of the process.

Plagues: See McNeill (1976).

CHAPTER 9

Intelligence and Nobel: Simonton (1999). If IQ scores correlate, they do so very weakly with subsequent success.

“Uncertainty”: Knight (1923). My definition of such risk (Taleb, 2007c) is that it is a normative situation, where we can be certain about probabilities, i.e., no metaprobabilities. Whereas, if randomness and risk result from epistemic opacity, the difficulty in seeing causes, then necessarily the distinction is bunk. Any reader of Cicero would recognize it as his probability; see epistemic opacity in his De Divinatione, Liber primus, LVI, 127: Qui enim teneat causas rerum futurarum, idem necesse est omnia teneat quae futura sint. Quod cum nemo facere nisi deus possit, relinquendum est homini, ut signis quibusdam consequentia declarantibus futura praesentiat. “He who knows the causes will understand the future, except that, given that nobody outside God possesses such faculty …”

Philosophy and epistemology of probability: Laplace. Treatise, Keynes (1920), de Finetti (1931), Kyburg (1983), Levi (1970), Ayer, Hacking (1990, 2001), Gillies (2000), von Mises (1928), von Plato (1994), Carnap (1950), Cohen (1989), Popper (1971), Eatwell et al. (1987), and Gigerenzer et al. (1989).

History of statistical knowledge and methods: I found no intelligent work in the history of statistics, i.e., one that does not fall prey to the ludic fallacy or Gaussianism. For a conventional account, see Bernstein (1996) and David (1962).

General books on probability and information theory: Cover and Thomas (1991); less technical but excellent, Bayer (2003).
For a probabilistic view of information theory: the posthumous Jaynes (2003) is the only mathematical book other than de Finetti’s work that I can recommend to the general reader, owing to his Bayesian approach and his allergy for the formalism of the idiot savant.

Poker: It escapes the ludic fallacy; see Taleb (2006a).

Plato’s normative approach to left and right hands: See McManus (2002).

Nietzsche’s bildungsphilister: See van Tongeren (2002) and Hicks and Rosenberg (2003). Note that because of the confirmation bias academics will tell you that intellectuals “lack rigor,” and will bring examples of those who do, not those who don’t.

Economics books that deal with uncertainty: Carter et al. (1962), Shackle (1961, 1973), Hayek (1994). Hirshleifer and Riley (1992) fits uncertainty into neoclassical economics.

Incomputability: For earthquakes, see Freedman and Stark (2003) (courtesy of Gur Huberman).

Academia and philistinism: There is a round-trip fallacy; if academia means rigor (which I doubt, since what I saw called “peer reviewing” is too often a masquerade), nonacademic does not imply nonrigorous. Why do I doubt the “rigor”? By the confirmation bias they show you their contributions; yet in spite of the high number of laboring academics, a relatively minute fraction of our results come from them. A disproportionately high number of contributions come from freelance researchers and those dissingly called amateurs: Darwin, Freud, Marx, Mandelbrot, even the early Einstein. Influence on the part of an academic is usually accidental. This even held in the Middle Ages and the Renaissance, see Le Goff (1985). Also, the Enlightenment figures (Voltaire, Rousseau, d’Holbach, Diderot, Montesquieu) were all nonacademics at a time when academia was large.

CHAPTER 10

Overconfidence: Albert and Raiffa (1982) (though apparently the paper languished for a decade before formal publication). Lichtenstein and Fischhoff (1977) showed that overconfidence can be influenced by item difficulty; it typically diminishes and turns into underconfidence in easy items (compare with Armelius). Plenty of papers since have tried to pin down the conditions of calibration failures or robustness (be they task training, ecological aspects of the domain, level of education, or nationality): Dawes (1980), Koriat, Lichtenstein, and Fischhoff (1980), Mayseless and Kruglanski (1987), Dunning et al. (1990), Ayton and McClelland (1997), Gervais and Odean (1999), Griffin and Varey (1996), Juslin (1991, 1993, 1994), Juslin and Olsson (1997), Kadane and Lichtenstein (1982), May (1986), McClelland and Bolger (1994), Pfeifer (1994), Russo and Schoernaker (1992), Klayman et al. (1999). Note the decrease (unexpectedly) in overconfidence under group decisions: see Sniezek and Henry (1989)—and solutions in Plous (1995). I am suspicious here of the Mediocristan/Extremistan distinction and the unevenness of the variables. Alas, I found no paper making this distinction. There are also solutions in Stoll (1996), Arkes et al. (1987). For overconfidence in finance, see Thorley (1999) and Barber and Odean (1999). For cross-boundaries effects, Yates et al. (1996, 1998), Angele et al. (1982). For simultaneous overconfidence and underconfidence, see Erev, Wallsten, and Budescu (1994).

Frequency vs. probability—the ecological problem: Hoffrage and Gigerenzer (1998) think that overconfidence is less significant when the problem is expressed in frequencies as opposed to probabilities.
In fact, there has been a debate about the difference between "ecology" and laboratory; see Gigerenzer et al. (2000), Gigerenzer and Richter (1990), and Gigerenzer (1991). We are "fast and frugal" (Gigerenzer and Goldstein). As far as the Black Swan is concerned, these problems of ecology do not arise: we do not live in an environment in which we are supplied with frequencies or, more generally, for which we are fit. Also in ecology, Spariosu (2004) for the ludic aspect, Cosmides and Tooby (1990). Leary (1987) for Brunswikian ideas, as well as Brunswik (1952).
Lack of awareness of ignorance: "In short, the same knowledge that underlies the ability to produce correct judgment is also the knowledge that underlies the ability to recognize correct judgment. To lack the former is to be deficient in the latter." From Kruger and Dunning (1999).
Expert problem in isolation: I see the expert problem as indistinguishable from Matthew effects and Extremistan fat tails (more later), yet I found no such link in the literatures of sociology and psychology.
Clinical knowledge and its problems: See Meehl (1954) and Dawes, Faust, and Meehl (1989). Most entertaining is the essay "Why I Do Not Attend Case Conferences" in Meehl (1973). See also Wagenaar and Keren (1985, 1986).
Financial analysts, herding, and forecasting: See Guedj and Bouchaud (2006), Abarbanell and Bernard (1992), Chen et al. (2002), De Bondt and Thaler (1990), Easterwood and Nutt (1999), Friesen and Weller (2002), Foster (1977), Hong and Kubik (2003), Jacob et al. (1999), Lim (2001), Liu (1998), Maines and Hand (1996), Mendenhall (1991), Mikhail et al. (1997, 1999), Zitzewitz (2001), and El-Galfy and Forbes (2005). For a comparison with weather forecasters (unfavorable): Tyszka and Zielonka (2002).
Economists and forecasting: Tetlock (2005), Makridakis and Hibon (2000), Makridakis et al. (1982), Makridakis et al. (1993), Gripaios (1994), Armstrong (1978, 1981); and rebuttals by McNees (1978), Tashman (2000), Blake et al. (1986), Onkal et al. (2003), Gillespie (1979), Baron (2004), Batchelor (1990, 2001), Dominitz and Grether (1999). Lamont (2002) looks for reputational factors: established forecasters get worse as they produce more radical forecasts to get attention—consistent with Tetlock's hedgehog effect. Ahiya and Doi (2001) look for herd behavior in Japan. See McNees (1995), Remus et al. (1997), O'Neill and Desai (2005), Bewley and Fiebig (2002), Angner (2006), Bénassy-Quéré (2002); Brender and Pisani (2001) look at the Bloomberg consensus; De Bondt and Kappler (2004) claim evidence of weak persistence in fifty-two years of data, but I saw the slides in a presentation, never the paper, which after two years might never materialize. Overconfidence, Braun and Yaniv (1992). See Hahn (1993) for a general intellectual discussion. More general, Clemen (1986, 1989). For game theory, Green (2005). Many operators, such as James Montier, and many newspapers and magazines (such as The Economist), run casual tests of prediction. Cumulatively, they must be taken seriously since they cover more variables.
Popular culture: In 1931, Edward Angly exposed forecasts made by President Hoover in a book titled Oh Yeah? Another hilarious book is Cerf and Navasky (1998), where, incidentally, I got the pre-1973 oil-estimation story.
Effects of information: The major paper is Bruner and Potter (1964). I thank Danny Kahneman for discussions and pointing out this paper to me.
See also Montier (2007), Oskamp (1965), and Benartzi (2001). These biases become ambiguous information (Griffin and Tversky). For how they fail to disappear with expertise and training, see Kahneman and Tversky (1982) and Tversky and Kahneman (1982). See Kunda (1990) for how preference-consistent information is taken at face value, while preference-inconsistent information is processed critically.
Planning fallacy: Kahneman and Tversky (1979) and Buehler, Griffin, and Ross (2002). The planning fallacy shows a consistent bias in people's planning ability, even with matters of a repeatable nature—though it is more exaggerated with nonrepeatable events.
Wars: Trivers (2002).
Are there incentives to delay?: Flyvbjerg et al. (2002).
Oskamp: Oskamp (1965) and Montier (2007).
Task characteristics and effect on decision making: Shanteau (1992).
Epistēmē vs. technē: This distinction harks back to Aristotle, but it recurs and dies down—it most recently recurs in accounts such as tacit knowledge in "know how." See Ryle (1949), Polanyi (1958/1974), and Mokyr (2002).
Catherine the Great: The number of lovers comes from Rounding (2006).
Life expectancy: www.annuityadvantage.com/lifeexpectancy.htm. For projects, I have used a probability of exceeding x with a power-law (tail) exponent of 3/2: P>x = K x^(-3/2). Thus the conditional expectation of x, knowing that x exceeds a, works out to 3a.
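A short worked derivation of that conditional expectation, assuming only the stated power law with tail exponent α = 3/2 (so the tail density is f(x) = αK x^(−α−1)):

\[
P_{>x} = K x^{-\alpha}, \qquad \alpha = \tfrac{3}{2}, \qquad x \ge a,
\]
\[
E[x \mid x > a] \;=\; \frac{\int_a^{\infty} x \,\alpha K x^{-\alpha-1}\,dx}{\int_a^{\infty} \alpha K x^{-\alpha-1}\,dx}
\;=\; \frac{\alpha}{\alpha-1}\,a \;=\; 3a .
\]

Under this assumption, knowing that a project has already run past a units of time, its expected total becomes 3a: the expected overrun grows with what has already elapsed rather than shrinking.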
CHAPTERS 11–13
Serendipity: See Koestler (1959) and Rees (2004). Rees also has powerful ideas on forecastability. See also Popper's comments in Popper (2002), and Waller (2002a), Cannon (1940), Mach (1896) (cited in Simonton), and Merton and Barber (2004). See Simonton (2004) for a synthesis. For serendipity in medicine and anesthesiology, see Vale et al. (2005).
"Renaissance man": See www.bell-labs.com/project/feature/archives/cosmology/.
Laser: As usual, there are controversies as to who "invented" the technology. After a successful discovery, precursors are rapidly found, owing to the retrospective distortion. Charles Townes won the Nobel prize, but was sued by his student Gordon Gould, who held that he did the actual work (see The Economist, June 9, 2005).
Darwin/Wallace: Quammen (2006).
Popper's attack on historicism: See Popper (2002). Note that I am reinterpreting Popper's idea in a modern manner here, using my own experiences and knowledge, not commenting on comments about Popper's work—with the consequent lack of fidelity to his message. In other words, these are not directly Popper's arguments, but largely mine phrased in a Popperian framework. The conditional expectation of an unconditional expectation is an unconditional expectation.
Forecast for the future a hundred years earlier: Bellamy (1891) illustrates our mental projections of the future. However, some stories might be exaggerated: "A Patently False Patent Myth still! Did a patent official really once resign because he thought nothing was left to invent? Once such myths start they take on a life of their own." Skeptical Inquirer, May–June, 2003.
Observation by Peirce: Olsson (2006), Peirce (1955).
Predicting and explaining: See Thom (1993).
Poincaré: The three-body problem can be found in Barrow-Green (1996), Rollet (2005), and Galison (2003). On Einstein, Pais (1982). More recent revelations in Hladik (2004).
Billiard balls: Berry (1978) and Pisarenko and Sornette (2004).
Very general discussion on "complexity": Benkirane (2002), Scheps (1996), and Ruelle (1991). For limits, Barrow (1998).
Hayek: See www.nobel.se. See Hayek (1945, 1994). Is it that mechanisms do not correct themselves from railing by influential people, but either by mortality of the operators, or something even more severe, by being put out of business? Alas, because of contagion, there seems to be little logic to how matters improve; luck plays a part in how soft sciences evolve. See Ormerod (2006) for network effects in "intellectuals and socialism" and the power-law distribution in influence owing to the scale-free aspect of the connections—and the consequential arbitrariness. Hayek seems to have been a prisoner of Weber's old differentiation between Natur-Wissenschaften and Geistes Wissenschaften—but thankfully not Popper.
Insularity of economists: Pieters and Baumgartner (2002). One good aspect of the insularity of economists is that they can insult me all they want without any consequence: it appears that only economists read other economists (so they can write papers for other economists to read). For a more general case, see Wallerstein (1999). Note that Braudel fought "economic history." It was history.
Economics as religion: Nelson (2001) and Keen (2001). For methodology, see Blaug (1992). For high priests and lowly philosophers, see Boettke, Coyne, and Leeson (2006). Note that the works of Gary Becker and the Platonists of the Chicago School are all marred by the confirmation bias: Becker is quick to show you situations in which people are moved by economic incentives, but does not show you cases (vastly more numerous) in which people don't care about such materialistic incentives. The smartest book I've seen in economics is Gave et al. (2005) since it transcends the constructed categories in academic economic discourse (one of the authors is the journalist Anatole Kaletsky).
General theory: This fact has not deterred "general theorists." One hotshot of the Platonifying variety explained to me during a long plane ride from Geneva to New York that the ideas of Kahneman and his colleagues must be rejected because they do not allow us to develop a general equilibrium theory, producing "time-inconsistent preferences." For a minute I thought he was joking: he blamed the psychologists' ideas and human incoherence for interfering with his ability to build his Platonic model.
Samuelson: For his optimization, see Samuelson (1983). Also Stiglitz (1994).
Plato's dogma on body symmetry: "Athenian Stranger to Cleinias: In that the right and left hand are supposed to be by nature differently suited for our various uses of them; whereas no difference is found in the use of the feet and the lower limbs; but in the use of the hands we are, as it were, maimed by the folly of nurses and mothers; for although our several limbs are by nature balanced, we create a difference in them by bad habit," in Plato's Laws. See McManus (2002).
Drug companies: Other such firms, I was told, are run by commercial persons who tell researchers where they find a "market need" and ask them to "invent" drugs and cures accordingly—which accords with the methods of the dangerously misleading Wall Street security analysts. They formulate projections as if they know what they are going to find.
Models of the returns on innovations: Sornette and Zajdenweber (1999) and Silverberg and Verspagen (2005).
Evolution on a short leash: Dennett (2003) and Stanovich and West (2000).
Montaigne: We don't get much from the biographies of a personal essayist; some information in Frame (1965) and Zweig (1960).
Projectibility and the grue paradox: See Goodman (1955). See also an application (or perhaps misapplication) in King and Zheng (2005).
Constructionism: See Berger and Luckmann (1966) and Hacking (1999).
Certification vs. true skills or knowledge: See Donhardt (2004). There is also a franchise protection. Mathematics may not be so necessary a tool for economics, except to protect the franchise of those economists who know math. In my father's days, the selection process for the mandarins was made using their abilities in Latin (or Greek). So the class of students groomed for the top was grounded in the classics and knew some interesting subjects. They were also trained in Cicero's highly probabilistic view of things—and selected on erudition, which carries small side effects. If anything it allows you to handle fuzzy matters. My generation was selected according to mathematical skills. You made it based on an engineering mentality; this produced mandarins with mathematical, highly structured, logical minds, and, accordingly, they will select their peers based on such criteria. So the papers in economics and social science gravitated toward the highly mathematical and protected their franchise by putting high mathematical barriers to entry. You could also smoke the general public, who is unable to put a check on you. Another effect of this franchise protection is that it might have encouraged putting "at the top" those idiot-savant-like researchers who lacked in erudition, hence were insular, parochial, and closed to other disciplines.
Freedom and determinism: a speculative idea in Penrose (1989), where only the quantum effects (with the perceived indeterminacy there) can justify consciousness.
Projectibility: uniqueness assuming least squares or MAD.
Chaos theory and the backward/forward confusion: Laurent Firode's Happenstance, a.k.a. Le battement d'ailes du papillon / The Beating of a Butterfly's Wings (2000).
Autism and perception of randomness: See Williams et al. (2002).
Forecasting and misforecasting errors in hedonic states: Wilson, Meyers, and Gilbert (2001), Wilson, Gilbert, and Centerbar (2003), and Wilson et al. (2005). They call it "emotional evanescence."
Forecasting and consciousness: See the idea of "aboutness" in Dennett (1995, 2003) and Humphrey (1992). However, Gilbert (2006) believes that we are the only animal that forecasts—which is wrong, as it turned out: Suddendorf (2006) and Dally, Emery, and Clayton (2006) show that animals too forecast!
Russell's comment on Pascal's wager: Ayer (1988) reports this as a private communication.
History: Carr (1961), Hexter (1979), and Gaddis (2002). But I have trouble with historians throughout, because they often mistake the forward and the backward processes: see Mark Buchanan's Ubiquity and the quite confused discussion by Niall Ferguson in Nature. Neither of them seems to realize the problem of calibration with power laws. See also Ferguson, Why Did the Great War?, to gauge the extent of the forward-backward problems. For the traditional nomological tendency, i.e., the attempt to go beyond cause into a general theory, see Muqaddamah by Ibn Khaldoun. See also Hegel's Philosophy of History.
Emotion and cognition: Zajonc (1980, 1984).
Catastrophe insurance: Froot (2001) claims that insurance for remote events is overpriced. How he determined this remains unclear (perhaps by backfitting or bootstraps), but reinsurance companies have not been making a penny selling "overpriced" insurance.
Postmodernists: Postmodernists do not seem to be aware of the differences between narrative and prediction.
Luck and serendipity in medicine: Vale et al. (2005). In history, see Cooper (2004). See also Ruffié (1977). More general, see Roberts (1989).
Affective forecasting: See Gilbert (1991), Gilbert et al. (1993), and Montier (2007).
CHAPTERS 14–17
This section will also serve another purpose. Whenever I talk about the Black Swan, people tend to supply me with anecdotes. But these anecdotes are just corroborative: you need to show that in the aggregate the world is dominated by Black Swan events. To me, the rejection of nonscalable randomness is sufficient to establish the role and significance of Black Swans.
Matthew effects: See Merton (1968, 1973a, 1988). Martial, in his Epigrams: "Semper pauper eris, si pauper es, Aemiliane. / Dantur opes nullis (nunc) nisi divitibus." (Epigr. V 81). See also Zuckerman (1997, 1998).
Cumulative advantage and its consequences on social fairness: review in DiPrete et al. (2006). See also Brookes-Gun and Duncan (1994), Broughton and Mills (1980), Dannefer (2003), Donhardt (2004), Hannon (2003), and Huber (1998). For how it may explain precocity, see Elman and O'Rand (2004).
Concentration and fairness in intellectual careers: Cole and Cole (1973), Cole (1970), Conley (1999), Faia (1975), Seglen (1992), Redner (1998), Lotka (1926), Fox and Kochanowski (2004), and Huber (2002).
Winner take all: Rosen (1981), Frank (1994), Frank and Cook (1995), and Attewell (2001).
Arts: Bourdieu (1996), Taleb (2004e).
Wars: War is concentrated in an Extremistan manner: Lewis Fry Richardson noted last century the unevenness in the distribution of casualties (Richardson).
Modern wars: Arkush and Allen (2006). In the study of the Maori, the pattern of fighting with clubs was sustainable for many centuries—modern tools cause 20,000 to 50,000 deaths a year. We are simply not made for technical warfare. For an anecdotal and causative account of the history of a war, see Ferguson (2006).
S&P 500: See Rosenzweig (2006).
The long tail: Anderson (2006).
Cognitive diversity: See Page (2007). For the effect of the Internet on schools, see Han et al. (2006).
Cascades: See Schelling (1971, 1978) and Watts (2002). For information cascades in economics, see Bikhchandani, Hirshleifer, and Welch (1992) and Shiller (1995). See also Surowiecki (2004).
Fairness: Some researchers, like Frank (1999), see arbitrary and random success by others as no different from pollution, which necessitates the enactment of a tax. De Vany, Taleb, and Spitznagel (2004) propose a market-based solution to the problem of allocation through the process of voluntary self-insurance and derivative products. Shiller (2003) proposes cross-country insurance.
The mathematics of preferential attachment: This argument pitted Mandelbrot against the cognitive scientist Herbert Simon, who formalized Zipf's ideas in a 1955 paper (Simon), which then became known as the Zipf–Simon model. Hey, you need to allow for people to fall from favor!
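To make the mechanism concrete, here is a minimal sketch in Python of a Simon-style (Yule process) toy model; the parameters and names are illustrative choices for the example, not anything from the text. Each new unit either founds a new node with a small probability or attaches to an existing node with probability proportional to its current size; the resulting size distribution is dominated by a few giants.

import random

def simon_process(steps=100_000, p_new=0.05, seed=42):
    """Yule–Simon toy model: each new unit either founds a new node
    (probability p_new) or joins an existing node chosen proportionally to size."""
    rng = random.Random(seed)
    sizes = [1]       # start with one node of size 1
    owners = [0]      # flat list: one entry per unit, value = node id
    for _ in range(steps):
        if rng.random() < p_new:
            sizes.append(1)
            owners.append(len(sizes) - 1)
        else:
            node = rng.choice(owners)   # proportional to size, by construction
            sizes[node] += 1
            owners.append(node)
    return sizes

sizes = simon_process()
print("largest node sizes:", sorted(sizes, reverse=True)[:10])       # a few giants...
print("median node size:", sorted(sizes)[len(sizes) // 2])            # ...and a mass of dwarfs

Note that, as the note above complains, nothing in this toy model ever lets a node fall from favor; adding decay or death changes the shape of the tail.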
Concentration: Price (1970). Simon's "Zipf derivation," Simon (1955). More general bibliometrics, see Price (1976) and Glänzel (2003).
Creative destruction revisited: See Schumpeter (1942).
Networks: Barabási and Albert (1999), Albert and Barabási (2000), Strogatz (2001, 2003), Callaway et al. (2000), Newman et al. (2000), Newman, Watts, and Strogatz (2000), Newman (2001), Watts and Strogatz (1998), Watts (2002, 2003), and Amaral et al. (2000). It supposedly started with Milgram (1967). See also Barbour and Reinert (2000), Barthélémy and Amaral (1999). See Boots and Sasaki (1999) for infections. For extensions, see Bhalla and Iyengar (1999). Resilience: Cohen et al. (2000), Barabási and Bonabeau (2003), Barabási (2002), and Banavar et al. (2000). Power laws and the Web: Adamic and Huberman (1999) and Adamic (1999). Statistics of the Internet: Huberman (2001), Willinger et al. (2004), and Faloutsos, Faloutsos, and Faloutsos (1999). For DNA, see Vogelstein et al. (2000).
Self-organized criticality: Bak (1996).
Pioneers of fat tails: For wealth, Pareto (1896), Yule (1925, 1944). Less of a pioneer, Zipf (1932, 1949). For linguistics, see Mandelbrot (1952).
Pareto: See Bouvier (1999).
Endogenous vs. exogenous: Sornette et al. (2004).
Sperber's work: Sperber (1996a, 1996b, 1997).
Regression: If you hear the phrase least square regression, you should be suspicious about the claims being made. As it assumes that your errors wash out rather rapidly, it underestimates the total possible error, and thus overestimates what knowledge one can derive from the data.
The notion of central limit: very misunderstood: it takes a long time to reach the central limit—so, as we do not live in the asymptote, we've got problems. All various random variables (as we started in the example of Chapter 16 with a +1 or −1, which is called a Bernoulli draw) under summation (we did sum up the wins of the 40 tosses) become Gaussian. Summation is key here, since we are considering the results of adding up the 40 steps, which is where the Gaussian, under the first and second central assumptions, becomes what is called a "distribution." (A distribution tells you how you are likely to have your outcomes spread out, or distributed.) However, they may get there at different speeds. This is called the central limit theorem: if you add random variables coming from these individual tame jumps, it will lead to the Gaussian. Where does the central limit not work? If you do not have these central assumptions, but have jumps of random size instead, then we would not get the Gaussian. Furthermore, we sometimes converge very slowly to the Gaussian. For preasymptotics and scalability, Mandelbrot and Taleb (2007a), Bouchaud and Potters (2003). For the problem of working outside asymptotes, Taleb (2007).
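A minimal sketch in Python of that speed-of-convergence point (the sample sizes and the choice of a Pareto alternative are purely illustrative assumptions): sums of 40 tame ±1 steps already have roughly Gaussian tails, while sums of 40 heavy-tailed steps do not.

import random
import statistics

def tail_share(sums, k=3.0):
    """Fraction of standardized sums landing more than k standard deviations out."""
    mu, sd = statistics.fmean(sums), statistics.pstdev(sums)
    return sum(abs((s - mu) / sd) > k for s in sums) / len(sums)

rng = random.Random(2)
n_sums, steps = 50_000, 40
bernoulli_sums = [sum(rng.choice((-1, 1)) for _ in range(steps)) for _ in range(n_sums)]
pareto_sums = [sum(rng.paretovariate(2.5) for _ in range(steps)) for _ in range(n_sums)]

print("P(|Z| > 3), Gaussian limit:       ~0.0027")
print("P(|Z| > 3), 40 coin-toss steps:  ", round(tail_share(bernoulli_sums), 4))
print("P(|Z| > 3), 40 heavy-tailed steps:", round(tail_share(pareto_sums), 4))
# The coin-toss sums are already close to the Gaussian figure after 40 steps; the
# Pareto(2.5) sums are visibly fatter out in the tail, and the approach to the
# Gaussian is slow. With a tail exponent below 2 (infinite variance) the Gaussian
# never takes over at all -- the point of the "finite variance" note below.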
Aurea mediocritas: historical perspective in Naya and Pouey-Mounou (2005), aptly called Éloge de la médiocrité.
Reification (hypostatization): Lukacz, in Bewes (2002).
Catastrophes: Posner (2004).
Concentration and modern economic life: Zajdenweber (2000).
Choices of society structure and compressed outcomes: The classical paper is Rawls (1971), though Frohlich, Oppenheimer, and Eavy (1987a, 1987b), as well as Lissowski, Tyszka, and Okrasa (1991), contradict the notion of the desirability of Rawls's veil (though by experiment). People prefer an environment with maximum average income subjected to a floor constraint—some form of equality for the poor, inequality for the rich.
Gaussian contagion: Quételet in Stigler (1986). Francis Galton (as quoted in Ian Hacking's The Taming of Chance): "I know of scarcely anything so apt to impress the imagination as the wonderful form of cosmic order expressed by 'the law of error.'"
"Finite variance" nonsense: Associated with CLT is an assumption called "finite variance" that is rather technical: none of these building-block steps can take an infinite value if you square them or multiply them by themselves. They need to be bounded at some number. We simplified here by making them all one single step, or finite standard deviation. But the problem is that some fractal payoffs may have finite variance, yet still not take us there rapidly. See Bouchaud and Potters (2003).
Lognormal: There is an intermediate variety that is called the lognormal, emphasized by one Gibrat (see Sutton) early in the twentieth century as an attempt to explain the distribution of wealth. In this framework, it is not quite that the wealthy get wealthier, in a pure preferential attachment situation, but that if your wealth is at 100 you will vary by 1, while when your wealth is at 1,000, you will vary by 10. The relative changes in your wealth are Gaussian. So the lognormal superficially resembles the fractal, in the sense that it may tolerate some large deviations, but it is dangerous because these rapidly taper off at the end. The introduction of the lognormal was a very bad compromise, but a way to conceal the flaws of the Gaussian.
Extinctions: Sterelny (2001). For extinctions from abrupt fractures, see Courtillot (1995) and Courtillot and Gaudemer (1996). Jumps: Eldredge and Gould.
FRACTALS, POWER LAWS, and SCALE-FREE DISTRIBUTIONS
Definition: Technically, P>x = K x^(−α), where α is supposed to be the power-law exponent. It is said to be scale free, in the sense that it does not have a characteristic scale: the relative deviation P>nx / P>x does not depend on x, but only on n—for x "large enough." Now, in the other class of distribution, the one that I can intuitively describe as nonscalable, with the typical shape p(x) = Exp[−a x], the scale will be a.
Problem of "how large": Now the problem that is usually misunderstood. This scalability might stop somewhere, but I do not know where, so I might consider it infinite. The statements very large and I don't know how large and infinitely large are epistemologically substitutable. There might be a point at which the distributions flip. This will show once we look at them more graphically. For a scalable, Log P>x = −α Log x + Ct. When we do a log-log plot (i.e., plot P>x and x on a logarithmic scale), as in Figures 15 and 16, we should see a straight line.
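A minimal numerical sketch in Python of that log-log diagnostic (the sample size, thresholds, and the α = 1.5 Pareto are illustrative choices for the example): the scalable column loses roughly the same amount of log-probability every time x doubles—a straight line on log-log paper—while the nonscalable exponential column collapses faster and faster.

import math
import random

random.seed(0)
n = 200_000
pareto = [random.paretovariate(1.5) for _ in range(n)]  # scalable: P>x = x**-1.5 for x >= 1
expo = [random.expovariate(1.0) for _ in range(n)]      # nonscalable: P>x = exp(-x)

def log10_survival(sample, x):
    """log10 of the empirical survival probability P>x (or -inf if nothing exceeds x)."""
    s = sum(1 for v in sample if v > x) / len(sample)
    return math.log10(s) if s > 0 else float("-inf")

print("  x   log10 P>x, Pareto   log10 P>x, exponential")
for x in (2, 4, 8, 16, 32):
    print(f"{x:>3}   {log10_survival(pareto, x):17.2f}   {log10_survival(expo, x):22.2f}")
# The Pareto column drops by about 1.5 * log10(2) ≈ 0.45 per doubling of x (constant
# slope -alpha); the exponential column falls away ever faster, i.e., it has a
# characteristic scale and bends on the log-log plot.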
Fractals and power laws: Mandelbrot (1975, 1982). Schroeder (1991) is imperative. John Chipman's unpublished manuscript The Paretian Heritage (Chipman) is the best review piece I've seen. See also Mitzenmacher (2003). "To come very near true theory and to grasp its precise application are two very different things as the history of science teaches us. Everything of importance has been said before by somebody who did not discover it." Whitehead (1925).
Fractals in poetry: For the quote on Dickinson, see Fulton (1998).
Lacunarity: Brockman (2005). In the arts, Mandelbrot (1982).
Fractals in medicine: "New Tool to Diagnose and Treat Breast Cancer," Newswise, July 18, 2006.
General reference books in statistical physics: The most complete (in relation to fat tails) is Sornette (2004). See also Voit (2001) or the far deeper Bouchaud and Potters (2002) for financial prices and econophysics. For "complexity" theory, technical books: Bocarra (2004), Strogatz (1994), the popular Ruelle (1991), and also Prigogine (1996).
Fitting processes: For the philosophy of the problem, Taleb and Pilpel (2004). See also Pisarenko and Sornette (2004), Sornette et al. (2004), and Sornette and Ide (2001).
Poisson jump: Sometimes people propose a Gaussian distribution with a small probability of a "Poisson" jump. This may be fine, but how do you know how large the jump is going to be? Past data might not tell you how large the jump is.
Small sample effect: Weron (2001). Officer (1972) is quite ignorant of the point.
Recursivity of statistics: Taleb and Pilpel (2004), Blyth et al. (2005).
Biology: Modern molecular biology pioneers Salvador Luria and Max Delbrück witnessed a clustering phenomenon with the occasional occurrence of extremely large mutants in a bacterial colony, larger than all other bacteria.
Thermodynamics: Entropy maximization without the constraints of a second moment leads to a Lévy-stable distribution—Mandelbrot's thesis of 1952 (see Mandelbrot [1997a]). Tsallis's more sophisticated view of entropy leads to a Student T.
FIGURE 15: Typical distribution with power-law tails (here a Student T).
FIGURE 16: The two exhaustive domains of attraction: vertical or straight line, with slopes either negative infinity or a constant negative α. Note that since probabilities need to add up to 1 (even in France) there cannot be other alternatives to the two basins, which is why I narrow it down to these two exclusively. My ideas are made very simple with this clean-cut polarization—added to the problem of not knowing which basin we are in, owing to the scarcity of data on the far right.
Imitation chains and pathologies: An informational cascade is a process where a purely rational agent elects a particular choice ignoring his own private information (or judgment) to follow that of others. You run, I follow you, because you may be aware of a danger I may be missing. It is efficient to do what others do instead of having to reinvent the wheel every time. But this copying the behavior of others can lead to imitation chains. Soon everyone is running in the same direction, and it can be for spurious reasons. This behavior causes stock market bubbles and the formation of massive cultural fads. Bikhchandani et al. (1992). In psychology, see Hansen and Donoghue (1977). In biology/selection, Dugatkin (2001), Kirkpatrick and Dugatkin (1994).
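A minimal sketch in Python of such an imitation chain—not the full Bayesian model of Bikhchandani et al., just a majority-imitation caricature with parameters chosen for illustration: each agent receives a fairly reliable private signal about the true state but copies the majority of earlier public actions, so an unlucky early run can lock the whole crowd into the wrong choice.

import random

def run_cascade(n_agents=60, p_correct=0.7, rng=None):
    """Majority-imitation toy model: the true state is 1; each agent's private
    signal is right with probability p_correct, but the agent follows the
    majority of all earlier public actions, using the signal only to break ties."""
    rng = rng or random.Random()
    actions = []
    for _ in range(n_agents):
        signal = 1 if rng.random() < p_correct else 0
        votes_for_1 = sum(actions) + signal
        votes_for_0 = len(actions) + 1 - votes_for_1
        actions.append(1 if votes_for_1 > votes_for_0
                       else 0 if votes_for_0 > votes_for_1
                       else signal)
        # once one action leads by two, later private signals can never flip the
        # majority: the imitation chain has locked in, for better or for worse
    return actions[-1]

rng = random.Random(7)
runs = 5_000
wrong = sum(run_cascade(rng=rng) == 0 for _ in range(runs))
print(f"runs ending in a spurious (wrong) cascade: {wrong / runs:.1%}")
# despite 70-percent-accurate private signals, a sizable minority of runs ends
# with the whole crowd herding on the wrong action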
Self-organized criticality: Bak and Chen (1991), Bak (1996).
Economic variables: Bundt and Murphy (2006). Most economic variables seem to follow a "stable" distribution. They include foreign exchange, the GDP, the money supply, interest rates (long and short term), and industrial production.
Statisticians not accepting scalability: Flawed reasoning mistaking sampling error in the tails for boundedness: Perline (2005), for instance, does not understand the difference between absence of evidence and evidence of absence.
Time series and memory: You can have "fractal memory," i.e., the effect of past events on the present has an impact that has a "tail." It decays as a power law, not exponentially.
Marmot's work: Marmot (2004).
CHAPTER 18
Economists: Weintraub (2002), Szenberg (1992).
Portfolio theory and modern finance: Markowitz (1952, 1959), Huang and Litzenberger (1988), and Sharpe (1994, 1996). What is called the Sharpe ratio is meaningless outside of Mediocristan. The contents of Steve Ross's book (Ross) on "neoclassical finance" are completely canceled if you consider Extremistan, in spite of the "elegant" mathematics and the beautiful top-down theories. "Anecdote" of Merton minor in Merton (1992).
Obsession with measurement: Crosby (1997) is often shown to me as convincing evidence that measuring was a great accomplishment, not knowing that it applied to Mediocristan and Mediocristan only. Bernstein (1996) makes the same error.
Power laws in finance: Mandelbrot (1963), Gabaix et al. (2003), and Stanley et al. (2000). Kaizoji and Kaizoji (2004), Véhel and Walter (2002). Land prices: Kaizoji (2003). Magisterial: Bouchaud and Potters (2003).
Equity premium puzzle: If you accept fat tails, there is no equity premium puzzle. Benartzi and Thaler (1995) offer a psychological explanation, not realizing that variance is not the measure. So do many others.
Covered writes: a sucker's game, as you cut your upside—conditional on the upside being breached, the stock should rally a lot more than intuitively accepted. For a representative mistake, see Board et al. (2000).
Nobel family: "Nobel Descendant Slams Economics Prize," The Local, September 28, 2005, Stockholm.
Double bubble: The problem of derivatives is that if the underlying security has mild fat tails and follows a mild power law (i.e., a tail exponent of three or higher), the derivative will produce far fatter tails (if the payoff is in squares, then the tail exponent of the derivatives portfolio will be half that of the primitive). This makes the Black-Scholes-Merton equation twice as unfit!
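A minimal numerical check in Python of that exponent-halving point (the simulated Pareto underlying, the sample size, and the crude Hill-style estimator are illustrative choices, not anything prescribed by the text):

import math
import random

def hill_alpha(sample, k=500):
    """Crude Hill estimate of the tail exponent from the k largest observations."""
    tail = sorted(sample, reverse=True)[:k + 1]
    x_k = tail[-1]
    return k / sum(math.log(v / x_k) for v in tail[:k])

random.seed(3)
underlying = [random.paretovariate(3.0) for _ in range(100_000)]  # tail exponent ~ 3
squared = [v * v for v in underlying]                              # a "payoff in squares"

print("tail exponent of the underlying ≈", round(hill_alpha(underlying), 2))   # ≈ 3
print("tail exponent of the squared payoff ≈", round(hill_alpha(squared), 2))  # ≈ 1.5

Squaring the variable squares the large outcomes, so P(X² > y) = P(X > √y) decays with half the exponent—which is what the two estimates recover.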
Poisson busting: The best way to figure out the problems of the Poisson as a substitute for a scalable is to calibrate a Poisson and compute the errors out of sample. The same applies to methods such as GARCH—they fare well in sample, but horribly, horribly outside (even a trailing three-month past historical volatility or mean deviation will outperform a GARCH of higher orders).
Why the Nobel: Derman and Taleb (2005), Haug (2007).
Claude Bernard and experimental medicine: "Empirisme pour le présent, avec direction d'aspiration scientifique pour l'avenir" ("Empiricism for the present, with a direction of scientific aspiration for the future"). From Claude Bernard, Principe de la médecine expérimentale. See also Fagot-Largeault (2002) and Ruffié (1977). Modern evidence-based medicine: Ierodiakonou and Vandenbroucke (1993) and Vandenbroucke (1996) discuss a stochastic approach to medicine.
CHAPTER 19
Popper quote: From Conjectures and Refutations, pages 95–97.
The lottery paradox: This is one example of scholars not understanding the high-impact rare event. There is a well-known philosophical conundrum called the "lottery paradox," originally posed by the logician Henry Kyburg (see Rescher and Clark), which goes as follows: "I do not believe that any ticket will win the lottery, but I do believe that all tickets will win the lottery." To me (and a regular person) this statement does not seem to have anything strange in it. Yet for an academic philosopher trained in classical logic, this is a paradox. But it is only so if one tries to squeeze probability statements into commonly used logic that dates from Aristotle and is all or nothing. An all-or-nothing acceptance and rejection ("I believe" or "I do not believe") is inadequate with the highly improbable. We need shades of belief, degrees of the faith you have in a statement other than 100% or 0%.
One final philosophical consideration. For my friend the options trader and Talmudic scholar Rabbi Tony Glickman: life is convex and to be seen as a series of derivatives. Simply put, when you cut the negative exposure, you limit your vulnerability to unknowledge. Taleb (2005).
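A tiny numerical illustration of that convexity point, in Python; the payoff functions, jump sizes, and probabilities are purely illustrative assumptions, not Taleb's own construction: a convex, option-like exposure caps what unknowledge can cost at the premium paid, while a linear exposure remains fully exposed to the rare large move.

import random

random.seed(11)
premium = 1.0

def linear_exposure(move):
    return move                       # symmetric: a big adverse move hurts in full

def convex_exposure(move):
    return max(move, 0.0) - premium   # option-like: downside capped at the premium

def one_move(rng):
    """Mostly small noise, with an occasional large jump of random sign."""
    jump = rng.choice([-50.0, 50.0]) if rng.random() < 0.01 else 0.0
    return rng.gauss(0.0, 1.0) + jump

moves = [one_move(random) for _ in range(100_000)]
print("worst outcome, linear exposure:", round(min(linear_exposure(m) for m in moves), 1))
print("worst outcome, convex exposure:", round(min(convex_exposure(m) for m in moves), 1))
# the linear book can lose fifty-plus on a single jump; the convex book never
# loses more than the premium, whatever the size of the surprise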
