
Super Thinking: The Big Book of Mental Models PDF


Document Details


Uploaded by scrollinondubs


2019

Gabriel Weinberg and Lauren McCann

Tags

mental models, super thinking, decision making, critical thinking

Summary

This book explores mental models, recurring concepts that help explain, predict, or approach various subjects. Super models are useful for daily decision making and are applicable across disciplines. The book delves into topics like opportunity cost, inertia, Goodhart's law, regulatory capture, and critical mass.

Full Transcript


PORTFOLIO / PENGUIN
An imprint of Penguin Random House LLC
penguinrandomhouse.com

Copyright © 2019 by Gabriel Weinberg and Lauren McCann

Penguin supports copyright. Copyright fuels creativity, encourages diverse voices, promotes free speech, and creates a vibrant culture. Thank you for buying an authorized edition of this book and for complying with copyright laws by not reproducing, scanning, or distributing any part of it in any form without permission. You are supporting writers and allowing Penguin to continue to publish books for every reader.

Image credits appear on this page.

LIBRARY OF CONGRESS CATALOGING-IN-PUBLICATION DATA
Names: Weinberg, Gabriel, author. | McCann, Lauren, author.
Title: Super Thinking: The Big Book of Mental Models / Gabriel Weinberg and Lauren McCann.
Description: New York: Portfolio/Penguin | Includes index.
Identifiers: LCCN 2019002099 (print) | LCCN 2019004235 (ebook) | ISBN 9780525533597 (ebook) | ISBN 9780525542810 (international edition) | ISBN 9780525533580 (hardcover)
Subjects: LCSH: Thought and thinking. | Cognition. | Reasoning.
Classification: LCC BF441 (ebook) | LCC BF441.W4446 2019 (print) | DDC 153.4/2—dc23
LC record available at https://lccn.loc.gov/2019002099

While the authors have made every effort to provide accurate telephone numbers, internet addresses, and other contact information at the time of publication, neither the publisher nor the authors assume any responsibility for errors, or for changes that occur after publication. Further, the publisher does not have any control over and does not assume any responsibility for authors or third-party websites or their content.

Version_2

Contents
Title Page
Copyright
Introduction: The Super Thinking Journey
CHAPTER ONE Being Wrong Less
CHAPTER TWO Anything That Can Go Wrong, Will
CHAPTER THREE Spend Your Time Wisely
CHAPTER FOUR Becoming One with Nature
CHAPTER FIVE Lies, Damned Lies, and Statistics
CHAPTER SIX Decisions, Decisions
CHAPTER SEVEN Dealing with Conflict
CHAPTER EIGHT Unlocking People's Potential
CHAPTER NINE Flex Your Market Power
CONCLUSION
Acknowledgments
Image Credits
Index
About the Authors

Introduction: The Super Thinking Journey

EACH MORNING, AFTER OUR KIDS head off to school or camp, we take a walk and talk about our lives, our careers, and current events. (We're married.) Though we discuss a wide array of topics, we often find common threads—recurring concepts that help us explain, predict, or approach these seemingly disparate subjects. Examples range from more familiar concepts, such as opportunity cost and inertia, to more obscure ones, such as Goodhart's law and regulatory capture. (We will explain these important ideas and many more in the pages that follow.)

These recurring concepts are called mental models. Once you are familiar with them, you can use them to quickly create a mental picture of a situation, which becomes a model that you can later apply in similar situations. (Throughout this book, major mental models appear in boldface when we introduce them to you. We use italics to emphasize words in a model's name, as well as to highlight common related concepts and phrases.)

In spite of their usefulness, most of these concepts are not universally taught in school, even at the university level. We picked up some of them in our formal education (both of us have undergraduate and graduate degrees from MIT), but the bulk of them we learned on our own through reading, conversations, and experience.
We wish we had learned about these ideas much earlier, because they not only help us better understand what is going on around us, but also make us more effective decision makers in all areas of our lives. While we can't go back in time and teach our younger selves these ideas, we can provide this guide for others, and for our children. That was our primary motivation for writing this book.

An example of a useful mental model from physics is the concept of critical mass, the mass of nuclear material needed to create a critical state whereby a nuclear chain reaction is possible. Critical mass was an essential mental model in the development of the atomic bomb.

Every discipline, like physics, has its own set of mental models that people in the field learn through coursework, mentorship, and firsthand experience. There is a smaller set of mental models, however, that are useful in general day-to-day decision making, problem solving, and truth seeking. These often originate in specific disciplines (physics, economics, etc.), but have metaphorical value well beyond their originating discipline. Critical mass is one of these mental models with wider applicability: ideas can attain critical mass; a party can reach critical mass; a product can achieve critical mass. Unlike hundreds of other concepts from physics, critical mass is broadly useful outside the context of physics. (We explore critical mass in more detail in Chapter 4.)

We call these broadly useful mental models super models because applying them regularly gives you a super power: super thinking—the ability to think better about the world—which you can use to your advantage to make better decisions, both personally and professionally.

We were introduced to the concept of super models many years ago through Charlie Munger, the partner of renowned investor Warren Buffett. As Munger explained in a 1994 speech at the University of Southern California Marshall Business School titled "A Lesson on Elementary, Worldly Wisdom as It Relates to Investment Management and Business":

What is elementary, worldly wisdom? Well, the first rule is that you can't really know anything if you just remember isolated facts and try and bang 'em back. If the facts don't hang together on a latticework of theory, you don't have them in a usable form. You've got to have models in your head. And you've got to array your experience—both vicarious and direct—on this latticework of models.

As the saying goes, "History doesn't repeat itself, but it does rhyme." If you can identify a mental model that applies to the situation in front of you, then you immediately know a lot about it. For example, suppose you are thinking about a company that involves people renting out their expensive power tools, which usually sit dormant in their garages. If you realize that the concept of critical mass applies to this business, then you know that there is some threshold that needs to be reached before it could be viable. In this case, you need enough tools available for rent in a community to satisfy initial customer demand, much as you need enough Lyft drivers in a city for people to begin relying on the service.

That is super thinking, because once you have determined that this business model can be partially explained through the lens of critical mass, you can start to reason about it at a higher level, asking and answering questions like these:

- What density of tools is needed to reach the critical mass point in a given area?
- How far away can two tools be to count toward the same critical mass point in that area?
- Is the critical mass likely to be reached in an area? Why or why not?
- Can you tweak the business model so that this critical mass point is reachable or easier to reach? (For instance, the company could seed each area with its own tools.)

As you can see, super models are shortcuts to higher-level thinking. If you can understand the relevant models for a situation, then you can bypass lower-level thinking and immediately jump to higher-level thinking. In contrast, people who don't know these models will likely never reach this higher level, and certainly not quickly.

Think back to when you first learned multiplication. As you may recall, multiplication is just repeated addition. In fact, all mathematical operations based on arithmetic can be reduced to just addition: subtraction is just adding a negative number, division is just repeated subtraction, and so on. However, using addition for complex operations can be really slow, which is why you use multiplication in the first place. For example, suppose you have a calculator or spreadsheet in front of you. When you have 158 groups of 7 and you want to know the total, you could use your tool to add 7 to itself 158 times (slow), or you could just multiply 7 × 158 (quick). Using addition is painfully time-consuming when you are aware of the higher-level concept of multiplication, which helps you work quickly and efficiently.

When you don't use mental models, strategic thinking is like using addition when multiplication is available to you. You start from scratch every time without using these essential building blocks that can help you reason about problems at higher levels. And that's exactly why knowing the right mental models unlocks super thinking, just as subtraction, multiplication, and division unlock your ability to do more complex math problems.

Once you have internalized a mental model like multiplication, it's hard to imagine a world without it. But very few mental models are innate. There was a time when even addition wasn't known to most people, and you can still find whole societies that live without it. The Pirahã of the Amazon rain forest in Brazil, for example, have no concept of specific numbers, only concepts for "a smaller amount" and "a larger amount." As a result, they cannot easily count beyond three, let alone do addition, as Brian Butterworth recounted in an October 20, 2004, article for The Guardian, "What Happens When You Can't Count Past Four?":

Not having much of number vocabulary, and no numeral symbols, such as one, two, three, their arithmetical skills could not be tested in the way we would test even five-year-olds in Britain. Instead, [linguist Peter] Gordon used a matching task. He would lay out up to eight objects in front of him on a table, and the Pirahã participant's task was to place the same number of objects in order on the table. Even when the objects were placed in a line, accuracy dropped off dramatically after three objects.

Consider that there are probably many disciplines where you have only rudimentary knowledge. Perhaps physics is one of them? Most of the concepts from physics are esoteric, but some—those physics mental models that we present in this book—do have the potential to be repeatedly useful in your day-to-day life. And so, despite your rudimentary knowledge of the discipline, you can and should still learn enough about these particular concepts to be able to apply them in non-physics contexts.
For instance, unless you are a physicist, Coriolis force, Lenz's law, diffraction, and hundreds of other concepts are unlikely to be of everyday use to you, but we contend that critical mass will prove useful. That's the difference between regular mental models and super models. And this pattern repeats for each of the major disciplines. As Munger said:

And the models have to come from multiple disciplines—because all the wisdom of the world is not to be found in one little academic department.... You've got to have models across a fair array of disciplines. You may say, "My God, this is already getting way too tough." But, fortunately, it isn't that tough—because 80 or 90 important models will carry about 90 percent of the freight in making you a worldly-wise person. And, of those, only a mere handful really carry very heavy freight.

Munger expanded further in an April 19, 1996, speech at Stanford Law School, similarly titled "A Lesson on Elementary, Worldly Wisdom, Revisited":

When I urge a multidisciplinary approach... I'm really asking you to ignore jurisdictional boundaries. If you want to be a good thinker, you must develop a mind that can jump these boundaries. You don't have to know it all. Just take in the best big ideas from all these disciplines. And it's not that hard to do.

You want to have a broad base of mental models at your fingertips, or else you risk using suboptimal models for a given situation. It's like the expression "If all you have is a hammer, everything looks like a nail." (This phrase is associated with another super model, Maslow's hammer, which we cover in Chapter 6.) You want to use the right tool for a given situation, and to do that, you need a whole toolbox full of super models.

This book is that toolbox: it systematically lists, classifies, and explains all the important mental models across the major disciplines. We have woven all these super models together for you in a narrative fashion through nine chapters that we hope are both fun to read and easy to understand. Each chapter has a unifying theme and is written in a way that should be convenient to refer back to. We believe that when taken together, these super models will be useful to you across your entire life: to make sense of situations, help generate ideas, and aid in decision making.

For these mental models to be most useful, however, you must apply them at the right time and in the right context. And for that to happen, you must know them well enough to associate the right ones with your current circumstances. When you deeply understand a mental model, it should come to you naturally, like multiplication does. It should just pop into your head.

Learning to apply super mental models in this manner doesn't happen overnight. Like Spider-Man or the Hulk, you won't have instant mastery of your powers. The superpowers you gain from your initial knowledge of these mental models must be developed. Reading this book for the first time is like Spider-Man getting his spider bite or the Hulk his radiation dose. After the initial transformation, you must develop your powers through repeated practice.

When your powers are honed, you will be like the Hulk in the iconic scene from the movie The Avengers depicted on the previous page. When Captain America wants Bruce Banner (the Hulk's alter ego) to turn into the Hulk, he tells him, "Now might be a really good time for you to get angry." Banner replies, "That's my secret, Captain.... I'm always angry."

This is the book we wish someone had gifted us many years ago.
No matter where you are in life, this book is designed to help jump-start your super thinking journey. This reminds us of another adage, "The best time to plant a tree was twenty years ago. The second best time is now."

1
Being Wrong Less

YOU MAY NOT REALIZE IT, but you make dozens of decisions every day. And when you make those decisions, whether they are personal or professional, you want to be right much more often than you are wrong. However, consistently being right more often is hard to do because the world is a complex, ever-evolving place. You are steadily faced with unfamiliar situations, usually with a large array of choices. The right answer may be apparent only in hindsight, if it ever becomes clear at all.

Carl Jacobi was a nineteenth-century German mathematician who often used to say, "Invert, always invert" (actually he said, "Man muss immer umkehren," because English wasn't his first language). He meant that thinking about a problem from an inverse perspective can unlock new solutions and strategies. For example, most people approach investing their money from the perspective of making more money; the inverse approach would be investing money from the perspective of not losing money.

Or consider healthy eating. A direct approach would be to try to construct a healthy diet, perhaps by making more food at home with controlled ingredients. An inverse approach, by contrast, would be to try to avoid unhealthy options. You might still go to all the same eating establishments but simply choose the healthier options when there.

The concept of inverse thinking can help you with the challenge of making good decisions. The inverse of being right more is being wrong less. Mental models are a tool set that can help you be wrong less. They are a collection of concepts that help you more effectively navigate our complex world. As noted in the Introduction, mental models come from a variety of specific disciplines, but many have more value beyond the field they come from. If you can use these mental models to help you make decisions as events unfold before you, they can help you be wrong less often.

Let us offer an example from the world of sports. In tennis, an unforced error occurs when a player makes a mistake not because the other player hit an awesome shot, but rather because of their own poor judgment or execution. For example, hitting an easy ball into the net is one kind of unforced error. To be wrong less in tennis, you need to make fewer unforced errors on the court. And to be consistently wrong less in decision making, you consistently need to make fewer unforced errors in your own life.

See how this works? Unforced error is a concept from tennis, but it can be applied as a metaphor in any situation where an avoidable mistake is made. There are unforced errors in baking (using a tablespoon instead of a teaspoon) or dating (making a bad first impression) or decision making (not considering all your options). Start looking for unforced errors around you and you will see them everywhere.

An unforced error isn't the only way to make a wrong decision, though. The best decision based on the information available at the time can easily turn out to be the wrong decision in the long run. That's just the nature of dealing with uncertainty. No matter how hard you try, because of uncertainty, you may still be wrong when you make decisions, more frequently than you'd like.
What you can do, however, is strive to make fewer unforced errors over time by using sound judgment and techniques to make the best decision at any given time.

Another mental model to help improve your thinking is called antifragile, a concept explored in a book of the same name by financial analyst Nassim Nicholas Taleb. In his words:

Some things benefit from shocks; they thrive and grow when exposed to volatility, randomness, disorder, and stressors and love adventure, risk, and uncertainty. Yet, in spite of the ubiquity of the phenomenon, there is no word for the exact opposite of fragile. Let us call it antifragile. Antifragility is beyond resilience or robustness. The resilient resists shocks and stays the same; the antifragile gets better.

Just as it pays off to make your financial portfolio antifragile in the face of economic shocks, it similarly pays off to make your thinking antifragile in the face of new decisions. If your thinking is antifragile, then it gets better over time as you learn from your mistakes and interact with your surroundings. It's like working out at the gym—you are shocking your muscles and bones so they grow stronger over time.

We'd like to improve your thought process by helping you incorporate mental models into your day-to-day thinking, increasingly matching the right models to a given situation. By the time you've finished reading this book, you will have more than three hundred mental models floating around in your head from dozens of disciplines, eager to pop up at just the right time. You don't have to be an expert at tennis or financial analysis to benefit from these concepts. You just need to understand their broader meaning and apply them when appropriate. If you apply these mental models consistently and correctly, your decisions will become wrong much less or, inverted, right much more. That's super thinking.

In this chapter we're going to explore solving problems without bias. Unfortunately, evolution has hardwired us with several mind traps. If you are not aware of them, you will make poor decisions by default. But if you can recognize these traps from afar and avoid them by using some tried-and-true techniques, you will be well on the path to super thinking.

KEEP IT SIMPLE, STUPID!

Any science or math teacher worth their salt stresses the importance of knowing how to derive every formula that you use, because only then do you really know it. It's the difference between being able to attack a math problem with a blank sheet of paper and needing a formula handed to you to begin with. It's also the difference between being a chef—someone who can take ingredients and turn them into an amazing dish without looking at a cookbook—and being the kind of cook who just knows how to follow a recipe.

Lauren was the teaching assistant for several statistics courses during her years at MIT. One course had a textbook that came with a computer disk, containing a simple application that could be used as a calculator for the statistical formulas in the book. On one exam, a student wrote the following answer to one of the statistical problems posed: "I would use the disk and plug the numbers in to get the answer." The student was not a chef.

The central mental model to help you become a chef with your thinking is arguing from first principles. It's the practical starting point to being wrong less, and it means thinking from the bottom up, using basic building blocks of what you think is true to build sound (and sometimes new) conclusions.
First principles are the group of self-evident assumptions that make up the foundation on which your conclusions rest—the ingredients in a recipe or the mathematical axioms that underpin a formula. Given a set of ingredients, a chef can adapt and create new recipes, as on Chopped. If you can argue from first principles, then you can do the same thing when making decisions, coming up with novel solutions to hard problems. Think MacGyver, or the true story depicted in the movie Apollo 13 (which you should watch if you haven't), where a malfunction on board the spacecraft necessitated an early return to Earth and the creation of improvised devices to make sure, among other things, that there was enough usable air for the astronauts to breathe on the trip home. NASA engineers figured out a solution using only the "ingredients" on the ship. In the movie, an engineer dumps all the parts available on the spacecraft on a table and says, "We've got to find a way to make this [holding up square canister] fit into the hole for this [holding up round canister] using nothing but that [pointing to parts on the table]."

If you can argue from first principles, then you can more easily approach unfamiliar situations, or approach familiar situations in innovative ways. Understanding how to derive formulas helps you to understand how to derive new formulas. Understanding how molecules fit together enables you to build new molecules. Tesla founder Elon Musk illustrates how this process works in practice in an interview on the Foundation podcast:

First principles is kind of a physics way of looking at the world.... You kind of boil things down to the most fundamental truths and say, "What are we sure is true?"... and then reason up from there.... Somebody could say... "Battery packs are really expensive and that's just the way they will always be.... Historically, it has cost $600 per kilowatt-hour, and so it's not going to be much better than that in the future."... With first principles, you say, "What are the material constituents of the batteries? What is the stock market value of the material constituents?"... It's got cobalt, nickel, aluminum, carbon, and some polymers for separation, and a seal can. Break that down on a material basis and say, "If we bought that on the London Metal Exchange, what would each of those things cost?"... It's like $80 per kilowatt-hour. So clearly you just need to think of clever ways to take those materials and combine them into the shape of a battery cell and you can have batteries that are much, much cheaper than anyone realizes.

When arguing from first principles, you are deliberately starting from scratch. You are explicitly avoiding the potential trap of conventional wisdom, which could turn out to be wrong. Even if you end up in agreement with conventional wisdom, by taking the first-principles approach, you will gain a much deeper understanding of the subject at hand.

Any problem can be approached from first principles. Take your next career move. Most people looking for work will apply to too many jobs and take the first job that is offered to them, which is likely not the optimal choice. When using first principles, you'll instead begin by thinking about what you truly value in a career (e.g., autonomy, status, mission), your required job parameters (financial, location, title, etc.), and your previous experience. When you add those up, you will get a much better picture of what might work best for your next career move, and then you can actively seek that out.
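To make the first-principles costing move concrete, here is a minimal sketch in Python. Every material quantity and price below is an illustrative assumption (not Musk's figures and not real market data); the point is the shape of the reasoning: add up a raw-material floor and compare it with the price everyone takes for granted.

```python
# A minimal sketch of first-principles costing, in the spirit of the battery
# example above. All quantities and prices are made-up, illustrative numbers.

# Hypothetical kilograms of each material needed per kilowatt-hour of cells
kg_per_kwh = {
    "nickel": 1.2,
    "cobalt": 0.3,
    "aluminum": 0.8,
    "graphite": 1.0,
    "polymer separator": 0.2,
    "steel can": 0.5,
}

# Hypothetical commodity prices in dollars per kilogram
price_per_kg = {
    "nickel": 20.0,
    "cobalt": 35.0,
    "aluminum": 2.5,
    "graphite": 10.0,
    "polymer separator": 15.0,
    "steel can": 2.0,
}

quoted_pack_price = 600  # the "that's just how it is" figure, in $/kWh

# First-principles floor: what the raw materials alone would cost
material_cost = sum(kg * price_per_kg[m] for m, kg in kg_per_kwh.items())

print(f"Raw-material floor:  ~${material_cost:.0f} per kWh")
print(f"Quoted pack price:    ${quoted_pack_price} per kWh")
print(f"Gap to engineer away: ${quoted_pack_price - material_cost:.0f} per kWh")
```

The gap between the two numbers is where the "clever ways to take those materials and combine them" would have to come from; the exercise tells you how much room there is, not how to build the battery.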
Thinking alone, though, even from first principles, only gets you so far. Your first principles are merely assumptions that may be true, false, or somewhere in between. Do you really value autonomy in a job, or do you just think you do? Is it really true you need to go back to school to switch careers, or might it actually be unnecessary? Ultimately, to be wrong less, you also need to be testing your assumptions in the real world, a process known as de-risking. There is risk that one or more of your assumptions are untrue, and so the conclusions you reach could also be false.

As another example, any startup business idea is built upon a series of principled assumptions:

- My team can build our product.
- People will want our product.
- Our product will generate profit.
- We will be able to fend off competitors.
- The market is large enough for a long-term business opportunity.

You can break these general assumptions down into more specific assumptions:

- My team can build our product. We have the right number and type of engineers; our engineers have the right expertise; our product can be built in a reasonable amount of time; etc.
- People will want our product. Our product solves the problem we think it does; our product is simple enough to use; our product has the critical features needed for success; etc.
- Our product will generate profit. We can charge more for our product than it costs to make and market it; we have good messaging to market our product; we can sell enough of our product to cover our fixed costs; etc.
- We will be able to fend off competitors. We can protect our intellectual property; we are doing something that is difficult to copy; we can build a trusted brand; etc.
- The market is large enough for a long-term business opportunity. There are enough people out there who will want to buy our product; the market for our product is growing rapidly; the bigger we get, the more profit we can make; etc.

Once you get specific enough with your assumptions, then you can devise a plan to test (de-risk) them. The most important assumptions to de-risk first are the ones that are necessary conditions for success and that you are most uncertain about. For example, in the startup context, take the assumption that your solution sufficiently solves the problem it was designed to solve. If this assumption is untrue, then you will need to change what you are doing immediately before you can proceed any further, because the whole endeavor won't work otherwise. Once you identify the critical assumptions to de-risk, the next step is actually going out and testing these assumptions, proving or disproving them, and then adjusting your strategy appropriately.

Just as the concept of first principles is universally applicable, so is de-risking. You can de-risk anything: a policy idea, a vacation plan, a workout routine. When de-risking, you want to test assumptions quickly and easily. Take a vacation plan. Assumptions could be around cost ("I can afford this vacation"), satisfaction ("I will enjoy this vacation"), coordination ("my relatives can join me on this vacation"), etc. Here, de-risking is as easy as doing a few minutes of online research, reading reviews, and sending an email to your relatives.

Unfortunately, people often make the mistake of doing way too much work before testing assumptions in the real world. In computer science this trap is called premature optimization, where you tweak or perfect code or algorithms (optimize) too early (prematurely).
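One lightweight way to act on the de-risking advice above (test the assumptions that are both necessary for success and most uncertain first) is to jot the assumptions down with rough scores and sort them. A minimal sketch, with made-up assumptions and scores purely for illustration:

```python
# A rough sketch of choosing which assumptions to de-risk first, as described
# above: necessary-for-success and highly uncertain assumptions go to the top.
# The assumptions and scores below are invented for illustration only.

assumptions = [
    # (assumption, necessary for success?, uncertainty from 0 to 1)
    ("Our product solves the problem we think it does", True, 0.8),
    ("We can build it in a reasonable amount of time",  True, 0.4),
    ("We can build a trusted brand",                    False, 0.7),
    ("The market is growing rapidly",                   True, 0.6),
    ("Our messaging resonates with customers",          False, 0.5),
]

# Necessary assumptions come before nice-to-haves; within each group,
# the more uncertain an assumption is, the sooner it gets tested.
test_order = sorted(assumptions, key=lambda a: (not a[1], -a[2]))

print("Suggested de-risking order:")
for i, (claim, necessary, uncertainty) in enumerate(test_order, start=1):
    tag = "necessary" if necessary else "nice-to-have"
    print(f"{i}. {claim} ({tag}, uncertainty {uncertainty:.1f})")
```

The ordering, not the exact scores, is what matters: it tells you which cheap test to run first before investing further.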
If your assumptions turn out to be wrong, you're going to have to throw out all that work, rendering it ultimately a waste of time. It's as if you booked an entire vacation assuming your family could join you, only to finally ask them and they say they can't come. Then you have to go back and change everything, but all this work could have been avoided by a simple communication up front.

Back in startup land, there is another mental model to help you test your assumptions, called minimum viable product, or MVP. The MVP is the product you are developing with just enough features, the minimum amount, to be feasibly, or viably, tested by real people. The MVP keeps you from working by yourself for too long. LinkedIn cofounder Reid Hoffman put it like this: "If you're not embarrassed by the first version of your product, you've launched too late."

As with many useful mental models, you will frequently be reminded of the MVP now that you are familiar with it. An oft-quoted military adage says: "No battle plan survives contact with the enemy." And boxer Mike Tyson (prior to his 1996 bout against Evander Holyfield): "Everybody has a plan until they get punched in the mouth." No matter the context, what they're all saying is that your first plan is probably wrong. While it is the best starting point you have right now, you must revise it often based on the real-world feedback you receive. And we recommend doing as little work as possible before getting that real-world feedback.

As with de-risking, you can extend the MVP model to fit many other contexts: minimum viable organization, minimum viable communication, minimum viable strategy, minimum viable experiment. Since we have so many mental models to get to, we're trying to do minimum viable explanations!

[Figure: Minimum Viable Product (Vision, MVP, 2.0)]

The MVP forces you to evaluate your assumptions quickly. One way you can be wrong with your assumptions is by coming up with too many or too complicated assumptions up front when there are clearly simpler sets you can start with. Ockham's razor helps here. It advises that the simplest explanation is most likely to be true. When you encounter competing explanations that plausibly explain a set of data equally well, you probably want to choose the simplest one to investigate first.

This model is a razor because it "shaves off" unnecessary assumptions. It's named after fourteenth-century English philosopher William of Ockham, though the underlying concept has much older roots. The Greco-Roman astronomer Ptolemy (circa A.D. 90–168) stated, "We consider it a good principle to explain the phenomena by the simplest hypotheses possible." More recently, the composer Roger Sessions, paraphrasing Albert Einstein, put it like this: "Everything should be made as simple as it can be, but not simpler!" In medicine, it's known by this saying: "When you hear hoofbeats, think of horses, not zebras."

A practical tactic is to look at your explanation of a situation, break it down into its constituent assumptions, and for each one, ask yourself: Does this assumption really need to be here? What evidence do I have that it should remain? Is it a false dependency?

For example, Ockham's razor would be helpful in the search for a long-term romantic partner. We've seen firsthand that many people have a long list of extremely specific criteria for their potential mates, enabled by online dating sites and apps.
"I will only date a Brazilian man with blue eyes who loves hot yoga and raspberry ice cream, and whose favorite Avengers character is Thor."

However, this approach leads to an unnecessarily small dating pool. If instead people reflected on whom they've dated in the past in terms of what underlying characteristics drove their past relationships to fail, a much simpler set of dating criteria would probably emerge. It is usually okay for partners to have more varied cultural backgrounds and looks, and even to prefer different Avengers characters, but they probably do need to make each other think and laugh and find each other attractive. Therefore, a person shouldn't narrow their dating pool unnecessarily with overly specific criteria. If it turns out that dating someone who doesn't share their taste in superheroes really does doom the relationship, then they can always add that specific filter back in.

Ockham's razor is not a "law" in that it is always true; it just offers guidance. Sometimes the true explanation can indeed be quite complicated. However, there is no reason to jump immediately to the complex explanation when you have simpler alternatives to explore first.

If you don't simplify your assumptions, you can fall into a couple of traps, described in our next mental models. First, most people are, unfortunately, hardwired to latch onto unnecessary assumptions, a predilection called the conjunction fallacy, studied by Amos Tversky and Daniel Kahneman, who provided this example in the October 1983 Psychological Review:

Linda is 31 years old, single, outspoken, and very bright. She majored in philosophy. As a student, she was deeply concerned with issues of discrimination and social justice, and also participated in anti-nuclear demonstrations.

Which is more probable?
1. Linda is a bank teller.
2. Linda is a bank teller and is active in the feminist movement.

In their study, most people answered that number 2 is more probable, but that's impossible unless all bank tellers are also active in the feminist movement. The fallacy arises because the probability of two events in conjunction is always less than or equal to the probability of either one of the events occurring alone (in probability notation, P(A and B) ≤ P(A) and P(A and B) ≤ P(B)), a concept illustrated in the Venn diagram on the next page.

You not only have a natural tendency to think something specific is more probable than something general, but you also have a similarly fallacious tendency to explain data using too many assumptions. The mental model for this second fallacy is overfitting, a concept from statistics. Adding in all those overly specific dating requirements is overfitting your dating history. Similarly, believing you have cancer when you have a cold is overfitting your symptoms.

[Figure: Conjunction Fallacy]

Overfitting occurs when you use an overly complicated explanation when a simpler one will do. It's what happens when you don't heed Ockham's razor, when you get sucked into the conjunction fallacy or make a similar unforced error. It can occur in any situation where an explanation introduces unnecessary assumptions. As a visual example, the data depicted on the next page can be easily explained by a straight line, but you could also overfit the data by creating a curved one that moves through every single point, as the wavy line does.

One approach to combatting both traps is to ask yourself: How much does my data really support my conclusion versus other conclusions? Do my symptoms really point only to cancer, or could they also point to a variety of other ailments, such as the common cold?
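To see the straight-line-versus-wavy-line contrast concretely, here is a minimal sketch using numpy with made-up data points: fitting the same ten points with a two-parameter line and with a ten-parameter polynomial shows how the more flexible model can match every observation while telling a different story in between them.

```python
# A minimal sketch of overfitting, assuming numpy is available.
# The data points are made up: a roughly linear trend plus a little noise.
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(0, 9, 10)
y = 2.0 * x + 1.0 + rng.normal(scale=1.5, size=x.size)

# Simple explanation: a straight line (two parameters).
line = np.polynomial.Polynomial.fit(x, y, deg=1)

# Overly complicated explanation: a degree-9 polynomial (ten parameters for
# ten points), flexible enough to pass through every observation.
wavy = np.polynomial.Polynomial.fit(x, y, deg=9)

# Both fit the observed points, but ask each model about a value in between
# the observations and compare with the trend the data actually came from.
x_new = 8.5
print("underlying trend at 8.5:  ", 2.0 * x_new + 1.0)
print("straight-line fit at 8.5: ", round(float(line(x_new)), 2))
print("degree-9 fit at 8.5:      ", round(float(wavy(x_new)), 2))
```

The extra wiggles buy a perfect fit to what was already seen at the cost of less reliable answers about anything new, which is the practical danger the text is describing.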
Do I really need the curvy line to explain the data, or would a simple straight line explain just as much?

A pithy mnemonic of this advice and all the advice in this section is KISS: Keep It Simple, Stupid! When crafting a solution to a problem, whether making a decision or explaining data, you want to start with the simplest set of assumptions you can think of and de-risk them as simply as possible.

[Figure: Overfitting]

IN THE EYE OF THE BEHOLDER

You go through life seeing everything from your perspective, which varies widely depending on your particular life experiences and current situation. In physics your perspective is called your frame of reference, a concept central to Einstein's theory of relativity. Here's an example from everyday life: If you are in a moving train, your reference frame is inside the train, which appears at rest to you, with objects inside the train not moving relative to one another, or to yourself. However, to someone outside the train looking in, you and all the objects in the train are moving at great speed, as seen from their different frame of reference, which is stationary to them. In fact, everything but the speed of light—even time—appears different in different frames of reference.

If you're trying to be as objective as possible when making a decision or solving a problem, you always want to account for your frame of reference. You will of course be influenced by your perspective, but you don't want to be unknowingly influenced. And if you think you may not have the full understanding of a situation, then you must actively try to get it by looking from a variety of different frames of reference.

A frame-of-reference mental trap (or useful trick, depending on your perspective) is framing. Framing refers to the way you present a situation or explanation. When you present an important issue to your coworker or family member, you try to frame it in a way that might help them best understand your perspective, setting the stage for a beneficial conversation. For example, if you want your organization to embark on an innovative yet expensive project, you might frame it to your colleagues as a potential opportunity to outshine the competition rather than as an endeavor that would require excessive resources. With the latter framing, it might be rejected out of hand.

You also need to be aware that family members and coworkers are constantly framing issues for you as well, and your perception of their ideas can vary widely based on how they are framed. When someone presents a new idea or decision to you, take a step back and consider other ways in which it could be framed. If a colleague tells you they are leaving for another job to seek a better opportunity, that may indeed be true, but it also may be true that they want to leave the organization after feeling overlooked. Multiple framings can be valid yet convey vastly different perspectives.

If you visit news sites on the internet, then you probably know all about framing, or at least you should. For example, headlines have a framing effect, affecting the meaning people take away from stories. On August 31, 2015, three police officers responded to a 911 call about a burglary in progress. Unfortunately, the call did not specify an exact address, and the officers responded to the wrong house. Upon finding the back door unlocked, they entered, and encountered a dog. Gunfire ensued, and the dog, homeowner, and one of the officers were shot, all by officer gunfire. The homeowner and officer survived.
Two headlines framed the incident in dramatically different ways.

[Figure: Framing Effect]

In a study by Ullrich Ecker and others, "The Effects of Subtle Misinformation in News Headlines," presented in the December 2014 issue of the Journal of Experimental Psychology: Applied, students read an article about a small increase in burglary rates over the last year (0.2 percent) that was anomalous in a much larger decline over the past decade (10 percent). The same article came with one of two different headlines: "Number of Burglaries Going Up" or "Downward Trend in Burglary Rate." The headline had a significant effect on which facts in the article were remembered:

The pattern was clear-cut: A misleading headline impaired memory for the article.... A misleading headline can thus do damage despite genuine attempts to accurately comprehend an article.... The practical implications of this research are clear: News consumers must be [made] aware that editors can strategically use headlines to effectively sway public opinion and influence individuals' behavior.

A related trap/trick is nudging. Aldert Vrij presents a compelling example in his book Detecting Lies and Deceit:

Participants saw a film of a traffic accident and then answered the question, "About how fast were the cars going when they contacted each other?" Other participants received the same question, except that the verb contacted was replaced by either hit, bumped, collided, or smashed. Even though the participants saw the same film, the wording of the question affected their answers. The speed estimates (in miles per hour) were 31, 34, 38, 39, and 41, respectively.

You can be nudged in a direction by a subtle word choice or other environmental cues. Restaurants will nudge you by highlighting certain dishes on menu inserts, by having servers verbally describe specials, or by just putting boxes around certain items. Retail stores and websites nudge you to purchase certain products by placing them where they are easier to see.

[Figure: Nudging]

Another concept you will find useful when making purchasing decisions is anchoring, which describes your tendency to rely too heavily on first impressions when making decisions. You get anchored to the first piece of framing information you encounter. This tendency is commonly exploited by businesses when making offers.

Dan Ariely, behavioral economist and author of Predictably Irrational, brings us an illustrative example of anchoring using subscription offers for The Economist. Readers were offered three ways to subscribe: web only ($59), print only ($125), and print and web ($125). Yes, you read that right: the "print only" version cost the same as the "print and web" version. Who would choose that? Predictably, no one. Here is the result when one hundred MIT students reported their preference:

- Web only ($59): 16 percent
- Print only ($125): 0 percent
- Print and web ($125): 84 percent

So why include that option at all? Here's why: when it was removed from the question, this result was revealed:

- Web only ($59): 68 percent
- Print and web ($125): 32 percent

Just having the print-only option—even though no one chooses it—anchors readers to a much higher value for the print-and-web version. It feels like you are getting the web version for free, causing many more people to choose it and creating 43 percent more revenue for the magazine (on average, about $114 per reader with the decoy option versus about $80 without it) by just adding a version that no one chooses!

Shoppers at retailers Michaels or Kohl's know that these stores often advertise sales, where you can save 40 percent or more on selected items or departments.
However, are those reduced prices a real bargain? Usually not. They're reduced from the so-called manufacturer's suggested retail price (MSRP), which is usually very high. Being aware of the MSRP anchors you so that you feel you are getting a good deal at 40 percent off. Often, that reduction just brings the price to a reasonable level.

Anchoring isn't just for numbers. Donald Trump uses this mental model, anchoring others to his extreme positions, so that what seem like compromises are actually agreements in his favor. He wrote about this in his 1987 book Trump: The Art of the Deal:

My style of deal-making is quite simple and straightforward. I aim very high, and then I just keep pushing and pushing to get what I'm after. Sometimes I settle for less than I sought, but in most cases I still end up with what I want.

More broadly, these mental models are all instances of a more general model, availability bias, which occurs when a bias, or distortion, creeps into your objective view of reality thanks to information recently made available to you. In the U.S., illegal immigration has been a hot topic with conservative pundits and politicians in recent years, leading many people to believe it is at an all-time high. Yet the data suggests that illegal immigration via the southern border is actually at a five-decade low, indicating that the prevalence of the topic is creating an availability bias for many.

[Figure: U.S. Southern Border Apprehensions at a Five-Decade Low]

Availability bias can easily emerge from high media coverage of a topic. Rightly or wrongly, the media infamously has a mantra of "If it bleeds, it leads." The resulting heavy coverage of violent crime causes people to think it occurs more often than it does. The polling company Gallup annually asks Americans about their perception of changing violent crime rates and found in 2014 that "federal crime statistics have not been highly relevant to the public's crime perceptions in recent years."

[Figure: U.S. Crime Rate: Actual vs. Perceived]

In a famous 1978 study, "Judged Frequency of Lethal Events," from the Journal of Experimental Psychology, Sarah Lichtenstein and others asked people about forty-one leading causes of death. They found that people often overstate the risk of sensationally over-reported causes of death, like tornados, by fifty times and understate the risk of common causes of death, like stroke, by one hundred times.

[Figure: Mortality Rates by Causes: Actual vs. Perceived]

Availability bias stems from overreliance on your recent experiences within your frame of reference, at the expense of the big picture. Let's say you are a manager and you need to write an annual review for your direct report. You are supposed to think critically and objectively about her performance over the entire year. However, it's easy to be swayed by those really bad or really good contributions over just the past few weeks. Or you might just consider the interactions you have had with her personally, as opposed to getting a more holistic view based on interactions with other colleagues with different frames of reference.

With the rise of personalized recommendations and news feeds on the internet, availability bias has become a more and more pernicious problem. Online this model is called the filter bubble, a term coined by author Eli Pariser, who wrote a book on it with the same name.
Because of availability bias, you're likely to click on things you're already familiar with, and so Google, Facebook, and many other companies tend to show you more of what they think you already know and like. Since there are only so many items they can show you—only so many links on page one of the search results—they therefore filter out links they think you are unlikely to click on, such as opposing viewpoints, effectively placing you in a bubble.

In the run-up to the 2012 U.S. presidential election and again in 2018, the search engine DuckDuckGo (founded by Gabriel) conducted studies where individuals searched on Google for the same political topics, such as gun control and climate change. It discovered that people got significantly different results, personalized to them, when searching for the same topics at the same time. This happened even when they were signed out and in so-called incognito mode. Many people don't realize that they are getting tailored results based on what a mathematical algorithm thinks would increase their clicks, as opposed to a more objective set of ranked results.

[Figure: The Filter Bubble]

When you put many similar filter bubbles together, you get echo chambers, where the same ideas seem to bounce around the same groups of people, echoing around the collective chambers of these connected filter bubbles. Echo chambers result in increased partisanship, as people have less and less exposure to alternative viewpoints. And because of availability bias, they consistently overestimate the percentage of people who hold the same opinions.

It's easy to focus solely on what is put in front of you. It's much harder to seek out an objective frame of reference, but that is what you need to do in order to be wrong less.

WALK A MILE IN THEIR SHOES

Most of the significant problems in the world involve people, so making headway on these problems often requires a deep understanding of the people involved. For instance, enough food is produced to feed everyone on the planet, yet starvation still exists because this food cannot be distributed effectively. Issues involving people, such as in corrupt governments, are primary reasons behind these distribution failures.

However, it is very easy to be wrong about other people's motivations. You may assume they share your perspective or context, think like you do, or have circumstances similar to yours. With such assumptions, you may conclude that they should also behave like you would or hold your beliefs. Unfortunately, often these assumptions are wrong. Consequently, to be wrong less when thinking about people, you must find ways to increase your empathy, opening up a deeper understanding of what other people are really thinking. This section explores various mental models to help you do just that.

In any conflict between two people, there are two sides of the story. Then there is the third story, the story that a third, impartial observer would recount. Forcing yourself to think as an impartial observer can help you in any conflict situation, including difficult business negotiations and personal disagreements.

The third story helps you see the situation for what it really is. But how do you open yourself up to it? Imagine a complete recording of the situation, and then try to think about what an outside audience would say was happening if they watched or listened to the recording. What story would they tell? How much would they agree with your story?
Authors Douglas Stone, Bruce Patton, and Sheila Heen explore this model in detail in their book Difficult Conversations: "The key is learning to describe the gap—or difference—between your story and the other person's story. Whatever else you may think and feel, you can at least agree that you and the other person see things differently."

If you can coherently articulate other points of view, even those directly in conflict with your own, then you will be less likely to make biased or incorrect judgments. You will dramatically increase your empathy—your understanding of other people's frames of reference—whether or not you agree. Additionally, if you acknowledge the perspective of the third story within difficult conversations, it can have a disarming effect, causing others involved to act less defensively. That's because you are signaling your willingness and ability to consider an objective point of view. Doing so encourages others involved to do the same.

Another tactical model that can help you empathize is the most respectful interpretation, or MRI. In any situation, you can explain a person's behavior in many ways. MRI asks you to interpret the other parties' actions in the most respectful way possible. It's giving people the benefit of the doubt.

For example, suppose you sent an email to your kid's school asking for information on the science curriculum for the upcoming year, but haven't heard back in a few days. Your first interpretation may be that they're ignoring your request. A more respectful interpretation would be that they are actively working to get back to you but haven't completed that work yet. Maybe they are just waiting on some crucial information before replying, like a personnel decision that hasn't been finalized yet, and that is holding up the response.

The point is you don't know the real answer yet, but if you approach the situation with the most respectful interpretation, then you will generally build trust with those involved rather than destroy it. With MRI, your follow-up email or call is more likely to have an inquisitive tone rather than an accusatory one. Building trust pays dividends over time, especially in difficult situations where that trust can serve as a bridge toward an amicable resolution. The next time you feel inclined to make an accusation, take a step back and think about whether that is really a fair assumption to make.

Using MRI may seem naïve, but like the third story, this model isn't asking you to give up your point of view. Instead, MRI asks you to approach a situation from a perspective of respect. You remain open to other interpretations and withhold judgment until necessary.

Another way of giving people the benefit of the doubt for their behavior is called Hanlon's razor: never attribute to malice that which is adequately explained by carelessness. Like Ockham's razor, Hanlon's razor seeks out the simplest explanation. And when people do something harmful, the simplest explanation is usually that they took the path of least resistance. That is, they carelessly created the negative outcome; they did not cause the outcome out of malice.

Hanlon's razor is especially useful for navigating connections in the virtual world. For example, we have all misread situations online. Since the signals of body language and voice intonation are missing, harmless lines of text can be read in a negative way. Hanlon's razor says the person probably just didn't take enough time and care in crafting their message.
So the next time you send a message and all you get back is OK, consider that the writer is in a rush or otherwise occupied (the more likely interpretation) instead of coming from a place of dismissiveness.

The third story, most respectful interpretation, and Hanlon's razor are all attempts to overcome what psychologists call the fundamental attribution error, where you frequently make errors by attributing others' behaviors to their internal, or fundamental, motivations rather than external factors. You are guilty of the fundamental attribution error whenever you think someone was mean because she is mean rather than thinking she was just having a bad day.

You of course tend to view your own behavior in the opposite way, which is called self-serving bias. When you are the actor, you often have self-serving reasons for your behavior, but when you are the observer, you tend to blame the other's intrinsic nature. (That's why this model is also sometimes called actor-observer bias.) For example, if someone runs a red light, you often assume that person is inherently reckless; you do not consider that she might be rushing to the hospital for an emergency. On the other hand, you will immediately rationalize your own actions when you drive like a maniac ("I'm in a hurry").

Another tactical model to help you have greater empathy is the veil of ignorance, put forth by philosopher John Rawls. It holds that when thinking about how society should be organized, we should do so by imagining ourselves ignorant of our particular place in the world, as if there were a veil preventing us from knowing who we are. Rawls refers to this as the "original position." For example, you should not just consider your current position as a free person when contemplating a world where slavery is allowed. You must consider the possibility that you might have been born a slave, and how that would feel. Or, when considering policies regarding refugees, you must consider the possibility that you could have been one of those seeking refuge. The veil of ignorance encourages you to empathize with people across a variety of circumstances, so that you can make better moral judgments.

Suppose that, like many companies in recent years, you are considering ending a policy that has allowed your employees to work remotely because you believe that your teams perform better face-to-face. As a manager, it may be easy to imagine changing the policy from your perspective, especially if you personally do not highly value remote working. The veil of ignorance, though, pushes you to imagine the change from the original position, where you could be any employee. What if you were an employee caring for an elderly family member? What if you were a single parent? You may find that the new policy is warranted even after considering its repercussions holistically, but putting on the veil of ignorance helps you appreciate the challenges this might pose for your staff and might even help you come up with creative alternatives.

Speaking of privilege, we (the authors) often say we are lucky to have won the birth lottery. Not only were we not born into slavery, but we were also not born into almost any disadvantaged group. At birth, we were no more deserving of an easier run at life than a child who was born into poverty, or with a disability, or any other type of disadvantage. Yet we are the ones who won this lottery since we do not have these disadvantages. It can be challenging to acknowledge that a good portion of your success stems from luck.
Many people instead choose to believe that the world is completely fair, orderly, and predictable. This view is called the just world hypothesis, where people always get what they deserve, good or bad, because of their actions alone, with no accounting for luck or randomness. This view is summed up as you reap what you sow. Ironically, belief in a just world can get in the way of actual justice by leading people to victim-blame: The sexual assault victim "should have worn different clothes" or the welfare recipient "is just lazy." Victims of circumstance are actually blamed for their circumstances, with no accounting for factors of randomness like the birth lottery. The problem with the just world hypothesis and victim-blaming is that they make broad judgments about why things are happening to people that are often inaccurate at the individual level.

You should also keep in mind that the model of learned helplessness can make it hard for some people to strive for improvement without some assistance. Learned helplessness describes the tendency to stop trying to escape difficult situations because we have gotten used to difficult conditions over time. Someone learns that they are helpless to control their circumstances, so they give up trying to change them. In a series of experiments summarized in "Learned Helplessness" in the February 1972 Annual Review of Medicine, psychologist Martin Seligman placed dogs in a box where they were repeatedly shocked at random intervals. Then he placed them in a similar box where they could easily escape the shocks. However, they did not actually try to escape; they simply lay down and waited for the shocks to stop. On the other hand, dogs who were not shocked would quickly jump out of the box.

Learned helplessness can be overcome when animals or people see that their actions can make a difference, that they aren't actually helpless. A shining light in the reduction of chronic homelessness has been a strategy that directly combats learned helplessness, helping people take back control of their lives after years on the streets. The strategy, known as Housing First, involves giving apartments to the chronic homeless and, at the same time, assigning a social worker to help each person reintegrate into society, including finding work and living day-to-day in their apartment. Utah has been the leader in this strategy, reducing its chronic homeless population by as much as 72 percent. And the strategy actually saves on average eight thousand dollars per person in annual expenses, as the chronic homeless tend to use a lot of public resources, such as hospitals, jails, and shelters.

Learned helplessness is not found only in dire situations. People can also exhibit learned helplessness in everyday circumstances, believing they are incapable of doing or learning certain things, such as public speaking or using new technologies. In each of these cases, though, they are probably capable of improving their area of weakness if guided by the right mentor, a topic we cover in more detail later in Chapter 8. You don't want to make a fundamental attribution error by assuming that your colleague is incapable of doing something when they really just need the proper guidance.

All the mental models in this section—from the third story to learned helplessness—can help you increase your empathy. When applying them, you are effectively trying to understand people's actual circumstances and motivations better, trying as best you can to walk a mile in their shoes.
PROGRESS, ONE FUNERAL AT A TIME

Just as you can be anchored to a price, you can also be anchored to an entire way of thinking about something. In other words, it can be very difficult to convince you of a new idea when a contradictory idea is already entrenched in your thinking. Like many kids in the U.S., our sons are learning "Singapore math," an approach to arithmetic that includes introducing pictorial steps in order to develop a deeper understanding of basic concepts. Even to mathematically inclined parents, this alternative way of doing arithmetic can feel foreign after so many years of thinking about it another way.

Singapore Math: Addition
Singapore math teaches addition using "number bonds" that break apart numbers so that students can add in groups of ten.

In science, this phenomenon is documented in Thomas Kuhn's book The Structure of Scientific Revolutions, which popularized the paradigm shift model, describing how accepted scientific theories change over time. Instead of a gradual, evolving progression, Kuhn describes a bumpy, messy process in which initial problems with a scientific theory are either ignored or rationalized away. Eventually so many issues pile up that the scientific discipline in question is thrown into a crisis mode, and the paradigm shifts to a new explanation, entering a new stable era. Essentially, the old guard holds on to the old theories way too long, even in the face of an obvious-in-hindsight alternative.

Nobel Prize–winning physicist Max Planck explained it like this in his Scientific Autobiography and Other Papers: "A new scientific truth does not triumph by convincing its opponents and making them see the light, but rather because its opponents eventually die, and a new generation grows up that is familiar with it," or, more succinctly, "Science progresses one funeral at a time."

In 1912, Alfred Wegener put forth the theory of continental drift that we know to be true today, in which the continents drift across the oceans. Wegener noticed that the different continents fit together nicely like a jigsaw puzzle. Upon further study, he found that fossils seemed strikingly similar across continents, as if the continents indeed were put together this way sometime in the past.

Distribution of Fossils Across the Southern Continents of Pangea

We now know this to be the case—all of our continents were previously grouped together into one supercontinent now called Pangea. However, his theory was met with harsh criticism because Wegener was an outsider—a meteorologist by training instead of a geologist—and because he couldn't offer an explanation of the mechanism causing continental drift, just the idea that it likely had taken place. It basically sat uninvestigated by mainstream geologists for forty years, until the new science of paleomagnetism started creating additional data in support of it, reviving the theory.

The major theory that held during this time was that there must have been narrow land bridges (called Gondwanian bridges) sometime in the past that allowed the animals to cross between continents, even though there was never any concrete evidence of their existence. Instead of helping to investigate Wegener's theory (which certainly wasn't perfect but had promise), geologists chose to hold on to this incorrect land bridge theory until the evidence for continental drift was so overwhelming that a paradigm shift occurred.

Gondwanian Bridges

The work of Ignaz Semmelweis, a nineteenth-century Hungarian doctor, met a similar fate.
He worked at a teaching hospital where doctors routinely handled cadavers and also delivered babies, without appropriately washing their hands in between. The death rate of mothers who gave birth in this part of the hospital was about 10 percent! In another part of the same hospital, where babies were mostly delivered by midwives who did not routinely handle cadavers, the comparable death rate was 4 percent. Semmelweis obsessed about this difference, painstakingly eliminating all variables until he was left with just one: doctors versus midwives. After studying doctor behavior, he concluded that it must be due to their handling of the cadavers and instituted a practice of washing hands with a solution of chlorinated lime. The death rate immediately dropped to match that in the other part of the hospital.

Despite the clear drop in the death rate, his theories were completely rejected by the medical community at large. In part, doctors were offended by the idea that they were killing their patients. Others were so hung up on the perceived deficiencies of Semmelweis's theoretical explanation that they ignored the empirical evidence that the handwashing was improving mortality. After struggling to get his ideas adopted, Semmelweis went crazy, was admitted to an asylum, and died at the age of forty-seven. It took another twenty years after his death for his ideas about antiseptics to start to take hold, following Louis Pasteur's unquestionable confirmation of germ theory.

Like Wegener, Semmelweis didn't fully understand the scientific mechanism that underpinned his theory and crafted an initial explanation that turned out to be somewhat incorrect. However, they both noticed obvious and important empirical truths that should have been investigated by other scientists but were reflexively rejected by these scientists because the suggested explanations were not in line with the conventional thinking of the time. Today, this is known as a Semmelweis reflex.

Individuals still hang on to old theories in the face of seemingly overwhelming evidence—it happens all the time in science and in life in general. The human tendency to gather and interpret new information in a biased way to confirm preexisting beliefs is called confirmation bias. Unfortunately, it's extremely easy to succumb to confirmation bias. Correspondingly, it is hard to question your own core assumptions.

There is a reason why many startup companies that disrupt industries are founded by industry outsiders. There is a reason why many scientific breakthroughs are discovered by outsiders to the field. There is a reason why "fresh eyes" and "outside the box" are clichés. The reason is because outsiders aren't rooted in existing paradigms. Their reputations aren't at stake if they question the status quo. They are by definition "free thinkers" because they are free to think without these constraints.

Confirmation bias is so hard to overcome that there is a related model called the backfire effect that describes the phenomenon of digging in further on a position when faced with clear evidence that disproves it. In other words, it often backfires when people try to change your mind with facts and figures, having the opposite effect on you than it should; you become more entrenched in the original, incorrect position, not less.
In one 2008 Yale study, pro-choice Democrats were asked to give their opinions of Supreme Court nominee John Roberts before and after hearing an ad claiming he supported "violent fringe groups and a convicted [abortion] clinic bomber." Unsurprisingly, disapproval went from 56 percent to 80 percent. However, disapproval stayed up at 72 percent when they were told the ad was refuted and withdrawn by the abortion rights advocacy group that created it.

You may also succumb to holding on to incorrect beliefs because of disconfirmation bias, where you impose a stronger burden of proof on the ideas you don't want to believe. Psychologist Daniel Gilbert put it like this in an April 16, 2006, article for The New York Times, "I'm O.K., You're Biased":

When our bathroom scale delivers bad news, we hop off and then on again, just to make sure we didn't misread the display or put too much pressure on one foot. When our scale delivers good news, we smile and head for the shower. By uncritically accepting evidence when it pleases us, and insisting on more when it doesn't, we subtly tip the scales in our favor.

The pernicious effects of confirmation bias and related models can be explained by cognitive dissonance, the stress felt by holding two contradictory, dissonant, beliefs at once. Scientists have actually linked cognitive dissonance to a physical area in the brain that plays a role in helping you avoid aversive outcomes. Instead of dealing with the underlying cause of this stress—the fact that we might actually be wrong—we take the easy way out and rationalize the conflicting information away. It's a survival instinct!

Once you start looking for confirmation bias and cognitive dissonance, we guarantee you will spot them all over, including in your own thoughts. A real trick to being wrong less is to fight your instincts to dismiss new information and instead to embrace new ways of thinking and new paradigms. The meme on the next page perfectly illustrates how cognitive dissonance can make things we take for granted seem absurd.

There are a couple of tactical mental models that can help you on an everyday basis to overcome your ingrained confirmation bias and tribalism. First, consider thinking gray, a concept we learned from Steven Sample's book The Contrarian's Guide to Leadership. You may think about issues in terms of black and white, but the truth is somewhere in between, a shade of gray. As Sample puts it:

Most people are binary and instant in their judgments; that is, they immediately categorize things as good or bad, true or false, black or white, friend or foe. A truly effective leader, however, needs to be able to see the shades of gray inherent in a situation in order to make wise decisions as to how to proceed. The essence of thinking gray is this: don't form an opinion about an important matter until you've heard all the relevant facts and arguments, or until circumstances force you to form an opinion without recourse to all the facts (which happens occasionally, but much less frequently than one might imagine).

F. Scott Fitzgerald once described something similar to thinking gray when he observed that the test of a first-rate mind is the ability to hold two opposing thoughts at the same time while still retaining the ability to function. This model is powerful because it forces you to be patient. By delaying decision making, you avoid confirmation bias since you haven't yet made a decision to confirm!
It can be difficult to think gray because all the nuance and different points of view can cause cognitive dissonance. However, it is worth fighting through that dissonance to get closer to the objective truth.

A second mental model that can help you with confirmation bias is the Devil's advocate position. This was once an official position in the Catholic Church used during the process of canonizing people as saints. Once someone is canonized, the decision is eternal, so it was critical to get it right. Hence this position was created for someone to advocate from the Devil's point of view against the deceased person's case for sainthood. More broadly, playing the Devil's advocate means taking up an opposing side of an argument, even if it is one you don't agree with.

One approach is to force yourself literally to write down different cases for a given decision or appoint different members in a group to do so. Another, more effective approach is to proactively include people in a decision-making process who are known to hold opposing viewpoints. Doing so will help everyone involved more easily see the strength in other perspectives and force you to craft a more compelling argument in favor of what you believe. As Charlie Munger says, "I never allow myself to have an opinion on anything that I don't know the other side's argument better than they do."

DON'T TRUST YOUR GUT

You make most of your everyday decisions using your intuition, with your subconscious automatically intuiting what to do from instinct or encoded knowledge. It's your common or sixth sense, your gut feeling, drawing on your past experiences and natural programming to react to circumstances. In his book Thinking, Fast and Slow, economics Nobel laureate Daniel Kahneman makes a distinction between this intuitive fast thinking and the more deliberate, logical thinking you do when you slow down and question your intuitive assumptions. He argues that when you do something frequently, it gradually gets encoded in your brain until at some point your intuition, via your fast thinking, takes over most of the time and you can do the task mindlessly: driving on the highway, doing simple arithmetic, saying your name. However, when you are in uncertain situations where you do not have encoded knowledge, you must use your slower thinking: driving on new roads, doing complex math, digging into your memory to recall someone you used to know. These are not mindless tasks.

You can run into trouble when you blindly trust your gut in situations where it is unclear whether you should be thinking fast or slow. Following your intuition alone at times like these can cause you to fall prey to anchoring, availability bias, framing, and other pitfalls. Getting physically lost often starts with you thinking you intuitively know where to go and ends with the realization that your intuition failed you.

Similarly, in most situations where the mental models in this book will be useful, you will want to slow down and deliberately look for how to best apply them. You may use intuition as a guide to where to investigate, but you won't rely on it alone to make decisions. You will need to really take out the map and study it before making the next turn. You probably do not have the right experience intuitively to handle everything that life throws at you, and so you should be especially wary of your intuition in any new or unfamiliar situation.
For example, if you're an experienced hiker in bear country, you know that you should never stare down a bear, as it will take this as a sign of aggression and may charge you in response. Suppose now you're hiking in mountain lion country and you come across a lion—what should you do? Your intuition would tell you not to stare it down, but in fact, you should do exactly that. To mountain lions, direct eye contact signals that you aren't easy prey, and so they will hesitate to attack.

At the same time, intuition can help guide you to the right answer much more quickly. For example, the more you work with mental models, the more your intuition about which one to use in a given situation will be right, and the faster you will get to better decisions working with these models. In other words, as we explained at the beginning of this chapter, using mental models over time is a slow and steady way to become more antifragile, making you better able to deal with new situations over time. Of course, the better the information you put into your brain, the better your intuition will be.

One way to accelerate building up useful intuition like this is to try consistently to argue from first principles. Another is to take every opportunity you can to figure out what is actually causing things to happen. The remaining mental models in this chapter can help you do just that.

At 11:39 A.M. EST on January 28, 1986, the space shuttle Challenger disintegrated over the Atlantic Ocean, just seventy-three seconds into its flight, killing the seven crew members on board. It was a sad day we both remember vividly. A U.S. presidential commission was appointed to investigate the incident, ultimately producing the Rogers Commission Report, named after its chairman, William Rogers.

When something happens, the proximate cause is the thing that immediately caused it to happen. In the case of the Challenger, the Rogers Commission Report showed that the proximate cause was the external hydrogen tank igniting. The root cause, by contrast, is what you might call the real reason something happened. People's explanations for their behavior are no different: anyone can give you a reason for their behavior, but that might not be the real reason they did something. For example, consistent underperformers at work usually have a plausible excuse for each incident, but the real reason is something more fundamental, such as lack of skills, motivation, or effort.

The Rogers Commission, in its June 6, 1986, report to the president, concluded that the root cause of the Challenger disaster was organizational failure:

Failures in communication... resulted in a decision to launch 51-L based on incomplete and sometimes misleading information, a conflict between engineering data and management judgments, and a NASA management structure that permitted internal flight safety problems to bypass key Shuttle managers.

As part of its work, the commission conducted a postmortem. In medicine, a postmortem is an examination of a dead body to determine the root cause of death. As a metaphor, postmortem refers to any examination of a prior situation to understand what happened and how it could go better next time. At DuckDuckGo, it is mandatory to conduct a postmortem after every project so that the organization can collectively learn and become stronger (antifragile). One technique commonly used in postmortems is called 5 Whys, where you keep asking the question "Why did that happen?" until you reach the root causes.
1. Why did the Challenger's hydrogen tank ignite? Hot gases were leaking from the solid rocket motor.
2. Why was hot gas leaking? A seal in the motor broke.
3. Why did the seal break? The O-ring that was supposed to protect the seal failed.
4. Why did the O-ring fail? It was used at a temperature outside its intended range.
5. Why was the O-ring used outside its temperature range? Because on launch day, the temperature was below freezing, at 29 degrees Fahrenheit. (Previously, the coldest launch had been at 53 degrees.)
6. Why did the launch go forward when it was so cold? Safety concerns were ignored at the launch meeting.
7. Why were safety concerns ignored? There was a lack of proper checks and balances at NASA.

That was the root cause, the real reason the Challenger disaster occurred. As you can see, you can ask as many questions as you need in order to get to the root cause—five is just an arbitrary number.

Nobel Prize–winning physicist Richard Feynman was on the Rogers Commission, agreeing to join upon specific request even though he was then dying of cancer. He uncovered the organizational failure within NASA and threatened to resign from the commission unless its report included an appendix consisting of his personal thoughts around root cause, which reads in part:

It appears that there are enormous differences of opinion as to the probability of a failure with loss of vehicle and of human life. The estimates range from roughly 1 in 100 to 1 in 100,000. The higher figures come from the working engineers, and the very low figures from management.... It would appear that, for whatever purpose, be it for internal or external consumption, the management of NASA exaggerates the reliability of its product, to the point of fantasy.... For a successful technology, reality must take precedence over public relations, for nature cannot be fooled.

Sometimes you may want something to be true so badly that you fool yourself into thinking it is likely to be true. This feeling is known as optimistic probability bias, because you are too optimistic about the probability of success. NASA managers were way too optimistic about the probability of success, whereas the engineers who were closer to the analysis were much more on target.

Root cause analysis, whether you use 5 Whys or some other framework, helps you cut through optimistic probability bias, forcing you to slow down your thinking, push through your intuition, and deliberately uncover the truth. The reason that root causes are so important is that, by addressing them, you can prevent the same mistakes from happening in the future. An apt analogy is that by investigating root causes, you are not just treating the symptoms but treating the underlying disease.

We started this chapter explaining that to be wrong less, you need to both work at getting better over time (antifragile) and make fewer avoidable mistakes in your thinking (unforced errors). Unfortunately, there are a lot of mental traps that you actively need to try to avoid, such as relying too much on recent information (availability bias), being too wed to your existing position (confirmation bias), and overstating the likelihood of your desired outcome (optimistic probability bias). As Feynman warned Caltech graduates in 1974: "You must not fool yourself—and you are the easiest person to fool."

KEY TAKEAWAYS

To avoid mental traps, you must think more objectively. Try arguing from first principles, getting to root causes, and seeking out the third story.
Realize that your intuitive interpretations of the world can often be wrong due to availability bias, fundamental attribution error, optimistic probability bias, and other related mental models that explain common errors in thinking.

Use Ockham's razor and Hanlon's razor to begin investigating the simplest objective explanations. Then test your theories by de-risking your assumptions, avoiding premature optimization.

Attempt to think gray in an effort to consistently avoid confirmation bias. Actively seek out other perspectives by including the Devil's advocate position and bypassing the filter bubble.

Consider the adage "You are what you eat." You need to take in a variety of foods to be a healthy person. Likewise, taking in a variety of perspectives will help you become a super thinker.

2
Anything That Can Go Wrong, Will

ALL YOUR ACTIONS HAVE CONSEQUENCES, but sometimes those consequences are unexpected. On the surface, these unintended consequences seem unpredictable. However, if you dig deeper, you will find that unintended consequences often follow predictable patterns and can therefore be avoided in many situations. You just need to know which patterns to look out for—the right mental models.

Here is an example. In 2016, the UK government asked the public to help name a new polar research ship. Individuals could submit names and then vote on them in an online poll. More than seven thousand names were submitted, but one name won easily, with 124,109 votes: RSS Boaty McBoatface. (The ship was eventually named RSS Sir David Attenborough instead.)

Could the government have predicted this result? Well, maybe not that the exact name RSS Boaty McBoatface would triumph. But could they have guessed that someone might turn the contest into a joke, that the joke would be well received by the public, and that the joke answer might become the winner? You bet. People turn open contests like this into jokes all the time.

In 2012, Mountain Dew held a similar campaign to name a new soda, but they quickly closed it down when "Diabeetus" and "Hitler Did Nothing Wrong" appeared near the top of the rankings. Also that year, Walmart teamed up with Sheets Energy Strips and offered to put on a concert by international recording artist Pitbull at the Walmart location that received the most new Facebook likes. After an internet prankster took hold of the contest, Walmart's most remote store, in Kodiak, Alaska, won. Walmart and Pitbull still held the concert there and they even had the prankster who rigged the contest join Pitbull on the trip!

Unintended consequences are not a laughing matter under more serious circumstances. For instance, medical professionals routinely prescribe opioids to help people with chronic pain. Unfortunately, these drugs are also highly addictive. As a result, pain patients may abuse their prescribed medication or even seek out similar, cheaper, and more dangerous drugs like street heroin. According to the National Institutes of Health, in the U.S., nearly half of young people who inject heroin started abusing prescription opioids first. Patients' susceptibility to opioid addiction and abuse has substantially contributed to the deadliest drug crisis in American history. As reported by The New York Times on November 29, 2018, more people died from drug overdoses in 2017 than from HIV/AIDS, car crashes, or gun deaths in the years of their respective peaks. Of course, no doctor prescribing painkillers intends for their patients to die—these deaths are unintended consequences.
Through this chapter, we want to help you avoid unintended consequences like these. You will be much less likely to fall into their traps if you are equipped with the right mental models to help you better predict and deal with these situations.

HARM THY NEIGHBOR, UNINTENTIONALLY

There is a class of unintended consequences that arise when a lot of people choose what they think is best for them individually, but the sum total of the decisions creates a worse outcome for everyone. To illustrate how this works, consider Boston Common, the oldest public park in the United States. Before it was a park, way back in the 1630s, this fifty-acre plot of land in downtown Boston, Massachusetts, was a grazing pasture for cows, with local families using it collectively as common land. In England, this type of land is referred to legally as commons.

Pasture commons present a problem, though: Each additional cow that a farmer gets benefits their family, but if all the farmers keep getting new cows, then the commons can be depleted. All farmers would experience the negative effects of overgrazing on the health of their herds and land.

In an 1833 essay, "Two Lectures on the Checks to Population," economist William Lloyd described a similar, but hypothetical, overgrazing scenario, now called the tragedy of the commons. However, unbeknownst to him, his hypothetical situation had really occurred in Boston Common two hundred years earlier (and many other times before and since). More affluent families did in fact keep buying more cows, leading to overgrazing, until, in 1646, a limit of seventy cows was imposed on Boston Common.

Any shared resource, or commons, is vulnerable to this tragedy. Overfishing, deforestation, and dumping waste have obvious parallels to overgrazing, though this model extends far beyond environmental issues. Each additional spam message benefits the spammer who sends it while simultaneously degrading the entire email system. Collective overuse of antibiotics in medicine and agriculture is leading to dangerous antibiotic resistance. People make self-serving edits to Wikipedia articles, diminishing the overall reliability of the encyclopedia.

In each of these cases, an individual makes what appears to be a rational decision (e.g., prescribing an antibiotic to a patient who might have a bacterial infection). They use the common resource for their own benefit at little or no cost (e.g., each course of treatment has only a small chance of increasing resistance). But as more and more people make the same decision, the common resource is collectively depleted, reducing the ability for everyone to benefit from it in the future (e.g., the antibiotic becomes much less useful).

More broadly, the tragedy of the commons arises from what is called the tyranny of small decisions, where a series of small, individually rational decisions ultimately leads to a system-wide negative consequence, or tyranny. It's death by a thousand cuts. You've probably gone out to dinner with friends expecting that you will equally split the check. At dinner, each person is faced with a decision to order an expensive meal or a cheaper one. When dining alone, people often order the cheaper meal. However, when they know that the cost of dinner is shared by the whole group, people tend to opt for the expensive meal. If everyone does this then everyone ends up paying more!
Ecologist William E. Odum made the connection between the tyranny of small decisions and environmental degradation in his 1982 BioScience article: "Much of the current confusion and distress surrounding environmental issues can be traced to decisions that were never consciously made, but simply resulted from a series of small decisions." It's the individual decision to place a well here, cut down some trees there, build a factory over there—over time these isolated decisions aggregate to create widespread problems in our environment that are increasingly difficult to reverse.

You can also find the tyranny of small decisions in your own life. Think of those small credit card purchases or expenses that seem individually warranted at the time, but collectively add up to significant credit card bills or cash crunches. Professionally, it may be the occasional distractions and small procrastinations that, in aggregate, make your deadlines hard to reach.

The tyranny of small decisions can be avoided when someone who has a view over the whole system can veto or curb particular individual decisions when broad negative impacts can be foreseen. When the decisions are all your own, you could do this for yourself. For example, to stop your out-of-control spending, you could self-impose a budget, checking each potential purchase against the budget to see if it's compatible with your spending plan. You could do the same for your time management, by more strictly regulating your calendar. When decisions are made by more than just you, then a third party is usually needed to fill this role, just as the city of Boston did when it restricted the number of cows on Boston Common. Company expense policies that help prevent overspending are an organizational example.

Another cause of issues like the tragedy of the commons is the free rider problem, where some people get a free ride by using a resource without paying for it. People or companies who cheat on their taxes are free riders to government services they use, such as infrastructure and the legal system. If you've ever worked on a team project where one person didn't do anything substantive, that person was free-riding on the rest of the group. Another familiar example: Has anyone ever leeched off your wi-fi or Netflix account? Or perhaps you've been the free rider?

Free-riding is commonplace with public goods, such as national militaries, broadcast television, even the air we breathe. As you can see from these examples, it is usually difficult to exclude people from using public goods, because they are broadly available (public). Since one person's use does not significantly reduce a public good's availability to others, it might seem as though there is no harm in free-riding. However, if enough people free-ride on a public good, then it can degrade to the point of creating a tragedy of the commons.

Vaccinations provide an illustrative example that combines all these models (tragedy of the commons, free rider problem, tyranny of small decisions, public goods), plus one more: herd immunity. Diseases can spread only when they have an eligible host to infect. However, when the vast majority of people are vaccinated against a disease, there are very few eligible new hosts, since most people (in the herd) are immune from infection due to getting vaccinated. As a result, the overall public is less susceptible to outbreaks of the disease.
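How high does vaccination coverage have to be to deliver this protection? A standard epidemiological rule of thumb (an approximation sketched here for illustration, not a formula given in the text) ties the herd immunity threshold to a disease's basic reproduction number, R0, the average number of people one infected person would go on to infect in a fully susceptible population:

\[
\text{herd immunity threshold} \approx 1 - \frac{1}{R_0}
\]

Measles, with an estimated R0 of roughly 12 to 18, yields a threshold of about 92 to 94 percent, in line with the roughly 95 percent figure cited below, while a less contagious disease with an R0 of 3 would need only about 67 percent coverage. Treat this as a back-of-the-envelope guide; real-world thresholds also depend on how a disease spreads within a particular community.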
In this example, the public good is a disease-free environment due to herd immunity, and the free riders are those who take advantage of this public good by not getting vaccinated. The tyranny of small decisions can arise when enough individuals choose not to get vaccinated, resulting in an outbreak of the disease, creating a tragedy of the commons.

In practice, the percentage of people who need to be vaccinated for a given disease to achieve herd immunity varies by how contagious the disease is. For measles, an extremely contagious disease, the threshold is about 95 percent. That means an outbreak is possible if the measles vaccination rate in a community falls below 95 percent! Before the measles vaccine was introduced in 1963, more than 500,000 people a year contracted measles in the United States, resulting in more than 400 annual deaths. After the vaccine was in popular use, measles deaths dropped to literally zero.

In recent years, some parents have refused to vaccinate their kids for measles and other diseases due to the belief that vaccines are linked to autism, based on since-discredited and known-to-be-fraudulent research. These people who choose not to vaccinate are free-riding on the herd immunity from the people who do choose to vaccinate.

Herd Immunity

Disease       Pre-vaccine average annual deaths     Annual deaths (2004)
Diphtheria    1,822 (1936-1945)                     0
Measles       440 (1953-1962)                       0
Mumps         39 (1963-1968)                        0
Pertussis     4,034 (1934-1943)                     27
Polio         3,272 (1941-1954)                     0
Rubella       17 (1966-1968)                        0
Smallpox      337 (1900-1949)                       0
Tetanus       472 (1947-1949)                       4

Historically, vaccination rates stayed above the respective herd immunity thresholds to prevent outbreaks, so free riders didn't realize the harm they could be inflicting on themselves and others. In recent years, however, vaccination rates have dipped dangerously low in some places. For example, in 2017, more than seventy-five people in Minnesota, most of whom were unvaccinated, contracted measles. We can expect outbreaks like this to continue as long as there exist communities with vaccination rates below the herd immunity threshold. Unfortunately, some people cannot be medically immunized, such as infants, people with severe allergies, and those with suppressed immune systems. Through no fault of their own, they face the potentially deadly consequences of the anti-vaccination movement, a literal tragedy of the commons.

Herd immunity as a concept is useful beyond the medical context. It applies directly in maintaining social, cultural, business, and industry norms. If enough infractions are left unchecked, their incidence can start increasing quickly, creating a new negative norm that can be difficult to unwind. For example, in Italy, a common phrase is used to describe the current cultural norm around paying taxes: "Only fools pay." Though Italy has been actively fighting tax evasion in the past decade, this pervasive cultural norm of tax avoidance took hold over a longer period and is proving hard to reverse.

In situations like these, dropping below a herd immunity threshold can create lasting harm. It can be difficult to put the genie back in the bottle. Imagine a once pristine place that is now littered with garbage and graffiti. Once it has become dirtied, that state can quickly become the new normal, and the longer it remains dirty, the more likely it will remain in the dirty state. Hollowed-out urban centers like Detroit or disaster-ridden areas like parts of New Orleans have seen this scenario play out in the recent past.
People who don't want to live with the effects of the degradation but also don't want to do the hard work to clean it up may simply move out of the area or visit less, further degrading the space due to lack of a tax base to fund proper maintenance. It then takes a much larger effort to revitalize the area than it would have taken to keep it nice in the first place. Not only do the funds need to be found for the revitalization effort, but the expectation that it should be a nice place has to be reset, and then people need to be drawn back to it.

All these unintended consequences we've been talking about have a name from economics: externalities, which are consequences, good or bad, that affect an entity without its consent, imposed from an external source. The infant who cannot be vaccinated, for example, receives a positive externality from those who choose to vaccinate (less chance of getting the disease) and a negative externality from those who do not (more chance of getting the disease). Similarly, air pollution by a factory creates a negative externality for the people living nearby—low air quality. If that same company, though, trained all its workers in first aid, residents would receive a positive externality if some of those workers used that training to save lives outside of work.

Externalities occur wherever there are spillover effects, which happen when an effect of an activity spills over outside the core interactions of the activity. The effects of smoking spill over to surrounding people through secondhand smoke and, more broadly, through increased public healthcare expenditures. Sometimes spillover effects can be more subtle. When you buy a car, you add congestion to the roads you drive on, a cost borne by everyone who drives on the same roads. Or when you keep your neighbors up with loud music, you deprive them of sleep, causing them to be less productive.

Over the next few days, look out for externalities. When you see or hear about someone or some organization taking an action, think about people not directly related to the action who might experience benefit or harm from it. When you see someone litter, be aware of the negative externality borne by everyone else who uses that space. Consider that if enough people litter, the herd immunity threshold could be breached, plunging the s
