Introduction: Acquiring Wisdom In life and business, the person with the fewest blind spots wins. Removing blind spots means we see, interact with, and move closer to understanding reality. We think better. And thinking better is about finding simple processes that help us work through problems from multiple dimensions and perspectives, allowing us to better choose solutions that fit what matters to us. The skill for finding the right solutions for the right problems is one form of wisdom. This book is about the pursuit of that wisdom, the pursuit of uncovering how things work, the pursuit of going to bed smarter than when we woke up. It is a book about getting out of our own way so we can understand how the world really is. Decisions based on improved understanding will be better than ones based on ignorance. While we can’t predict which problems will inevitably crop up in life, we can learn time-tested ideas that help us prepare for whatever the world throws at us. Perhaps more importantly, this book is about avoiding problems. This often comes down to understanding a problem accurately and seeing the secondary and subsequent consequences of any proposed action. The author and explorer of mental models, Peter Bevelin, put it best: “I don’t want to be a great problem solver. I want to avoid problems—prevent them from happening and doing it right from the beginning.” 2 How can we do things right from the beginning? We must understand how the world works and adjust our behavior accordingly. Contrary to what we’re led to believe, thinking better isn’t about being a genius. It is about the processes we use to uncover reality and the choices we make once we do. How this book can help you This is the first of a series of volumes aimed at defining and exploring the Great Mental Models—those that have the broadest utility across our lives. Mental models describe the way the world works. They shape how we think, how we understand, and how we form beliefs. Largely subconscious, mental models operate below the surface. We’re not generally aware of them and yet they’re the reason when we look at a problem we consider some factors relevant and others irrelevant. They are how we infer causality, match patterns, and draw analogies. They are how we think and reason. A mental model is simply a representation of how something works. We cannot keep all of the details of the world in our brains, so we use models to simplify the complex into understandable and organizable chunks. Whether we realize it or not, we then use these models every day to think, decide, and understand our world. While there are millions of mental models, some true and some false, these volumes will focus on the ones with the greatest utility—the all-star team of mental models. Volume One presents the first nine models, general thinking concepts. Although these models are hiding in plain sight, they are useful tools that you were likely never directly taught. Put to proper use, they will improve your understanding of the world we live in and improve your ability to look at a situation through different lenses, each of which reveals a different layer. They can be used in a wide variety of situations and are essential to making rational decisions, even when there is no clear path. Collectively they will allow you to walk around any problem in a three-dimensional way. Our approach to the Great Mental Models rests on the idea that the fundamentals of knowledge are available to everyone. 
There is no discipline that is off limits—the core ideas from all fields of study contain principles that reveal how the universe works, and are therefore essential to navigating it. Our models come from fundamental disciplines that most of us have never studied, but no prior knowledge is required—only a sharp mind with a desire to learn. Why mental models? There is no system that can prepare us for all risks. Factors of chance introduce a level of complexity that is not entirely predictable. But being able to draw on a repertoire of mental models can help us minimize risk by understanding the forces that are at play. Likely consequences don’t have to be a mystery. Not having the ability to shift perspective by applying knowledge from multiple disciplines makes us vulnerable. Mistakes can become catastrophes whose effects keep compounding, creating stress and limiting our choices. Multidisciplinary thinking, learning these mental models and applying them across our lives, creates less stress and more freedom. The more we can draw on the diverse knowledge contained in these models, the more solutions will present themselves. Understanding reality Understanding reality is a vague phrase, one you’ve already encountered as you’ve read this book. Of course we want to understand reality, but how? And why is it important? In order to see a problem for what it is, we must first break it down into its substantive parts so the interconnections can reveal themselves. This bottom-up perspective allows us to expose what we believe to be the causal relationships and how they will govern the situation both now and in the future. Being able to accurately describe the full scope of a situation is the first step to understanding it. Using the lenses of our mental models helps us illuminate these interconnections. The more lenses used on a given problem, the more of reality reveals itself. The more of reality we see, the more we understand. The more we understand, the more we know what to do. Simple and well-defined problems won’t need many lenses, as the variables that matter are known. So too are the interactions between them. In such cases we generally know what to do to get the intended results with the fewest side effects possible. When problems are more complicated, however, the value of having a brain full of lenses becomes readily apparent. That’s not to say all lenses (or models) apply to all problems. They don’t. And it’s not to say that having more lenses (or models) will be an advantage in all problems. It won’t. This is why learning and applying the Great Mental Models is a process that takes some work. But the truth is, most problems are multidimensional, and thus having more lenses often offers significant help with the problems we are facing. Keeping your feet on the ground In Greek mythology, Antaeus was the human-giant son of Poseidon, god of the sea, and Gaia, Mother Earth. Antaeus had a strange habit. He would challenge all those who passed through his country to a wrestling match. Greek wrestling isn’t much different from what we think of today when we think of wrestling. The goal is to force the opponent to the ground. Antaeus always won and his opponents’ skulls were used to build a temple to his father. While Antaeus was undefeated and nearly undefeatable, there was a catch to his invulnerability. His epic strength depended on constant contact with the earth. When he lost touch with earth, he lost all of his strength. 
On the way to the Garden of the Hesperides, Heracles was to fight Antaeus as one of his 12 labors. After a few rounds in which Heracles flung the giant to the ground only to watch him revive, he realized he could not win by using traditional wrestling techniques. Instead, Heracles fought to lift him off the ground. Away from contact with his mother, Antaeus lost his strength and Heracles crushed him.3,4 When understanding is separated from reality, we lose our powers. Understanding must constantly be tested against reality and updated accordingly. This isn't a box we can tick, a task with a definite beginning and end, but a continuous process. You all know the person who has all the answers on how to improve your organization, or the friend who has the cure to world hunger. While pontificating with friends over a bottle of wine at dinner is fun, it won't help you improve. The only way you'll know the extent to which you understand reality is to put your ideas and understanding into action. If you don't test your ideas against the real world—keep contact with the earth—how can you be sure you understand? Getting in our own way The biggest barrier to learning from contact with reality is ourselves. It's hard to understand a system that we are part of because we have blind spots, where we can't see what we aren't looking for, and don't notice what we don't notice. « There are these two young fish swimming along and they happen to meet an older fish swimming the other way, who nods at them and says "Morning, boys. How's the water?" And the two young fish swim on for a bit, and then eventually one of them looks over at the other and goes "What the hell is water?" » David Foster Wallace5 Our failures to update from interacting with reality spring primarily from three things: not having the right perspective or vantage point, ego-induced denial, and distance from the consequences of our decisions. As we will learn in greater detail throughout the volumes on mental models, these can all get in the way. They make it easier to keep our existing and flawed beliefs than to update them accordingly. Let's briefly flesh these out: The first flaw is perspective. We have a hard time seeing any system that we are in. Galileo had a great analogy to describe the limits of our default perspective. Imagine you are on a ship that has reached constant velocity (meaning without a change in speed or direction). You are below decks and there are no portholes. You drop a ball from your raised hand to the floor. To you, it looks as if the ball is dropping straight down, thereby confirming gravity is at work. Now imagine you are a fish (with special x-ray vision) and you are watching this ship go past. You see the scientist inside, dropping a ball. You register the vertical change in the position of the ball. But you are also able to see a horizontal change. As the ball was pulled down by gravity it also shifted its position east by about 20 feet. The ship moved through the water and therefore so did the ball. The scientist on board, with no external point of reference, was not able to perceive this horizontal shift. This analogy shows us the limits of our perception. We must be open to other perspectives if we truly want to understand the results of our actions. Despite feeling that we've got all the information, if we're on the ship, the fish in the ocean has more to share.
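To make the arithmetic behind the ship analogy concrete, here is a minimal sketch in Python. The drop height and ship speed are assumptions chosen only so the drift works out to roughly the 20 feet described above; the point is that both observers watch the same ball, yet only the frame with an external reference sees the horizontal motion.

```python
import math

# A minimal sketch of the ship thought experiment (assumed numbers).
g = 9.8            # gravitational acceleration, m/s^2
drop_height = 1.5  # from raised hand to floor, in meters (assumed)
ship_speed = 11.0  # meters per second (assumed, to match ~20 feet of drift)

# In both frames the ball falls for the same time: t = sqrt(2h / g).
fall_time = math.sqrt(2 * drop_height / g)      # ~0.55 s

# Seen from the ship there is no sideways motion; seen from the water,
# the ball is carried along at the ship's speed for the whole fall.
drift = ship_speed * fall_time                  # ~6.1 m, about 20 feet

print(f"Fall time: {fall_time:.2f} s")
print("Horizontal shift seen by the scientist: 0 m")
print(f"Horizontal shift seen by the fish: {drift:.1f} m ({drift * 3.28:.0f} ft)")
```

Same ball, same drop, two consistent descriptions; which one you record depends entirely on the vantage point you occupy.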
The second flaw is ego. Many of us tend to have too much invested in our opinions of ourselves to see the world's feedback—the feedback we need to update our beliefs about reality. This creates a profound ignorance that keeps us banging our head against the wall over and over again. Our inability to learn from the world because of our ego happens for many reasons, but two are worth mentioning here. First, we're so afraid of what others will say about us that we fail to put our ideas out there and subject them to criticism. This way we can always be right. Second, if we do put our ideas out there and they are criticized, our ego steps in to protect us. We become invested in defending instead of upgrading our ideas. The third flaw is distance. The further we are from the results of our decisions, the easier it is to keep our current views rather than update them. When you put your hand on a hot stove, you quickly learn the natural consequence. You pay the price for your mistakes. Since you are a pain-avoiding creature, you update your view. Before you touch another stove, you check to see if it's hot. But you don't just learn a micro lesson that applies in one situation. Instead, you draw a general abstraction, one that tells you to check before touching anything that could potentially be hot. Organizations over a certain size often remove us from the direct consequences of our decisions. When we make decisions that other people carry out, we are one or more levels removed and may not immediately be able to update our understanding. We come a little off the ground, if you will. The further we are from the feedback of the decisions, the easier it is to convince ourselves that we are right and avoid the challenge, the pain, of updating our views. Admitting that we're wrong is tough. It's easier to fool ourselves that we're right at a high level than at the micro level, because at the micro level we see and feel the immediate consequences. When we touch that hot stove, the feedback is powerful and instantaneous. At a high or macro level we are removed from the immediacy of the situation, and our ego steps in to create a narrative that suits what we want to believe, instead of what really happened. These flaws are the main reasons we keep repeating the same mistakes, and why we need to keep our feet on the ground as much as we can. As Confucius said, "A man who has committed a mistake and doesn't correct it is committing another mistake." The majority of the time, we don't even perceive what conflicts with our beliefs. It's much easier to go on thinking what we've already been thinking than go through the pain of updating our existing, false beliefs. When it comes to seeing what is—that is, understanding reality—we can follow Charles Darwin's advice to notice things "which easily escape attention," and ask why things happened. We also tend to undervalue the elementary ideas and overvalue the complicated ones. Most of us get jobs based on some form of specialized knowledge, so this makes sense. We don't think we have much value if we know the things everyone else does, so we focus our effort on developing unique expertise to set ourselves apart. The problem then is that we reject the simple to make sure what we offer can't be contributed by someone else. But simple ideas are of great value because they can help us prevent complex problems. In identifying the Great Mental Models we have looked for elementary principles, the ideas from multiple disciplines that form a time-tested foundation.
It may seem counterintuitive to work on developing knowledge that is available to everyone, but the universe works in the same way no matter where you are in it. What you need is to understand the principles, so that when the details change you are still able to identify what is really going on. This is part of what makes the Great Mental Models so valuable—understanding the principles, you can easily change tactics to apply the ones you need. « Most geniuses—especially those who lead others—prosper not by deconstructing intricate complexities but by exploiting unrecognized simplicities. » Andy Benoit6 These elementary ideas, so often overlooked, are from multiple disciplines: biology, physics, chemistry, and more. These help us understand the interconnections of the world, and see it for how it really is. This understanding allows us to develop causal relationships, which allow us to match patterns, which allow us to draw analogies. All of this so we can navigate reality with more clarity and comprehension of the real dynamics involved. Understanding is not enough However, understanding reality is not everything. The pursuit of understanding fuels meaning and adaptation, but this understanding, by itself, is not enough. Understanding only becomes useful when we adjust our behavior and actions accordingly. The Great Models are not just theory. They are actionable insights that can be used to effect positive change in your life. What good is it to know that you constantly interrupt people if you fail to adjust your behavior in light of this? In fact, if you know and don't change your behavior it often has a negative effect. People around you will tell themselves the simplest story that makes sense to them given what they see: that you just don't care. Worse still, because you understand that you interrupt people, you're surprised when you get the same results over and over. Why? You've failed to reflect on your new understanding and adjust your behavior. In the real world you will either understand and adapt to find success or you will fail. Now you can see how we make suboptimal decisions and repeat mistakes. We are afraid to learn and admit when we don't know enough. This is the mindset that leads to poor decisions. They are a source of stress and anxiety, and consume massive amounts of time. Not when we're making them—no, when we're making them they seem natural because they align with our view of how we want things to work. We get tripped up when the world doesn't work the way we want it to or when we fail to see what is. Rather than update our views, we double down on our effort, accelerating our frustrations and anxiety. It's only weeks or months later, when we're spending massive amounts of time fixing our mistakes, that they start to increase their burden on us. Then we wonder why we have no time for family and friends and why we're so consumed by things outside of our control. We are passive, thinking these things just happened to us and not that we did something to cause them. This passivity means that we rarely reflect on our decisions and the outcomes. Without reflection we cannot learn. Without learning we are doomed to repeat mistakes, become frustrated when the world doesn't work the way we want it to, and wonder why we are so busy. The cycle goes on. But we are not passive participants in our decisions. The world does not act on us as much as it reveals itself to us and we respond. Ego gets in the way, locking reality behind a door that it controls with a gating mechanism.
Only through persistence in the face of having it slammed on us over and over can we begin to see the light on the other side. Ego, of course, is more than the enemy. It’s also our friend. If we had a perfect view of the world and made decisions rationally, we would never attempt to do the amazing things that make us human. Ego propels us. Why, without ego, would we even attempt to travel to Mars? After all, it’s never been done before. We’d never start a business because most of them fail. We need to learn to understand when ego serves us and when it hinders us. Wrapping ego up in outcomes instead of in ourselves makes it easier to update our views. We optimize for short-term ego protection over long-term happiness. Increasingly, our understanding of things becomes black and white rather than shades of grey. When things happen in accord with our view of the world we naturally think they are good for us and others. When they conflict with our views, they are wrong and bad. But the world is smarter than we are and it will teach us all we need to know if we’re open to its feedback— if we keep our feet on the ground. Mental models and how to use them Perhaps an example will help illustrate the mental models approach. Think of gravity, something we learned about as kids and perhaps studied more formally in university as adults. We each have a mental model about gravity, whether we know it or not. And that model helps us to understand how gravity works. Of course we don’t need to know all of the details, but we know what’s important. We know, for instance, that if we drop a pen it will fall to the floor. If we see a pen on the floor we come to a probabilistic conclusion that gravity played a role. This model plays a fundamental role in our lives. It explains the movement of the Earth around the sun. It informs the design of bridges and airplanes. It’s one of the models we use to evaluate the safety of leaning on a guard rail or repairing a roof. But we also apply our understanding of gravity in other, less obvious ways. We use the model as a metaphor to explain the influence of strong personalities, as when we say, “He was pulled into her orbit.” This is a reference to our basic understanding of the role of mass in gravity—the more there is the stronger the pull. It also informs some classic sales techniques. Gravity diminishes with distance, and so too does your propensity to make an impulse buy. Good salespeople know that the more distance you get, in time or geography, between yourself and the object of desire, the less likely you are to buy. Salespeople try to keep the pressure on to get you to buy right away. Gravity has been around since before humans, so we can consider it to be time-tested, reliable, and representing reality. And yet, can you explain gravity with a ton of detail? I highly doubt it. And you don’t need to for the model to be useful to you. Our understanding of gravity, in other words, our mental model, lets us anticipate what will happen and also helps us explain what has happened. We don’t need to be able to describe the physics in detail for the model to be useful. However, not every model is as reliable as gravity, and all models are flawed in some way. Some are reliable in some situations but useless in others. Some are too limited in their scope to be of much use. Others are unreliable because they haven’t been tested and challenged, and yet others are just plain wrong. In every situation, we need to figure out which models are reliable and useful. 
We must also discard or update the unreliable ones, because unreliable or flawed models come with a cost. For a long time people believed that bloodletting cured many different illnesses. This mistaken belief actually led doctors to contribute to the deaths of many of their patients. When we use flawed models we are more likely to misunderstand the situation, the variables that matter, and the cause and effect relationships between them. Because of such misunderstandings we often take suboptimal actions, like draining so much blood out of patients that they die from it. Better models mean better thinking. The degree to which our models accurately explain reality is the degree to which they improve our thinking. Understanding reality is the name of the game. Understanding not only helps us decide which actions to take but helps us remove or avoid actions that have a big downside that we would otherwise not be aware of. Not only do we understand the immediate problem with more accuracy, but we can begin to see the second-, third-, and higher-order consequences. This understanding helps us eliminate avoidable errors. Sometimes making good decisions boils down to avoiding bad ones. Flawed models, regardless of intentions, cause harm when they are put to use. When it comes to applying mental models we tend to run into trouble either when our model of reality is wrong, that is, it doesn’t survive real world experience, or when our model is right and we apply it to a situation where it doesn’t belong. Models that don’t hold up to reality cause massive mistakes. Consider that the model of bloodletting as a cure for disease caused unnecessary death because it weakened patients when they needed all their strength to fight their illnesses. It hung around for such a long time because it was part of a package of flawed models, such as those explaining the causes of sickness and how the human body worked, that made it difficult to determine exactly where it didn’t fit with reality. We compound the problem of flawed models when we fail to update our models when evidence indicates they are wrong. Only by repeated testing of our models against reality and being open to feedback can we update our understanding of the world and change our thinking. We need to look at the results of applying the model over the largest sample size possible to be able to refine it so that it aligns with how the world actually works. — Sidebar: What Can the Three Buckets of Knowledge Teach Us About History? The power of acquiring new models The quality of our thinking is largely influenced by the mental models in our heads. While we want accurate models, we also want a wide variety of models to uncover what’s really happening. The key here is variety. Most of us study something specific and don’t get exposure to the big ideas of other disciplines. We don’t develop the multidisciplinary mindset that we need to accurately see a problem. And because we don’t have the right models to understand the situation, we overuse the models we do have and use them even when they don’t belong. You’ve likely experienced this first hand. An engineer will often think in terms of systems by default. A psychologist will think in terms of incentives. A business person might think in terms of opportunity cost and risk-reward. Through their disciplines, each of these people sees part of the situation, the part of the world that makes sense to them. None of them, however, see the entire situation unless they are thinking in a multidisciplinary way. 
In short, they have blind spots. Big blind spots. And they're not aware of their blind spots. There is an old adage that encapsulates this: "To the man with only a hammer, everything starts looking like a nail." Not every problem is a nail. The world is full of complications and interconnections that can only be explained through understanding of multiple models. What Can the Three Buckets of Knowledge Teach Us About History? "Every statistician knows that a large, relevant sample size is their best friend. What are the three largest, most relevant sample sizes for identifying universal principles? Bucket number one is inorganic systems, which are 13.7 billion years in size. It's all the laws of math and physics, the entire physical universe. Bucket number two is organic systems, 3.5 billion years of biology on Earth. And bucket number three is human history, you can pick your own number, I picked 20,000 years of recorded human behavior. Those are the three largest sample sizes we can access and the most relevant." —Peter Kaufman The larger and more relevant the sample size, the more reliable the model based on it is. But the key to sample sizes is to look for them not just over space, but over time. You need to reach back into the past as far as you can to contribute to your sample. We have a tendency to think that how the world is, is how it always was. And so we get caught up validating our assumptions from what we find in the here and now. But the continents used to be pushed against each other, dinosaurs walked the planet for millions of years, and we are not the only hominid to evolve. Looking to the past can provide essential context for understanding where we are now. Removing blind spots means thinking through the problem using different lenses or models. When we do this the blind spots slowly go away and we gain an understanding of the problem. We're much like the blind men in the classic parable of the elephant, going through life trying to explain everything through one limited lens. Too often that lens is driven by our particular field, be it economics, engineering, physics, mathematics, biology, chemistry, or something else entirely. Each of these disciplines holds some truth and yet none of them contain the whole truth. Here's another way to look at it: think of a forest. When a botanist looks at it they may focus on the ecosystem, an environmentalist sees the impact of climate change, a forestry engineer the state of the tree growth, a business person the value of the land. None are wrong, but neither are any of them able to describe the full scope of the forest. Sharing knowledge, or learning the basics of the other disciplines, would lead to a more well-rounded understanding that would allow for better initial decisions about managing the forest. Relying on only a few models is like having a 400-horsepower brain that's only generating 50 horsepower of output. To increase your mental efficiency and reach your 400-horsepower potential, you need to use a latticework of mental models. Exactly the same sort of pattern that graces backyards everywhere, a lattice is a series of points that connect to and reinforce each other. The Great Models can be understood in the same way—models influence and interact with each other to create a structure that can be used to evaluate and understand ideas. A group of blind people approach a strange animal, called an elephant. None of them are aware of its shape and form. So they decide to understand it by touch.
The first person, whose hand touches the trunk, says, "This creature is like a thick snake." For the second person, whose hand finds an ear, it seems like a type of fan. The third person, whose hand is on a leg, says the elephant is a pillar like a tree-trunk. The fourth blind man, who places his hand on the side, says, "An elephant is a wall." The fifth, who feels its tail, describes it as a rope. The last touches its tusk, and states the elephant is something that is hard and smooth, like a spear. In a famous speech in the 1990s, Charlie Munger summed up this approach to practical wisdom: "Well, the first rule is that you can't really know anything if you just remember isolated facts and try and bang 'em back. If the facts don't hang together on a latticework of theory, you don't have them in a usable form. You've got to have models in your head. And you've got to array your experience both vicarious and direct on this latticework of models. You may have noticed students who just try to remember and pound back what is remembered. Well, they fail in school and in life. You've got to hang experience on a latticework of models in your head."8 « The chief enemy of good decisions is a lack of sufficient perspectives on a problem. » Alain de Botton9 Expanding your latticework of mental models A latticework is an excellent way to conceptualize mental models, because it demonstrates the reality and value of interconnecting knowledge. The world does not isolate itself into discrete disciplines. We only break it down that way because it makes it easier to study it. But once we learn something, we need to put it back into the complex system in which it occurs. We need to see where it connects to other bits of knowledge, to build our understanding of the whole. This is the value of putting the knowledge contained in mental models into a latticework. It reduces the blind spots that limit our view of not only the immediate problem, but the second and subsequent order effects of our potential solutions. Without a latticework of the Great Models our decisions become harder, slower, and less creative. But by using a mental models approach, we can complement our specializations by being curious about how the rest of the world works. A quick glance at the Nobel Prize winners list shows that many of them, obviously extreme specialists in something, had multidisciplinary interests that supported their achievements. To help you build your latticework of mental models, this book, and the books that follow, attempt to arm you with the big models from multiple disciplines. We'll take a look at biology, physics, chemistry, economics, and even psychology. We don't need to master all the details from these disciplines, just the fundamentals. To quote Charlie Munger, "80 or 90 important models will carry about 90 percent of the freight in making you a worldly-wise person. And, of those, only a mere handful really carry very heavy freight."10 These books attempt to collect and make accessible organized common sense—the 80 to 90 mental models you need to get started. To help you understand the models, we will relate them to historical examples and stories. Our website fs.blog will have even more practical examples. The more high-quality mental models you have in your mental toolbox, the more likely you will have the ones needed to understand the problem. And understanding is everything. The better you understand, the better the potential actions you can take.
The better the potential actions, the fewer problems you’ll encounter down the road. Better models make better decisions. «I think it is undeniably true that the human brain must work in models. The trick is to have your brain work better than the other person’s brain because it understands the most fundamental models: ones that will do most work per unit. If you get into the mental habit of relating what you’re reading to the basic structure of the underlying ideas being demonstrated, you gradually accumulate some wisdom.» Charlie Munger11 It takes time, but the benefits are enormous What successful people do is file away a massive, but finite, amount of fundamental, established, essentially unchanging knowledge that can be used in evaluating the infinite number of unique scenarios which show up in the real world. It’s not just knowing the mental models that is important. First you must learn them, but then you must use them. Each decision presents an opportunity to comb through your repertoire and try one out, so you can also learn how to use them. This will slow you down at first, and you won’t always choose the right models, but you will get better and more efficient at using mental models as time progresses. We need to work hard at synthesizing across the borders of our knowledge, and most importantly, synthesizing all of the ideas we learn with reality itself. No model contains the entire truth, whatever that may be. What good are math and biology and psychology unless we know how they fit together in reality itself, and how to use them to make our lives better? It would be like dying of hunger because we don’t know how to combine and cook any of the foods in our pantry. «Disciplines, like nations, are a necessary evil that enable human beings of bounded rationality to simplify their goals and reduce their choices to calculable limits. But parochialism is everywhere, and the world badly needs international and interdisciplinary travelers to carry new knowledge from one enclave to another.» Herbert Simon12 You won’t always get it right. Sometimes the model, or models, you choose to use won’t be the best ones for that situation. That’s okay. The more you use them, the more you will be able to build the knowledge of indicators that can trigger the use of the most appropriate model. Using and failing, as long as you acknowledge, reflect, and learn from it, is also how you build your repertoire. You need to be deliberate about choosing the models you will use in a situation. As you use them, a great practice is to record and reflect. That way you can get better at both choosing models and applying them. Take the time to notice how you applied them, what the process was like, and what the results were. Over time you will develop your knowledge of which situations are best tackled through which models. Don’t give up on a model if it doesn’t help you right away. Learn more about it, and try to figure out exactly why it didn’t work. It may be that you have to improve your understanding. Or there were aspects of the situation that you did not consider. Or that your focus was on the wrong variable. So keep a journal. Write your experiences down. When you identify a model at work in the world, write that down too. Then you can explore the applications you’ve observed, and start being more in control of the models you use every day. For instance, instead of falling victim to confirmation bias, you will be able to step back and see it at work in yourself and others. 
Once you get practice, you will start to naturally apply models as you go through your life, from reading the news to contemplating a career move. As we have seen, we can run into problems when we apply models to situations in which they don't fit. If a model works, we must invest the time and energy into understanding why it worked so we know when to use it again. At the beginning the process is more important than the outcome. As you use the models, stay open to the feedback loops. Reflect and learn. You will get better. It will become easier. Results will become more profoundly useful, more broadly applicable, and more memorable. While this book isn't intended to be a book specifically about making better decisions, it will help you make better decisions. Mental models are not an excuse to create a lengthy decision process but rather to help you move away from seeing things the way you think they should be to the way they are. Uncovering this knowledge will naturally help your decision-making. Right now you are only touching one part of the elephant, so you are making all decisions based on your understanding that it's a wall or a rope, not an animal. As soon as you begin to take in the knowledge that other people have of the world, like learning the perspectives others have of the elephant, you will start having more success because your decisions will be aligned with how the world really is. When you start to understand the world better, when the whys seem less mysterious, you gain confidence in how you navigate it. The successes will accrue. And more success means more time, less stress, and ultimately a more meaningful life. Time to dive in. The map appears to us more real than the land. D.H. Lawrence1 The People Who Appear in this Chapter

Korzybski, Alfred. 1879-1950 - Polish-American independent scholar who developed the field of general semantics. He argued that knowledge is limited by our physical and language capabilities.
Newton, Sir Isaac. 1643-1727 - English polymath. One of the most influential scientists of all time. He related the workings of the Earth to the wonders of the universe. He also spent 27 years as Master of the Royal Mint.
Einstein, Albert. 1879-1955 - German theoretical physicist who gave us the theory of relativity and opened up the universe. He is famous for many things, including his genius, his kindness and his hair.
Ostrom, Elinor. 1933-2012 - American political economist. In 2009 she shared the Nobel Memorial Prize in Economic Sciences for her analysis of economic governance; in particular, questions related to "the commons".
Abbud, Karimeh. 1893-1955 - Palestinian professional photographer. Also known as the "Lady Photographer", she was an artist who lived and worked in Lebanon and Palestine.
Jacobs, Jane. 1916-2006 - American-Canadian journalist, author, and activist who influenced urban studies, sociology, and economics. Her work has greatly impacted the development of North American cities.

The Map is not the Territory The map of reality is not reality. Even the best maps are imperfect. That's because they are reductions of what they represent. If a map were to represent the territory with perfect fidelity, it would no longer be a reduction and thus would no longer be useful to us. A map can also be a snapshot of a point in time, representing something that no longer exists. This is important to keep in mind as we think through problems and make better decisions. We use maps every day. They help us navigate from one city to another.
They help us reduce complexity to simplicity. Think of the financial statements for a company, which are meant to distill the complexity of thousands of transactions into something simpler. Or a policy document on office procedure, a manual on parenting a two-year-old, or your performance review. All are models or maps that simplify some complex territory in order to guide you through it. Just because maps and models are flawed is not an excuse to ignore them. Maps are useful to the extent they are explanatory and predictive. Key elements of a map In 1931, the mathematician Alfred Korzybski presented a paper on mathematical semantics in New Orleans, Louisiana. Looking at it today, most of the paper reads like a complex, technical argument on the relationship of mathematics to human language, and of both to physical reality. However, with this paper Korzybski introduced and popularized the concept that the map is not the territory. In other words, the description of the thing is not the thing itself. The model is not reality. The abstraction is not the abstracted. Specifically, in his own words:2

1. A map may have a structure similar or dissimilar to the structure of the territory. The London underground map is super useful to travelers. The train drivers don't use it at all! Maps describe a territory in a useful way, but with a specific purpose. They cannot be everything to everyone.
2. Two similar structures have similar "logical" characteristics. If a correct map shows Dresden as between Paris and Warsaw, a similar relation is found in the actual territory. If you have a map showing where Dresden is, you should be able to use it to get there.
3. A map is not the actual territory. The London underground map does not convey what it's like to be standing in Covent Garden station. Nor would you use it to navigate out of the station.
4. An ideal map would contain the map of the map, the map of the map of the map, etc., endlessly. We may call this characteristic self-reflexiveness. Imagine using an overly complicated "Guide to Paris" on a trip to France, and then having to purchase another book that was the "Guide to the Guide of Paris". And so on. Ideally, you'd never have any issues—but eventually, the level of detail would be overwhelming.

The truth is, the only way we can navigate the complexity of reality is through some sort of abstraction. When we read the news, we're consuming abstractions created by other people. The authors consumed vast amounts of information, reflected upon it, and drew some abstractions and conclusions that they share with us. But something is lost in the process. We can lose the specific and relevant details that were distilled into an abstraction. And, because we often consume these abstractions as gospel, without having done the hard mental work ourselves, it's tricky to see when the map no longer agrees with the territory. We inadvertently forget that the map is not reality. But my GPS didn't show that cliff We need maps and models as guides. But frequently, we don't remember that our maps and models are abstractions and thus we fail to understand their limits. We forget there is a territory that exists separately from the map. This territory contains details the map doesn't describe. We run into problems when our knowledge becomes of the map, rather than the actual underlying territory it describes. When we mistake the map for reality, we start to think we have all the answers.
We create static rules or policies that deal with the map but forget that we exist in a constantly changing world. When we close off or ignore feedback loops, we don't see the terrain has changed and we dramatically reduce our ability to adapt to a changing environment. Reality is messy and complicated, so our tendency to simplify it is understandable. However, if the aim becomes simplification rather than understanding we start to make bad decisions. We can't use maps as dogma. Maps and models are not meant to live forever as static references. The world is dynamic. As territories change, our tools to navigate them must be flexible to handle a wide variety of situations or adapt to the changing times. If the value of a map or model is related to its ability to predict or explain, then it needs to represent reality. If reality has changed the map must change. Take Newtonian physics. For hundreds of years it served as an extremely useful model for understanding the workings of our world. From gravity to celestial motion, Newtonian physics was a wide-ranging map. Then in 1905 Albert Einstein, with his theory of Special Relativity, changed our understanding of the universe in a huge way. He replaced the understanding handed down by Isaac Newton hundreds of years earlier. He created a new map. Newtonian physics is still a very useful model. One can use it very reliably to predict the movement of objects large and small, with some limitations as pointed out by Einstein. And, on the flip side, Einstein's physics are still not totally complete: With every year that goes by, physicists become increasingly frustrated with their inability to tie it into small-scale quantum physics. Another map may yet come. But what physicists do so well, and most of us do so poorly, is that they carefully delimit what Newtonian and Einsteinian physics are able to explain. They know down to many decimal places where those maps are useful guides to reality, and where they aren't. And when they hit uncharted territory, like quantum mechanics, they explore it carefully instead of assuming the maps they have can explain it all. Maps can't show everything Some of the biggest map/territory problems are the risks of the territory that are not shown on the map. When we're following the map without looking around, we trip right over them. Any user of a map or model must realize that we do not understand a model, map, or reduction unless we understand and respect its limitations. If we don't understand what the map does and doesn't tell us, it can be useless or even dangerous. — Sidebar: The Tragedy of the Commons

The Tragedy of the Commons is a parable that illustrates why common resources get used more than is desirable from the standpoint of society as a whole. Garrett Hardin wrote extensively about this concept. "Picture a pasture open to all. It is to be expected that each herdsman will try to keep as many cattle as possible on the commons. Such an arrangement may work reasonably satisfactorily for centuries because tribal wars, poaching, and disease keep the numbers of both man and beast well below the carrying capacity of the land. Finally, however, comes the day of reckoning, that is, the day when the long-desired goal of social stability becomes a reality. At this point, the inherent logic of the commons remorselessly generates tragedy. As a rational being, each herdsman seeks to maximize his gain. Explicitly or implicitly, more or less consciously, he asks, 'What is the utility to me of adding one more animal to my herd?' This utility has one negative and one positive component.

1. The positive component is a function of the increment of one animal. Since the herdsman receives all the proceeds from the sale of the additional animal, the positive utility is nearly +1.
2. The negative component is a function of the additional overgrazing created by one more animal. Since, however, the effects of overgrazing are shared by all the herdsmen, the negative utility for any particular decision-making herdsman is only a fraction of 1.

Adding together the component partial utilities, the rational herdsman concludes that the only sensible course for him to pursue is to add another animal to his herd. And another; and another.... But this is the conclusion reached by each and every rational herdsman sharing a commons. Therein is the tragedy. Each man is locked into a system that compels him to increase his herd without limit—in a world that is limited. Ruin is the destination toward which all men rush, each pursuing his own best interest in a society that believes in the freedom of the commons. Freedom in a commons brings ruin to all." 3
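Hardin's two components reduce to a few lines of arithmetic, sketched below in Python with assumed, illustrative numbers (the herd size and cost figures are not from the text). The trap is visible immediately: the deciding herdsman's utility stays positive even though the group's utility is negative.

```python
# A minimal sketch of Hardin's utility arithmetic (assumed numbers).
N = 10               # herdsmen sharing the commons (assumed)
private_gain = 1.0   # the owner keeps all proceeds of one extra animal (~ +1)
shared_cost = 1.5    # total overgrazing damage from that animal, borne by all

# The decision-maker weighs the full gain against only his share of the cost.
utility_to_herdsman = private_gain - shared_cost / N   # +0.85: "add the animal"
utility_to_commons = private_gain - shared_cost        # -0.50: the pasture loses

print(f"Utility to the deciding herdsman: {utility_to_herdsman:+.2f}")
print(f"Utility to the group as a whole:  {utility_to_commons:+.2f}")
```

Every herdsman faces the same numbers, so each keeps adding animals; the individual signal stays positive right up until the shared resource collapses.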
What is common to many is taken least care of, for all men have greater regard for what is their own than for what they possess in common with others. —Aristotle

Here's another way to think about it. Economist Elinor Ostrom wrote about being cautious with maps and models when looking at different governance structures for common resources. She was worried that the Tragedy of the Commons model (see sidebar), which shows how a shared resource can become destroyed through bad incentives, was too general and did not account for how people, in reality, solved the problem. She explained the limitations of using models to guide public policy, namely that they often become metaphors. "What makes these models so dangerous … is that the constraints that are assumed to be fixed for the purpose of analysis are taken on faith as being fixed in empirical setting."4 This is a double problem. First, having a general map, we may assume that if a territory matches the map in a couple of respects it matches the map in all respects. Second, we may think adherence to the map is more important than taking in new information about a territory. Ostrom asserts that one of the main values of using models as maps in public policy discussions is in the thinking that is generated. They are tools for exploration, not doctrines to force conformity. They are guidebooks, not laws. «Remember that all models are wrong; the practical question is how wrong do they have to be to not be useful.» George Box5 In order to use a map or model as accurately as possible, we should take three important considerations into account:

1. Reality is the ultimate update.
2. Consider the cartographer.
3. Maps can influence territories.

Reality is the ultimate update: When we enter new and unfamiliar territory it's nice to have a map on hand. Everything from travelling to a new city, to becoming a parent for the first time has maps that we can use to improve our ability to navigate the terrain. But territories change, sometimes faster than the maps and models that describe them. We can and should update them based on our own experiences in the territory. That's how good maps are built: feedback loops created by explorers. We can think of stereotypes as maps. Sometimes they are useful—we have to process large amounts of information every day, and simplified chunks such as stereotypes can help us sort through this information with efficiency. The danger is when, like with all maps, we forget the territory is more complex, that people have far more territory than a stereotype can represent. In the early 1900s, Europeans were snapping pictures all over Palestine, leaving a record that may have reflected their ethnographic perspective, but did not cover Karimeh Abbud's perception of her culture. She began to take photos of those around her, becoming the first female Arab to set up her own photo studio in Palestine. Her pictures reflected a different take on the territory—she rejected the European style and aimed to capture the middle class as they were. She tried to let her camera record the territory as she saw it versus manipulating the images to follow a narrative. Her informal style and desire to photograph the variety around her, from landscapes to intimate portraits, have left a legacy far beyond the photos themselves.6,7 She contributed a different perspective, a new map, with which to explore the history of the territory of Palestine. We do have to remember, though, that a map captures a territory at a moment in time. Just because it might have done a good job at depicting what was, there is no guarantee that it depicts what is there now or what will be there in the future. The faster the rate of change in the territory, the harder it will be for a map to keep up to date. «Viewed in its development through time, the map details the changing thought of the human race, and few works seem to be such an excellent indicator of culture and civilization.» Norman J.W. Thrower8 Consider the cartographer: Maps are not purely objective creations. They reflect the values, standards, and limitations of their creators. We can see this in the changing national boundaries that make up our world maps. Countries come and go depending on shifting political and cultural sensibilities. When we look at the world map we have today, we tend to associate societies with nations, assuming that the borders reflect a common identity shared by everyone contained within them. However, as historian Margaret MacMillan has pointed out, nationalism is a very modern construct, and in some sense has developed with, not in advance of, the maps that set out the shape of countries.9 We then should not assume that our literal maps depict an objective view of the geographical territory. For example, historians have shown that the modern borders of Syria, Jordan, and Iraq reflect British and French determination to maintain influence in the Middle East after World War I.10 Thus, they are a better map of Western interest than of local custom and organization. Models, then, are most useful when we consider them in the context in which they were created. What was the cartographer trying to achieve? How does this influence what is depicted in the map? « As a branch of human endeavor, cartography has a long and interesting history that well reflects the state of cultural activity, as well as the perception of the world, in different periods. … Though technical in nature, cartography, like architecture, has attributes of both a scientific and artistic pursuit, a dichotomy not satisfactorily reconciled in all presentations. » Norman J.W. Thrower11
Maps can influence territories: This problem was part of the central argument put forth by Jane Jacobs in her groundbreaking work, The Death and Life of Great American Cities. She chronicled the efforts of city planners who came up with elaborate models for the design and organization of cities without paying any attention to how cities actually work. They then tried to fit the cities into the model. She describes how cities were changed to correspond to these models, and the often negative consequences of these efforts. "It became possible also to map out master plans for the statistical city, and people take these more seriously, for we are all accustomed to believe that maps and reality are necessarily related, or that if they are not, we can make them so by altering reality." 12 Jacobs' book is, in part, a cautionary tale of what can happen when faith in the model influences the decisions we make in the territory, when we try to fit complexity into the simplification. «In general, when building statistical models, we must not forget that the aim is to understand something about the real world. Or predict, choose an action, make a decision, summarize evidence, and so on, but always about the real world, not an abstract mathematical world: our models are not the reality. » David Hand13 Conclusion Maps have long been a part of human society. They are valuable tools to pass on knowledge. Still, in using maps, abstractions, and models, we must always be wise to their limitations. They are, by definition, reductions of something far more complex. There is always at least an element of subjectivity, and we need to remember that they are created at particular moments in time. This does not mean that we cannot use maps and models. We must use some model of the world in order to simplify it and therefore interact with it. We cannot explore every bit of territory for ourselves. We can use maps to guide us, but we must not let them prevent us from discovering new territory or updating our existing maps. While navigating the world based on terrain is a useful goal, it's not always possible. Maps, and models, help us understand and relate to the world around us. They are flawed but useful. In order to think a few steps ahead we must think beyond the map. Model of Management Let's take a model of management. There are hundreds of them, dating back at least to Frederick Taylor's theory of Scientific Management, which had factory managers breaking down tasks into small pieces, forcing their workers to specialize, and financially incentivizing them to complete those tasks efficiently. It was a brute force method, but it worked pretty well. As time went on and the economy increasingly moved away from manufacturing, other theories gained popularity, and Taylor's scientific model is no longer used by anyone of note. That does not mean it wasn't useful: For a time, it was. It's just that reality is more complicated than Taylor's model. It had to contend with at least the following factors:

1. As more and more people know what model you're using to manipulate them, they may decide not to respond to your incentives.
2. As your competitors gain knowledge of the model, they respond in kind by adopting the model themselves, thus flattening the field.
3. The model may have been mostly useful in a factory setting, and not in an office setting, or a technology setting.
4. Human beings are not simple automatons: a more complete model would home in on other motivations they might have besides financial ones.

And so on. Clearly, though Taylor’s model was effective for a time, it was effective with limitations. As with Einstein eclipsing Newton, better models came along in time.

Maps Are Necessarily Flawed

Maps, or models, are necessary but necessarily flawed. Lewis Carroll once jabbed at this in a story called Sylvie and Bruno, where one of the characters decided that his country would create a map with the scale of one mile to one mile. Obviously, such a map would not have the limitations of a map, but it wouldn’t be of much help either. You couldn’t use it to actually go anywhere. It wouldn’t fit in your pocket or your car. We need maps to condense the territory we are trying to navigate.

I’m no genius. I’m smart in spots—but I stay around those spots. Thomas Watson1

The People Who Appear in this Chapter

Norgay, Tenzing, born Namgyal Wangdi. 1914-1986 - Nepali Sherpa mountaineer. Time named him one of the 100 most influential people of the 20th century.

Gawande, Atul. 1965 - American surgeon, writer, and public health researcher. He practices medicine in Boston, is a professor at Harvard University, and has been a staff writer for The New Yorker since 1998.

Elizabeth I. 1533-1603 - Queen of England and Ireland. One of the most famous monarchs of all time, her image and legacy continue to capture the imagination. Elizabeth was a great orator, could speak about 11 languages, and wrote her own speeches and letters.

Circle of Competence

When ego and not competence drives what we undertake, we have blind spots. If you know what you understand, you know where you have an edge over others. When you are honest about where your knowledge is lacking, you know where you are vulnerable and where you can improve. Understanding your circle of competence improves decision-making and outcomes.

In order to get the most out of this mental model, we will explore the following:

1. What is a circle of competence?
2. How do you know when you have one?
3. How do you build and maintain one?
4. How do you operate outside of one?

What is a circle of competence?

Imagine an old man who’s spent his entire life in a small town. He’s the Lifer. No detail of the goings-on in the town has escaped his notice over the years. He knows the lineage, behavior, attitudes, jobs, income, and social status of every person in town. Bit by bit, he built that knowledge up over a long period of observation and participation in town affairs.

The Lifer knows where the bodies are buried and who buried them. He knows who owes money to whom, who gets along with whom, and whom the town depends on to keep things spinning. He knows about that time the mayor cheated on his taxes. He knows about that time the town flooded, how many inches high the water was, and exactly who helped whom and who didn’t.

Now imagine a Stranger enters the town, in from the Big City. Within a few days, the Stranger decides that he knows all there is to know about the town. He’s met the mayor, the sheriff, the bartender, and the shopkeeper, and he can get around fairly easily. It’s a small town, and he hasn’t come across anything surprising. In the Stranger’s mind, he’s convinced he pretty much knows everything a Lifer would know. He has sized up the town in no time, with his keen eye. He makes assumptions based on what he has learned so far, and figures he knows enough to get his business done.
This, however, is a false sense of confidence that likely causes him to take more risks than he realizes. Without intimately knowing the history of the town, how can he be sure that he has picked the right land for development, or negotiated the best price? After all, what kind of knowledge does he really have, compared to the Lifer?

The difference between the detailed web of knowledge in the Lifer’s head and the surface knowledge in the Stranger’s head is the difference between being inside a circle of competence and being outside the perimeter. True knowledge of a complex territory cannot be faked. The Lifer could stump the Stranger in no time, but not the other way around. Consequently, as long as the Lifer is operating in his circle of competence, he will always have a better understanding of reality to use in making decisions. Having this deep knowledge gives him flexibility in responding to challenges, because he will likely have more than one solution to every problem. And this depth increases his efficiency—he can eliminate bad choices quickly because he has all the pieces of the puzzle.

What happens when you take the Lifer/Stranger idea seriously and try to carefully delineate the domains in which you’re one or the other? There is no definite checklist for figuring this out, but if you don’t have at least a few years and a few failures under your belt, you cannot consider yourself competent in a circle.

«We shall be unable to turn natural advantage to account unless we make use of local guides.» Sun Tzu2

For most of us, climbing to the summit of Mount Everest is outside our circles of competence. Not only do we have no real idea how to do it, but—even scarier—should we attempt it, we don’t even know what we don’t know. If we studied hard, maybe we’d figure out the basics. We’d learn about the training, the gear, the process, the time of year, all the things an outsider could quickly know. But at what point would you be satisfied that you knew enough to get up there, and back, with your life intact? And how confident would you be in this assessment?

There are approximately 200 bodies on Everest (not to mention the ones that have been removed). All of those people thought they could get up and down alive. The climate preserves their corpses, almost as a warning. The ascent to the summit takes you by the bodies of people who once shared your dreams.

Since the first recorded attempts to climb Everest in 1922, all climbers have relied on the specialized knowledge of the Sherpa people to help navigate the terrain of the mountain. Indigenous to the region, Sherpas grew up in the shadow of the mountain, uniquely placed to develop the circle of competence necessary to get to the top. Sherpa Tenzing Norgay led the team that made the first ascent3, and a quarter of all subsequent ascents have been made by Sherpas (some going as many as 16 times).4,5 Although the mountain is equally risky for everyone, most people who climb Everest do it once. For the Sherpas, working on and climbing various parts of the mountain is their day job. Would you try to climb Everest without their help?

The physical challenges alone of reaching the summit are staggering. It is a region humans aren’t suited for. There isn’t enough oxygen in the air, and the top is regularly pummeled by winds of more than 150 miles an hour—stronger than a Category 5 hurricane. You don’t get to the top on a whim, and you don’t survive with only luck.
Norgay worked for years as a trekking porter, and was part of a team that tried to ascend Everest in 1935. He finally succeeded in reaching the summit in 1953, after 20 years of climbing and trekking in the region. He developed his expertise through lots of lucky failures. After Everest, Norgay opened a mountaineering school to train other locals as guides, and a trekking company to take others climbing in the Himalayas. Norgay is about as close as someone could come to being a Lifer when it comes to the competence required to climb Mount Everest.

How do you know when you have a circle of competence?

Within our circles of competence, we know exactly what we don’t know. We are able to make decisions quickly and relatively accurately. We know what additional information we might need to make a decision with full understanding, and even what information is unobtainable. We know what is knowable and what is unknowable, and can distinguish between the two. We can anticipate and respond to objections because we’ve heard them before and have already put in the work of gaining the knowledge to counter them. We also have a lot of options when we confront problems in our circles. Our deep fluency in the subjects we are dealing with means we can draw on different information resources and understand what can be adjusted and what is invariant.

A circle of competence cannot be built quickly. We don’t become Lifers overnight. It isn’t the result of taking a few courses or working at something for a few months—being a Lifer requires more than skimming the surface. In Alexander Pope’s poem “An Essay on Criticism,” he writes:

“A little learning is a dangerous thing;
Drink deep, or taste not the Pierian spring:
There shallow draughts intoxicate the brain,
And drinking largely sobers us again.”6

There is no shortcut to understanding. Building a circle of competence takes years of experience, of making mistakes, and of actively seeking out better methods of practice and thought.

How do you build and maintain a circle of competence?

One of the essential requirements of a circle of competence is that you can never take it for granted. You can’t operate as if a circle of competence is a static thing that, once attained, is attained for life. The world is dynamic. Knowledge gets updated, and so too must your circle. There are three key practices needed in order to build and maintain a circle of competence: curiosity and a desire to learn, monitoring, and feedback.

First, you have to be willing to learn. Learning comes when experience meets reflection. You can learn from your own experiences. Or you can learn from the experience of others, through books, articles, and conversations. Learning everything on your own is costly and slow. You are one person. Learning from the experiences of others is much more productive. You need to always approach your circle with curiosity, seeking out information that can help you expand and strengthen it.

«Learn from the mistakes of others. You can’t live long enough to make them all yourself.» Anonymous

Second, you need to monitor your track record in areas in which you have, or want to have, a circle of competence. And you need to have the courage to monitor honestly, so the feedback can be used to your advantage. The reason we have such difficulty with overconfidence—as demonstrated in studies which show that most of us are much worse drivers, lovers, managers, and traders (and many other things) than we think we are—is that we have a problem with honest self-reporting.
We don’t keep the right records, because we don’t really want to know what we’re good and bad at. Ego is a powerful enemy when it comes to better understanding reality, and that kind of willful blindness won’t work if you’re trying to assess or build your circle of competence. If you’re investing in the stock market, you need to keep a precise diary of your trades. If you are in a leadership position, you need to observe and chronicle the results of your decisions and evaluate them based on what you were trying to achieve. You need to be honest about your failures in order to reflect on and learn from them. That’s what it takes.

Keeping a journal of your own performance is the easiest and most private way to give yourself feedback. Journals allow you to step out of your automatic thinking and ask yourself: What went wrong? How could I do better? Monitoring your own performance allows you to see patterns that you simply couldn’t see before. This type of analysis is painful for the ego, which is also why it helps build a circle of competence. You can’t improve if you don’t know what you’re doing wrong.

Finally, you must occasionally solicit external feedback. This helps build a circle, but is also critical for maintaining one. A lot of professionals have an ego problem: their view of themselves does not line up with the way other people see them. Before people can change, they need to know these outside views. We need to go to people we trust, who can give us honest feedback about our traits. These people are in a position to observe us operating within our circles, and are thus able to offer relevant perspectives on our competence.

Another option is to hire a coach. Atul Gawande is one of the top surgeons in the United States. And when he wanted to get better at being a surgeon, he hired a coach. This is terribly difficult for anyone, let alone a doctor. At first he felt embarrassed. It had been over a decade since he had been evaluated by another person, back in medical school. “Why,” he asked, “should I expose myself to the scrutiny and fault-finding?”7

The coaching worked. Gawande got two things out of it. First, he received something he couldn’t see himself and something no one else would point out (if they noticed it at all): knowledge of where his skill and technique were suboptimal. Second, he took away the ability to provide better feedback to other doctors. It is extremely difficult to maintain a circle of competence without an outside perspective. We usually have too many biases to rely solely on our own observations. It takes courage to solicit external feedback, so if defensiveness starts to manifest, focus on the result you hope to achieve.

How do you operate outside a circle of competence?

Part of successfully using circles of competence includes knowing when we are outside them—when we are not well equipped to make decisions. Since we can’t be inside a circle of competence in everything, when we find ourselves Strangers in a place filled with Lifers, what do we do? We don’t always get to “stay around our spots.” We must develop a repertoire of techniques for managing when we’re outside of our sphere, which happens all the time.8 There are three parts to successfully operating outside a circle of competence:

1. Learn at least the basics of the realm you’re operating in, while acknowledging that you’re a Stranger, not a Lifer. Keep in mind, however, that basic information is easy to obtain and often gives the acquirer an unwarranted confidence.
2. Talk to someone whose circle of competence in the area is strong. Take the time to do a bit of research to at least define the questions you need to ask, and what information you need, to make a good decision. If you ask a person to answer the question for you, they’ll be giving you a fish. If you ask them detailed and thoughtful questions, you’ll learn how to fish. Furthermore, when you need the advice of others, especially in higher-stakes situations, ask questions to probe the limits of their circles. Then ask yourself how the situation might influence the information they choose to provide you.

3. Use a broad understanding of the basic mental models of the world to augment your limited understanding of the field in which you find yourself a Stranger. These will help you identify the foundational concepts that would be most useful. These then serve as a guide to help you navigate the situation you are in.

There are inevitably areas where you are going to be a Stranger, even in the profession in which you excel. It is impossible for our circles of competence to encompass the entire world. Even if we’re careful to know the boundaries and take them seriously, we can’t always operate inside our circles. Life is simply not that forgiving. We have to make HR decisions without being experts in human psychology, implement technology without having the faintest idea how to fix it if something goes wrong, or design products with an imperfect understanding of our customers. These decisions may be outside our circles, but they still have to get made.

The Problem of Incentives

The problem of incentives can really skew how much you can rely on someone else’s circle of competence. This is particularly acute in the financial realm. Until recently, nearly all financial products we might be pushed into had commissions attached to them—in other words, our advisor made more money by giving us one set of advice than another, regardless of its wisdom. Fortunately, the rise of things like index funds for the stock and bond markets has mostly alleviated the issue. In cases like financial advisory, we’re not on solid ground until we know, in some detail, the compensation arrangement our advisor is under. The same goes for buying furniture, buying a house, or buying a washing machine at a retail store. What does the knowledgeable advisor stand to gain from our purchase?

It goes beyond sales, of course. Whenever we are getting advice, it is from a person whose set of incentives is not the same as ours. It is not cynical to know that this is the case, and to then act accordingly.

Suppose we want to take our car to a mechanic. Most of us, especially in this day and age, are complete Strangers in that land; we are consequently open to being taken advantage of. There is not only an asymmetry in our general knowledge base about the mechanics of a car; there is usually an asymmetry of knowledge about the actual current problem with the car. We haven’t been under the hood, but the mechanic has. We know his incentive in this situation: it’s to get us to spend as much as possible while still retaining us as a customer. The only solution, at least until we reach a certain level of trust with our mechanic, is to suck it up and learn a bit of the trade. Fortunately, these days that is easy with the aid of the internet. And we don’t need to learn it ahead of time. We can learn it on an as-needed basis.
The way to do it, in this case, would be to defer all decisions on major spending until you’ve had time to poke around the resources you can find online and at least confirm that the mechanic isn’t making a major bluff.

When Queen Elizabeth I of England ascended to the throne, her reign was by no means assured. The tumultuous years under her father, brother, and sister had contributed to a political situation that was precarious at best. England was in a religious crisis that was threatening the stability of the kingdom, and was essentially broke. Elizabeth knew there were aspects of leading the country that were outside her circle of competence. She had an excellent education and had spent most of her life just trying to survive. Perhaps that is why she was able to identify and admit to what she didn’t know. In her first speech as Queen, Elizabeth announced, “I mean to direct all my actions by good advice and counsel.”9 After outlining her intent upon becoming Queen, she proceeded to build her Privy Council—effectively the royal advisory board. She didn’t copy her immediate predecessors, who had filled their councils with yes men or wealthy incompetents who happened to have the same religious values. She blended the old and the new to develop stability and achieve continuity. She kept the group small so that real discussions could happen. She wanted a variety of opinions that could be challenged and debated.10

In large measure due to the advice she received from this council, advice that was the product of open debate and that drew on the circles of competence of each of the participants, Elizabeth took England from a country of civil unrest and frequent persecution to one that inspired loyalty and creativity in its citizens. She sowed the seeds for the empire that would eventually come to control one quarter of the globe.

Conclusion

Critically, we must keep in mind that our circles of competence extend only so far. There are boundaries on the areas in which we develop the ability to make accurate decisions. In any given situation, there are people who have a circle, who have put in the time and effort to really understand the information. It is also important to remember that no one can have a circle of competence that encompasses everything. There is only so much you can know with great depth of understanding. This is why being able to identify your circle, and knowing how to move around outside of it, is so important.

«Ignorance more often begets confidence than knowledge.» Charles Darwin11

Staying in Your Circle

The idea of a circle of competence in the realm of investments was stated very well by Berkshire Hathaway’s Warren Buffett. When asked, he recommended that each individual stick to their area of special competence and be very reluctant to stray. For when we stray too far, we get into areas where we don’t even know what we don’t know. We may not even know the questions we need to ask.

To explain his point, Buffett gives the example of a Russian immigrant woman who ran one of his businesses, the famous Nebraska Furniture Mart. The CEO, Rose Blumkin, spoke little English and could barely read or write, yet had a head for two things: numbers and home furnishings. She stuck to those areas and built one of the country’s great retailing establishments. Here it is in Buffett’s words:

I couldn’t have given her $200 million worth of Berkshire Hathaway stock when I bought the business because she doesn’t understand stock. She understands cash. She understands furniture. She understands real estate.
She doesn’t understand stocks, so she doesn’t have anything to do with them. If you deal with Mrs. B in what I would call her circle of competence…. She is going to buy 5,000 end tables this afternoon (if the price is right). She is going to buy 20 different carpets in odd lots, and everything else like that [snaps fingers] because she understands carpets. She wouldn’t buy 100 shares of General Motors if it was at 50 cents a share.12

Her iron focus on the things she knew best was largely responsible for her massive success in spite of the obstacles she faced.

Supporting Idea: Falsifiability

Karl Popper wrote, “A theory is part of empirical science if and only if it conflicts with possible experiences and is therefore in principle falsifiable by experience.”13 The idea here is that if you can’t prove something wrong, you can’t really prove it right either. Thus, in Popper’s words, science requires testability: “If observation shows that the predicted effect is definitely absent, then the theory is simply refuted.” This means a good theory must have an element of risk to it—namely, it has to risk being wrong. It must be able to be proven wrong under stated conditions.

In a true science, as opposed to a pseudo-science, the following statement can easily be made: “If x happens, it would show demonstrably that theory y is not true.” We can then design an experiment, a physical one or sometimes a thought experiment, to figure out if x actually does happen. Falsification is the opposite of verification; you must try to show the theory is incorrect, and if you fail to do so, you actually strengthen it. To understand how this works in practice, think of evolution. As mutations appear, natural selection eliminates what doesn’t work, thereby strengthening the fitness of the rest of the population.

Consider Popper’s discussion of falsifiability in the context of Freud’s psychoanalytic theory, which is broadly about the role of repressed childhood memories in influencing our unconscious, in turn affecting our behavior. Popper was careful to say that it is not possible to prove Freudianism either true or untrue, even in part. We can say only that we don’t know whether it’s true, because it does not make specific testable predictions. It may have many kernels of truth in it, but we can’t tell. The theory would have to be restated in a way that allows experience to refute it.

Another interesting piece of Popper’s work was an attack on what he called “historicism”—the idea that history has fixed laws or trends that inevitably lead to certain outcomes. This is where we use examples from the past to draw definite conclusions about what is going to happen in the future. Popper considered this kind of thinking pseudoscience, or worse—a dangerous ideology that tempts wannabe state planners and utopians to control society. He did not consider such historicist doctrines falsifiable. There is no way, for example, to test whether there is a Law of Increasing Technological Complexity in human society, which many are tempted to claim these days, because it is not actually a testable hypothesis. Instead of calling them interpretations, we call them laws, or some similarly connotative word that implies an unchanging and universal state that is not open to debate, giving them an authority they haven’t earned. Too frequently, these postulated laws become immune to falsifying evidence—any new evidence is interpreted through the lens of the theory.
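To make Popper’s asymmetry concrete, here is a toy sketch in Python (the function names, the “set of forbidden observations” framing, and the swan example are illustrative assumptions, not anything from Popper or this text). A single forbidden observation refutes a theory outright, while any number of friendly observations only corroborate it; and a doctrine that forbids nothing, like the postulated laws of history above, cannot be put at risk at all:

```python
# Toy illustration of the falsifiability criterion. A theory is modeled,
# very crudely, as the set of observations it forbids.

def is_falsifiable(forbidden: set) -> bool:
    """A theory is falsifiable iff its class of potential falsifiers is
    non-empty, i.e., it sticks its neck out and forbids something."""
    return len(forbidden) > 0

def test_theory(forbidden: set, observed: set) -> str:
    """One forbidden observation refutes the theory; the absence of any
    merely corroborates it. Corroboration is never proof."""
    if forbidden & observed:
        return "refuted"
    return "corroborated so far"  # tomorrow's observation may still refute it

# "All swans are white" forbids the observation of a non-white swan.
swan_theory = {"a non-white swan is observed"}
print(is_falsifiable(swan_theory))                                 # True
print(test_theory(swan_theory, {"a white swan is observed"}))      # corroborated so far
print(test_theory(swan_theory, {"a non-white swan is observed"}))  # refuted

# A "law" phrased so that no observation could ever contradict it
# forbids nothing, and so can never be tested:
print(is_falsifiable(set()))                                       # False
```

Note the asymmetry the sketch encodes: one forbidden observation settles the question, while corroborations can pile up forever without ever becoming proof. The unfalsifiable “law,” by contrast, wins every argument and tells us nothing.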
«A theory is part of empirical science if and only if it conflicts with possible experiences and is therefore in principle falsifiable by experience.» Karl Popper

For example, we can certainly find confirmations for the idea that humans have progressed, in a specifically defined way, toward increasing technological complexity. But is that a Law of history, in the inviolable sense? Was it always going to be this way? No matter what the starting conditions or the developments along the way, were humans always going to increase their technological prowess? We really can’t say. Here we hit on the problem of trying to assert any fundamental laws by which human history must inevitably progress. Trend is not destiny. Even if we can derive and understand certain laws of human biological nature, the trends of history itself are dependent on conditions, and conditions change.

Bertrand Russell’s classic example of the chicken that gets fed every day is a great illustration of this concept.14 Daily feedings have been going on for as long as the chicken has observed, and thus it supposes that these feedings are a guaranteed part of its life and will continue in perpetuity. The feedings appear as a law until the day the chicken gets its head chopped off. They are then revealed to be a trend, not a predictor of the future state of affairs.

Another way to look at it is how we tend to view the worst events in history. We tend to assume that the worst that has happened is the worst that can happen, and then prepare for that. We forget that “the worst” smashed a previous understanding of what was the worst. Therefore, we need to prepare for the extremes allowable by physics rather than for what has happened until now.

Applying the filter of falsifiability helps us sort through which theories are more robust. If they can’t ever be proven false, because we have no way of testing them, then the best we can do is try to determine their probability of being true.

I don’t know what’s the matter with people: they don’t learn by understanding; they learn by some other way—by rote or something. Their knowledge is so fragile! Richard Feynman1

The People Who Appear in this Chapter

Socrates. 470-399 BCE - Greek philosopher. Famous for many philosophical conclusions, like “the only thing I know is that I know nothing”, he didn’t actually write any of his philosophy down; thus we have to thank those who came after, especially Plato, for preserving his legacy.

Warren, Robin. 1937 - Australian pathologist.

Marshall, Barry. 1951 - Australian physician. He and Warren shared the Nobel Prize in Physiology or Medicine in 2005.

Grandin, Temple. 1947 - American professor of animal science. In addition to her contributions to livestock welfare, she invented the “hug box” device to calm those on the autism spectrum. Autistic herself, she is the subject of the movie Temple Grandin, starring Claire Danes.

First Principles Thinking

First principles thinking is one of the best ways to reverse-engineer complicated situations and unleash creative possibility. Sometimes called reasoning from first principles, it’s a tool to help clarify complicated problems by separating the underlying ideas or facts from any assumptions based on them. What remain are the essentials. If you know the first principles of something, you can build the rest of your knowledge around them to produce something new. The idea of building knowledge from first principles has a long tradition in philosophy.
In the Western canon it goes back to Plato and Socrates, with significant contributions from Aristotle and Descartes. Essentially, they were looking for the foundational knowledge that would not change and that we could build everything else on, from our ethical systems to our social structures.

First principles thinking doesn’t have to be quite so grand. When we do it, we aren’t necessarily looking for absolute truths. Millennia of epistemological inquiry have shown us that these are hard to come by, and the scientific method has demonstrated that knowledge can only be built when we are actively trying to falsify it (see Supporting Idea: Falsifiability). Rather, first principles thinking identifies the elements that are, in the context of any given situation, non-reducible. First principles do not provide a checklist of things that will always be true; our knowledge of first principles changes as we understand more. They are the foundation on which we must build, and thus will be different in every situation, but the more we know, the more we can challenge. For example, if we are considering how to improve the energy efficiency of a refrigerator, then the laws of thermodynamics can be taken as first principles. However, a theoretical chemist or physicist might want to explore entropy, and thus further break the second law into its underlying principles and the assumptions made because of them. First principles are the boundaries we have to work within in any given situation—so when it comes to thermodynamics, an appliance maker might have different first principles than a physicist.

Techniques for establishing first principles

If we never learn to take something apart, test our assumptions about it, and reconstruct it, we end up bound by what other people tell us—trapped in the way things have always been done. When the environment changes, we just continue as if things were the same, making costly mistakes along the way. Some of us are naturally skeptical of what we’re told. Maybe it doesn’t match up with our experiences. Maybe it’s something that used to be true but isn’t true anymore. And maybe we just think very differently about something. When it comes down to it, everything that is not a law of nature is just a shared belief. Money is a shared belief. So is a border. So is bitcoin. So is love. The list goes on.

If we want to identify the principles in a situation, to cut through the dogma and the shared belief, there are two techniques we can use: Socratic questioning and the Five Whys.

Socratic questioning can be used to establish first principles through stringent analysis. It is a disciplined questioning process used to establish truths, reveal underlying assumptions, and separate knowledge from ignorance. The key distinction between Socratic questioning and ordinary discussion is that the former seeks to draw out first principles in a systematic manner. Socratic questioning generally follows this process:

1. Clarifying your thinking and explaining the origins of your ideas. (Why do I think this? What exactly do I think?)
2. Challenging assumptions. (How do I know this is true? What if I thought the opposite?)
3. Looking for evidence. (How can I back this up? What are the sources?)
4. Considering alternative perspectives. (What might others think? How do I know I am correct?)
5. Examining consequences and implications. (What if I am wrong? What are the consequences if I am?)
6. Questioning the original questions. (Why did I think that? Was I correct? What conclusions can I draw from the reasoning process?)
Socratic questioning stops you from relying on your gut and limits strong emotional responses. This process helps you build something that lasts.

The Five Whys is a method rooted in the behavior of children. Children instinctively think in first principles. Just like us, they want to understand what’s happening in the world. To do so, they intuitively break through the fog with a game some parents have come to dread, but which is exceptionally useful for identifying first principles: repeatedly asking “why?”

The goal of the Five Whys is to land on a “what” or a “how”. It is not about introspection, such as “Why do I feel like this?” Rather, it is about systematically delving further into a statement or concept so that you can separate reliable knowledge from assumption. If your “whys” result in a statement of falsifiable fact, you have hit a first principle. If they end up with a “because I said so” or an “it just is”, you know you have landed on an assumption that may be based on popular opinion, cultural myth, or dogma. These are not first principles.

There is no doubt that both of these methods slow us down in the short term. We have to pause, think, and research. They seem to get in the way of what we want to accomplish. And after we do them a couple of times, we realize that after one or two questions we are often lost. We actually don’t know how to answer most of the questions. But when we are confronted with our own ignorance, we can’t just give up or resort to self-defense. If we do, we will never identify the first principles we have to work with, and will instead make mistakes that slow us down in the long term.

«Science is much more than a body of knowledge. It is a way of thinking.» Carl Sagan2

First principles thinking as a way to blow past inaccurate assumptions

The discovery that a bacterium, not stress, actually caused the majority of stomach ulcers is a great example of what can be accomplished when we push past assumptions to get at first principles. Since the discovery of bacteria, scientists had thought that bacteria could not grow in the stomach on account of its acidity. If you had surveyed both doctors and medical research scientists in the 1960s or 1970s, they likely would have postulated this as a first principle. When a patient came in complaining of stomach pain, no one ever looked for a bacterial cause. It turned out, however, that a sterile stomach was not a first principle. It was an assumption. As Kevin Ashton writes in his book on creativity, discovery, and invention, “the dogma of the sterile stomach said that bacteria could not live in the gut.”3 Because this dogma was taken as truth, for a long time no one ever looked for reasons it could be false. That changed for good with the discovery of the H. pylori bacterium and its role in stomach ulcers.

When pathologist Robin Warren started seeing bacteria in samples from patients’ stomachs, he realized that stomachs were not, in fact, sterile. He started collaborating with Barry Marshall, a gastroenterologist, and together they started seeing bacteria in loads of stomachs. If the sterile stomach wasn’t a first principle, then, when it came to stomachs, what was? Marshall, in an interview with Discover, recounts that Warren gave him a list of 20 patients who had been identified as possibly having cancer, but in whom he had instead found the same bacteria.
He said, “Why don’t you look at their case records and see if they’ve got anything wrong with them.” Since they now knew stomachs weren’t sterile, they could question all the associated dogma about stomach disease and use some Socratic-type questioning to work toward identifying the first principles at play. They spent years challenging their related assumptions, clarifying their thinking, and looking for evidence.4

Their story ultimately has a happy ending—Marshall and Warren were awarded the Nobel Prize in 2005, and stomach ulcers are now regularly and effectively treated with antibiotics, improving and saving the lives of millions of people. But many practitioners and scientists rejected their findings for decades. The dogma of the sterile stomach was so entrenched as a first principle that it was hard to admit it rested on some incorrect assumptions, assumptions which ultimately ended with the explanation, “because that’s just the way it is”. Even though, as Ashton notes, “H. pylori has now been found in medical literature dating back to 1875,” it was Warren and Marshall who were able to show that “because I said so” wasn’t enough to count the sterile stomach as a first principle.

Incremental innovation and paradigm shifts

To improve something, we need to understand why it is successful or not. Otherwise, we are just copying thoughts or behaviors without understanding why they worked. First principles thinking helps us avoid the problem of relying on someone else’s tactics without understanding the rationale behind them. Even incremental improvement is harder to achieve if we can’t identify the first principles.

Temple Grandin is famous for a couple of reasons. One, she is autistic, and was one of the first people to publicly disclose this fact and give insight into the inner workings of one type of autistic mind. Two, she is a scientist who has developed many techniques to improve the welfare of animals in the livestock industry. One of the approaches she pioneered was the curved cattle chute. Prior to her experiments, cattle were put in a straight chute. Curved chutes, on the other hand, “are more efficient for handling cattle because they take advantage of the natural behavior of cattle. Cattle move through curved races more easily because they have a natural tendency to go back to where they came from.”5

Of course, science doesn’t stop with one innovation, and animal scientists continue to study the best way to treat livestock animals. Stockmanship Journal presented research that questioned the efficiency of Grandin’s curved chute, demonstrating that sometimes the much simpler straight chute would achieve the same effect in terms of cattle movement. The journal sought out Grandin’s response, and it is invaluable for teaching us the necessity of first principles thinking. Grandin explains that curved chutes are not a first principle. She designed them as a tactic to address the first principle of animal handling she identified in her research—essentially, that reducing stress to the animals is the single most important aspect, affecting everything from conception rates to weight to immune systems. When designing a livestock environment, a straight chute can work as long as it is part of a system that reduces stress to the animals. You can change the tactics if you know the principles.6

Sometimes we don’t want to fine-tune what is already there. We are skeptical, or curious, and are not interested in accepting what already exists as our starting point.
So when we start with the idea that the way things are might not be the way they have to be, we put ourselves in the right frame of mind to identify first principles. The real power of first principles thinking is moving away from random change and toward choices that have a real possibility of success.

[Image: Curved cattle chutes improve animal welfare by working with the natural behavior of the animals.]

Starting in the 1970s, scientists began to ask: what are the first principles of meat? The answers generally include taste, texture, smell, and use in cooking. Do you know what is not a first principle of meat? Once being part of an animal. Perhaps most important to consumers is the taste. Less important is whether it was actually once part of a cow. Researchers then looked at why meat tastes like meat. Part of the answer is a chemical reaction between sugars and amino acids during cooking, known as the Maillard reaction. This is what gives meat its flavor and smell. By replicating this exact reaction, scientists expect to be able to replicate the first principles of meat: taste and scent. In doing so, they will largely eliminate the need to raise animals for consumption. Instead of looking for ways to improve existing constructs, like mitigating the environmental impacts of the livestock industry, around 30 laboratories worldwide are now developing the means to grow artificial meat. This lab-grown meat is close to having the constituent parts of meat. One food researcher described the product this way:

There is really a bite to it, there is quite some flavor with the browning. I know there is no fat in it so I didn’t really know how juicy it would be, but there is … some intense taste; it’s close to meat, it’s not that juicy, but the consistency is perfect…. This is meat to me...it’s really something to bite on and I think the look is quite similar.7

This quote illustrates how artificial meat combines the core properties of meat to form a viable replacement, thereby addressing some significant environmental and ethical concerns.

«As to methods, there may be a million and then some, but principles are few. The man who grasps principles can successfully select his own methods. The man who tries methods, ignoring principles, is sure to have trouble.» Harrington Emerson8

Conclusion

Reasoning from first principles allows us to step outside of history and conventional wisdom and see what is possible. When you really understand the principles at work, you can decide if the existing methods make sense. Often they don’t. Many people mistakenly believe that creativity is somethi