Summary

This document discusses several concepts related to unintended consequences in systems, such as Goodhart's Law and the cobra effect. It examines how incentives can shape behavior in ways that are unexpected or undesirable. Examples from various spheres including healthcare and government policies are provided to illustrate these concepts.

Full Transcript

rewarded by the yard, it was woven loosely to make the yarn yield more yards. If the output of nails was determined by their number, factories produced huge numbers of pinlike nails; if by weight, smaller numbers of very heavy nails. The satiric magazine Krokodil once ran a cartoon of a factory manager proudly displaying his record output, a single gigantic nail suspended from a crane.

Goodhart's law summarizes the issue: When a measure becomes a target, it ceases to be a good measure. This more common phrasing is from Cambridge anthropologist Marilyn Strathern in her 1997 paper "'Improving Ratings': Audit in the British University System." However, the "law" is named after English economist Charles Goodhart, whose original formulation, in a conference paper presented at the Reserve Bank of Australia in 1975, stated: "Any observed statistical regularity will tend to collapse once pressure is placed upon it for control purposes." Social psychologist Donald T. Campbell formulated a similar "law" (known as Campbell's law) in his 1979 study, "Assessing the Impact of Planned Social Change." He explains the concept a bit more precisely: "The more any quantitative social indicator is used for social decision-making, the more subject it will be to corruption pressures and the more apt it will be to distort and corrupt the social processes it is intended to monitor."

Both describe the same basic phenomenon: When you try to incentivize behavior by setting a measurable target, people focus primarily on achieving that measure, often in ways you didn't intend. Most importantly, their focus on the measure may not correlate to the behavior you hoped to promote. High-stakes testing culture—be it for school examinations, job interviews, or professional licensing—creates perverse incentives to "teach to the test," or worse, cheat. In the city of Atlanta in 2011, 178 educators were implicated in a widespread scandal involving correcting student answers on standardized tests, ultimately resulting in eleven convictions and sentences of up to twenty years on racketeering charges. Similarly, hospitals and colleges have been increasingly criticized for trying to achieve rankings at the expense of providing quality care and education, the very things the rankings are supposed to be measuring.

In A Short History of Nearly Everything, Bill Bryson describes a situation in which paleontologist Gustav Heinrich Ralph von Koenigswald accidentally created perverse incentives on an expedition:

Koenigswald's discoveries might have been more impressive still but for a tactical error that was realized too late. He had offered locals ten cents for every piece of hominid bone they could come up with, then discovered to his horror that they had been enthusiastically smashing large pieces into small ones to maximize their income.

It's like a wish-granting genie who finds loopholes in your wishes, meeting the letter of the wish but not its spirit, and rendering you worse off than when you started. In fact, there is a mental model for this more specific situation, called the cobra effect, describing when an attempted solution actually makes the problem worse. This model gets its name from a situation involving actual cobras. When the British were governing India, they were concerned about the number of these deadly snakes, and so they started offering a monetary reward for every snake brought to them. Initially the policy worked well, and the cobra population decreased.
But soon, local entrepreneurs started breeding cobras just to collect the bounties. After the government found out and ended the policy, all the cobras that were being used for breeding were released, increasing the cobra population even further. A similar thing happened under French rule of Vietnam. In Hanoi, the local government created a bounty program for rats, paying the bounty based on a rat's tail. Enterprising ratcatchers, however, would catch and release the rats after just cutting off their tails; that way the rats could go back and reproduce. Whenever you create an incentive structure, you must heed Goodhart's law and watch out for perverse incentives, lest you be overrun by cobras and rats!

The Streisand effect applies to an even more specific situation: when you unintentionally draw more attention to something when you try to hide it. It's named for entertainer Barbra Streisand, who sued a photographer and website in 2003 for displaying an aerial photo of her mansion, which she wanted to remain private. Before the suit, the image had been downloaded a total of six times from the site; after people saw news stories about the lawsuit, the site was visited hundreds of thousands of times, and now the photo is free to license and is on display on Wikipedia and many other places. As was said of Watergate, It's not the crime, it's the cover-up.

[Figure: Streisand Effect]

A related model to watch out for is the hydra effect, named after the Lernaean Hydra, a beast from Greek mythology that grows two heads for each one that is cut off. When you arrest one drug dealer, they are quickly replaced by another who steps in to meet the demand. When you shut down an internet site where people share illegal movies or music, more pop up in its place. Regime change in a country can result in an even worse regime. An apt adage is Don't kick a hornet's nest, meaning don't disturb something that is going to create a lot more trouble than it is worth.

With all these traps—Goodhart's law, along with the cobra, hydra, and Streisand effects—if you are going to think about changing a system or situation, you must account for and quickly react to the clever ways people may respond. There will often be individuals who try to game the system or otherwise subvert what you're trying to do for their personal gain or amusement.

If you do engage, another trap to watch out for is the observer effect, where there is an effect on something depending on how you observe it, or even who observes it. An everyday example is using a tire pressure gauge. In order to measure the pressure, you must also let out some of the air, reducing the pressure of the tire in the process. Or, when the big boss comes to town, everyone acts on their best behavior and dresses in nicer clothes. The observer effect is certainly something to be aware of when making actual measurements, but you should also consider how people might indirectly change their behavior as they become less anonymous. Think of how hard it is to be candid when you know the camera is rolling. Or how differently you might respond to giving a colleague performance feedback in an anonymous survey versus one with your name attached to it.

In "Chilling Effects: Online Surveillance and Wikipedia Use," Oxford researcher Jonathon Penney studied Wikipedia traffic patterns before and after the 2013 revelations by Edward Snowden about the U.S.
National Security Agency's internet spying tactics, finding a 20 percent decline in terrorism-related article views involving terms like al-Qaeda, Taliban, and car bomb. The implication is that when people realized they were being watched by their governments, some of them stopped reading articles that they thought could get them into trouble. The name for this concept is chilling effect.

[Figure: Wikipedia Chilling Effect]

In the legal context where the term chilling effect originated, it refers to when people feel discouraged, or chilled, from freely exercising their rights, for fear of lawsuits or prosecution. More generally, chilling effects are a type of observer effect where the threat of retaliation creates a change in behavior. Sometimes chilling effects are intentional, such as when someone is made an example of to send a message to others about how offenders will be treated. For instance, a company will sue another aggressively over its patents to scare off other companies that might be thinking of competing with them. Many times, though, chilling effects are unintentional. Mandated harassment reporting can give victims pause when contemplating reaching out for help, since they might not yet be ready for that level of scrutiny. Fear of harassment also curbs usage of social media. In a June 6, 2017, Pew Research study, 13 percent of respondents said they stopped using an online service and 27 percent said they chose not to post something online after witnessing online harassment toward others. In your personal relationships, you might find yourself walking on eggshells around a person you know has an anger management problem. Similarly, some romantic partners may not be totally honest about their relationship grievances if they perceive their partner as having one foot out the door.

Like the Wikipedia study discussed above, another unintentional chilling effect was found by an MIT study, "Government Surveillance and Internet Search Behavior," which showed that post-Snowden, people have also stopped searching for as many health-related terms on Google, even though the terms weren't directly related to illegal activity of any kind. As people understand more about corporate and government tracking, their searching of sensitive topics in general has been chilled. The authors noted: "Suppressing health information searches potentially harms the health of search engine users and, by reducing traffic on easy-to-monetize queries, also harms search engines' bottom line."

This negative unintended consequence could be considered collateral damage. In a military context, this term means injuries or damage inflicted on unintended (collateral) targets. You can apply this model to any negative side effects that result from an action. The U.S. government maintains a No Fly List of people who are prohibited from commercial air travel within, into, or out of the U.S. There have been many cases of people with the same names as those on the list who experienced the collateral damage of being denied boarding and missing flights, including a U.S. Marine who was prevented from boarding a flight home from his military tour in Iraq. When people are deported or jailed, even for good reason, collateral damage can be inflicted on their family members. For instance, the loss of income could take a financial toll, or children could experience the trauma of growing up without one or both parents, possibly ending up in foster care. Sometimes collateral damage can impact the entity that inflicted the damage in the first place, which is called blowback.
Blowback sometimes can occur well after the initial action. The U.S. supported Afghan insurgents in the 1980s against the USSR. Years later these same groups joined al-Qaeda to fight against the U.S., using some of the very same weapons the U.S. had provided decades earlier.

Like Goodhart's law and related models, observer and chilling effects concern unintended consequences that can happen after you take a deliberate action—be it a policy, experiment, or campaign. Again, it is best to think ahead about what behaviors you are actually incentivizing by your action, how there might be perverse incentives at play, and what collateral damage or even blowback these perverse incentives might cause.

Take medical care as a modern example. Fee-for-service medicine, prevalent in the United States, pays healthcare providers based on how much treatment is provided. Quite simply, the more treatment that is provided, the more money that is made, effectively incentivizing quantity of treatment. If you have a surgery, any additional care required (follow-up surgeries, tests, physical therapy, medications, etc.) will be billed separately by the provider conducting the treatment, including any care resulting from surgical complications that might arise. Each piece of the treatment is generally individually profitable to the providers.

With value-based care, by contrast, there is usually just one reimbursement payment amount for everything related to the surgery, including much of the directly related additional care. This payment scheme therefore incentivizes quality over quantity, as the healthcare provider conducting the surgery is also on the hook for some of the additional care, sometimes even if it is administered by other providers. It also focuses healthcare providers on determining the exact right amount of treatment, because they face financial consequences for over- or under-providing care. This straightforward change in how medicine is billed (one lump-sum payment to one provider versus many payments to multiple providers) significantly changes the incentives for healthcare providers. The Medicare system in the United States is shifting to this value-based reimbursement model both to reduce costs and to improve health outcomes, taking advantage of its better-aligned incentives between payment and quality care.

In other words, seemingly small changes in incentive structures can really matter. You should align the outcome you desire as closely as possible with the incentives you provide. You should expect people generally to act in their own perceived self-interest, and so you want to be sure this perceived self-interest directly supports your goals.

IT'S GETTING HOT IN HERE

In the first section of this chapter, we warned about the tyranny of small decisions, where a series of isolated and seemingly good decisions can nevertheless add up to a bad outcome. There is a broader class of unintended consequences to similarly watch out for, which also involves making seemingly good short-term decisions that can still add up to a bad outcome in the long term. The mental model often used to describe this class of unintended consequences is called the boiling frog: Suppose a frog jumps into a pot of cold water. Slowly the heat is turned up and up and up, eventually boiling the frog to death.
It turns out real frogs generally jump out of the hot water in this situation, but the metaphorical boiling frog persists as a useful mental model describing how a gradual change can be hard to react to, or even perceive. The boiling frog has been used as a cautionary tale in a variety of contexts, from climate change to abusive relationships to the erosion of personal privacy. It is sometimes paired with another animal metaphor, also scientifically untrue—that of the ostrich with its head in the sand, ignoring the signs of danger. In each case the unintended consequence of not acting earlier is eventually an extremely unpleasant state that is hard to get out of—global warming, domestic violence, mass surveillance.

These unintended consequences are likely to arise when people don't plan for the long term. From finance, short-termism describes these types of situations, when you focus on short-term results, such as quarterly earnings, over long-term results, such as five-year profits. If you focus on just short-term financial results, you won't invest enough in the future. Eventually you will be left behind by competitors who are making those long-term investments, or you could be swiftly disrupted by new upstarts (which we cover in Chapter 9).

There are many examples of the deleterious effects of short-termism in everyday life. If you put off learning new skills because of the tasks in front of you, you will never expand your horizons. If you decorate your house one piece at a time in isolation, you won't end up with a cohesive décor. If there are additions to the tax code without any thought to long-term simplification, it eventually becomes a bloated mess.

The software industry has a name for the consequences of short-termism: technical debt. The idea comes from writing code: if you prioritize short-term code fixes, or "hacks," over long-term, well-designed code and processes, then you accumulate debt that will eventually have to be paid down by future code rewrites and refactors. Accumulating technical debt isn't necessarily harmful—it can help projects move along faster in the short term—but it should be done as a conscientious observer, not as an unaware boiling frog.

If you have been involved in any small home repairs, you're probably familiar with this model. When something small is broken, many people opt for a short-term fix today, DIY-style (or even duct-tape-style), because it is cheaper and faster. However, these "fixes," which may not be up to building standards, may cost you in the long run. In particular, the item may need to be repaired again at greater cost, such as when you want to sell your home.

Startup culture has extended this concept to other forms of "debt": Management debt is the failure to put long-term management team members or processes in place. Design debt means not having a cohesive product design language or brand style guide. Diversity debt refers to neglecting to make necessary hires to ensure a diverse team. This model can likewise be extended to any area to describe the unintended consequences of short-term thinking: relationship debt, diet debt, cleaning debt.

[Figure: Technical Debt, customer's view versus developer's view]

In these scenarios, you need to keep up with your "payments" or else the debt can become overwhelming: the out-of-control messy house, the expanding waistline, or the deteriorating relationship. These outstanding debts impact your long-term flexibility.
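To make the software version of this concrete, here is a deliberately simplified sketch (the scenario and names are hypothetical, not from the text): the quick hack ships today but buries an assumption that must be paid down later, while the refactored version costs a little more now and stays cheap to extend.

```python
# Hypothetical example of technical debt (names and scenario invented for illustration).

# The quick hack: a one-off special case wired directly into the pricing logic.
def price_with_discount_hack(customer_id: str, price: float) -> float:
    if customer_id == "ACME":  # added under deadline pressure; every new deal means another branch
        return price * 0.9
    return price

# Paying down the debt: discounts become data, so new cases don't require new code paths.
DISCOUNTS = {"ACME": 0.10}

def price_with_discount(customer_id: str, price: float) -> float:
    return price * (1 - DISCOUNTS.get(customer_id, 0.0))

print(price_with_discount_hack("ACME", 100.0))  # 90.0
print(price_with_discount("ACME", 100.0))       # 90.0
```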
The general model for this impact comes from economics and is called path dependence, meaning that the set of decisions, or paths, available to you now is dependent on your past decisions. Sometimes an initial decision or event may seem innocuous at first, but it turns out to strongly influence or limit your possible outcomes in the long run. As a small company, you may choose to use a piece of software for project management without giving it much thought. As you grow, though, you'll have a large group of people using this software, which may eventually turn out to be suboptimal; however, all your data is now stored there, and it would be a huge disruption to switch products. On a personal level, many people are likely to stay near the town where they went to school once they graduate. This creates a massive long-term impact on their available career and family choices. The same thing can happen on a larger scale. Similar types of businesses often congregate together—jewelry stores, furniture depots, car dealerships. In these cases, whichever store came first created a path dependence for all to follow.

The song "I Know an Old Lady," written by Rose Bonne and Alan Mills in 1952, captures the dangers of short-termism and path dependence if left unchecked:

There was an old lady who swallowed a fly;
I don't know why she swallowed a fly—perhaps she'll die!

There was an old lady who swallowed a spider;
That wriggled and jiggled and tickled inside her!
She swallowed the spider to catch the fly;
I don't know why she swallowed a fly—perhaps she'll die! ...

There was an old lady who swallowed a cow;
I don't know how she swallowed a cow!
She swallowed the cow to catch the goat,
She swallowed the goat to catch the dog,
She swallowed the dog to catch the cat,
She swallowed the cat to catch the bird,
She swallowed the bird to catch the spider,
That wriggled and jiggled and tickled inside her!
She swallowed the spider to catch the fly;
I don't know why she swallowed a fly—perhaps she'll die!

There was an old lady who swallowed a horse; ...
She died, of course!

To escape the fate of the old lady or the boiling frog, you need to think about the long-term consequences of short-term decisions. For any decision, ask yourself: What kind of debt am I incurring by doing this? What future paths am I taking away by my actions today?

Another model from economics offers some reprieve from the limitations of path dependence: preserving optionality. The idea is to make choices that preserve future options. Maybe as a business you put some excess profits into a rainy-day fund, or as an employee you dedicate some time to learning new skills that might give you options for future employment. Or, when faced with a decision, maybe you can delay deciding at all (see thinking gray in Chapter 1) and, instead, continue to wait for more information, keeping your options open until you are more certain of a better path to embark upon.

Many college freshmen have some idea of what they want to study, but most are not ready to immediately select their major. When selecting a college, it would be a good idea for a student to choose a school that is strong in several fields of interest, not just the one they think they might pick, which preserves their options until they are really ready to decide. As with most things, though, preserving options must be done in moderation. Even if you choose a college with many possible majors, at some point you do need to pick one in order to be able to graduate on time.
When selecting a graduate school, Lauren chose a program in operations research as a way of preserving optionality, rather than a more narrowly tailored program in biostatistics. However, not having a strong idea of what area she wanted to research for her dissertation ultimately resulted in an extra year of school. The downside of keeping many options open is that it often requires more resources, increasing costs. Think of going to school while you also have a full-time job, maintaining multiple homes, or exploring several lines of business in one parent company. You need to find the right balance between preserving optionality and path dependence.

One model that can help you figure out how to strike this balance in certain situations is the precautionary principle: when an action could possibly create harm of an unknown magnitude, you should proceed with extreme caution before enacting the policy. It's like the medical principle of "First, do no harm." For example, if there is reason to believe a substance might cause cancer, the precautionary principle advises that it is better to control it tightly now while the scientific community figures out the degree of harm, rather than risk people getting cancer unnecessarily because the substance has not been controlled. In 2012, the European Union adopted the precautionary principle formally with the Treaty on the Functioning of the European Union:

Union policy on the environment shall aim at a high level of protection taking into account the diversity of situations in the various regions of the Union. It shall be based on the precautionary principle and on the principles that preventive action should be taken, that environmental damage should as a priority be rectified at source and that the polluter should pay.

On an individual level, the precautionary principle instructs you to take pause when an action could possibly cause you significant personal harm. That seems obvious, but people engage in risky behavior all the time (e.g., drunk or reckless driving). Beyond physical harm, the same concept applies to other kinds of harm: for example, financial harm (gambling or accepting a bad loan) and emotional harm (infidelity or going too far in an argument).

These mental models are the most useful when thinking about existential risks. After all, in the tale of the boiling frog, the frog dies. Therefore, you want first to assess what substantial harms could arise in the long term, then work backward to assess how your short-term decisions (or lack thereof) might be contributing to long-term negative scenarios (a process that we cover in more depth in Chapter 6). With this knowledge, you can then take the necessary level of precaution, paying down technical debt as needed, happily preventing yourself from becoming the boiling frog.

TOO MUCH OF A GOOD THING

On the side of an ancient Greek temple, home to the Oracle of Delphi, was inscribed the precept Nothing in excess. Our modern equivalent is too much of a good thing. It's natural to want more of something good, but too much of it can be bad. One slice of cookie dough cheesecake from The Cheesecake Factory is amazing; downing a whole cheesecake will probably cause you some problems, though. The same goes for information. Complaints from people overwhelmed by too much information are not new. The Roman writer Seneca said, "The abundance of books is a distraction"—in the first century A.D.!
Today, researching almost anything online can make your head spin, from the mundane, such as wading through all the Amazon products and reviews for coffeemakers, to the life-changing, such as comparing colleges or choosing a new city to move to. There is just so much data and advice on almost any subject, it can easily be overwhelming. Of course, you need some information to make good decisions, but too much information leads to information overload, which complicates a decision-making process. The excess information can overload the processing capacity of the system, be it a single person, group, or even computer, causing decision making to take too long.

There is a name for this unintended consequence: analysis paralysis, where your decision making suffers from paralysis because you are over-analyzing the large amount of information available. This is why you can spend too much time trying to make that coffeemaker decision or choosing where to go out to dinner when faced with an endless list of choices from Yelp. More seriously, people often stay in a job they don't like because they are unsure of what to do next given all the possibilities. The model perfect is the enemy of good drives home this point—if you wait for the perfect decision, or perfect anything, really, you may be waiting a long time indeed. And by not making a choice, you are actually making a choice: you are choosing the status quo, which could be considerably worse than one of the other choices you could already have made.

There is a natural conflict between the desire to make decisions quickly and the feeling that you need to accumulate more information to be sure you are making the right choice. You can deal with this conflict by categorizing decisions as either reversible decisions or irreversible decisions. Irreversible decisions are hard if not impossible to unwind. And they tend to be really important. Think of selling your business or having a kid. This model holds that these decisions require a different decision-making process than their reversible counterparts, which should be treated much more fluidly. In a letter to shareholders, Amazon CEO Jeff Bezos stressed the importance of this model:

Some decisions are consequential and irreversible or nearly irreversible—one-way doors—and these decisions must be made methodically, carefully, slowly, with great deliberation and consultation. If you walk through and don't like what you see on the other side, you can't get back to where you were before. ... But most decisions aren't like that—they are changeable, reversible—they're two-way doors. If you've made a suboptimal [reversible] decision, you don't have to live with the consequences for that long. You can reopen the door and go back through. ... As organizations get larger, there seems to be a tendency to use the heavy-weight [irreversible] decision-making process on most decisions, including many [reversible] decisions. The end result of this is slowness, unthoughtful risk aversion, failure to experiment sufficiently, and consequently diminished invention.

Another way to help combat analysis paralysis is to limit choice, because the more choices you have, the harder it is to choose between them. In the early 1950s, psychologists William Hick and Ray Hyman separately conducted a number of experiments to try to quantify the mathematical relationship between the number of choices given and how long it takes to decide. They found that a greater number of choices increased the decision time logarithmically, in a formulation now known as Hick's law.
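The relationship is usually written as T = a + b · log2(n + 1), where n is the number of equally likely choices and a and b are empirically fitted constants; that formula is the standard textbook statement of the Hick–Hyman law rather than something quoted in this passage. A minimal sketch, with made-up constants purely for illustration:

```python
import math

def hick_decision_time(n_choices: int, a: float = 0.2, b: float = 0.15) -> float:
    """Estimated decision time in seconds for n equally likely choices,
    T = a + b * log2(n + 1). The constants a and b here are illustrative
    placeholders, not measured values."""
    return a + b * math.log2(n_choices + 1)

for n in (2, 4, 8, 16, 32):
    print(f"{n:2d} choices -> {hick_decision_time(n):.2f} s")

# Doubling the number of choices adds a roughly constant increment of time
# rather than doubling it: the growth is logarithmic, not linear.
```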
Hick's law is regularly cited as an important factor in user-experience designs, such as in the design of restaurant menus, website navigation, and forms (offline or online). For instance, on a menu, having a vegetarian section allows vegetarians to efficiently narrow the sections of the menu they should read through. Being able to determine quickly whether there are enough vegetarian options on a menu might be a big factor in whether a family with a vegetarian would choose to eat at your restaurant. In your own life, you can use Hick's law to remember that decision time is going to increase with the number of choices, and so if you want people to make quick decisions, reduce the number of choices. One way to do this is to give yourself or others a multi-step decision with fewer choices at each step, such as asking what type of restaurant to go to (Italian, Mexican, etc.), and then offering another set of choices within the chosen category.

In addition to increased decision-making time, there is evidence that a wealth of options can create anxiety in certain contexts. This anxiety is known as the paradox of choice, named after a 2004 book of the same name by American psychologist Barry Schwartz. Schwartz explains that an overabundance of choice, the fear of making a suboptimal decision, and the potential for lingering regret following missed opportunities can leave people unhappy. In the context of seeking romantic relationships, people are often reminded that there are "plenty of fish in the sea." With so many fish, this can leave you to question how you will know when you have found "the one." Similarly, you might be left questioning whether a past partner was "the one that got away." This anxiety also arises with smaller-scale decisions, such as when you have young kids and you find yourself finally with an opportunity to go out for the night: Do you go out with friends or with just your partner? Do you go to a nice restaurant or the movies? If the movies, which one? The more choices, the more chance you have for regret later.

While we, the authors, are reasonably happy people, we have experienced the anxiety surrounding the paradox of choice with our own life choices. We were lucky to have sold a startup company at a young age, leaving us with essentially unlimited career options. At the time of the sale, Lauren had just accepted a position at GlaxoSmithKline and was content with continuing down that path. However, over time she wondered whether this was the right path and found herself constantly reading job postings. She also spent a lot of time thinking about going back to school in a different field, fulfilling a different childhood dream, such as becoming an architect or designing prosthetics. Gabriel was left with an entirely open-ended future and took some time off. But soon he started asking, What next? Should I start another for-profit company? Should Lauren and I start a nonprofit together? Write a book? The choices were and are endless. Don't get us wrong—we aren't complaining. We are just acknowledging that we personally sympathize with this model.

Hick's law and the paradox of choice explain downsides of having many choices. There is also a model that explains the downside of making many decisions in a limited period: decision fatigue. As you make more and more decisions, you get fatigued, leading to a worsening of decision quality.
After taking a mental break, you effectively reset and start making higher-quality decisions again. The 2011 study "Extraneous Factors in Judicial Decisions" describes the impact of decision fatigue on parole boards deciding whether to grant freedom to prisoners: "We find that the percentage of favorable rulings drops gradually from [about] 65% to nearly zero within each decision session and returns abruptly to [about] 65% after a break. Our findings suggest that judicial rulings can be swayed by extraneous variables that should have no bearing on legal decisions."

Some extremely productive people, including Steve Jobs and Barack Obama, have tried to combat decision fatigue by reducing the number of everyday decisions, such as what to eat or wear, so that they can reserve their decision-making faculties for more important decisions. Barack Obama chose to wear only blue or gray suits and said of this choice, "I'm trying to pare down decisions. I don't want to make decisions about what I'm eating or wearing. Because I have too many other decisions to make." Gabriel also tends to do this to some extent, usually wearing one of seven identical pairs of dark gray jeans and often eating the same lunch for weeks on end. He swears that it really does make things easier and saves time! If you want more variety in your life, one suggestion is to front-load the decisions on your outfits and meals for the week to Sunday. Making these decisions on a usually lower-stress day can free up your decision-making capacity for the workweek. Meal planning and even some meal prep on the weekend can help keep you from making unhealthy choices when you are overwhelmed later in the week.

In this chapter we've covered an array of unintended consequences, from market failure to perverse incentives, from too much focus on the short term to too much of a good thing. Most generally, consider heeding Murphy's law: Anything that can go wrong, will go wrong. It's named after aerospace engineer Edward Murphy, from his remarks after his measurement devices failed to perform as expected. It was intended as a defensive suggestion, to remind you to be prepared and to have a plan for when things go wrong. It is unfortunately impossible to account for all possible unintended consequences. However, the mental models in this chapter can help you identify and avoid negative unintended consequences in a large array of situations. Look around—when you see unintended consequences in a situation, be it personal, professional, or in the wider world, one of these models is usually lurking behind. Next time, see if you can identify the underlying mental model behind the situation, and also try to think ahead about how it might apply to your own plans under consideration.

KEY TAKEAWAYS

- In any situation where you can spot spillover effects (like a polluting factory), look for an externality (like bad health effects) lurking nearby. Fixing it will require intervention either by fiat (like government regulation) or by setting up a marketplace system according to the Coase theorem (like cap and trade).
- Public goods (like education) are particularly susceptible to the tragedy of the commons (like poor schools) via the free rider problem (like not paying taxes).
- Beware of situations with asymmetric information, as they can lead to principal-agent problems.
- Be careful when basing rewards on measurable incentives, because you are likely to cause unintended and undesirable behavior (Goodhart's law).
- Short-termism can easily lead to the accumulation of technical debt and create disadvantageous path dependence; to counteract it, think about preserving optionality and keep in mind the precautionary principle.
- Internalize the distinction between irreversible and reversible decisions, and don't let yourself succumb to analysis paralysis for the latter.
- Heed Murphy's law!

3 Spend Your Time Wisely

POLARIS IS THE BRIGHTEST STAR in the Little Dipper, a constellation also known as Ursa Minor, or Little Bear. You can easily find Polaris in the night sky because it is the last star in the handle of the Little Dipper, and the two outermost stars on the ladle of the Big Dipper point directly to it.

[Figure: Finding Polaris]

Since at least as far back as the Middle Ages, Polaris has played a critical role in navigation. Given its unique location, almost directly above the North Pole, Polaris appears nearly fixed in the night sky, despite the Earth's rotation. You can know roughly what direction you're headed in just by looking up at it. If you want to head north, simply orient yourself toward Polaris.

[Figure: A typical northern hemisphere star trail with Polaris in the center]

In the business world, there is a mental model that draws on Polaris for inspiration, called north star, which refers to the guiding vision of a company. For example, DuckDuckGo's north star is "to raise the standard of trust online." If you know your north star, you can point your actions toward your desired long-term future. Without a north star, you can be easily "lost at sea," susceptible to the unintended consequences of short-termism (see Chapter 2).

For an individual, it is important to have a personal north star, or mission statement. Do you have one? If not, you should think about drafting one for yourself. If you can orient yourself toward your north star and prioritize the right activities, you can accomplish amazing things over time. There are infinite possibilities, though here are a few examples to get you thinking:

- Being the best parent I can be
- Helping refugees as best I can
- Saving enough to retire by age forty
- Maximizing my positive impact on homelessness
- Living simply and being happy
- Advancing the science of human longevity

It's okay if your north star evolves as you progress toward it. You may discover greater clarity about what you want to accomplish, or a life event (e.g., marriage, kids, career/location change) may propel you in another direction. You may also need a new north star if you reach your destination! For instance, a teenager's north star might be getting into a certain university program, but once that has been reached, a new north star will be needed. A north star is a long-term vision, so it is also okay if you don't reach it anytime soon. However, if you don't know where you want to go, how do you expect to ever get there? Your north star will help guide you through various life choices, slowly but steadily navigating you closer to your goals.

In his 1996 book The Road Ahead, entrepreneur and philanthropist Bill Gates commented on this power of incremental progress: "People often overestimate what will happen in the next two years and underestimate what will happen in the next ten." Gates wrote this statement in a business context, as a warning to not ignore far-off threats that can grow into major disrupters. That is, don't underestimate how far emerging competitors can advance or how much technology can change in ten years.
Think of how Netflix progressed in a decade from a tiny niche to disrupting the entire cable-television industry. This idea can also be powerful to you as an individual. Your incremental progress toward a goal may not be noticeable day to day. But over a long period of time, many small steps can get you really far if you stay pointed in the right direction.

If you put $1,000 in a savings account that pays 2 percent interest annually, the first year you will get $20 back. But the second year you will get a little more back ($20.40) because you also receive 2 percent interest on the $20 you received in interest the previous year. This is called compound interest, referring to the fact that your interest payments are growing over time, or compounding. Previous interest earned is added to the total amount each cycle, making a bigger base from which the next interest cycle is calculated. Investor Warren Buffett, at one point the richest person in the world, said, "My wealth has come from a combination of living in America, some lucky genes, and compound interest" (see birth lottery in Chapter 1). Compound interest explains why it's easy for the rich to get richer. They can make more money from investing their already abundant capital, as opposed to having to earn more money just from their labor.

In a personal context, as long as you are pointed toward your north star, you have the opportunity to take advantage of the same concept by compounding your ability to move in your chosen direction. That's because what you can accomplish draws on your cumulative knowledge, skills, and network. As these grow, so does your impact potential. For example, as you progress in an industry, your industry contacts naturally expand, and it becomes increasingly likely that someone you've built a relationship with will help you progress higher in your career, such as recruiting you for your next job, serving as a reference, or acting as a mentor.
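As a quick check of the arithmetic above, here is a minimal sketch of compounding $1,000 at 2 percent; the three-year horizon is arbitrary, chosen just to show the base growing each cycle.

```python
def compound(principal: float, rate: float, years: int) -> None:
    """Print each year's interest when the interest itself earns interest."""
    balance = principal
    for year in range(1, years + 1):
        interest = balance * rate
        balance += interest
        print(f"Year {year}: interest ${interest:.2f}, balance ${balance:.2f}")

compound(1000, 0.02, 3)
# Year 1: interest $20.00, balance $1020.00
# Year 2: interest $20.40, balance $1040.40
# Year 3: interest $20.81, balance $1061.21
```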
This chapter covers the mental models that you need (or need to avoid) to spend your precious time wisely, from the guiding light of the north star to the nitty-gritty of figuring out what to work on day to day, and how to complete those tasks most efficiently. Heed these mental models to get the most out of your future.

YOU CAN DO ANYTHING, BUT NOT EVERYTHING

Two-front wars played a major role in World Wars I and II, when Germany twice fought both Russia on its eastern front and Western allies on its western front. Each time, dividing its attention contributed significantly to Germany's eventual defeat. An anonymous saying captures this concept well: "If you chase two rabbits, both will escape." If you've ever had to supervise two or more children who don't want to do the same activity, you understand how challenging it is to fight a two-front war. In business, a two-front war can happen if you have competitors attacking you on both sides, for example, on both the lower end and the higher end in terms of price, squeezing your customer base down. In the recent past in the U.S., mid-tier grocers like A&P have been driven to bankruptcy, squeezed by Walmart, Costco, Aldi, Amazon, and others entering the grocery business on the lower end, and Whole Foods, Wegmans, and others on the higher end. Politicians often face a two-front war in which they are fighting on both sides of the political spectrum, with attacks from both the political right and left. A recent example is Hillary Clinton's 2016 U.S. presidential candidacy, where she faced a tough primary fight on the left from Bernie Sanders, and then in the general election she was still fighting for those voters while at the same time courting more-centrist voters.

You should be wary of fighting a two-front war, yet you probably do so every single day in the form of multitasking. When discussing intuition in Chapter 1, we explained that there are two types of thinking: low-concentration, autopilot thinking (for saying your name, walking, simple addition, etc.) and high-concentration, deliberate thinking (for everything else). You can fully perform only one high-concentration activity at a time. Your brain just isn't capable of simultaneously focusing on two high-concentration activities at once. If you attempt this, you will be forced to context-switch between the two activities. It's like when you are reading an article and pause to address an email that just came in. In that case, the context-switching is obvious, but the same thing happens if you're reading the article and someone starts talking to you at the same time. Your brain tries to handle both activities (reading and listening) by rapidly switching between them, and something has to give. This context-switching isn't instant, and so you end up either having to slow down one of the activities or doing one or both much more poorly.

The negative effects of multitasking (slow or poor performance) are sometimes acceptable if the activities are of low consequence, such as when you fold the laundry while watching TV, or listen to music while working out at the gym. In contrast, multitasking on activities of any significant consequence will be immediately problematic, or even deadly, as in the case of texting while driving. Additionally, all the context-switching that occurs when multitasking is wasted time and effort. Extra mental overhead is also required to keep track of multiple activities at once. Therefore, you should try to avoid multitasking on any consequential activity.

Focusing on one high-concentration activity at a time can also help you produce dramatically better results. That's because the best results rely on creative solutions, which often come from concentrating intently on one thing. Startup investor Paul Graham calls it the top idea in your mind in his 2010 essay of the same name:

Everyone who's worked on difficult problems is probably familiar with the phenomenon of working hard to figure something out, failing, and then suddenly seeing the answer a bit later while doing something else. There's a kind of thinking you do without trying to. I'm increasingly convinced this type of thinking is not merely helpful in solving hard problems, but necessary. The tricky part is, you can only control it indirectly. I think most people have one top idea in their mind at any given time. That's the idea their thoughts will drift toward when they're allowed to drift freely. And this idea will thus tend to get all the benefit of that type of thinking, while others are starved of it. Which means it's a disaster to let the wrong idea become the top one in your mind.

If you are constantly switching between activities, you don't end up doing much creative thinking at all. Author Cal Newport refers to the type of thinking that leads to breakthrough solutions as deep work. He advocates for dedicating long, uninterrupted periods of time to making progress on your most important problem.
In a November 6, 2014, lecture titled "How to Operate," entrepreneur and investor Keith Rabois tells a story about how Peter Thiel used this concept when he was CEO of PayPal:

[Peter] used to insist at PayPal that every single person could only do exactly one thing. And we all rebelled, every single person in the company rebelled to this idea. Because it's so unnatural, it's so different than other companies where people wanted to do multiple things, especially as you get more senior, you definitely want to do more things and you feel insulted to be asked to do just one thing. Peter would enforce this pretty strictly. He would say, I will not talk to you about anything else besides this one thing I assigned you. I don't want to hear about how great you are doing over here, just shut up, and Peter would run away. ... The insight behind this is that most people will solve problems that they understand how to solve. Roughly speaking, they will solve B+ problems instead of A+ problems. A+ problems are high-impact problems for your company, but they are difficult. You don't wake up in the morning with a solution, so you tend to procrastinate them. So imagine you wake up in the morning and create a list of things to do today, there's usually the A+ one on the top of the list, but you never get around to it. And so you solve the second and third. Then you have a company of over a hundred people, so it cascades. You have a company that is always solving B+ things, which does mean you grow, which does mean you add value, but you never really create that breakthrough idea. No one is spending 100% of their time banging their head against the wall every day until they solve it.

Thiel's solution encourages deep work by strictly limiting multitasking. Of course, if you limit yourself to one activity at a time, it is critical that this top idea in your mind is an important one. Fortunately, there is also a mental model that can help you identify truly important activities. U.S. President Dwight Eisenhower famously quipped, "What is important is seldom urgent and what is urgent is seldom important." This quote inspired Stephen Covey in 7 Habits of Highly Effective People to create the Eisenhower Decision Matrix, a two-by-two grid (matrix) that helps you prioritize important activities across both your personal and your professional life by categorizing them according to their urgency and importance.

Eisenhower Decision Matrix:

- Quadrant I (Urgent/Important), MANAGE: crisis/emergency, family obligations, real deadlines
- Quadrant II (Not Urgent/Important), FOCUS: strategic planning, relationship-building, deep work
- Quadrant III (Urgent/Not Important), TRIAGE: interruptions, many "pressing" matters, most events
- Quadrant IV (Not Urgent/Not Important), AVOID: busywork, picking out clothes, most email and messages

Activities in quadrant I (Urgent/Important, such as handling a medical emergency) need to be done immediately. Activities in quadrant II (Not Urgent/Important, such as deep work) are also crucial, and should be prioritized just after the activities from quadrant I. You should focus your creative energies on these quadrant II activities as much as possible, because working on them will drive you fastest toward your long-term goals. The activities in quadrant III (Urgent/Not Important, such as most events and many "pressing" matters) might be better delegated, outsourced, or just ignored. Finally, quadrant IV (Not Urgent/Not Important, such as busywork and most email) contains activities you should try to reduce or eliminate spending time on altogether.
The essential insight to be gained from this matrix is that the important activities in quadrant II are often overshadowed by the urgent distractions in quadrant III. You can be tricked into addressing the tasks in quadrant III immediately because they have urgency, vying for your attention. However, if you let those distractions in quadrant III take up a lot of your time, you may never get to the important tasks in quadrant II. Similarly, the Not Urgent/Not Important things in quadrant IV can be attractive distractions because they provide immediate gratification (like completing a busywork task quickly) or are fun (like mindless phone games). It would be unhealthy to get rid of leisure completely in your life, but it is essential to evaluate how much of your time is being spent on leisure and unimportant activities so that they don't get in the way of achieving your long-term goals. Quadrant IV activities also have the capacity to present with false urgency (like most email and texts). If you let them consistently interrupt you, you will suffer the negative effects of multitasking, context-switching between quadrant II and IV activities, significantly decreasing your performance on important matters. One approach to counteract this effect is to turn off notifications so that you do not succumb to this false urgency.

Using the Eisenhower Decision Matrix assumes that you can correctly categorize activities into each quadrant. Yet deciding what is important can be challenging, especially in an organizational context. Two mental models can offer insight on this difficulty. Sayre's law, named after political scientist Wallace Sayre, offers that in any dispute the intensity of feeling is inversely proportional to the value of the issues at stake. A related concept is Parkinson's law of triviality, named after naval historian Cyril Parkinson, which states that organizations tend to give disproportionate weight to trivial issues. Both of these concepts explain how group dynamics can lead the group to focus on the wrong things.

In his 1957 book Parkinson's Law, Parkinson presents an example of a budget committee considering an atomic reactor and a bike shed, offering that "the time spent on any item of the agenda will be in inverse proportion to the sum involved." The committee members are reluctant to deeply discuss all of the complicated aspects of the atomic reactor decision because it is challenging and esoteric. By contrast, everyone wants to weigh in with their opinion on the bike shed decision because it is easy and familiar relative to the reactor, even though it is also relatively unimportant. This phenomenon has become known as bike-shedding.

You must try not to let yourself get sucked into these types of debates, because they rob you of time that can be spent on important issues. In the budget meeting, the agenda could instead be structured so that time is pre-allocated proportionally to the relative importance of each item, and items can also be ordered by importance. That way much greater time will be apportioned to the reactor relative to the bike shed, and the reactor discussion will take place first. You can further set strict time limits for each agenda item (called timeboxing) to ensure that any bike-shedding that does arise doesn't take over the entire meeting.

For a real-life example, consider the recurring prominent debates about small items in the national budget each year in the United States.
In the name of balancing the budget, politicians perennially suggest cutting national arts funding, science funding, and foreign aid. No matter what you personally think of these programs, cutting them substantially will not significantly reduce the budget, as they respectively amount only to approximately 0.01 percent, 0.2 percent, and 1.3 percent of the total budget. In other words, if your goal is to significantly cut the budget, you would need to focus on much more major items in the budget. The sound and fury you hear over these relatively small items is therefore either a distraction from making any substantive progress on the stated goal or a misleading use of the idea of overall budget cuts to attack these programs for an unstated goal (like getting the federal government out of the business of funding these types of programs altogether).

[Figure: U.S. Federal Spending in 2015]

That is, what is important versus what is not important is dependent on the particular goal being pursued. By putting potential activities in the context of this overall goal, using quantitative measures as much as possible, you can more clearly determine their relative importance. Once you can correctly categorize activities as important and not important, you have another problem: for most people there is never enough time to work on the many important activities they've categorized. How do you choose what to do?

This section's theme is succinctly summarized by a quote from a Fast Company interview with productivity consultant David Allen, author of Getting Things Done: "You can do anything, but not everything." You must pick between the important activities in front of you, or else you will find yourself multitasking and lacking time for deep work. Allen also notes that "there is always more to do than there is time to do it, especially in an environment of so much possibility. We all want to be acknowledged; we all want our work to be meaningful. And in an attempt to achieve that goal, we all keep letting stuff enter our lives."

Luckily, there is an extremely powerful mental model from economics to guide you: opportunity cost. Every choice you make has a cost: the value of the best alternative opportunity you didn't choose. As a rule, you want to choose the option with the lowest opportunity cost.

Let's suppose you are thinking about quitting your job and starting your own business. The explicit costs of the new business are self-evident: any startup capital required for equipment, employees, legal costs, etc. If you need a loan, you have to add the explicit cost of interest payments (called the cost of capital). But there are also implicit costs, such as the wages and other benefits you would be giving up from your current job and the fact that the startup capital you provide could also be used for alternative investments (such as the stock market). Additionally, there are nonfinancial implicit costs (or benefits) to weigh, such as the impact on your family and personal fulfillment. Your opportunity cost for starting this business is defined as the sum of all the explicit and implicit costs, based on an alternative future where you stayed at your job, continued earning your salary, and allocated what would have been your startup capital to other investments. What would your return be on path A versus on path B?
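As a sketch of that comparison, the toy numbers below are entirely made up; the point is only the bookkeeping of weighing path B (the startup) against path A (keeping the job and investing the capital elsewhere), which is its opportunity cost.

```python
# All figures are hypothetical, chosen only to illustrate the comparison.
salary_kept     = 90_000   # Path A: annual wages kept by staying at the job
market_return   = 0.07     # Path A: assumed return on capital invested elsewhere
startup_capital = 50_000   # capital you would otherwise put into the business

path_a = salary_kept + startup_capital * market_return   # value of the best alternative

business_profit = 70_000   # Path B: hypothetical first-year profit from the new business
path_b = business_profit

print(f"Path A (stay at job + invest): ${path_a:,.0f}")
print(f"Path B (start the business):   ${path_b:,.0f}")
print(f"Path B minus its opportunity cost: ${path_b - path_a:,.0f}")
# A negative result means the business must also deliver enough nonfinancial
# value (fulfillment, future upside) to justify the switch in year one.
```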
Opportunity cost extends to everyday decision making, such as when you drive farther to go to the "cheap" gas station. Suppose you can save 10 cents per gallon on a 20-gallon tank for a maximum savings of only $2.00. Even if this trip is an extra six minutes, you are essentially valuing your time at about $20 per hour, and this doesn't even account for the gas used in making the longer trip, the fact that you are saving less if your tank is not completely empty, or the mental overhead cost to fit a longer trip into your schedule. Of course, it may feel good to pay low prices or get a discount, but not when you need to spend a considerable amount of your limited time to do so. Time is money!

In business, opportunity cost is sometimes formalized as the opportunity cost of capital, the return you'd get on the best alternative use of that capital, your second-best opportunity. For example, suppose you're now running your business and you are returning 5 percent to the bottom line for every dollar you spend on an ongoing advertising campaign. You're now deciding the best way to reinvest some of these profits back into the business. Whatever you select, you ought to be sure that you are making back at least 5 percent on your investment, because you could easily make that amount by investing more into the ad campaign. Thinking in terms of opportunity cost of capital pits your investment options against each other. Thus, you can make an informed choice among the array of projects and opportunities available to you.

Similarly, in negotiations there is another application of opportunity cost called BATNA, which stands for best alternative to a negotiated agreement. If you have a job offer, your BATNA is the best alternative job offer you have in hand, including your current job. You shouldn't accept an offer worse than your BATNA, because you can always take this better alternative offer (which could be the status quo). In less clear-cut situations, it can be more challenging to understand your BATNA, and so it helps to brainstorm and literally list out all of your alternatives. This process can help you uncover additional alternatives that aren't immediately apparent. In any case, going into a negotiation knowing your BATNA is critical to making a decision that you won't regret.

Life and business can be thought of as just a series of such choices. These opportunity-cost models will help you consistently make better choices about what to work on, where to live, and whom to partner with. Generally, you want to choose things that have higher value than their opportunity costs, the best of all the alternatives in front of you. When put like that, it sounds simple, right? Complications arise when you realize that you can't have it all. There are always trade-offs when you choose among the pursuits important to you.

We've tried to explain this concept to our kids in a simple way. Unfortunately, we have little to show for it so far. For our boys, there are only four to five hours from the time they get off the school bus until lights out at bedtime. During this time there are a few essential activities that need to happen, including homework, dinner, and nighttime routines. They are often disappointed that there is little time left for stories, cuddles, or iPad after they take forever putting on pajamas and brushing their teeth because they are fooling around. We explain to them that the cost of fooling around is that they miss out on these other opportunities. It's a choice they are making. Similarly, if we go for a special trip out to dinner or ice cream, they lament the lack of time for free play before bed, not fully recognizing the trade-off.
[Figure: Pick Two]

GETTING MORE BANG FOR YOUR BUCK

The lever is a simple machine consisting of a bar that sits atop a fulcrum. Through the placement of the fulcrum, the lever amplifies a small force over a large distance to create a much larger force over a small distance. Think of how you might use a crowbar to open a locked door. Back in the third century B.C., Archimedes famously boasted of the powers of the lever, “Give me a place to stand and I shall move the Earth.” The mechanical advantage gained by a lever, also known as leverage, serves as the basis of a mental model applicable across a wide variety of situations. The concept can be useful in any situation where applying force or effort in a particular area can produce outsized results, relative to similar applications of force or effort elsewhere.

In finance, leverage refers to borrowing money to purchase assets, which allows gains and losses to be multiplied. In this context, leveraging up means increasing debt, while deleveraging is the opposite. A leveraged buyout takes place when one company buys out another, partially using other people’s money.

[Figure: Leverage]

In all these financial situations, the small force is the amount of money you initially put up, allowing you to wield a much larger force through the greater sum of money you have available via the debt you take on. For example, individuals usually purchase homes with down payments much smaller than the total price. In the United States this is typically 20 percent, but in the run-up to the 2007/2008 financial crisis, people bought houses with as little as zero percent down! But by taking on debt, people get to live in the homes they want.

In negotiation, leverage refers to the power one side has over another. If you have the ability to give or take more things than the other party, you have more leverage. No matter what the circumstances, small amounts of leverage can have large effects.

As applied to individuals, certain activities or actions have much greater leverage than others, and spending time or money on these high-leverage activities will produce the greatest effects. Therefore, you should take time to continually identify high-leverage activities. It’s getting more bang for your buck. You can apply this model in all areas of your life. The highest-leverage choice might not be the best fit every time, but the option that provides the most impact at the lowest cost always warrants consideration. Which job will give you the best opportunity to advance your career? Which home renovations might most increase the value of your home in an upcoming sale or most increase its livability? Which activities will most help your kids in the future, or bring them the most joy? To which causes or charities would your cash contributions make the most difference (a mental model itself called effective altruism)? How much and what type of exercise do you need to do to get the most benefits in the least amount of time?

Thinking about leverage helps you factor opportunity cost into your decision making. As a rule, the highest-leverage activities have the lowest opportunity cost. The Pareto principle can help you find high-leverage activities. It states that in many situations, 80 percent of the results come from approximately 20 percent of the effort. Addressing this 20 percent is therefore a high-leverage activity.
This principle originated from observations in the late 1800s by economist Vilfredo Pareto detailed in his book Manuel d’economie politique: that 80 percent of the peas harvested in his garden came from only 20 percent of the pods, 80 percent of the land in Italy at the time was owned by 20 percent of the people, and so on. Modern-day examples of this principle are easy to find. In the United States, about 80 percent of healthcare spending comes from 20 percent of the patients (see the figure below). Similarly, in 2007, 85 percent of U.S. wealth was owned by 20 percent of the people. While every relationship is not always 80/20, there is a common pattern for outcomes to be far from evenly distributed. This particular 80/20 arrangement of outcomes is known as a power law distribution, where relatively few occurrences account for a significantly outsized proportion of the total. (It is named after mathematical exponentiation, aka power, because the math that creates the distribution involves this operation.)

[Figure: U.S. Health Spending Concentration]

In the figure above, we see a power law distribution at work in the people who spend the most on healthcare. Other examples with similar patterns include the returns from venture capital, the strength of volcanic eruptions, and the size of power outages. When you’re working to influence such a distribution, you’re often looking toward those top outcomes, as they will have the most impact on the total.

Management consultant Joseph Juran popularized the Pareto principle in the 1940s, advising that the high-leverage plan is to find and focus on the smallest amount of the work that will bring about the best results. He called these high-leverage activities “the vital few.” For example, if you want to improve the effectiveness of a web page, focus on the headline and leading image, often referred to as the “hero section.” This is the first thing visitors will see, and the only thing many of them will read. The hero section is also what will be shared on social media. Small changes to this section—use of a catchier turn of phrase or a more engaging image—are simple, but have potential for a large effect.

The same principle applies to whole organizations. If you are trying to reduce costs and 80 percent of the budget is from 20 percent of the items, it makes sense to spend time seeing what you can do to make reductions in that 20 percent (as in our previous discussion of the U.S. budget). Similarly, if 80 percent of your company’s sales come from 20 percent of its customers, you need to make sure these customers are satisfied, and find more like them. And if 80 percent of the usage of your website comes from 20 percent of the features, focus on those features. Incidentally, these are also the class of features that should go into an MVP (see Chapter 1).

After you determine the 80/20 and address the low-hanging fruit, each additional hour of work will unfortunately produce less and less impactful results. In economics, this model is called the law of diminishing returns. It is the tendency for continued effort to diminish in effectiveness after a certain level of result has been achieved.

When Lauren was at GlaxoSmithKline, an external group was hired to evaluate the quality of clinical study reports and how efficiently they were written. The group evaluated report drafts to see how they evolved over time.
For one report that had six drafts, the consultants found that the report’s quality did not substantially improve from draft two to draft six—quite clearly a case of diminishing returns! The team obviously wasted time when making drafts three through six. Also, they placed undue pressure on colleagues who were waiting for the final report.

There is a similar concept called the law of diminishing utility, which says that the value, or utility, of consuming an additional item is usually, after a certain point, less than the value of the previous one consumed. Consider the difference between the enjoyment you receive from eating one donut versus eating a second or third donut. By the time you get to a sixth donut, you may no longer get any enjoyment out of it, and you might even start getting sick. When continuing beyond a point like this can actually make things worse, you move from diminishing returns to negative returns. This can happen when you are striving for perfection and it becomes counterproductive. There are lots of phrases related to this concept—overdoing it, trying too hard, etc. (see the Too Much of a Good Thing section in Chapter 2).

[Figure: Law of Diminishing Returns]

Overdoing it is also a quick path toward burnout, where high stress can take its toll and eventually extinguish your motivation, or worse. In the late 1970s in Japan the term karoshi was coined to describe the increasing number of people, some as young as their twenties and thirties, dying from strokes and heart attacks attributed to overwork. Similar negative returns and burnout are prevalent in the high-stress environment of modern life throughout the world.

In the quest for athletic success, for example, children all over the United States are suffering extreme injuries from overtraining—a clear sign of negative returns. Parents have started to sign their kids up for specialized coaching programs and commit them to playing one sport year-round at very young ages, which can easily result in overtaxing their young, growing bodies. In baseball alone, hundreds of young pitchers each year are now having surgery on their elbows, colloquially named Tommy John surgery after the major league pitcher. This is a type of surgery that just a few decades ago was performed only on professional pitchers. Throwing more pitches in a year greatly increases your risk of injury, and so a heavy year-round schedule puts many teenagers in a dangerous situation. A lot of these kids are not even playing baseball two years later, whether it’s because they never fully recovered or they just got completely burned out.

Another familiar example of negative returns is pulling an all-nighter. It is proven that cramming is not an effective method to retain material long term. All-night cram sessions can additionally be counterproductive because no one is at their best in a sleep-deprived state. If the all-nighter is to complete a paper, can the writer accurately evaluate the quality of the writing in the middle of the night? Probably not. Thus, the paper deteriorates as the night progresses.

So, once you’ve pushed through the highest-leverage pieces of a given project, when should you move on? Clearly, you ought to quit before you hit negative returns, but just because you’ve hit diminishing returns doesn’t always mean you must stop what you’re doing. It really comes down to opportunity cost. If you can identify another activity that can produce greater results for the same amount of effort, then you should jump to it.
Otherwise, you should keep at your current activity, since you’re still making progress (even if it is slower progress) and you don’t have anything better to do. However—and this is key—you should not assume there isn’t anything better to do. You must periodically brainstorm and seek alternatives, making sure there aren’t other high-leverage projects, with their own 80/20s, just out of view.

GET OUT OF YOUR OWN WAY

Applying leverage and related mental models will help you spend time on the right activities. The next step is getting those activities done in a timely manner. The path to doing this is fraught with traps. The first trap: procrastination.

Our kids are expert procrastinators, and if this is a genetically transferred trait, Lauren must take responsibility. Around the time we first met in 1999, Gabriel wrote an article in The Tech, MIT’s student newspaper, recommending that everyone stop procrastinating. While Lauren was not an expert procrastinator, she typically finished Friday’s problem sets sometime late Thursday night. Gabriel was the only person Lauren knew at MIT who finished his weekly work by Tuesday; in fact, he procrastinated so little that he finished MIT in three years.

One reason why people procrastinate so much is present bias, which is the tendency to overvalue near-term rewards in the present over making incremental progress on long-term goals (see short-termism in Chapter 2). It’s really easy to find reasons on any given day to skip going to the gym (too much work, bad sleep, feeling sick/sore, etc.), but if you do this too often, you’ll never reach your long-term fitness goals.

Everyone discounts the future as compared with the present to some degree. For instance, given a choice between getting $100 today and $100 in a year, most everyone would choose to get it today. Suppose, though, you’re offered $100 in a year, but you can pay a fee to get the $100 today (minus the fee). How much would you be willing to pay? Would you pay $20 to get the $100 right now (netting $80) versus getting $100 in a year? When you cast this fee as a percentage, it effectively becomes an “interest rate,” called the discount rate (in the example above, it would be 25 percent, since $80 × 125% = $100). Like any interest rate, it can compound, but instead of compounding positively as we discussed earlier, the discount rate compounds negatively. This negative compounding discounts payments out into the future more and more, since you won’t be able to access them until much later.

The discount rate is the cornerstone of the discounted cash flow method of valuing assets, investments, and offers. This model can help you properly determine the worth of arrangements that involve future payments, such as investment properties, stocks, and bonds. For example, let’s say you win the lottery and are offered a choice between getting one million dollars each year, forever, or a lump-sum payment today. How high does that lump-sum payment need to be before you will accept it? You might think initially it should be exceptionally high because the payments go out forever; but because of the compounding discount rate, the expected earnings far in the future aren’t actually worth that much to you today. At the discount rate of 5 percent per year, for example, the million dollars in cash flow from next year would be discounted to only $952,381 of value today ($1M/1.05). Two years out, because of compounding, the million that year becomes just $907,029 of value today ($1M/1.05²).
This continues with earnings further out being discounted more and more until they get discounted closer and closer to zero in today’s dollars. Fifty years out, at the 5 percent discount rate, the million dollars that year is worth only $87,204 to you today ($1M/1.05⁵⁰). When you add the discounted earnings together from all future years, you get the net present value, or NPV, of the lottery payments. In this case, the total comes to twenty million dollars. That is, if a 5 percent discount rate is appropriate, you would value this stream of cash flows of one million dollars a year forever at only twenty million dollars today, assuming you could get that twenty million dollars right now in the lump-sum payment. And in fact, around 5 percent is what is typically offered by lotteries. Of course, this method is very sensitive to the discount rate (e.g., 5 percent versus 20 percent). At a 20 percent discount rate applied yearly, the NPV of this cash flow stream becomes valued at $5 million in today’s dollars instead of $20 million at the 5 percent discount rate.

Net Present Value (NPV)

                        0% Discount Rate   5% Discount Rate   10% Discount Rate   20% Discount Rate
Total NPV               Infinite           $20,000,000        $10,000,000         $5,000,000

NPVs of payments in year X:
  Year 1                $1,000,000         $952,381           $909,091            $833,333
  Year 2                $1,000,000         $907,029           $826,446            $694,444
  Year 3                $1,000,000         $863,838           $751,315            $578,704
  Year 4                $1,000,000         $822,702           $683,013            $482,253
  ...                   ...                ...                ...                 ...
  Year 50               $1,000,000         $87,204            $8,519              $110

The right discount rate to apply in a business and investing context is something we will explore a bit more in Chapter 6. Here, though, one thing to consider is what you could do with that money if you had it now. From a purely financial point of view, if you could guarantee investing at a rate greater than the discount rate, then you would be better off getting the lump-sum payment and investing it. For example, if you think you can invest at a 6 percent rate, then you’d be okay with a 5 percent discount rate. Lotteries usually offer rates around this 5 percent level for similar reasons (because they could invest at that rate). Of course, you wouldn’t consider just the financial point of view. If you had the lump-sum payment today, you might better enjoy your winnings because having more money now gives you more options in terms of spending. On the other hand, many actual lottery winners regret taking the lump-sum payment because they end up spending too much initially.

In personal situations, most people discount the future implicitly at relatively high discount rates. And they do so in a manner that is not actually fixed over time, which is called hyperbolic discounting. In other words, people really, really value instant gratification over delayed gratification, and this preference plays a central role in procrastination, along with other areas of life where people struggle with self-control, such as dieting, addiction, etc. When you’re on a diet, it’s hard to avoid the pull of that donut in the office. That’s because you get the short-term donut payoff right now, whereas the long-term dieting payoff, being so far in the future, is discounted in your mind close to zero (like company earnings fifty years in the future). In studies, this preference is often revealed through asking people variations of the $100 question, finding points at which people are willing to get a lesser amount of money sooner rather than a greater amount later.
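Here is a minimal sketch of the discounting arithmetic behind the table above: each future payment is divided by (1 + rate) raised to the number of years you have to wait, and the value of a fixed payment forever (a perpetuity) is simply payment divided by rate. The 0 percent column is omitted because that sum never converges, which is why the table shows it as infinite.

```python
# Discounting sketch for the lottery example: $1,000,000 per year, forever.

payment = 1_000_000

def discounted_value(rate, year):
    """Value today of a payment received `year` years from now."""
    return payment / (1 + rate) ** year

for rate in (0.05, 0.10, 0.20):
    print(f"Discount rate {rate:.0%}:")
    for year in (1, 2, 50):
        print(f"  year {year:>2}: ${discounted_value(rate, year):,.0f}")
    # Exact NPV of $1M per year forever (a perpetuity) is payment / rate.
    print(f"  NPV of $1M/year forever: ${payment / rate:,.0f}")

# At 5%: year 1 -> $952,381, year 2 -> $907,029, year 50 -> $87,204,
# and the perpetuity totals $20,000,000 -- matching the table above.
```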
One such study of this preference, “Some Empirical Evidence on Dynamic Inconsistency” by economist Richard Thaler, found that people on average were equally willing to receive $15 immediately, $30 after three months, $60 after one year, or $100 after three years. These values imply decreasing annual discount rates, declining from 277 percent to 139 percent to 63 percent as the delays get longer.

Once you are old enough (like us) to have plenty of regrets about procrastination, you can more easily appreciate that your future self is going to have even greater struggles if you continue to put things off. You must strive to keep these feelings of regret in mind as motivation to stay focused on the long-term benefits of your actions, viewing your present efforts as incremental progress toward your goals. In that way you can attempt to counteract your inherent present bias and resulting procrastination tendencies.

A mental model that can help you further combat your present bias is commitment, where you actively commit in some way to your desired future. Commitments can be formal or informal, but they are usually most effective when they have some sort of penalty attached to breaking the commitment. For example, if you are trying to lose weight, you could sign up for a gym membership or make a bet with a friend. In these cases you are making a financial commitment and suffering a loss if you don’t stick with it. Or you and a friend could agree to exercise and diet together or make some sort of public pronouncement about how much weight you both want to lose. In these cases, you are holding yourself accountable through social pressure. Choosing to put money into a 401(k) program is another example, where you are committing to save for your retirement. The penalties for withdrawing from these accounts early are notoriously harsh, making it more likely that you will stick with your commitment.

Since many people take the path of least resistance, 401(k) programs also showcase the default effect, the effect stemming from the fact that many people just accept default options. Participation in a 401(k) or in programs such as organ donation or voter registration varies dramatically depending on whether the programs are default opt-in versus default opt-out.

[Figure: Default Effect]

You can use the default effect to your personal advantage by making default commitments toward your long-term goals. A simple example is scheduling recurring time right into your calendar, such as an hour a week to look for a new job, deep-clean your living space, or work on a side project. Thereafter, by default your time is allocated to whichever long-term goal you choose. This same technique also works well for scheduling deep work. By putting deep-work blocks of time into your calendar, you can prevent yourself by default from booking this time with meetings since it is already committed.

Commitments have shortcomings, however. First, it is easy to put off making the commitment itself. Second, if the penalty isn’t large, as in many social contracts or calendar commitments, you may decide it is worth it just to break it, defeating its purpose. Third, there are many ways to formulate an ineffective commitment, including making it unrealistic (“I will work out at the gym every day”), not specifying a clear timeline (“I will go to the gym more often”), and being too vague (“I will try to exercise more”).
By contrast, a realistic, time-bound, and specific gym commitment might be: “I will go to the gym Wednesday and Sunday mornings with my friend for the next three months, doing twenty minutes of running and twenty minutes of weight training, and I will give my friend twenty dollars each time I miss a date.”

Once you overcome procrastination and are actually making consistent progress toward a goal, the next trap you can fall into is failing to plan your time effectively. Parkinson’s law (yes, another law by the same Parkinson of Parkinson’s law of triviality) states that “work expands so as to fill the time available for its completion.” Does that ring true for you? It certainly does for us. When your top priority has a deadline far in the future, it doesn’t mean that you need to spend all your time on it until the deadline. The sooner you finish, the sooner you can move on to the next item on your list. You also never know when finishing early might help you—for instance, when something important and urgent pops up in your Eisenhower Decision Matrix.

A couple of whimsical models capture the feelings surrounding end-of-project work. In his book Gödel, Escher, Bach, cognitive scientist Douglas Hofstadter coined Hofstadter’s law: It always takes longer than you expect, even when you take into account Hofstadter’s Law. In other words, things take longer than you expect, even when you consider that they take longer than you expect! Tom Cargill was credited (in the September 1985 Communications of the ACM) for the similar ninety-ninety rule from his time programming at Bell Labs in the 1980s: The first 90 percent of the code accounts for the first 90 percent of the development time. The remaining 10 percent of the code accounts for the other 90 percent of the development time.

Both concepts highlight the fact that you’re generally bad at estimating when things will get done, because, unless you put a lot of effort into continuous project planning, you don’t realize all the little things that need to be completed to really button up a project. This has certainly proved true in writing this book! The deeper point, however, is that you often have a choice of when to call the project “done.” This choice can dramatically affect the project’s time requirements, and periodically questioning what constitutes “done” can save you from wasted effort. In the case of the clinical study reports mentioned in the previous section, there could have been a step after each draft, comparing it with a predefined set of objectives for the project, and evaluating whether the group should move on.

Recall from Chapter 2 that perfect is the enemy of good. If you deliver that faultless and definitive report to your organization, you’ve probably waited too long. A less-than-perfect solution is often good enough to keep optimally moving forward. This model applies in other contexts as well: waiting until you are sure you are making the perfect decision, until you have crafted the flawless product, and so on. The best time to call something done is much earlier than it usually happens. Of course, there are times when the circumstances call for getting things closer to perfect. However, those times are rarer than you think, and so it is worth considering ahead of time and again during a project what final quality level is acceptable, that is, what done means in this context (see reversible decisions versus irreversible decisions in Chapter 2). Another often-overlooked option is to abandon the project altogether before it is done.
Sometimes you need to acknowledge that you are just not on the path to success. Other times you may find that what it would take to get where you originally wanted to go is just not worth the effort anymore. Unfortunately, psychologically, your mind is working hard against you here, and loss aversion is the model that explains why. You are more inclined to avoid losses, to be averse to them, than you are to want to make similar gains. Quite simply, you get more displeasure from losing fifty dollars than pleasure from gaining fifty dollars.

Since you hate losing, loss aversion can cause you harm under many circumstances. You may hold losing stocks way too long, hoping they will recover back to the value they had when you bought them. You may stay in a house despite wanting to move, because you are waiting until its selling price exceeds your purchase price. These purchase prices are arbitrary numbers, independent of the current value of the assets, but they are meaningful to you because they represent losses or gains. Similarly, you may avoid killing a project because that would mean admitting the loss of your efforts up to that point.

Daniel Kahneman and Amos Tversky’s work on this topic, detailed in the October 1992 issue of the Journal of Risk and Uncertainty, demonstrated that across many risky situations, such as winning or losing money based on a coin toss, people tend to want the potential payoff to be around double the potential loss before they are willing to take the gamble. That is, people want to have a fifty-fifty chance of winning one hundred dollars if they have to put fifty dollars on the line.

Loss aversion can be better understood using the frame of reference model (see Chapter 1). When you already have a win on your hands, you tend to want to lock in your gains. From this frame of reference, you tend to act more conservatively and are more likely to pass up a chance at a bigger gain if it means risking your current winnings. Conversely, when you have a loss on your hands, you’d rather take a chance at breaking even than accept the sure loss. From this frame of reference, you tend to act more aggressively, not wanting to end with the loss.

From an objective frame of reference, however, you should approach both situations from the same standpoint of opportunity cost. By holding on to a loss too long, you are misallocating time or money that could be better used on another opportunity. Similarly, by walking away after a sure but small gain, you may be missing out on a potentially better opportunity. When it comes to losses in particular, you need to acknowledge that they’ve already happened: you’ve already spent the resources on the project to date. When you allow these irrecoverable costs to cloud your decision making, you are falling victim to the sunk-cost fallacy. The costs of the project so far, including your time spent, have already been sunk. You can’t get them back. This can be a problem (fallacy) when these previous losses influence you to make a bad decision. An instance where sunk costs lead to an escalation of commitment is sometimes called the Concorde fallacy, named after the supersonic jet whose development program was plagued with prohibitive cost overruns and never came close to making a profit. Ask yourself: Is my project like the Concorde? Am I throwing good money after bad when it is better to just walk away?
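To put rough numbers on the coin-toss finding described above, here is a minimal sketch. It uses a simple shorthand of our own (weighting losses about twice as heavily as gains), not Kahneman and Tversky's actual model, but it is enough to show why a fifty-fifty bet tends to need roughly double the upside before it feels acceptable.

```python
# Sketch: why a 50/50 gamble needs ~2x upside before it "feels" worth taking.
# LOSS_WEIGHT = 2 is an illustrative shorthand, not the published model.

LOSS_WEIGHT = 2.0  # losses feel about twice as bad as equivalent gains

def felt_value(win, loss, p_win=0.5):
    """Expected psychological value: win `win` with p_win, else lose `loss`."""
    return p_win * win - (1 - p_win) * LOSS_WEIGHT * loss

print(felt_value(win=75, loss=50))    # -12.5: feels bad despite a +$12.50 expected value
print(felt_value(win=100, loss=50))   #   0.0: roughly the break-even point people report
print(felt_value(win=150, loss=50))   #  25.0: now the gamble feels clearly worth taking
```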
Everyday sunk cost fallacy examples can run from less consequential decisions, such as finishing a movie or book that you don’t like, to larger ones, such as investing more money into a failing business or staying in a career or relationship that is turning sour. You need to avoid thinking, We’ve come too far to stop now. Instead, take a realistic look at your chances of success and evaluate from an opportunity cost perspective whether your limited resources are best used continuing what you are doing or pursuing another opportunity. You may have made a commitment, but given all you know now, this may be one of those situations where you should break it.

Evaluating your chances honestly can be difficult because you want so badly to believe that you can succeed. In a 1968 paper in the Journal of Personality and Social Psychology, Robert E. Knox and James A. Inkster described two experiments they conducted at two different horse tracks. They asked as many people as possible to rate the chances of their horses’ winning. Some of the people were interviewed right before their bets were placed, and others right after. The group questioned after they made their bets rated their horses’ chances significantly higher. This supported the scientists’ prediction that, post-bet, bettors were more confident in their choices. Evidently, simply the act of committing to the bet convinced bettors that their odds of winning had increased (see cognitive dissonance in Chapter 1). Remaining data-driven can help you avoid this mistake. The “power of positive thinking” can only get you so far.

Some economists argue that considering sunk costs is okay when taking a loss may damage your reputation. However, you should also consider that holding on to something for too long because of pride can also damage how you’re seen by those who are let down by your failure or stuck bailing you out. It is important to remember that flexibility is just as important to your success as tenacity, if not more so.

Sometimes, though, you can indeed right the ship. In these situations, admitting you are not on the right track is the best way to save a project. This admission can push you to change strategies and tactics and possibly call in reinforcements. In Chapter 1, we discussed postmortems, where you analyze project failures so that you can do better next time. But you don’t have to wait until the end of a project—you can also conduct mid-mortems, and occasionally even pre-mortems, where you predict ahead of time where things could go off track. In Chapter 1, we also discussed the third story, where you look at conflicts from an objective point of view. You need to use the same point of view when evaluating your own projects. If you recognize that you cannot do that, then bring someone else in to help you get out of your own way.

SHORTCUT YOUR WAY TO SUCCESS

A good plan of attack ensures that you are using the right tools and processes to get the job done. For instance, in writing this book, our first step was to develop an outline. Rather than write without direction or move back and forth between disparate concepts, we wanted to make sure the book flowed properly. An outline helped us link related concepts and group them into coherent sections and chapters. When starting something new, a good thing to remind yourself is that there is no need to reinvent the wheel.
It is unlikely that you are the first person in the world who has faced this task, and, with the ubiquity of self-published experts, you are likely to be able to find a website, blog article, or how-to video on almost any topic. As Benjamin Franklin wrote in The Way to Wealth, “An investment in knowledge pays the best interest.” In many fields, leaders have agreed on best practices based on what has worked or what has not worked in the past.

Architect Christopher Alexander introduced the concept of a design pattern, which is a reusable solution to a design problem. This idea has been adapted to other fields and is especially popular in computer science. You are probably very familiar with common design patterns for everyday items. Think of doorknobs being set at a certain height so they are easy for most people to use, or staircases being wide enough for most people to walk on. They are the same because they adhere to the same basic design patterns that have proven to be useful. In some cases, the patterns have been made official standards, as in building codes. There are likely design patterns applicable to whatever you are doing as well. For writing books like this, there are many design patterns, from the way the book is laid out and printed to the length and writing style expected. The same is true in our careers: design patterns for startups (how they are commonly financed, managed, etc.), coding (how code is structured, common algorithms, etc.), and biostatistics (common drug trial designs, statistical methods, etc.).

The opposite of the well-tested design pattern is the anti-pattern, a seemingly intuitive but actually ineffective “solution” to a common problem that often already has a known, better solution. Most of the mental models in this book are either design patterns or anti-patterns, and learning them can help you avoid common mistakes. Anti-patterns in this chapter include bike-shedding, present bias, and negative returns. You can avoid anti-patterns by explicitly looking for them and then seeking out established design patterns instead.

While some amount of planning is always useful, sometimes the most efficient way to finish a task is to dive in quickly and start, rather than getting bogged down in analysis paralysis (see Chapter 2). As a child, Lauren had a four-digit combination lock and she forgot the code to open it. Although one solution for an adult would be to just get a new lock, as a child she didn’t have the funds, and after a quick calculation, she decided it would be easy enough to open the lock by doing an exhaustive search for the combination. And, lo and behold, it worked!

Exhaustive searches like this are a type of brute force solution. The term brute force is obviously applicable when referring to an activity that requires literal force, such as chopping down a tree with an ax. However, it is also used to refer to any solution that doesn’t require an intellectually sophisticated method. For example, if you have to address ten envelopes, it can be faster to handwrite them than to print them. Brute force solutions can be effective for many small-scale problems. However, they can quickly become untenable as the problem gets bigger, such as if you have a hundred envelopes to address. When this happens, using more sophisticated tools is an expedient though more expensive approach. Consider again chopping down a tree. For a small tree, an ax or handsaw is okay. For a larger tree, you would want a chainsaw. For clearing a stand of trees, a “feller-buncher” is the preferred tool.
In these cases, if you can afford it, it is effective to throw more money at the problem by spending on better tools. However, some problems, such as large computational ones, can become intractable even with the help of sophisticated tools. For a password that is exactly eight characters long (letters or numbers, case sensitive), there are 218 trillion possible combinations—impossible to try by hand, and even extremely time-consuming for a computer. At 1,000 passwords a second, it would still take you 6,923 years to work through all those combinations. A better method than trying every combination at random might be first to try combinations of words from the dictionary, recognizing that people often choose words for passwords. An even better method would consider common passwords, and words or numbers significant to this particular person, such as related birth dates, sports teams, or initials. This is a type of heuristic solution, a trial-and-error solution that is not guaranteed to yield optimal or perfect results, but in many cases is nevertheless very effective. You should consider heuristics because they can be a shortcut to a solution for the problem in front of you, even if they may not work as well in other situations.

If the problem persists, however, and you keep adding more heuristic rules, this type of solution can become unwieldy. That’s what has happened to Facebook with content moderation. The company started out with a simple set of heuristic rules (e.g., no nudity), and gradually added more and more rules (e.g., nudity in certain circumstances, such as breastfeeding, is okay), until as of April 2018 it had amassed twenty-seven pages of heuristics.

Algorithms, step-by-step processes, are another approach. Algorithms are pervasive in modern life, solving many otherwise intractable problems, but we often don’t even realize it. Consider travel: algorithms govern how traffic patterns are managed, how directions get calculated, how “best available” seats are selected, which hotels are recommended when you search for them... and that’s just a start. Algorithms can range from the simple (like a traffic light that changes every two minutes) to the complex (like a traffic light that changes dynamically based on live sensors) to the highly complex (like an artificial intelligence that manages traffic lights across a whole city at once).

Many algorithms operate as black boxes, which means they require very little understanding by the user of how they work. You don’t care how you got the best seats, you just want the best seats! You can think of each algorithm as a box where inputs go in and outputs come out, but outside it is painted black so you can’t tell what is going on inside. Common examples of black box algorithms include recommendation systems on Netflix or Amazon, matching on online dating sites, and content moderation on social media. Physical tools can also be black boxes. Two sayings, “The skill is built into the tool” and “The craftsmanship is the workbench itself,” suggest that the more sophisticated tools get, the fewer skills are required to operate them. Repairing or programming them is another story, though!

When you think about using tools to get your work done faster, you should start by discovering all the off-the-shelf options available to you. These are effectively des
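To put numbers on the password example above, here is a minimal sketch of the brute-force arithmetic, with the dictionary-first idea framed as a heuristic. The 1,000-guesses-per-second rate comes from the text; the word list and the `crack` helper are illustrative stand-ins, not real attack parameters.

```python
# Brute-force arithmetic for an 8-character password of letters and digits,
# plus a dictionary-first heuristic. Guess rate and word list are illustrative.

ALPHABET = 52 + 10          # upper- and lowercase letters plus digits = 62 symbols
LENGTH = 8
GUESSES_PER_SECOND = 1_000

combinations = ALPHABET ** LENGTH                 # 62**8, about 218 trillion
seconds = combinations / GUESSES_PER_SECOND
years = seconds / (60 * 60 * 24 * 365)

print(f"{combinations:,} combinations, ~{years:,.1f} years to try them all")
# -> 218,340,105,584,896 combinations, ~6,923.5 years to try them all

# The heuristic version: try likely guesses first (dictionary words, common
# passwords, personally meaningful dates), and fall back to exhaustive search
# only if those fail. Not guaranteed to work, but often dramatically faster.
likely_guesses = ["password", "letmein", "redsox04", "19830715"]  # hypothetical

def crack(check):
    """`check` is a function that returns True when handed the right guess."""
    for guess in likely_guesses:
        if check(guess):
            return guess
    return None               # a real attempt would fall back to brute force here
```

The trade-off noted in the text applies here as well: each added rule of thumb helps, but a long enough list of special cases eventually becomes its own maintenance burden.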
