
Super Thinking: Dealing with Conflict & Arms Races PDF


Summary

This document, part of a book titled "Super Thinking," explores ways to deal with conflict and adversarial situations. It introduces the concept of "arms races," applying game theory principles, such as the prisoner's dilemma, to better understand strategies in such scenarios. The text provides examples and analyzes how to navigate these situations effectively.

Full Transcript


7 Dealing with Conflict

IN ADVERSARIAL SITUATIONS, nearly every one of your choices directly or indirectly affects other people, and these effects can play a large role in how a conflict turns out. In the words of English poet John Donne, “No man is an island.” In Chapter 6, we discussed mental models that help you with making decisions. In this chapter we will give you more models to help with decision making, with a focus on guiding you through adversarial situations.

As an example, consider the arms race. The term was originally used to describe a race between two or more countries to accumulate weapons for a potential armed conflict. It can also be used more broadly to describe any type of escalating competition. Think of the Cold War between the U.S. and Russia after World War II, where both countries kept accumulating more and more sophisticated nuclear weapons. And that’s not even the only arms race from the Cold War: both countries also intensely competed for dominance of the Olympics (medal race) and space exploration (space race).

Arms races are prevalent in our society. For example, many employers in the U.S. have increasingly required college or even more advanced degrees as a condition of employment, even though many of these jobs don’t use the knowledge acquired from these degrees.

Arms Race: Growing Educational Demand for Employment

And getting these degrees is increasingly more expensive as the result of another arms race, in which colleges spend more and more on making their campuses feel like resorts. Stereotypical cinder-block dorm rooms with a mini-fridge and a communal bathroom down the hall are being replaced with apartment-style suites that come with stainless-steel appliances and private bathrooms. And, according to The New York Times, some schools have even been building “lazy rivers” like the ones at amusement parks! This arms race has directly contributed to the cost of U.S. higher education going sky high.

Getting into an arms race is not beneficial to anyone involved. There is usually no clear end to the race, as all sides continually eat up resources that could be spent more usefully elsewhere. Think about how much better it would be if the money spent on making campuses luxurious was instead invested in better teaching and other areas that directly impact the quality and accessibility of a college education.

Unfortunately, situations like this are common in everyday personal life too: many people go into considerable debt trying to keep up with their social circles (or circles they aspire to belong to) by buying bigger houses, fancier cars, and designer clothes, and by sending their kids to expensive private schools. The phrase keeping up with the Joneses describes this phenomenon and comes from the name of a comic strip that followed the McGinis family, who were fixated on matching the lifestyle of their neighbors, the Joneses.

Based on what you know about us so far, you might not be surprised to find out that we send our sons to engineering camps in the summer. Last year, the closest offering for one of these camps was at a private school on the Philadelphia Main Line, a highly affluent region. When Lauren was waiting to pick up one of our sons, she overheard a group of campers arguing over which of their families owned the most Teslas. While comparisons of social status are not uncommon, it was disheartening to see elementary school–aged children engage in this kind of discussion, especially one so extreme.
As an individual, avoiding an arms race means not getting sucked into keeping up with the Joneses. You want to use your income on things that make you fulfilled (such as on family vacations or on classes that interest you), rather than on unfulfilling status symbols. As an organization, avoiding an arms race means differentiating yourself from the competition instead of pursuing a one-upmanship strategy on features or deals, which can eat away at your profit margins. By focusing on your unique value proposition, you can devote more resources to improving and communicating it rather than to keeping up with your competition. The satirical publication The Onion famously parodied the corporate arms race between razor blade manufacturers, as depicted below.

In the rest of this chapter, we explore mental models to help you analyze and better deal with conflicts like arms races. We hope that after reading it, you will be equipped to emerge from any adversarial situation with the best outcome for yourself.

PLAYING THE GAME

Game theory is the study of strategy and decision making in adversarial situations, and it provides several foundational mental models to help you think critically about conflict. Game in this context refers to a simplified version of a conflict, in which players engage in an artificial scenario with well-defined rules and quantifiable outcomes, much like a board game. In most familiar games—chess, poker, baseball, Monopoly, etc.—there are usually winners and losers. However, game theorists recognize that in real-life conflicts there isn’t always a clear winner or a clear loser. In fact, sometimes everyone playing the game can win and other times everyone can lose.

The most famous “game” from game theory is called the prisoner’s dilemma. It can be used to illustrate useful game-theory concepts and can also be adapted to many life situations, including the arms race. Here’s the setup: Suppose two criminals are captured and put in jail, each in their own cell with no way to communicate. The prosecutor doesn’t have enough evidence to convict either one for a major crime but does have enough to convict both for minor infractions. However, if the prosecutor could get one of the prisoners to turn on their co-conspirator, the other one could be put away for the major crime. So the prosecutor offers each prisoner the same deal: the first one who betrays their partner walks free now, and anyone who stays silent goes to prison.

In game theory, diagrams can help you study your options. One example is called a payoff matrix, showing the payoffs for possible player choices in matrix form (see 2 × 2 matrix in Chapter 4). From the prisoner’s perspective, the payoff matrix looks like this:

Prisoner’s Dilemma Payoff Matrix: Sentences Received

                      B remains silent      B betrays A
  A remains silent    1 year, 1 year        10 years, 0 years
  A betrays B         0 years, 10 years     5 years, 5 years

Here’s where it gets interesting. The simplest formulation of this game assumes that the consequences for the players are only the prison sentences listed, i.e., there is no consideration of real-time negotiation or future retribution. If, as a player, you are acting independently and rationally, the dominant strategy given this formulation and payoff matrix is always to betray your partner: No matter what they do, you’re better off betraying, and that’s the only way to get off free. If your co-conspirator remains silent, you go from one to zero years by betraying them, and if they betray you too, you go from ten to five years.
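To make the dominant-strategy reasoning concrete, here is a minimal Python sketch; the sentence lengths come from the payoff matrix above (written as negative numbers so that higher is better), while the function names and structure are illustrative assumptions rather than anything from the book.

# Payoffs as (value for A, value for B); sentences are negated so higher is better.
# Strategies: 0 = remain silent, 1 = betray.
payoff = {
    (0, 0): (-1, -1),    # both silent
    (0, 1): (-10, 0),    # A silent, B betrays
    (1, 0): (0, -10),    # A betrays, B silent
    (1, 1): (-5, -5),    # both betray
}

def best_response_A(b_choice):
    # A's best reply given B's fixed choice.
    return max((0, 1), key=lambda a: payoff[(a, b_choice)][0])

def best_response_B(a_choice):
    return max((0, 1), key=lambda b: payoff[(a_choice, b)][1])

# Betrayal is dominant for A: it is the best response whatever B does.
assert best_response_A(0) == 1 and best_response_A(1) == 1

# A Nash equilibrium is a pair of choices where each is a best response to the other.
equilibria = [(a, b) for a in (0, 1) for b in (0, 1)
              if best_response_A(b) == a and best_response_B(a) == b]
print(equilibria)   # [(1, 1)] -> mutual betrayal, five years each

Running it confirms that betrayal is each player's best response no matter what the other does, and that mutual betrayal is the only equilibrium.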
The rub is that if your co-conspirator follows the same strategy, you both go away for much longer than if you both just remained silent (five years versus one year). Hence the dilemma: do you risk their betrayal, or can you trust their solidarity and emerge with a small sentence?

The dual betrayal with its dual five-year sentences is known as the Nash equilibrium of this game, named after mathematician John Nash, one of the pioneers of game theory and the subject of the biopic A Beautiful Mind. The Nash equilibrium is a set of player choices for which a change of strategy by any one player would worsen their outcome. In this case, the Nash equilibrium is the strategy of dual betrayals, because if either player instead chose to remain silent, that player would get a longer sentence. To both get a shorter sentence, they’d have to act cooperatively, coordinating their strategies. That coordinated strategy is unstable (i.e., not an equilibrium) because either player could then betray the other to better their outcome.

In any game you play, you want to know whether there is a Nash equilibrium, as that is the most likely outcome unless something is done to change the parameters of the game. For example, the Nash equilibrium for an arms race is choosing a high arms strategy where both parties continue to arm themselves. Here’s an example of a payoff matrix for this scenario:

Arms Race Payoff Matrix: Economic Outcomes

              B disarms             B arms
  A disarms   win, win              lose big, win big
  A arms      win big, lose big     lose, lose

As you can see, the arms race directly parallels the prisoner’s dilemma. Both A and B arming (the lose-lose situation) is the Nash equilibrium, because if either party switched to disarming, they’d be worse off, enabling an even poorer outcome, such as an invasion they couldn’t defend against (denoted as “lose big”). The best outcome again results from being cooperative, with both parties agreeing to disarm (the win-win situation), thus opening up the opportunity to spend those resources more productively. That’s the arms race equivalent of remaining silent, but it is also an unstable situation, since either side could then better their situation by arming again (and potentially invading the other side, leading to a “win big” outcome).

In both scenarios, a superior outcome is much more likely if everyone involved does not consider the situation as just one turn of the game but, rather, if both sides can continually take turns, running the same game over and over—called an iterated or repeated game. When we mentioned earlier the possibility of future retribution, this is what we were talking about. What if you have to play the game with the same people again and again?

In an iterated game of prisoner’s dilemma, cooperating in a tit-for-tat approach usually results in better long-term outcomes than constant betrayal. You can start out cooperating, and thereafter follow suit with what your opponent has recently done. In these situations, you want to wait for your opponent to establish a pattern of bad behavior before you reciprocate in kind. You don’t want to destroy a previously fruitful relationship based on one bad choice by your counterpart. Similarly, cooperation pays off in most long-term life situations where reputation matters. If you are known as a betrayer, people will not want to be your friend or do business with you. On the other hand, if people can trust you based on your repeated good behavior, they will want to make you their ally and collaborate with you.
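A short simulation illustrates why tit-for-tat tends to beat constant betrayal over many rounds. It assumes a conventional points version of the game (3 each for mutual cooperation, 1 each for mutual betrayal, 5 for betraying a cooperator, who gets 0); those numbers and the strategy names are illustrative assumptions, not something specified in the text.

# Moves: "C" = cooperate (stay silent), "D" = defect (betray). Higher scores are better.
POINTS = {("C", "C"): (3, 3), ("C", "D"): (0, 5),
          ("D", "C"): (5, 0), ("D", "D"): (1, 1)}

def tit_for_tat(my_history, their_history):
    # Cooperate first, then copy the opponent's previous move.
    return "C" if not their_history else their_history[-1]

def always_defect(my_history, their_history):
    return "D"

def play(strategy_a, strategy_b, rounds=100):
    hist_a, hist_b, score_a, score_b = [], [], 0, 0
    for _ in range(rounds):
        move_a = strategy_a(hist_a, hist_b)
        move_b = strategy_b(hist_b, hist_a)
        pa, pb = POINTS[(move_a, move_b)]
        score_a, score_b = score_a + pa, score_b + pb
        hist_a.append(move_a)
        hist_b.append(move_b)
    return score_a, score_b

print(play(tit_for_tat, tit_for_tat))      # (300, 300): sustained cooperation
print(play(always_defect, always_defect))  # (100, 100): stuck at the Nash equilibrium
print(play(tit_for_tat, always_defect))    # (99, 104): betrayal wins a little once, then both do poorly

Constant betrayal “wins” the first encounter but drags both players down to the equilibrium payoff, while two tit-for-tat players sustain cooperation and score far better over the long run.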
In any case, analyzing conflicts from a game-theory perspective is a sound approach to help you understand how your situation is likely to play out. You can write out the payoff matrix and use a decision tree (see Chapter 6) to diagram different choice scenarios and their potential outcomes, from your perspective. Then you can figure out how you get to your desired outcome.

NUDGE NUDGE WINK WINK

To get to a desired outcome in a game, you may have to influence other players to make the moves you want them to make, even if they may not want to make them initially. In these next few sections, we present mental models that can help you do just that. They work well in conflict situations but also in any situation where influence is useful.

First, consider six powerful yet subtle influence models that psychologist Robert Cialdini presents in his book Influence: The Psychology of Persuasion. Cialdini recounts a study (since replicated) showing that waiters increase their tips when they give customers small gifts. In the study, a single mint increased tips by 3 percent on average, two mints increased tips by 14 percent, and two mints accompanied by a “For you nice people, here’s an extra mint” increased tips by 23 percent.

The mental model this study illustrates is called reciprocity, whereby you tend to feel an obligation to return (or reciprocate) a favor, whether that favor was invited or not. In many cultures, it is generally expected that people in social relationships will exchange favors like this, such as taking turns driving a carpool or bringing a bottle of wine to a dinner party. Quid pro quo (Latin for “something for something”) and I’ll scratch your back if you’ll scratch mine are familiar phrases that relate to this model.

Reciprocity also explains why some nonprofits send you free address labels with your name on them along with their donation solicitation letters. It also explains why salespeople give out free concert or sports tickets to potential high-profile clients. Giving someone something, even if they didn’t ask for it, significantly increases the chances they will reciprocate.

This natural tendency becomes problematic when it is used to acquire political influence, for example when politicians accept money or favors from lobbyists or others in exchange for later votes. Lobbyists are of course free to financially support candidates holding positions that align with the goals of their group. It becomes a concern when it appears there may be an implicit agreement involved. A 2016 study in the American Journal of Political Science showed that even without an understood agreement, though, politicians are more likely to listen to a donor than to another local constituent (see figure).

Reciprocity: Access to U.S. Congressional Officials

The second model that Cialdini describes is commitment—if you agree (or commit) to something, however small, you are more likely to continue to agree later. That’s because not being consistent causes psychological discomfort, called cognitive dissonance (see Chapter 1). Commitment explains why websites favor button titles like “I’ll sign up later” instead of “No thanks”; the former implies a commitment to sign up at a later time. The sales foot-in-the-door technique follows the same principle, where a mattress salesperson tries to get a “small yes” out of you (asking, for instance, “Do you want to sleep better at night?”), since that makes it more likely they’ll get to a “big yes” (in answer to “Do you want to buy this mattress?”).
Salespeople will also try to find common ground through a model Cialdini calls liking. Quite simply, you are more prone to take advice from people you like, and you tend to like people who share characteristics with you. That’s why they ask you questions such as “Are you a baseball fan?” or “Where did you grow up?” and, after your response, they might tell you, “I’m a Yankees fan too!” or “Oh, my cousin lives there....” The technique of mirroring also follows this model, where you mirror the physical and verbal cues of people you talk to. People tend to do this naturally, but trying to do this more (for example, consciously folding your arms when they fold their arms) can help you gain people’s trust. Studies show that the more you mirror, the more you will be perceived as similar.

People want to emulate the people they like and trust. “Global Trust in Advertising,” a 2015 Nielsen survey of consumers across sixty different countries, found that 83 percent of people trusted recommendations from their friends and family (people they typically like), a higher rate than any form of advertising studied. This is why word-of-mouth referrals are so important to businesses. Some even base their entire business model on them. Think of the many businesses that have sellers hold sales parties with their friends. This tactic was popularized in modern business with companies like Tupperware (containers), Amway (health and home products), Avon (skin care), and Cutco (knives, which, incidentally, Gabriel sold as a teenager). Recently this business model has become even more popular, including the hundreds of new businesses enabled by social media, such as LuLaRoe (clothing) and Pampered Chef (food products).

A fourth influence model is known as social proof, drawing on social cues as proof that you are making a good decision. You are more likely to do things that you see other people doing, because of your instinct to want to be part of the group (see in-group favoritism in Chapter 4). Think of fashion and food trends or “trending” stories and memes online.

Social proof can be effective in encouraging good choices. You have probably seen signs in hotels encouraging you to reuse your towels because it is better for the environment. Cialdini and others hypothesized in the October 2008 Journal of Consumer Research that reuse would increase if the signs instead pointed out that most other guests reuse their towels, and they were right. The social proof message increased towel reuse by 25 percent compared to the standard environmental message. Similarly, universities like Sacred Heart University are using social proof to combat binge drinking, informing students that most of their peers do not engage in the dangerous practice.

Unfortunately, social proof is also effective at encouraging bad behavior. Park rangers at Petrified Forest National Park in Arizona are rightly concerned about theft of petrified wood, as it is the central attraction of their park. Researchers compared two messages: “Please don’t remove the petrified wood from the park, in order to preserve the natural state of the Petrified Forest” and “Many past visitors have removed the petrified wood from the park, changing the natural state of the Petrified Forest.” The latter, negatively framed message had the effect of tripling theft! Sadly, this same concept also extends to suicide rates, which have been shown to increase following media reports of suicide.

The form of social proof most prevalent in our society now is arguably social media.
With the Russian attempts to influence elections in the U.S. and other countries, social media is playing an increasingly central role in global politics. In more everyday situations, follower counts are used as a proxy for social proof, brands retweet or otherwise show real people using their products, and Facebook advertisements showcase which friends have already “liked” a certain company or product.

Scarcity is another influence model, this one describing how you become more interested in opportunities the less available they are, triggering your fear of missing out (FOMO). So-called “limited-time offers” and “once-in-a-lifetime opportunities” prey on this fear. These are easy to spot online, such as the travel site that says there are “only 3 rooms left at this price,” or the retailer reporting “only 5 left in stock.” Scarcity signals also often imply social proof, e.g., this shirt is going to run out because it is so popular.

Cialdini’s sixth major influence model is authority, which describes how you are inclined to follow perceived authority figures. In a series of sensational experiments described in his book Obedience to Authority, psychologist Stanley Milgram tested people’s willingness to obey instructions from a previously unknown authority figure. Participants were asked to assist an experimenter (the authority figure) in a “learning experiment.” They were then asked to give increasingly high electric shocks to “the learner” when they made a mistake. The shocks were fake, but the participant wasn’t told that at the time; the learner was really an actor who pretended to feel pain when the “shocks” were sent. This study has been replicated many times, and a meta-analysis (see Chapter 5) found that participants were willing to administer fatal voltages 28 percent to 91 percent of the time!

In less dramatic settings, authority can still be powerful. Authority explains why celebrity endorsements work, though which types of celebrity endorsements are the most effective changes over time. Nowadays kids are less likely to know Hollywood celebrities and more likely to be influenced by YouTubers or Instagrammers. Similarly, author Michael Ellsberg recounted in Forbes magazine how a guest post on author Tim Ferriss’s blog translated into significantly more book sales than a prime-time segment about his book on CNN and an op-ed printed in the Sunday edition of The New York Times. Authority also explains why simple changes in wardrobe and accessories can increase the likelihood of getting you to do something. For instance, lab coats were worn in the Milgram experiments to convey authority.

Sometimes people even try to support an argument by appealing to a supposed authority even if that person does not have direct expertise in the relevant area. For example, advocates of extreme dosing of vitamin C cite that Linus Pauling, a two-time Nobel Prize winner, supports their claims, despite the fact that he received his awards in completely unrelated areas. Authorities are often more knowledgeable of the facts and issues in their area of expertise, but even then, it is important to go back to first principles and evaluate their arguments on merit. In the words of astrophysicist Carl Sagan, from his book The Demon-Haunted World: “One of the great commandments of science is, ‘Mistrust arguments from authority.’... Too many such arguments have proved too painfully wrong. Authorities must prove their contentions like everybody else.” (See paradigm shift in Chapter 1.)
Similarly, a lack of a certain credential shouldn’t be the sole basis for refuting a person’s argument either. We firmly believe that any intelligent person could learn about any topic with the right research and enough time.

Cialdini’s influence models can be used in many situations, including in adversarial ones where you are trying to persuade others to make certain choices. If you want a crash course on the use of these mental models in real life, just go to a casino, where all of them are used simultaneously to ultimately take your money. Casinos give away a lot of free stuff (reciprocity); they get you to first buy chips with cash (commitment); they try to personalize your experience to your interests (liking); they show you examples of other people who won big (social proof); they constantly present you with offers preying on your fear of missing out (scarcity); and dealers will even give you suboptimal advice (authority). Beware. There is a reason why the house always wins!

PERSPECTIVE IS EVERYTHING

Outside of Cialdini’s six principles, there are several other mental models you can use for influence in conflict situations (and elsewhere), all of which are related to framing (see Chapter 1). Recall how the framing of a concept or situation can change the perception of it, such as how a newspaper headline can frame the same event in dramatically different ways, causing the reader to take away different conclusions. This change in perspective can be used as an effective influence model for good or bad, especially in moments of conflict.

The essay Common Sense by Thomas Paine played a critical role in securing American independence from Great Britain and serves as a potent example from history to illustrate the effectiveness of framing. As the American Revolution began in 1776, most American colonists still thought of themselves not as Americans but as Britons, but after Paine’s intervention that framing started to reverse. Despite increasing hostility between the two sides, colonists were holding out hope for a peaceful reconciliation with their home country. However, it had become clear to some, such as Paine, that King George III was never going to grant the colonists the rights they deserved, and that declaring and fighting for independence was the only way to secure those rights. Paine thought that it was wishful thinking to believe that the conflict would somehow resolve itself amicably and favorably for the colonists, without the admittedly severe consequences of war.

Paine’s genius was in realizing that many more colonists would need to start thinking of themselves as Americans if they were going to secure the rights they sought. In this context, Paine published Common Sense, which made a compelling case for independence in clear, passionate prose. In fact, it was so compelling that it sold more than 500,000 copies in its first year of publication, when the population of the colonies was only 2.5 million—now, that’s a bestseller!

Paine made it clear to many colonists that Britain did not really consider them Britons, citing the way Britain had been treating them. He then made the case, from the perspective of a colonist, that uniting with other colonists to fight for independence as newly minted Americans was the only long-term sensible option. Common Sense ends like this:

Under our present denomination of British subjects, we can neither be received nor heard abroad: The custom of all courts is against us, and will be so, until, by an independence, we take rank with other nations.
These proceedings may at first appear strange and difficult; but, like all other steps which we have already passed over, will in a little time become familiar and agreeable; and, until an independence is declared, the Continent will feel itself like a man who continues putting off some unpleasant business from day to day, yet knows it must be done, hates to set about it, wishes it over, and is continually haunted with the thoughts of its necessity.

And it worked. Paine successfully framed the argument in a way that got people to buy into his idea, getting them to start to think of themselves as Americans first, not Britons. This created the necessary support for the United States Declaration of Independence, which was written later that year. In fact, John Adams, the second president of the U.S., wrote that “without the pen of the author of Common Sense, the sword of Washington would have been raised in vain.”

In conflicts, you may similarly get the outcome you want by winning people over to your point of view. Thomas Paine did this masterfully by building allies when a conflict was unavoidable. Sometimes you may use framing in this way to prevent a direct fight altogether. There are some more subtle aspects of framing to consider, captured in a few mental models that we explore in the rest of this section.

Let’s think about a more mundane situation than the American Revolution: getting a babysitter. While mid-career professionals are unlikely to take up babysitting for extra cash, they are likely to babysit for free when a friend is in a pinch. The first scenario is framed from a market perspective (“Would you babysit my kids for fifteen dollars an hour?”) and the second is framed from a social perspective (“Can you please do me a favor?”). The difference in the way this situation is framed can be thought of as social norms versus market norms and draws on the concept of reciprocity from the previous section.

When you consider something from a market perspective (like babysitting for money), you consider it in the context of your own financial situation and its impact on you in an impersonal way (“I can earn sixty dollars, but it may not be worth my time”). In contrast, when you consider something from the social perspective (like doing your friend a favor), you consider it in the context of whether it is the right thing to do (“My friend needs my help for four hours, so I am going to help her”).

In his book Predictably Irrational, economist Dan Ariely offers another illustrative example, of an Israeli daycare center trying to address the problem of parents showing up late to pick up their kids. As the problem became prevalent, the daycare instituted a fine for showing up late. In spite of the fine, this policy actually resulted in more late pickups. Ariely explains:

Before the fine was introduced, the teachers and parents had a social contract, with social norms about being late. Thus, if parents were late—as they occasionally were—they felt guilty about it—and their guilt compelled them to be more prompt in picking up their kids in the future.... But once the fine was imposed, the day care center had inadvertently replaced the social norms with market norms. Now that the parents were paying for their tardiness, they interpreted the situation in terms of market norms. In other words, since they were being fined, they could decide for themselves whether to be late or not, and they frequently chose to be late. Needless to say, this was not what the day care center intended.

But the real story only started here.
The most interesting part occurred a few weeks later, when the day care center removed the fine. Now the center was back to the social norm. Would the parents also return to the social norm? Would their guilt return as well? Not at all. Once the fine was removed, the behavior of the parents didn’t change. They continued to pick up their kids late. In fact, when the fine was removed, there was a slight increase in the number of tardy pickups (after all, both the social norms and the fine had been removed).

You must be careful not to inadvertently replace social norms with market norms, because you may end up eliminating benefits that are hard to bring back (see irreversible decisions in Chapter 2). Once social norms are undermined, the damage has been done and they are no longer norms. So take pause when you’re thinking about introducing monetary incentives into a situation where social norms are the standard.

Social Norms vs. Market Norms
  Social norms: no money involved; no instant payback; community situations.
  Market norms: money involved; transactional; business situations.

You may run into similar issues when situations revolve around the perception of fairness. Economists use a game called the ultimatum game to study how the perception of fairness affects actions. Here’s how it works: The game is played by two people. One person receives some money (say $10). This first person offers to split the money with the second person (say $5/$5, $7/$3, $8/$2, or whatever they want). This offer is an ultimatum, so the second person only has two choices: to accept or reject the offer. If it’s accepted, they both keep the offered split, and if rejected, they both get nothing.

The purely logical way to play the ultimatum game is for the first person to offer the minimum (e.g., a $9.99/$0.01 split) and for the second person to accept it, since otherwise they would get nothing, and there is no other negotiation possible. In practice, though, across most cultures, the second person usually rejects offers lower than 30 percent of the total, because of the perceived unfairness of the offer. In these circumstances the second person would rather deny the first person anything, even at the expense of receiving nothing themselves. It is important that you keep this strong desire for fairness in mind when you make decisions that impact people important to you, such as those in your family (chore distribution, wills, etc.) or your organization (compensation, promotions, etc.).
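As a rough sketch of the ultimatum game just described, the following Python snippet compares a purely “logical” responder with one who rejects anything below 30 percent of the pot; the threshold, pot size, and function name are illustrative assumptions.

def ultimatum(offer_to_responder, pot=10.0, rejection_threshold=0.30):
    # The responder accepts only if the offer feels fair enough;
    # a rejection leaves both players with nothing.
    if offer_to_responder >= rejection_threshold * pot:
        return pot - offer_to_responder, offer_to_responder  # (proposer, responder)
    return 0.0, 0.0

# A purely "rational" responder (threshold 0) accepts even one cent:
print(ultimatum(0.01, rejection_threshold=0.0))   # (9.99, 0.01)

# A responder who cares about fairness rejects the same lowball offer,
# so the proposer is better off making a more even split:
print(ultimatum(0.01))   # (0.0, 0.0)
print(ultimatum(3.00))   # (7.0, 3.0)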
Just like social norms versus market norms, framing can have a substantial effect on the perception of fairness in various situations. Another pair of framings that come up often is distributive justice versus procedural justice. Distributive justice frames fairness around how things are being distributed, with more equal distributions being perceived as more fair. By contrast, procedural justice frames fairness around adherence to procedures, with more transparent and objective procedures being perceived as more fair. If your rich grandfather leaves his fortune to all his kids equally, that would probably be perceived as fair from a distributive justice perspective. However, if one of the kids was taking care of your grandfather for the last twenty years, then this distribution no longer seems fair from a procedural justice perspective. Many current political debates around topics such as income inequality and affirmative action revolve around these different formulations of justice. Sometimes this distinction is framed as fair share versus fair play.

For example, in the U.S., K–12 public education is freely available to all. Because of this educational access, some conclude that everyone has an equal opportunity to become successful. Others believe that the quality of public educational opportunities differs widely depending on where you live, and that education itself doesn’t grant access to the best advancement opportunities, which often come through family and social connections. From this latter perspective, fair play doesn’t really exist, and so there needs to be some corrections to achieve a more fair share, such as affirmative action or similar policies. As Martin Luther King Jr. put it in a May 8, 1967, interview with NBC News: “It’s all right to tell a man to lift himself by his own bootstraps, but it is a cruel jest to say to a bootless man that he ought to lift himself by his own bootstraps.”

In any case, perceived unfairness triggers strong emotional reactions. Knowing that, people will try to influence you by framing situations from a fairness perspective. In fact, many arguments try to sway you from rational decision making by pulling at your emotions, including fear, hope, guilt, pride, anger, sadness, and disgust. Influence by manipulation of emotions, whether created by perceived injustice, violation of social norms, or otherwise, is called appeal to emotion. Fear is a particularly strong influencer, and it has its own named model associated with it, FUD, which stands for fear, uncertainty, and doubt. FUD is commonly used in marketing (“Our competitor’s product is dangerous”), political speeches (“We could suffer dire consequences if this law is passed”), religion (eternal damnation), etc.

A related practice is the use of a straw man, where instead of addressing your argument directly, an opponent misrepresents (frames) your argument by associating it with something else (the straw man) and tries to make the argument about that instead. For example, suppose you ask your kid to stop playing video games and do his homework, and he replies that you’re too strict and never let him do anything. He has tried to move the topic of conversation from doing homework to your general approach to parenting.

In complex subjects where there are a multitude of problems and potential solutions (e.g., climate change, public policy, etc.), it is easy to have two people talk past each other when they both set up straw men rather than address each other’s points. In these settings it helps to get on the same page and clarify exactly what is under debate. However, sometimes one side (or both) may be more interested in persuading bystanders than in resolving the debate. In these situations, they could be deliberately putting up a straw man, which can unfortunately be an effective way to frame the argument to their advantage in terms of bystander influence.

Many negative political ads and statements use straw men to take a vote or action out of context. You may be familiar with the National Football League (NFL) controversy regarding the fact that some players kneeled during the national anthem in protest of police brutality against African Americans. Some politicians responded by criticizing the action as disrespectful to the military. Shifting the focus to how the players were protesting drew attention away from the underlying issue of why they were protesting.

Another related mental model is ad hominem (Latin for “to the person”), where the person making the argument is attacked without addressing the central point they made.
“Who are you to make this point? You’re not an expert on this topic. You’re just an amateur.” It’s essentially name-calling and often involves lobbing much more incendiary labels at the other side. Political discourse in recent years in the U.S. is unfortunately littered with this model, and the usual names leveled are so undignified that we don’t want to include them in our book. This model is the flip side of the authority model we examined in the last section. Instead of relying on authority to gain influence, here another’s authority is being attacked so that they will lack influence.

Again, like straw man and appeal to emotion, these models attempt to frame a situation away from an important issue and toward another that is easier to criticize. When you are in a conflict, you should consider how its framing is shaping the perception of it by you and others. Take the prisoner’s dilemma. The prosecutors have chosen to frame the situation competitively because, for them, the Nash equilibrium with both criminals getting five years is actually the preferred outcome. However, if the criminals can instead frame the situation cooperatively—stick together at all costs—they can vastly improve their outcome.

WHERE’S THE LINE?

In Chapter 3, we advised seeking out design patterns that help you more quickly address issues, and watching out for anti-patterns, intuitively attractive yet suboptimal solutions. Influence models like those we’ve been discussing in the past two sections can also be dark patterns when they are used to manipulate you for someone else’s benefit (like at the casino). The name comes from websites that organize their sites to keep you in the dark through using disguised ads, burying information on hidden costs, or making it really difficult to cancel a subscription or reach support. In short, they use these types of patterns to manipulate and confuse you. However, this concept is also applicable to everyday life offline as well. And knowing a few specific dark patterns can be helpful in adversarial situations.

You’re probably familiar with the mythical tale of the Trojan horse, a large wooden horse made by the Greeks to win a war against the Trojans. The Greeks couldn’t get into the city of Troy, and so they pretended to sail away, leaving behind this supposed parting gift. What the Trojans didn’t know is that the Greeks also left a small force of soldiers inside the horse. The Trojans brought the horse into the city, and under the cover of night, the Greek soldiers exited the horse and proceeded to destroy Troy and win the war.

A Trojan horse can refer to anything that persuades you to lower your defenses by seeming harmless or even attractive, like a gift. It often takes the form of a bait and switch, such as a malicious computer program that poses as an innocuous and enticing download (the bait), but instead does something nefarious, like spying on you (the switch). A familiar example would be an advertised low price for an item (such as a hotel room) that doesn’t really exist at that price (after “resort fees” or otherwise). Builders similarly attract buyers to new-construction homes with low list prices that correspond to so-called “builder-grade” finishes that no one really wants. They then proceed to show buyers a model home with more expensive finishes—all upgrades—which in aggregate can easily push the bounds of a buyer’s budget. If it sounds too good to be true, it usually is.

Spectacular examples of dark patterns can be found in business.
Enron, a now bankrupt energy company, once built a fake trading floor at its Houston headquarters to trick Wall Street analysts into believing that Enron was trading much more than it actually was. When the analysts came to Houston for Enron’s annual meeting, the Enron executives pretended that there was all this action going on, when in fact it was all a ruse that they had been rehearsing, including having an elaborate array of TVs and computers assembled into a “war room.” Theranos, a now bankrupt healthcare company, committed a similar fraud when putting on demonstrations of its “product” for partners, including executives from Walgreens. Theranos machines were put on display, but according to the U.S. Securities and Exchange Commission, the blood samples collected were actually run on outside lab equipment that Theranos purchased through a shell company.

The Enron and Theranos tactics both exemplify another dark pattern, called a Potemkin village, which is something specifically built to convince people that a situation is better than it actually is. The term is derived from a historically questionable tale of a portable village built to impress Empress Catherine II on her 1787 visit to Crimea. Nevertheless, there are certainly real instances of Potemkin villages, including a village built by North Korea in the 1950s near the DMZ to lure South Korean soldiers to defect, and, terribly, a Nazi-designed concentration camp in World War II fit to show the Red Cross, which actually disguised a way station to Auschwitz. In film, The Truman Show depicts a Potemkin village on a massive scale, where the character Truman Burbank (played by Jim Carrey) resides in an entirely fake town filled with actors as part of a reality TV show.

A form of this dark pattern can occur online when a website makes it seem like it has more users or content than it actually does in order to get you to participate. For example, the infamous dating site Ashley Madison (which targets people already in relationships) was found to be sending messages from fake female accounts to lure males in.

The military has employed this model widely, from dummy guns to dummy tanks and even dummy paratroopers. These were used by all sides in World War II and in many other armed conflicts to trick foreign intelligence services. They are also used internally in training exercises. As technology has improved, so have the dummies. Modern dummies can mimic the heat signature of a real tank, even fooling infrared detectors. People similarly make homes and businesses seem secure by putting up fake security cameras, having lights in their home on timers, or even putting up signs for a security service they don’t actually use. A related business practice is known as vaporware, where a company announces a product that it actually hasn’t made yet to test demand, gauge industry reaction, or give a competitor pause from participating in the same market.

In any conflict situation, you should be on the lookout for dark patterns. While many influence models, such as the ones in this section, are commonly thought of as malicious and are therefore easier to look out for (e.g., bait and switch), others from the previous two sections are subtler. Many are considered more innocuous (e.g., scarcity), but they too can all be used to manipulate you. For example, are the common nonprofit uses of reciprocity techniques (free address labels) or social proof (celebrity endorsements) also dark patterns?
In one sense, they might lead you to donate more than you would otherwise. However, it may be a good cause and they aren’t tricking you in the same way that a hidden bait-and-switch cost is. This sliding scale poses an interesting ethical question, one that any organization in business or politics is often faced with: Should you focus on truth and clarity in your promotional materials? Or should you look to influence models to find language that is more persuasive, perhaps due to its emotional appeal? Do the ends justify the means? Only you can decide where the line is for you.

THE ONLY WINNING MOVE IS NOT TO PLAY

Considering a conflict through a game-theory lens helps you identify what you have to gain and what you have to lose. We have just looked at models that increase your chances of a good outcome through influencing other players. Now we will consider the same problem from the inverse (see inverse thinking in Chapter 1) and explore models that decrease your chances of a bad outcome. Often this means finding a way to avoid the conflict altogether.

At the climax of the classic 1983 movie WarGames, World War III seems imminent. An artificial intelligence (known as Joshua) has been put in charge of the U.S. nuclear launch control system. Thinking he has hacked into his favorite game manufacturer, a teenage hacker (played by Matthew Broderick) unwittingly asks Joshua to play a “game” against him called Global Thermonuclear War, setting off a chain of events that has Joshua attempting to launch a real full-scale nuclear attack against Russia. Through dialogue, the character Professor Falken explains why he created Joshua and this game:

The whole point was to find a way to practice nuclear war without destroying ourselves. To get the computers to learn from mistakes we couldn’t afford to make. Except, I never could get Joshua to learn the most important lesson.... Futility. That there’s a time when you should just give up.

Professor Falken then draws an analogy to tic-tac-toe, continuing,

There’s no way to win. The game itself is pointless! But back at the war room, they believe you can win a nuclear war. That there can be “acceptable losses.”

When all hope seems lost, the teenager recalls this conversation and asks if there is any way to make Joshua play against itself in tic-tac-toe, hoping the computer will learn that any strategy ends in a tie. After learning the futility of playing tic-tac-toe, Joshua proceeds to simulate all the possible strategies for the Global Thermonuclear War game and comes to the same conclusion. He says (in a computer voice):

A strange game. The only winning move is not to play. How about a nice game of chess?

The reason that there is no winner in Global Thermonuclear War is that both sides have amassed enough weapons to destroy the other side and so any nuclear conflict would quickly escalate to mutually assured destruction (MAD). As a result, neither side has any incentive to use its weapons offensively or to disarm completely, leading to a stable, albeit tense, peace.

Mutually assured destruction isn’t just a military model. A parallel in business is when companies amass large patent portfolios, but generally don’t use them on one another for fear of escalating lawsuits that could potentially destabilize all the companies involved.
Occasionally you see these suits and countersuits, such as the ones between Apple and Qualcomm (over chip patents), Oracle and Google (over Java patents), and Uber and Google (over autonomous vehicle patents), but these companies often have so many patents (sometimes tens of thousands each) that there could be literally hundreds of suits like these if not for MAD.

There are countless possible destructive outcomes to a conflict besides this arguably most extreme outcome of MAD. Engaging in any direct conflict is dangerous, though, because conflicts are unpredictable and often cause collateral damage (see Chapter 2). For example, drawn-out divorce battles can be harmful to the children. That’s why it makes sense to consider conflict prevention measures like mediation, or, more generally, diplomacy (see win-win in Chapter 4 for some related mental models).

If diplomacy by itself doesn’t work, though, there is another set of models to turn to, starting with deterrence, or using a threat to prevent (deter) an action by an adversary. Credible mutually assured destruction makes an excellent deterrent. But even one nuclear blast is so undesirable that simply the possession of a nuclear weapon has proven to be a powerful deterrent. For example, North Korea seemingly developed nuclear weapons to secure its survival as a state, despite being an authoritarian dictatorship with a well-documented history of human rights violations. So far, this tactic is working as a deterrence strategy along with other strategies it pursues, including threats of conventional bombing of South Korea and aligning with China.

The deterrence model can be appropriate when you want to try to prevent another person or organization from taking an action that would be harmful to you or society at large. In the criminal justice system, punishments may be enacted to try to deter future crime (e.g., three-strikes laws). Government regulations are often designed in part to deter unpleasant future economic or societal outcomes (e.g., deposit insurance deterring bank runs). Businesses also take actions to deter new entrants, for example, by using their scale to price goods so low that new firms cannot profitably compete (e.g., Walmart) or lobbying for regulations that benefit them at the expense of competition (e.g., anti–net neutrality laws).

The primary challenge of this model, though, is actually finding an effective deterrent. As we discussed in Chapter 2, things don’t always go according to plan. When you want to put a deterrent in place, you must evaluate whether it is truly effective and whether there are any unintended consequences. For example, what are effective crime deterrents? Research shows that people are more deterred by the certainty they will be caught and convicted than by the specific punishment they might receive. If there is little chance of getting caught, some simply do not care what the punishment is. Further, most people are not even aware of the specific punishments they might face. Financial fraudster Bernie Madoff thought he was never going to be caught and probably never considered the possibility of a 150-year prison sentence. Additionally, there is evidence to suggest that not only does prison time not reduce repeat offenses, but there is actually a chance that it increases the probability of committing a crime again. The real solution to deterring crime is likely related to the root cause of why people commit specific types of crimes rather than to any particular punishment.
A tactical approach to deterrence is the carrot-and-stick model, which uses a promise of a reward (the carrot) and at the same time a threat of punishment (the stick) to deter behavior. In our household, we sometimes try to deter fighting between our kids using dessert as the carrot and loss of iPad time as the stick. It’s a form of good cop, bad cop. What you don’t want is for the carrot-stick combination to be too weak such that the rational decision is just to ignore the carrot and deal with the stick. Economic sanctions and corporate fines are hotly debated in terms of efficacy because of this, with the latter often being thought of as more of a cost of doing business than a deterrent.

One effective example of the carrot-and-stick approach is Operation Ceasefire, an initiative started in Boston that aims to curtail gang-related violence. The stick part of the program focuses on a message to specific repeat perpetrators of violent crime about the certainty of future enforcement, including a promise that any new violence, especially gun violence, will result in an immediate and intense police response. The carrot part of the program is the offer of help to these same individuals, including money, job training, community support, and one-to-one mentoring in a concerted effort to get them to live productive lives. In the U.S., cities that have implemented this strategy, such as Boston, Chicago, Cincinnati, and Indianapolis, have amazingly reduced their gun homicide rates nearly 25 to 60 percent by assisting only a handful of people.

A sister mental model to deterrence is containment. In global conflicts, containment is an attempt to contain the enemy, to prevent its further expansion, be it expanding geographically (e.g., invading a neighboring country), militarily (e.g., obtaining more nuclear weapons), or politically (e.g., spreading an ideology). Containing an ongoing conflict can save you energy and resources. Think of it like treating a cut before it gets infected or removing an early-stage tumor before it metastasizes. Containment acknowledges that an undesirable occurrence has already happened, that you cannot easily undo it, and so instead you’re going to try to stop it from spreading or occurring again in the future. For example, HIV isn’t yet easily curable, but if you catch it early, with modern treatments you can usually contain it such that it does not develop into AIDS.

You should apply a containment strategy in situations where you want to stop something bad from spreading, such as a negative rumor or a harmful business practice. For example, Facebook and Twitter probably couldn’t have gotten rid of fake news from their platforms in the run-up to the 2016 U.S. election, but they could have done a better job at containing it. These types of situations can get out of hand quickly, so you often first want to stop the bleeding, by a quick and dirty method if necessary. Once the situation has stabilized, you can take a step back, find the root cause (see Chapter 1), and then try to find a more reliable long-term solution. In an emergency medical situation, you may use a tourniquet to stop actual bleeding. The metaphorical equivalent is situationally dependent, but it usually involves doing something fast and definitive, such as issuing a clear apology. In some cases, the best short-term option might be shutting down the area where the problem exists, kind of like amputating an infected limb to prevent sepsis.
In a personal context, that might mean severing a toxic relationship, at least for the time being. In an organizational context, it might mean terminating a project or employee.

Another containment tactic is quarantine, the restriction of the movement of people or goods in order to prevent the spread of disease. Your spam folder is a form of quarantine, curbing the impact of suspicious emails. Twitter started dealing with aggressive bots and people by quarantining them behind an additional tap or click so that fewer people see their messages.

A related tactic is flypaper theory, which calls for you to deliberately attract enemies to one location where they are more vulnerable, like attracting flies to flypaper, usually also directing them far away from your valuable assets. A former commander of U.S. ground forces in Iraq, General Ricardo Sánchez, described the benefits of this strategy in a 2003 CNN interview with regard to preventing terrorism on U.S. soil: “This is what I would call a terrorist magnet, where America, being present here in Iraq, creates a target of opportunity.... But this is exactly where we want to fight them.... This will prevent the American people from having to go through their attacks back in the United States.”

In a computing context, this is known as a honeypot, which is used to attract and trap malicious actors for study, in the same way honey lures bears. A honeypot may be a special set of servers set up to look like the core servers where valuable data is stored, but which instead are isolated servers set up specifically to entrap hackers. A sting operation by police where they lure criminals into a place to arrest them could be called an offline honeypot.

Without containment, bad circumstances can spread, possibly leading to a domino effect, where more negative consequences unfold in inevitable succession like falling dominoes (see also cascading failure in Chapter 4). In the game-theory context, this effect could be a series of player choices that lands you in a bad outcome. Consider an iterated game of prisoner’s dilemma. While in each turn it is attractive to betray the other players because you get outsized yields that turn, doing so, especially repeatedly, in most cases leads to everyone else following suit, leaving you and everyone else stuck in the suboptimal Nash equilibrium.

In the Cold War, the primary worry for the West was the spread of communism, and the dominoes were countries that might fall, one after another, which was justification to fight containment wars, as in Korea and Vietnam. The thought was that if Korea and Vietnam fell, then Laos and Cambodia might be next, and more and more countries would fall until all of Asia (even places like India) would eventually be subsumed by communism.

However, be aware that the domino effect is invoked a lot more than is warranted, because people are generally bad at determining both the likelihood that events might occur and the causal relationship between events. These miscalculations often manifest in three related models, usually fallacious though not always, that you should be on the lookout for. The first is the slippery slope argument: arguing that one small thing leads to an inevitable chain of events and a terrible final outcome (in the eyes of the person making the argument).
Here is an example of a common slippery slope argument: “If we allow any gun control, then it will eventually result in the government taking all guns away.” This line of reasoning is usually fallacious because there often isn’t 100 percent inevitability in each piece of the logical chain.

The second model is broken windows theory, which proposes that visible evidence of small crimes, for example broken windows in a neighborhood, creates an environment that encourages worse crimes, such as murder. The thinking goes that broken windows are a sign that lawlessness is tolerated, and so there is a perceived need to hold the line and prevent a descent into a more chaotic state (see herd immunity in Chapter 2). While interventions associated with broken windows theory are intuitively appealing, it is unclear how effective they are at actually reducing widespread criminal activity relative to alternatives. Related theories often take the form of a contagion metaphor, where something the person doesn’t like (e.g., rap music, homosexuality, socialism) is compared to a disease that will spread through society, continually becoming more virulent if left unchecked.

The third model to watch out for is gateway drug theory, which makes the claim that one drug, such as marijuana, is a gateway to more dangerous drug use. However, the evidence for this claim is also murky at best (see correlation does not imply causation in Chapter 5).

You should question any situation where one of these models arises and analyze its veracity for yourself (see arguing from first principles in Chapter 1). Nevertheless, there are instances when a model like this can be true. Consider how businesses sometimes capture customers through a loss leader strategy, where one product is priced low (the gateway drug) to increase demand for complementary products with higher margins. The prototypical example is a supermarket discounting milk to draw in customers, who will almost certainly leave with more items. Similarly, companies sell mobile phones or printers for low prices knowing they will make the money up in the long run through monthly service plans or high ink prices. We have nearly given up on letting our kids download free apps because we anticipate the endless nagging about in-app purchases.

When analyzing these domino-effect situations, write down each step in the logical chain (list each domino) and try to ascribe a realistic probability to each event (the probability that each will fall). Even if the probability is not 100 percent in every case, it might still be likely that some dominoes will fall. In that case, you need to ask yourself: Is that acceptable? Do I need to engage in more active containment, or can a more wait-and-see approach be taken? For example, with gun control, banning assault rifles is extremely unlikely to lead to the government taking away all guns, but it might very well lead to more gun control of other assault-like weapons or add-ons. A 2017 Politico/Morning Consult poll found 72 percent of Americans favored both “banning assault-style weapons” and “banning high-capacity magazines.”
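As a minimal sketch of that exercise, here is one way to write down a chain of dominoes and multiply through the estimated probabilities; the steps and numbers are entirely made up for illustration.

# Hypothetical chain of dominoes, each with an estimated probability of "falling"
# given that the previous step happened (the specific numbers are purely illustrative).
chain = [
    ("Step 1 happens", 0.9),
    ("Step 2 follows from step 1", 0.7),
    ("Step 3 follows from step 2", 0.5),
    ("Feared final outcome", 0.3),
]

running = 1.0
for step, probability in chain:
    running *= probability
    print(f"Chance of reaching '{step}': {running:.0%}")

print(f"Chance the whole chain plays out: {running:.0%}")  # roughly 9 percent

Even with fairly likely individual steps, the chance of the full chain playing out is small, which is exactly the check that slippery slope arguments usually skip.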
Of course, the conflict Britain sought to avoid happened anyway. And that's the worry with appeasement: you may just be delaying the inevitable.

As a parent, sometimes appeasement is necessary to get through the day. For instance, we tend to bend the rules when we are traveling. Everyone is tired, and much of the time is spent in crowded, cramped hotel rooms or cars. At times like these, our normal diplomacy, deterrence, and containment tactics don't work as smoothly. As a result, the kids often end up with way more snacks and screen time than they would normally get—appeasement tactics that effectively prevent meltdowns and fights.

Deterrence, containment, and appeasement are all strategic mental models to keep you out of costly direct conflict. You want to enlist these models when other conflict-avoidance models have failed and you are still faced with a situation that you think you can't "win," as when engaging would create so much damage it isn't worth it or when you want to preserve your resources for more fruitful engagements (see opportunity cost in Chapter 3). Finally, as Joshua said in WarGames, sometimes "the only winning move is not to play." An increasingly common example is conflict with the online troll, someone whose whole game is to irritate people and bait them into arguments they can't win. As a result, the best move is usually not to engage with them (don't feed the trolls; don't stoop to their level; rise above the fray), though, as in any situation, you have to assess it on a case-by-case basis, and where reporting mechanisms exist, you should consider them too. Any parent will similarly tell you that you need to pick your battles.

CHANGING THE GAME

From a game-theory perspective, deterrence and related models effectively change a game, adjusting how players perceive their payoff matrix and therefore what decisions they make when playing the game. When you practice deterrence through a credible threat, you enumerate a red line, which describes a figurative line that, if crossed, would trigger retaliation (see commitment in Chapter 3). That threat of retaliation causes other players to reconsider their choices. This line is also referred to as a line in the sand, describing a figurative line (drawn in the sand) that you do not intend to be crossed. When using this strategy, you must give enough notice so that others can adjust their strategies based on your threat. You also have to explain exactly what you intend to do when the red line is crossed.

The most severe threat is a so-called nuclear option, signaling that you will undertake some kind of extreme action if pressed. For example, North Korea has repeatedly threatened the literal nuclear bombing of South Korea if invaded. Another extreme tactic is a zero-tolerance policy, where even a minor infraction results in strict punishment. For example, a zero-tolerance drug policy would have you fired from your job or expelled from school on the first offense, as opposed to a series of punishments that escalate to an extreme measure.

The problem with these tactics is that someone can call your bluff, challenging you to act on your threat, claim, or policy and prove you mean it, calling you out. At that point, if you don't follow through on your promise of action, you will lose significant credibility and your opponent's payoff matrix might not change the way you want it to. For that reason, you should be prepared to follow through on whatever deterrence threats you make.
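To make the payoff-matrix framing concrete, here is a minimal sketch in Python. The payoff numbers and action names are illustrative assumptions for this example, not figures from the text; the point is only that a credible retaliation cost can flip an opponent's best response away from crossing your red line.

# Minimal sketch: how a credible retaliation threat can change an opponent's
# best response. Payoff values below are illustrative assumptions, not data.

def best_response(payoffs):
    """Return the action that gives the opponent the highest payoff."""
    return max(payoffs, key=payoffs.get)

# Opponent's payoffs with no threat in place: crossing the red line pays more.
no_threat = {"cross_red_line": 5, "hold_back": 2}
print(best_response(no_threat))    # -> cross_red_line

# With a credible threat, expected retaliation imposes a cost on crossing.
retaliation_cost = 6
with_threat = {"cross_red_line": 5 - retaliation_cost, "hold_back": 2}
print(best_response(with_threat))  # -> hold_back

The specific numbers don't matter; what matters is that the threat only changes the opponent's choice if they believe you will actually pay the cost of retaliating, which is why following through is so important.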
Another common situation to look out for is a war of attrition, where a long series of battles depletes both sides' resources, eventually leaving vulnerable the side that starts to run out of resources first. Each battle in a war of attrition hurts everyone involved, so in these situations you want to have more resources at the start, to lose resources at a much slower rate than your opponents, or both. The most famous military example is Germany's invasion of Russia in World War II, the deadliest conflict in human history. Over the course of the invasion, military losses for the Soviets were more than ten million, compared with more than four million for Germany. Russia had significantly more resources, however, and Germany was never able to capture Moscow. This war of attrition accounted for 80 percent of deaths suffered by the German armed forces in all of World War II, depleting their resources enough to open them up to defeat on all fronts.

Big companies often use this strategy against upstarts through various means, such as protracted lawsuits, price wars, marketing campaigns, and other head-to-head face-offs, bleeding them dry. In sports, a team may use this strategy if they are more physically fit than the other, such that at the end of the game, the more fit team can push to victory. It's essentially a waiting game.

Because a war of attrition is a long-term strategy, it can counterintuitively make sense to lose a battle intentionally, or even many battles, to win the eventual war. The winner of such a battle gets a hollow victory, sometimes referred to as an empty victory or Pyrrhic victory. The latter is named after King Pyrrhus of Epirus, Greece, whose army suffered irreplaceable casualties in defeating the Romans at the Battle of Heraclea, and then ultimately lost the war. In sports and gaming, this scenario is known as a sacrifice play. Examples include bunts and sacrifice flies in baseball and intentionally giving up a piece to get better board position in chess.

From the other side, though, if you see that you are going to lose a war of attrition, you need to find a way out or a way to change the game. One way to do that is to engage in guerrilla warfare, which focuses your smaller force on nimbler (guerrilla) tactics that the unwieldy larger force has trouble reacting to effectively (see leverage in Chapter 3). Max Boot, author of Invisible Armies, recounted in a 2013 interview on NPR, titled "American Revolution Reinvents Guerrilla Warfare," how the colonists in the American Revolution used guerrilla warfare right from the start of the conflict:

Well, it first of all comes down to not coming out into the open, where you could be annihilated by the superior firepower of the enemy. The British got a taste of how the Americans would fight on the very first day of the Revolution, with the shot heard around the world, the Battle of Lexington and Concord, where the British regulars marched through the Massachusetts countryside. And the Americans did not mass in front of them but instead chose to slither on their bellies—these Yankee scoundrels, as the British called them—and fired from behind trees and stone walls. And not come out into the kind of open gentleman's fight that the British expected, and instead, took a devastating toll on the British regiment.

This concept has a direct parallel in guerrilla marketing, where startup businesses use unconventional marketing techniques to promote their products and services on relatively small budgets.
Examples of this type of marketing include PR stunts and viral videos, often taking direct aim at larger competitors, much like guerrilla warriors taking aim at a larger army. As an example, Dollar Shave Club, a subscription razor service, launched its product with a viral video. While it couldn't compete on the bigger businesses' terms (e.g., expensive TV and print ads), its edgy launch video entitled "Our Blades Are F***ing Great" immediately put the company on the map, setting it on a rapid path ultimately to a one-billion-dollar acquisition.

One adage to keep in mind when you find yourself in a guerrilla warfare situation is that generals always fight the last war, meaning that armies by default use strategies, tactics, and technology that worked for them in the past, or in their last war. The problem is that what was most useful for the last war may not be best for the next one, as the British experienced during the American Revolution. The most effective strategies, tactics, and especially technologies change over time. If your opponent is using outdated tactics and you are using more modern, useful ones, then you can come out the victor even with a much smaller force. Essentially, you use your tactical advantage to change the game without their realizing it; they think they are still winning a war of attrition.

On May 27 and 28, 1905, Japan's navy decisively beat Russia's navy in the Battle of Tsushima, sinking twenty-one ships, including seven battleships, with more than ten thousand Russian troops killed, injured, or captured, compared with just three torpedo boats sunk and seven hundred troops killed or injured for Japan. Admiral Tōgō Heihachirō of Japan used advanced tactics for the time and his fleet easily overcame his Russian counterparts, who were clearly fighting the last war. Japan's ships were twice as fast as those of the Russians and equipped with much better guns, shooting 50 percent farther, using mostly high-explosive shells, causing significantly more damage on every hit. It was also the first naval battle where wireless telegraphy was used, and while both sides had some form of it, the Japanese version functioned much better and was more useful in fleet formations. Decisive battles have been won on the back of superior technology like this many times in military history. Don't bring a knife to a gunfight.

This concept is far-reaching, describing any situation where circumstances have changed significantly, leaving the status quo unequipped to deal with new threats. In business, many well-known companies have lost out because they were focused on the old way of doing business, without recognizing rapidly evolving markets. IBM famously miscalculated the rise of the personal computer relative to its mainframe business, actually outsourcing its PC operating system to Microsoft. This act was pivotal for Microsoft, propelling it to capture a significant part of the profits of the entire industry for the next thirty years. Microsoft, in turn, became so focused on its Windows operating system that it didn't adapt it quickly enough to the next wave of operating system needs on the smartphone, ceding most of the profits in the smartphone market to Apple, which is now the most profitable company in history.

Once you start looking, you can find generals fighting the last war all over the place: politicians failing to adapt to new campaign strategies (like John McCain's somewhat staid online presence versus Barack Obama's modern use of social media in the 2008 U.S.
presidential campaign); finance professionals missing the signs of the 2007/2008 financial crisis (because they thought the past could predict the future); or the U.S. education curriculum misreading the staying power of the digital economy (and continuing to fail to incorporate enough engineering).

Employing guerrilla warfare is an example of punching above your weight. In boxing, competitors are grouped by weight, because large differences in weight, all other things being equal, make a fight unfair. This takes us back to the physics models we discussed in Chapter 4 (see inertia). Heavier boxers pack more powerful punches and are generally harder to knock over. A boxer who punches above their weight intentionally fights in a heavier class, taking on larger competitors on purpose.

As a mental model, punching above your weight occurs any time you try to perform at a higher level than is expected of you, even outside a competitive context. Examples include joining a group made up of more accomplished members or writing an op-ed on a subject on which you are not yet a recognized expert. On the macro scale, whole countries punch above their weight when they engage in prominent roles on the world stage, such as Ireland serving as a tax haven for major corporations.

Given the inherent disadvantage of being the smaller player, you should engage in this type of fight only when you can deploy guerrilla tactics that you believe will tilt the game in your favor. If that's the case, though, you may actively seek out these types of conflicts because punching above your weight can have many benefits. These include the obvious benefit of increasing your chances to reach your goals faster, but also potential exposure to large audiences and opportunities to absorb knowledge from world-class experts. However, to follow the metaphor, it can also get you punched hard in the face, so it is inherently risky. It's like when a new TV show gets marketed to the mainstream but does not retain sufficient viewership and gets quickly canceled—not ready for prime time. When deploying such tactics, you will want to reevaluate your odds as the game goes on, to make sure you are on the right track. Are your odds improving? Are you effectively changing the game to be in your favor?

ENDGAME

In chess, once most of the pieces have been removed from the board, you enter a stage called the endgame. This concept has been extended to refer to the final stage of any course of events. Whether you started a conflict or were drawn into one, at some point most conflicts will end and you need an effective plan either to lock in your gains or minimize your losses. Your credible strategy to exit a situation is called your exit strategy. In a military context, the exit-strategy concept has more recently been highlighted with a negative framing in instances where there wasn't a well-thought-out exit plan, e.g., after the U.S. lost troops in a U.N. peacekeeping mission in Somalia and with U.S. involvement in Iraq and Afghanistan. In a business context, an exit strategy usually describes how a company and its investors will get a payoff through either an acquisition, a buyout, or an initial public offering (IPO). In public policy, devising an exit strategy means thinking about the practicalities and consequences of how an entity might get out of certain situations, such as European countries withdrawing from the eurozone.
As applied to your personal life, an exit strategy can be thought of in terms of how you will get out of long-term relationships you don't want to be in or obligations you no longer wish to be burdened with. What is your strategy to make an eventual graceful exit from something you're involved in? For example, if you are on the board of an organization, your exit strategy might involve finding your replacement and setting that person up for success. Your exit strategy doesn't always require a full exit, however. You can also try to find a way to hand off onerous responsibilities to another team member while holding on to the parts that you do enjoy.

In any case, coming up with a well-defined exit plan will keep you from doing things you might later regret. For instance, given the benefits of preserving optionality (see Chapter 2), you should probably come up with an exit strategy that avoids burning bridges, or ruining relationships with individuals or organizations in a way that rules out ever going back to them (ever crossing that bridge again). The short-term satisfaction you might receive from these acts is rarely worth the risk of the escalation, severing of ties, and resulting fallout. Similar acts to avoid are scorched-earth tactics, which refer to burning (scorching) the ground (earth) so it isn't of use to anyone (including yourself)—for example, destroying records.

Sometimes, though, you may just have to exit with the best strategy you can come up with at the moment, even if it means that your exit isn't that clean or graceful, recognizing that the long-term outcomes of staying the course are worse. If a solid exit strategy isn't forthcoming, one tactic is to throw a Hail Mary pass, a last-ditch, long-shot final effort for a successful outcome. The concept comes from a final touchdown attempt in American football where the quarterback throws a really long pass into the end zone in the hope of scoring the final game-winning points. The phrasing became popular after a successful attempt in a 1975 NFL playoff game between the Dallas Cowboys and the Minnesota Vikings, after which Cowboys quarterback Roger Staubach recounted throwing the ball: "I closed my eyes and said a Hail Mary."

Spanish explorer Hernán Cortés made a counterintuitive Hail Mary pass by actually eliminating his expedition's default exit strategy. In 1519, Cortés started a war with the Aztecs that led to the destruction of their empire. However, he had only six hundred men, whereas the Aztecs controlled most of modern-day Mexico. The odds were obviously thought to be heavily against the Spanish, and many of Cortés's soldiers were reasonably wary of his plans. To secure their motivation, Cortés sank his ships to make sure they had no option but to succeed or die. Without the escape hatch of going back to Spain on the boats, the soldiers' best option was to fight with Cortés. Translation errors led some to believe he burned the boats, but now we know he just had them damaged to the point of sinking. Nonetheless, burn the boats lives on as a mental model for crossing the point of no return. (Sometimes people also say crossing the Rubicon, referencing Julius Caesar's crossing of the Rubicon River with his troops in 49 B.C., deliberately breaking Roman law, making armed conflict inevitable and ultimately leading to him becoming dictator of Rome.)

Game theory can again help you work through your potential exit strategies, assessing likely long-term outcomes and evaluating how various tactics might affect them.
While not all situations parallel game-theory models (like the prisoner's dilemma or the ultimatum game), most can still fruitfully be examined through a game-theory lens. In any conflict, whether in the endgame stage or otherwise, we encourage you to list the choices currently available to all the "players," along with the consequences and payoffs. This method should help you decide whether a game is worth playing (or continuing), how to approach playing it, and whether there is some way to change the game so the outcome leans in your favor. Thinking this way also helps you with diplomacy, because using a game-theory lens means you must think about how other players will move and react to your moves, which is a forcing function (see Chapter 4) to empathize with their goals and motivations. And through this same process you might also better clarify your own goals and motivations.

KEY TAKEAWAYS

Analyze conflict situations through a game-theory lens. Look to see if your situation is analogous to common situations like the prisoner's dilemma, ultimatum game, or war of attrition.

Consider how you can convince others to join your side by being more persuasive through the use of influence models like reciprocity, commitment, liking, social proof, scarcity, and authority. And watch out for how they are being used on you, especially through dark patterns.

Think about how a situation is being framed and whether there is a way to frame it that better communicates your point of view, such as social norms versus market norms, distributive justice versus procedural justice, or an appeal to emotion.

Try to avoid direct conflict because it can have uncertain consequences. Remember there are often alternatives that can lead to more productive outcomes. If diplomacy fails, consider deterrence and containment strategies.

If a conflict situation is not in your favor, try to change the game, possibly using guerrilla warfare and punching-above-your-weight tactics.

Be aware of how generals always fight the last war, and know your best exit strategy.

8 Unlocking People's Potential

THE 1992 OLYMPICS WAS THE first to allow active professional basketball players from the National Basketball Association (NBA) to compete. The United States fielded a team dubbed the "Dream Team," which the Naismith Memorial Basketball Hall of Fame has called "the greatest collection of basketball talent on the planet." The team included legendary players Michael Jordan, Larry Bird, and Magic Johnson. In fact, eleven of the twelve players are in the Hall of Fame today. Collectively they defeated their opponents by an average of 44 points, including a 32-point victory against Croatia in the finals. Needless to say, it was a spectacle to watch.

The 1996 Olympics had a similar result, with the U.S. team returning five members of the original Dream Team to join new stars like Shaquille O'Neal and Reggie Miller. Again in 2000 the United States won gold with relative ease. But then in 2004 something curious happened. Despite having the most talented players (including LeBron James, Dwyane Wade, and Allen Iverson), the U.S. team lost three games (the most ever for the U.S.) and left with only the bronze medal. In fact, it lost the first game of the tournament to Puerto Rico by a score of 92–73, the biggest loss ever recorded for any U.S. Olympic basketball team. Argentina then beat the United States in the semifinals in one of the most surprising upsets in Olympic history and went on to win the gold medal that year.
Though Argentina had several NBA players itself, including Manu Ginóbili, hardly anyone expected it to be victorious. Why did the talented U.S. team fall short of the gold? The historical analysis converges on the fact that the U.S. "team" wasn't much of a team at all—more like a loose collection of individual stars. They practiced together only for a few weeks before the tournament, not enough time to get used to playing with one another. They also didn't have enough players with experience in all the different positions. By contrast, other countries selected players to complement one another, and then those players worked together for years, honing their collective playing styles and eventually gelling as teams.

We relate this story because most of us are not able to put together or be part of a dream team packed with the most talented people in their fields the world over. Joy's law is a mental model named after Sun Microsystems cofounder Bill Joy, who remarked at an event in 1990, "No matter who you are, most of the smartest people work for someone else." Former U.S. Secretary of Defense Donald Rumsfeld said something similar, known as Rumsfeld's Rule: "You go to war with the army you have. They're not the army you might want or wish to have at a later time." Both Joy and Rumsfeld acknowledge that organizations hardly ever have perfect resources, nor can they always afford to wait until they have better ones before moving forward. Joy's law further stresses that great people are unlikely to be concentrated in a single organization.

Don't be discouraged, though. With the right leadership, a well-constructed team can accomplish incredible things, as Argentina and Puerto Rico did in the 2004 Olympics. As another example, startup companies that disrupt large incumbents routinely start with relatively tiny amounts of resources, often a hundred to a thousand times less. Yet they become successful because they are the right group of people led in the right way. Instagram had only thirteen employees when Facebook bought it for one billion dollars in 2012; a few years later Facebook bought WhatsApp, with fifty-five employees, for a whopping nineteen billion dollars.

In the startup world, you will sometimes hear about a 10x engineer, an exceptional engineer who produces many times the output of an average engineer: a world-class all-star. Ten isn't an exact number here—it's just meant to signify that a person is much, much better than average, a true outlier. (Of course, this concept applies beyond engineering, as there are top performers in every field.) Organizations are always on the lookout for 10x individuals because they can be the ingredients of a true dream team.

Keeping Joy's law in mind, however, reminds you that just seeking out 10x people is a trap for two reasons. First, they are extremely rare; not every organization can hire world-class talent, because there just isn't enough to go around. The second reason is subtler. There are many excellent people who, despite not being world-class, can achieve 10x output in certain situations, but that output may not be replicated when they switch roles, projects, or organizations.
In other words, when you see outsized output by an individual, such as on a resume or via a reference, it is usually because they have many things working in their favor all at once to produce that outsized impact: role in the organization or team, personality as applied to this role, types of tasks assigned, resources provided, and the value of their unique set of skills and relationships in that particular situation. When one or more of those variables change, the person may not be able to produce at the same level.

We actually view this as a positive. It means that such outsized output can be created within an organization, not by recruiting world-class all-stars, but by crafting the right projects and roles, ones that allow excellent people to reach extraordinary performance given their unique set of characteristics. As a manager, if you can help your team members in this way, you can create a 10x team around you. A 10x team arises when you've helped to arrange everything so that multiple people on your team become 10x contributors all at once. These are the teams that punch above their weight (see Chapter 7), as in defeating a U.S. Olympic Dream Team in basketball, competing successfully against much bigger organizations, and achieving other impressive and unexpected accomplishments. If the members of the team were on different projects, in different roles, or embedded in different organizations, they might not perform this well, but on this particular team, you've helped everyone achieve their full potential. That's the dream of management in any situation.

This chapter is about using mental models to form and lead such incredible teams, 10x teams. A February 4, 1996, quote from former U.S. senator Bill Bradley in The New York Times is apt: "Leadership is unlocking people's potential to become better." When you foster a 10x team, you draw on people's different skills and abilities, allowing each person to play their unique part and collectively achieve outsized impact.

IT TAKES A VILLAGE

To work toward a 10x team, you must recognize that people are not interchangeable. On the same team, one person's 10x role on a project might be another person's 0.1x role on the same project. In figuring out who goes where, you must appreciate the nuanced differences between people, and in particular, appreciate each individual's unique set of strengths, goals, and personality traits so you can craft roles for them that best utilize those characteristics and motivate them.

First, consider personality traits. Both of us are introverts. We strongly prefer small group interactions to large group ones, as we can easily be overstimulated or drained of energy by larger social activities. At the same time, we are totally fine and even thrive when working alone for long periods of time. So we enjoy roles that involve things like reading, writing, planning, and building things like programs and spreadsheets. By contrast, extroverts gain energy from large group interactions. They tend to avoid solitary situations when possible, preferring synchronous interaction. A team role that involves frequent interfacing with others (like many sales roles) and appearing in large group settings (e.g., conferences) is therefore well suited for an extrovert. And conversely, a team role that involves solitary work, like many programming roles, is well suited for an introvert.

Extrovert vs. Introvert

Where personality traits come from is subject to debate, and that debate is generally referred to as nature versus nurture.
Nature refers to traits being explained by genetics, and nurture refers to traits being explained by all the environmental factors that don't come from your genes (parenting, physical environment, culture, etc.). Studies have shown that many personality dimensions (like introversion/extroversion) arise out of a combination of the two. Regardless of the root causes of people's differences, the key insight to remember is that people really are different. What's going on in your head isn't the same as what's going on in someone else's head. You will approach and interpret the same situation differently, filtered through your personality, culture, and life experiences (see frame of reference in Chapter 1). Also, even if derived largely from nurture, most personality traits aren't quick to change once established. That means an introvert isn't likely to become an extrovert (or vice versa) when put in a new situation. You should therefore look to accommodate these traits in the roles you select for yourself or for other people.

You should also know that there are other personality dimensions besides introversion versus extroversion, though we find that one to be the most actionable on a day-to-day basis. There is no widespread agreement on the aspects of personality to focus on, but Lewis Goldberg presented one leading theory in "The Structure of Phenotypic Personality Traits," which suggests there are five key factors:

1. Extroversion (outgoing versus reserved)
2. Openness to experience (curious versus cautious)
3. Conscientiousness (organized versus easygoing)
4. Agreeableness (compassionate versus challenging)
5. Neuroticism (nervous versus confident)

Beyond personality, you're probably familiar with IQ (intelligence quotient), a measure of general intelligence. A form of intelligence that you might not know about is emotional intelligence, measured by EQ (emotional quotient). People with high EQ are typically more empathetic, correlated with high abilities in these areas:

Perceiving complex emotional states in others
Managing these emotions in themselves and others
Using emotions (including their own) to facilitate conversations

Thus, roles that involve group dynamics, coordination, or empathy (e.g., project management, leadership, sales, marketing) are best suited for people with high EQ. (Note that IQ and EQ are independent traits, meaning the same person could have any combination of high or low IQ and EQ.)

When considering people for roles, you must also consider their individual goals and strengths, which can vary widely. A few mental models can help you make some useful distinctions. For example, some people wish to know a little about a lot (generalists) while others wish to go deeper in one area (specialists).

Specialist vs. Generalist

Think about physicians: primary care physicians are generalists and do a bit of everything, serving as the starting point for the diagnosis of any ailment. But for specific conditions, they will refer their patients to specialist physicians, trained and experienced to treat in one area, such as infectious disease or oncology. Or take retail stores: Sometimes you want to go to a general store like Walmart or Target to get a variety of things. Other times a specialty store like Home Depot (home improvement), Best Buy (electronics), or AutoZone is more appropriate. In your organization, you will need people who lean toward one side or the other depending on the situation. In very small organizations, for example, specialists are more of a luxury.
You will want generalists because so many types of problems need to be solved but you have only a few people to address them. In these cases, problems that require specialists are often not frequent enough to justify full-time positions, and so organizations usually rely on outside resources to solve them. By contrast, larger organizations employ many specialists, who can usually get better outcomes than generalists because of their long-term specialist experience.

A similar model from author Robert X. Cringely in his book Accidental Empires describes three types of people required in different phases of an organization's life cycle—commandos, infantry, and police.

Whether invading countries or markets, the first wave of troops to see battle are the commandos.... A startup's biggest advantage is speed, and speed is what commandos live for. They work hard, fast, and cheap, though often with a low level of professionalism, which is okay, too, because professionalism is expensive. Their job is to do lots of damage with surprise and teamwork, establishing a beachhead before the enemy is even aware that they exist....

Grouping offshore as the commandos do their work is the second wave of soldiers, the infantry. These are the people who hit the beach en masse and slog out the early victory, building on the start given them by the commandos.... Because there are so many more of these soldiers and their duties are so varied, they require an infrastructure of rules and procedures for getting things done—all the stuff that commandos hate....

What happens then is that the commandos and the infantry head off in the direction of Berlin or Baghdad, advancing into new territories, performing their same jobs again and again, though each time in a slightly different way. But there is still a need for a military presence in the territory they leave behind, which they have liberated. These third-wave troops hate change. They aren't troops at all but police. They want to fuel growth not by planning more invasions and landing on more beaches but by adding people and building economies and empires of scale.

This model applies equally well to projects. As entrepreneur Jeff Atwood put it in a June 29, 2004, post on his blog, Coding Horror:

You really need all three groups through the life cycle of a project. Having the wrong group (commandos) at the wrong time (maintenance) can hurt you a lot more than it helps. Sometimes being a commando, even though it sounds really exciting, actually hurts the project.

People who like rules and structure are much better suited for police roles, whereas anti-establishment types gravitate toward and excel in commando roles. If you put a commando person in a police role (e.g., project manager, compliance officer, etc.), they will generally rebel and make a mess of everything, whereas if you put a police person in a commando role (e.g., a position involving rapid prototyping, creative deliverables, etc.), they will generally freeze up and stall out.

Another mental model that helps you consider people's strengths is foxes versus hedgehogs, derived from a lyric by the Greek poet Archilochus, translated as "The fox knows many things, but the hedgehog knows one big thing." Philosopher Isaiah Berlin applied the metaphor to categorize people based on how they approach the world: hedgehogs, who like to frame things simply around grand visions or philosophies; and foxes, who thrive on complexity and nuance. Hedgehogs are big picture; foxes appreciate the details.
Like other dichotomous pairs, foxes and hedgehogs excel in different situations. For example, in his book Good to Great, Jim Collins noted that most of the "great" companies profiled were run by hedgehogs who built up massive companies in dogged pursuit of one simple vision:

Those who built the good-to-great companies were, to one degree or another, hedgehogs. They used their hedgehog nature to drive toward what we came to call a Hedgehog Concept for their companies. Those who led the comparison companies tended to be foxes, never gaining the clarifying advantage of a Hedgehog Concept, being instead scattered, diffused, and inconsistent.

However, many of those "great" companies no longer exist. They were great only for a short period of time, often because times had changed while they held on to the same Hedgehog Concept. By comparison, Pulitzer Prize–winning journalist Nicholas Kristof, writing in The New York Times on March 26, 2009, described research detailing why foxes are often better predictors:

Hedgehogs tend to have a focused worldview, an ideological leaning, strong convictions; foxes are more cautious, more centrist, more likely to adjust their views, more pragmatic, more prone to self-doubt, more inclined to see complexity and nuance. And it turns out that while foxes don't give great sound-bites, they are far more likely to get things right.

Again, each type of person should be placed in roles that suit them. For example, a hedgehog will be better at marketing roles, communicating a vision clearly and succinctly. A fox will be better at strategic roles, wading through the nuances of uncertainty and complexity. And you will need both types of people on your teams.

Because 10x teams perform at such a high level, leaders should be actively thinking of ways to create and maintain them. Members of 10x teams tend to have different skills and backgrounds because this gives the team variety in perspectives (see divergent thinking in Chapter 6) and the ability to assign team roles and responsibilities to people well suited for them. This means that at the organizational level, you benefit from diversity because you can create multiple 10x teams by arranging people the right way, drawing on their wide array of skills and other individual traits that diversity provides.

For leaders, when constructing these teams, the starting point is knowing and appreciating the unique characteristics of your team members. Then you can craft team roles and responsibilities based on what will work best for the specific people available. As needed, you can recruit additional people with complementary skills who can further strengthen the team. You also need to keep individual characteristics in mind when you manage the people on these teams, and adjust your management accordingly. We call this managing to the person, as opposed to managing to the role or managing everyone the same. In other words, good people management is not one-size-fits-all. As with many challenges, Maslow's hammer (see Chapter 6) can convince you that you should take the technique that worked for one person a
