Cognition: Deductive Reasoning and Decision Making (2023) PDF

Document Details


Uploaded by PositiveCliché

Ateneo de Manila University

2023

Thomas A. Farmer, Margaret W. Matlin

Tags

deductive reasoning, decision making, cognition, psychology

Summary

This chapter from the 2023 Cognition textbook by Thomas A. Farmer and Margaret W. Matlin explores deductive reasoning and decision-making processes. It covers conditional reasoning, heuristics, and various biases that influence decisions. The chapter also highlights factors that complicate reasoning and offers applications of decision-making research.

Full Transcript


12 Deductive Reasoning and Decision Making

Chapter Outline

Chapter Introduction
Deductive Reasoning
Overview of Conditional Reasoning
Factors That Cause Difficulty in Reasoning
Biases and Deductive Reasoning
Heuristics and Decision Making
Representativeness Heuristic
Availability Heuristic
Anchoring and Adjustment Heuristic
Current Status of Heuristics and Decision Making
Applications of Decision-Making Research
Framing Effect
Overconfidence About Decisions
Hindsight Bias
Decision-Making Style and Psychological Well-Being

Chapter Introduction

The topics of problem solving, deductive reasoning, and decision making are all interrelated. All three topics are included in the general category called "thinking," a phenomenon that requires you to go beyond the information you were given. Thinking typically involves a goal such as a solution, a belief, or a decision. In other words, you begin with several pieces of information, and you must mentally transform that information so that you can solve a problem or make a decision.

Deductive reasoning is a type of reasoning that begins with some specific premises, which are generally assumed to be true. Based on those premises, you judge whether they allow a particular conclusion to be drawn, as determined by the principles of logic. During decision making, you assess information and choose among two or more alternatives. Many decisions are trivial: Do you want mustard on your sandwich? Other decisions are momentous: Should you apply to graduate programs for next year, or should you try to find a job?

In this chapter, we first explore deductive reasoning, focusing heavily on a series of classic effects that have been empirically utilized to unlock the general cognitive principles that govern our ability to deduce. In the following two sections, we cover the topic of decision making. We first consider several heuristics that guide the decision-making process, followed by a consideration of phenomena that have direct applications to decision making in our daily lives.

Deductive Reasoning

In deductive reasoning, you begin with some specific premises that are often true, and you need to judge whether those premises allow you to draw a particular conclusion, based on the principles of logic (Goel & Waechter, 2017; Halpern, 2003; Johnson-Laird, 2005a; Levy, 2010). A deductive-reasoning task provides you with all the information you need to draw a conclusion. Furthermore, the premises can be either true or false, and you must use the rules of formal logic in order to draw conclusions (Goel & Waechter, 2017; Levy, 2010; Roberts & Newton, 2005; Wilhelm, 2005).

One of the most common kinds of deductive reasoning tasks is called conditional reasoning. A conditional reasoning task (also called a propositional reasoning task) describes the relationship between conditions. Here's a typical conditional reasoning task:

If a child is allergic to peanuts, then eating peanuts produces a breathing problem.
A child has a breathing problem.
Therefore, this child has eaten peanuts.

Notice that this task tells us about the relationship between two conditions, such as the relationship between eating peanuts and a breathing problem. The kind of conditional reasoning we consider in this chapter explores reasoning tasks that have an "if... then..." kind of structure. When researchers study conditional reasoning, people judge whether the conclusion is valid or invalid.
In the example above, the conclusion "Therefore, this child has eaten peanuts" is not valid, because some other substance or medical condition could have caused the problem.

Another common kind of deductive reasoning task is called a syllogism. A syllogism consists of two statements that we must assume to be true, plus a conclusion. Syllogisms refer to quantities, so they use the words all, none, some, and other similar terms. Here's a typical syllogism:

Some psychology majors are friendly people.
Some friendly people are concerned about poverty.
Therefore, some psychology majors are concerned about poverty.

In a syllogism, you must judge whether the conclusion is valid, invalid, or indeterminate. In this example, the answer is indeterminate. In fact, those psychology majors who are friendly people and those friendly people who are concerned about poverty could really be two separate populations, with no overlap whatsoever. Notice that your everyday experience tempts you to conclude, "Yes, the conclusion is valid." After all, you know many psychology majors who are concerned about poverty. Many people would automatically respond, "valid conclusion." In contrast, with a little more explicit thinking, you'll reexamine that syllogism and realize that the strict rules of deductive reasoning require you to respond, "The conclusion is indeterminate" (Stanovich, 2009, 2011; Tsujii & Watanabe, 2009).

In a college course in logic, you could spend an entire semester learning about the structure and solution of deductive reasoning tasks such as these. However, we emphasize the cognitive factors that influence deductive reasoning. Furthermore, we limit ourselves to conditional reasoning, a kind of deductive reasoning that students typically find more approachable than syllogisms (Schmidt & Thompson, 2008). As it happens, researchers have found that conditional reasoning tasks and syllogisms are influenced by similar cognitive factors (Mercier & Sperber, 2011; Schmidt & Thompson, 2008; Stanovich, 2011). In addition, people's performance on conditional reasoning tasks is correlated with their performance on syllogism tasks (Stanovich & West, 2000).

In the remainder of this section, we first explore four basic kinds of conditional reasoning tasks before turning to a discussion of factors that cause difficulty in reasoning. We then conclude this section with a discussion of two cognitive errors that people often make when they solve these reasoning tasks.

Overview of Conditional Reasoning

Conditional reasoning situations occur frequently in our daily life. However, these reasoning tasks are surprisingly difficult to solve correctly (Evans, 2004; Johnson-Laird, 2011). Let's examine the formal principles that have been devised for solving these tasks correctly.

Table 12.1 Propositional Calculus: The Four Kinds of Reasoning Tasks

Affirming the antecedent (valid): This is an apple; therefore this is a fruit.
Affirming the consequent (invalid): This is a fruit; therefore this is an apple.
Denying the antecedent (invalid): This is not an apple; therefore it is not a fruit.
Denying the consequent (valid): This is not a fruit; therefore this is not an apple.
Note: Each of these examples is based on the statement "If this is an apple, then this is a fruit."

Table 12.1 illustrates propositional calculus, which is a system for categorizing the four kinds of reasoning used in analyzing propositions or statements.
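For readers who want to check these four patterns mechanically, here is a minimal sketch (our illustration, not part of the textbook) that tests each argument form from Table 12.1 by brute force: a form is valid only if the conclusion is true in every combination of truth values that makes all of the premises true. The statement "If this is an apple, then this is a fruit" is encoded as the conditional "if p then q."

```python
from itertools import product

def valid(premises, conclusion):
    """An argument form is valid if the conclusion is true in every truth assignment
    that makes all of the premises true (a brute-force truth-table check)."""
    for p, q in product([True, False], repeat=2):
        if all(premise(p, q) for premise in premises) and not conclusion(p, q):
            return False
    return True

def implies(a, b):
    # The conditional "if a then b" is false only when a is true and b is false.
    return (not a) or b

# p = "this is an apple", q = "this is a fruit"; the rule is "if p then q".
rule = lambda p, q: implies(p, q)

forms = {
    "Affirming the antecedent": ([rule, lambda p, q: p],     lambda p, q: q),
    "Affirming the consequent": ([rule, lambda p, q: q],     lambda p, q: p),
    "Denying the antecedent":   ([rule, lambda p, q: not p], lambda p, q: not q),
    "Denying the consequent":   ([rule, lambda p, q: not q], lambda p, q: not p),
}

for name, (premises, conclusion) in forms.items():
    print(f"{name}: {'valid' if valid(premises, conclusion) else 'invalid'}")
```

Running this prints "valid" for affirming the antecedent and denying the consequent, and "invalid" for the other two forms, matching Table 12.1.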
Let’s first introduce some basic terminology. The word antecedent refers to the first proposition or statement; the antecedent is contained in the “if...” part of the sentence. The word consequent refers to the proposition that comes second; it is the consequence. The consequent is contained in the “then...” part of the sentence. When we work on a conditional reasoning task, we can perform two possible actions: (1) We can affirm part of the sentence, saying that it is true; or (2) we can deny part of the sentence, saying that it is false. By combining the two parts of the sentence with these two actions, we have four conditional reasoning situations. As you can see, two of them are valid, and two of them are invalid. 1. Affirming the antecedent means that you say that the “if...” part of the sentence is true. As shown in the upper-­left corner of Table 12.1, this kind of reasoning leads to a valid, or correct, conclusion. 2. The fallacy (or error) of affirming the consequent means that you say that the “then...” part of the sentence is true. This kind of reasoning leads to an invalid conclusion. Notice the upper-­right corner of Table 12.1; the conclusion “This is an apple” is incorrect. After all, the item could be a pear, or a mango, or numerous other kinds of nonapple fruit. 3. The fallacy of denying the antecedent means that you say that the “if...” part of the sentence is false. Denying the antecedent also leads to an invalid conclusion, as you can see from the lower-­left corner of Table 12.1. Again, the item could be some fruit other than an apple. 4. Denying the consequent means that you say that the “then...” part of the sentence is false. In the lower-­right corner of Table 12.1, notice that this kind of reasoning leads to a correct conclusion.* Now test yourself on the four kinds of conditional reasoning tasks by trying Demonstration 12.1. Let’s now reconsider the “affirming the consequent” task in more detail, because this task causes the largest number of errors (Byrne & Johnson-­Laird, 2009). It’s easy to see why people are tempted to affirm the consequent. In real life, we are likely to be correct when we make this kind of conclusion (Evans, 2000). For example, consider the two propositions, “If a person is a talented singer, then he or she has musical abilities” and “Paula has musical abilities.” In reality, it’s often a good bet that Paula is a talented singer. However, in logical reasoning, we cannot rely on statements such as “It’s a good bet that....” For example, I remember a student whose musical skills as a violinist were exceptional, yet she sang off-­key. As Theme 2 emphasizes, many cognitive errors can be traced to a heuristic, a general strategy that usually works well. In this example of logical reasoning, however, “it’s a good bet” is not the same as “always” (Leighton & Sternberg, 2003). In the second part of this chapter, you’ll see that decision-­making tasks actually do allow us to use the concept, “it’s a good bet.” However, propositional reasoning tasks require us to use the word “always” before we conclude that the conclusion is valid. Still, many people do manage to solve these reasoning tasks correctly. How do they succeed? When contemporary psychologists study reasoning and decision making, they may adopt a dual-­process theory, *If you have taken courses in research methods or statistics, you will recognize that scientific reasoning is based on the strategy of denying the consequent—­that is, ruling out the null hypothesis. 
Demonstration 12.1 Propositional Calculus

Decide which of the following conclusions are valid and which are invalid. The answers are at the end of the chapter.

1. Affirming the antecedent
If today is Tuesday, then I have my bowling class.
Today is Tuesday.
Therefore, I have my bowling class.

2. Affirming the consequent
If Sarita is a psychology major, then she is a student.
Sarita is a student.
Therefore, Sarita is a psychology major.

3. Denying the antecedent
If I am a first-year student, then I must register for next semester's classes today.
I am not a first-year student.
Therefore, I must not register for next semester's classes today.

4. Denying the consequent
If the judge is fair, then Susan is the winner.
Susan is not the winner.
Therefore, the judge is not fair.

A dual-process theory distinguishes between two types of cognitive processing (De Neys & Goel, 2011; Evans, 2006, 2012; Kahneman, 2011; Stanovich, 2009, 2011). In general, Type 1 processing is fast and automatic; it requires little conscious attention. For example, we use Type 1 processing during depth perception, recognition of facial expression, and automatic stereotyping. In contrast, Type 2 processing is relatively slow and controlled. It requires focused attention, and it is typically more accurate. For example, we use Type 2 processing when we think of exceptions to a general rule, when we realize that we made a stereotyped response, and when we acknowledge that our Type 1 response may have been incorrect.

With respect to conditional reasoning, people may initially use Type 1 processing, which is quick and generally correct. However, they sometimes pause and then shift to Type 2 processing, which requires a more effortful analytic approach. This approach requires focused attention and working memory so that people can realize that their initial conclusion would not necessarily be correct (De Neys & Goel, 2011; Evans, 2004, 2006; Kahneman, 2011; Stanovich, 2009, 2011).

Our performance on reasoning tasks is a good example of Theme 4, which emphasizes that our cognitive processes are interrelated. For example, conditional reasoning relies upon working memory, especially the central executive component of working memory that we discussed in Chapter 4 (Evans, 2006; Gilhooly, 2005; Reverberi et al., 2009). Reasoning also requires general knowledge and language skills (Rips, 2002; Schaeken et al., 2000; Wilhelm, 2005). In addition, it often uses mental imagery (Evans, 2002; Goodwin & Johnson-Laird, 2005).

Factors That Cause Difficulty in Reasoning

The cognitive burden of deductive reasoning is especially heavy when some of the propositions contain negative terms (rather than just positive terms), and when people try to solve abstract reasoning tasks (rather than concrete ones). In the text that follows, we discuss research that highlights the effects of these factors on reasoning.

Theme 3 of this book states that people can handle positive information better than negative information. As you may recall from Chapter 9, people have trouble processing sentences that contain words such as no or not. This same issue is also true for conditional reasoning tasks. For example, try the following reasoning task:

If today is not Friday, then we will not have a quiz today.
We will not have a quiz today.
Therefore, today is not Friday.
This item has four instances of the word not, and it is more challenging than a similar but linguistically positive item that begins, “If today is Friday....” Deductive Reasoning 243 Research shows that people take longer time to evaluate problems that contain linguistically negative information, and they are also more likely to make errors on these problems (Halpern, 2003). A reasoning problem is especially likely to strain our working memory if the problem involves denying the antecedent or denying the consequent. Most of us squirm when we see a reasoning task that includes a statement like “It is not true that today is not Friday.” Furthermore, we often make errors when we translate either the initial statement or the conclusion into more accessible, linguistically positive forms. People also tend to be more accurate when they solve reasoning problems that use concrete examples about everyday catego- ries, rather than abstract, theoretical examples. For instance, you probably worked through the items in Demonstration 12.1 somewhat easily. In contrast, even short reasoning problems are difficult if they refer to abstract items with abstract characteristics (Evans, 2004, 2005). For example, try this problem about geometric objects, and decide whether the conclusion is valid or invalid: If an object is red, then it is rectangular. This object is not rectangular. Therefore, it is not red. Now check the answer to this item, located at the bottom of Demonstration 12.2. Incidentally, the research shows that people’s accuracy typically increases when they use diagrams to make the problem more concrete (Halpern, 2003). However, we often make errors on concrete reasoning tasks if our every- day knowledge overrides the principles of logic (Evans, 2011; Mercier & Sperber, 2011). Biases and Deductive Reasoning In this final section on deductive reasoning, we explore two biases—­belief bias and confirmation bias—­on the deductive reasoning process. Belief-­Bias Effect In our lives outside the psychology laboratory, our background (or top-­down) knowledge helps us function well. Inside the psychology laboratory—­or in a course on logic—­this background information sometimes encourages us to make mistakes. For example, try the following reasoning task (Markovits et al., 2009, p. 112): If a feather is thrown at a window, the window will break. A feather is thrown at a window. Therefore, the window will break. In everyday life, it’s a good bet that this conclusion is incorrect; how could a feather possibly break a window? However, in the world of logic, this feather–window task actually affirms the antecedent, so it must be correct. Similarly, your common sense may have encouraged you to decide that the conclusion was valid for the syllogism about the psychology majors who are concerned about poverty. The belief-­bias effect occurs in reasoning when people make judgments based on prior beliefs and general knowledge, rather than on the rules of logic. In general, people make errors when the logic of a reasoning problem conflicts with their background knowledge (Ball & Thompson, 2017; Dube et al., 2010, 2011; Evans, 2017; Levy, 2010; Markovits et al., 2009; Stanovich, 2011). The belief-­bias effect is one more example of top-­down processing (Theme 5). Our prior expectations help us to organize our experiences and understand the world. 
For example, when we see a conclusion in a reasoning task that looks correct in the “real world,” we may not pay attention to the reasoning process that generated this conclusion (Stanovich, 2003). As a result, we may question a valid conclusion. People vary widely in their susceptibility to the belief-­bias effect. For example, people with low scores on an intelligence test are especially likely to demonstrate the belief-­bias effect (Macpherson & Stanovich, 2007). People are also likely to demonstrate the belief-­bias effect if they have low scores on a test of flex- ible thinking (Kokis et al., 2002; Stanovich & West, 1997, 1998). An inflexible person is likely to agree with statements such as “No one can talk me out of something I know is right.” In contrast, people who are flexible thinkers agree with statements such as “People should always take into consideration any evidence 244 Deductive Reasoning and Decision Making that goes against their beliefs.” These people are more likely to solve the reasoning problems correctly, without being distracted by the belief-­bias effect. In fact, these people actively block their everyday knowl- edge, such as their knowledge that a feather could not break a window (Markovits et al., 2009). In general, they also tend to carefully inspect a reasoning problem, trying to determine whether the logic is faulty (Macpherson & Stanovich, 2007; Markovitz et al., 2009). Fortunately, when students have been taught about the belief-­bias effect, they make fewer errors (Kruglanski & Gigerenzer, 2011). Confirmation Bias Be sure to try Demonstration 12.2 (below) before you read any further. Peter Wason’s (1968) selection task has inspired more psychological research than any other deductive reasoning problem. It has also raised many questions about whether humans are basically rational (Mercier & Sperber, 2011; Lilienfeld et al., 2009; Oswald & Grosjean, 2004). Let’s first examine the original ver- sion of the selection task and then see how people typically perform better on a more concrete variation of this task. The Standard Wason Selection Task Demonstration 12.2 shows the original version of the selection task. Peter Wason (1968) found that people show a confirmation bias; they would rather try to confirm or support a hypothesis than try to disprove it (Kida, 2006; Krizan & Windschitl, 2007; Levy, 2010). When people try this classical selection task, they typically choose to turn over the E card (Mercier & Sperber, 2011). This strategy allows the participants to confirm the hypothesis by the valid method of affirming the antecedent, because this card has a vowel on it. If this E card has an even number on the other side, then the rule is correct. If the number is odd, then the rule is incorrect. As discussed above, the other valid method in deductive reasoning is to deny the consequent. To accom- plish this goal, you must choose to turn over the 7 card. The information about the other side of the 7 card is very valuable. In fact, it is just as valuable as the information about the other side of the E card. Remem- ber that the rule is: “If a card has a vowel on its letter side, then it has an even number on its number side.” To deny the consequent in this Wason Task, we need to check out a card that does not have an even number on its number side. In this case, then, we must check out the 7 card. We noted that many people are eager to affirm the antecedent. In contrast, they are reluctant to deny the consequent by searching for counterexamples. 
This approach would be a smart strategy for rejecting a hypothesis, but people seldom choose this appropriate strategy (Lilienfeld et al., 2009). Keep in mind that most participants in these selection-­task studies are college students, so they should be able to master an abstract task (Evans, 2005). You may wonder why we did not need to check on the J and the 6. Take a moment to read the rule again. Actually, the rule did not say anything about consonants, such as J. The other side of the J could show an odd number, an even number, or even a Vermeer painting, and we wouldn’t care. A review of the literature showed that most people appropriately avoid the J card. The rule also does not specify what must appear on the other side of the even numbers, such as 6. How- ever, most people select the 6 card to turn over. People often assume that the two parts of the rule can be switched, so that it reads, “If a card has an even number on its number side, then it has a vowel on its letter side.” Thus, they make an error by choosing the 6. Demonstration 12.2 The Confirmation Bias Imagine that each square below represents a card. Suppose that you are participating in a study in which the experimenter tells you that every card has a letter on one side and a number on the other side. You are then given this rule about these four cards: “IF A CARD HAS A VOWEL ON ONE SIDE, THEN IT HAS AN EVEN NUMBER ON THE OTHER SIDE.” Your task is to decide which card (or cards) you would need (Incidentally, the answer to the problem about the objects is “valid.”) to turn over, so that you can find out whether this rule is valid or Source: The confirmation-­bias task in this demonstration is based on Wason invalid. What is your answer? The correct answer is discussed later (1968). in the chapter. Heuristics and Decision Making 245 Concrete Versions of the Wason Selection Task In most of the recent research on the Wason Task, psychologists focus on versions in which the numbers and letters on the cards are replaced by concrete situations that we encounter in our everyday lives. As you might guess, people perform much better when the task is concrete, familiar, and realistic (Evans, 2011; Mercier & Sperber, 2011). For example, Griggs and Cox (1982) tested college students in Florida using a variation of the selection task. This task focused on the drinking age, which was then 19 in the state of Florida. Specifically, the students were asked to test this rule: “If a person is drinking beer, then the person must be over 19 years of age” (p. 415). Each participant was instructed to choose two cards to turn over—­out of four—­in order to test whether people were lying about their age. Griggs and Cox (1982) found that 73% of the students who tried the drinking age problem made the correct selections, in contrast to 0% of the students who tried the standard, abstract form of the selection task. According to later research, people are especially likely to choose the correct answer when the word- ing of the selection task implies some kind of social contract designed to prevent people from cheating (Barrett & Kurzban, 2006; Cosmides & Tooby, 2006). Applications in Medicine Several studies point out that the confirmation bias can be applied in medical situations. For example, researchers have studied people who seek medical advice for insomnia (Harvey & Tang, 2012). As it happens, when people believe that they have insomnia, they overestimate how long it takes them to fall asleep. 
They also underestimate the amount of time they spend sleeping at night. One explanation for these data is that people seek confirming evidence that they are indeed “bad sleepers,” and they provide estimates that are consistent with this diagnosis. Another study focused on the diagnosis of psychological disorders (Mendel et al., 2011). Medical stu- dents and psychiatrists first read a case vignette about a 65-­year-­old man, and then they were instructed to provide a preliminary diagnosis of either Alzheimer’s disease or severe depression. Each person then decided what kind of additional information they would like; six items were consistent with each of the two diagnoses. The results showed that 25% of the medical students and 13% of the psychiatrists selected only the information that was consistent with their original diagnosis. In other words, they did not investi- gate information that might be consistent with the other diagnosis. Further Perspectives How can we translate the confirmation bias into real-­life experiences? Try noticing your own behavior when you are searching for evidence. Do you consistently look for information that will confirm that you are right, or do you valiantly pursue ways in which your conclusion can be wrong? The confirmation bias might sound relatively harmless. However, thousands of people die each year because our political leaders fall victim to this confirmation bias (Kida, 2006). For example, suppose that Country A wants to start a war in Country B. The leaders in Country A will keep seeking support for their position. These leaders will also avoid seeking information that their position may not be correct. Here’s a remedy for the confirmation bias: Try to explain why another person might hold the opposite view (Lilienfeld et al., 2009; Myers, 2002). In an ideal world, for example, the leaders of Country A should sincerely try to construct arguments against attacking Country B. This overview of conditional reasoning does not provide much evidence for Theme 2 of this book. At least in the psychology laboratory, people are not especially accurate when they try to solve “if... then...” kinds of problems. However, the circumstances are usually more favorable in our daily lives, where problems are more concrete and situations are more consistent with our belief biases (Mercier & Sperber, 2011). Deductive reasoning is such a challenging task that we are not as efficient and accurate as we are in perception and memory—­two areas in which humans are generally very competent. Heuristics and Decision Making In decision making, you must assess the information and choose among two or more alternatives. Com- pared to deductive reasoning, the area of decision making is much more ambiguous. Some information may be missing or contradictory. In addition, we do not have clear-­cut rules that tell us how to proceed from the information to the conclusions. Also, you may never know whether your decision was correct, the consequences of that decision won’t be immediately apparent, and you may need to take additional factors into account (Johnson-­Laird et al., 2004; Simon et al., 2001). 246 Deductive Reasoning and Decision Making In real life, the uncertainty of decision making is more common than the certainty of deductive reason- ing. However, people have difficulty with both kinds of tasks, and they do not always reach the appropriate conclusions (Goodwin & Johnson-­Laird, 2005; Stanovich, 2009, 2011). 
When you engage in reasoning, you use the established rules of propositional calculus to draw clear-­cut conclusions. In contrast, when you make a decision, there is no comparable list of rules. Furthermore, you may never even know whether your decision is correct. Some critical information may be missing, and you may suspect that other information is not accurate. Should you apply to graduate school or get a job after college? Should you take social psychology in the morning or in the afternoon? In addition, emotional factors frequently influence our everyday decision making (Kahneman, 2011; Lehrer, 2009; Stanovich, 2009, 2011). As you’ll see, this section emphasizes several kinds of decision-­making heuristics. Heuristics are gen- eral strategies that typically produce a correct solution. When we need to make a decision, we often use a heuristic that is simple, fast, and easy to access (Kahneman, 2011; Kahneman & Frederick, 2005; Stanovich, 2009, 2011). These heuristics reduce the difficulty of making a decision (Shah & Oppenheimer, 2008). In many cases, however, humans fail to appreciate the limitations of these heuristics. When we use this fast, Type 1 processing, we can make inappropriate decisions. However, if we pause and shift to slow, Type 2 processing, we can correct that original error and end up with a good decision. Throughout this section, you will often see the names of two researchers, Daniel Kahneman and Amos Tversky. Kahneman won the Nobel Prize in Economics in 2002 for his research in decision making. Kahneman and Tversky proposed that a small number of heuristics guide human decision making. As they emphasized, the same strategies that normally guide us toward the correct decision may sometimes lead us astray (Kahneman, 2011; Kahneman & Frederick, 2002, 2005). Notice that this heuristics approach is consistent with Theme 2 of this book: Our cognitive processes are usually efficient and accurate, and our mistakes can often be traced to a rational strategy. In this part of the chapter, we discuss many studies that illustrate errors in decision making. These errors should not lead us to conclude that humans are foolish creatures. Instead, people’s decision-­making heuristics are well adapted to handle a wide range of problems (Kahneman, 2011; Kahneman & Frederick, 2005). How- ever, these same heuristics become a liability when they are applied too broadly—­for example, when we emphasize heuristics rather than other important information. We now explore three classic decision-­making heuristics: representativeness, availability, and anchor- ing and adjustment. We conclude this section by considering the current status of heuristics in decision-­ making research. Representativeness Heuristic Here’s a remarkable coincidence: Three early U.S. presidents—­Adams, Jefferson, and Monroe—­all died on the Fourth of July, although in different years (Myers, 2002). This information doesn’t seem correct, because the dates should be randomly scattered throughout the 365 days a year. Now consider this example. Suppose that you have a regular penny with one head (H) and one tail (T), and you toss it six times. Which outcome seems most likely, T H H T H T or H H H T T T? Most people choose T H H T H T (Teigen, 2004). After all, you know that coin tossing should produce heads and tails in random order, and the order T H H T H T looks much more random. A sample looks representative if it is similar in important characteristics to the population from which it was selected. 
For instance, if a sample was selected by a random process, then that sample must look random in order for people to say that it looks representative. Thus, T H H T H T is a sample that looks representative because it has an equal number of heads and tails (which would be the case in random coin tosses). Furthermore, T H H T H T looks more representative because the order of the Ts and Hs looks random rather than orderly. The research shows that we often use the representativeness heuristic; we judge that a sample is likely if it is similar to the population from which this sample was selected (Galavotti et al., 2021; Kahneman, 2011; Kahneman & Tversky, 1972; Levy, 2010). According to the representativeness heuristic, we believe that random-­looking outcomes are more likely than orderly outcomes. Suppose, for example, that a cashier adds up your grocery bill, and the total is $21.97. This very random-­looking outcome is a representative kind of answer, and so it looks “normal.” However, suppose that the total bill is $22.22. This total does not look random, and you might even decide to check the arithmetic. After all, addition is a process that should yield a random-­looking outcome. Heuristics and Decision Making 247 In reality, though, a random process occasionally produces an outcome that looks nonrandom. In fact, chance alone can produce an orderly sum like $22.22, just as chance alone can produce an orderly pattern like the three presidents dying on the Fourth of July. The representativeness heuristic raises a major problem: This heuristic is so persuasive that people often ignore important statistical information that they should consider (Kahneman, 2011; Newell et al., 2007; Thaler & Sunstein, 2008). We see that two especially useful statistics are the sample size and the base rate. In addition, people have trouble thinking about the probability of two combined characteristics. Sample Size and Representativeness When we make a decision, representativeness is such a compelling heuristic that we often fail to pay attention to sample size. For example, Kahneman and Tversky (1972) asked college students to consider a hypothetical small hospital, where about 15 babies are born each day, and a hypothetical large hospital, where about 45 babies are born each day. Which hospital would be more likely to report that more than 60% of the babies on a given day would be boys, or would they both be equally likely to report more than 60% boys? The results showed that 56% of the students responded, “About the same.” In other words, the majority of students thought that a large hospital and a small hospital were equally likely to report having at least 60% baby boys born on a given day. Thus, they ignored sample size. In reality, however, sample size is an important characteristic that you should consider whenever you make decisions. A large sample is statistically more likely to reflect the true proportions in a population. In contrast, a small sample will often reveal an extreme proportion (e.g., at least 60% baby boys). However, people are often unaware that deviations from a population proportion are more likely in these small sam- ples (Newell et al., 2007; Teigen, 2004). In one of their first publications, Tversky and Kahneman (1971) pointed out that people often commit the small-­sample fallacy because they assume that a small sample will be representative of the population from which it is selected (Poulton, 1994). Unfortunately, the small-­sample fallacy leads us to incorrect decisions. 
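The hospital problem can also be settled with a short calculation. The sketch below (a minimal Python illustration, not taken from the chapter) computes the exact binomial probability that more than 60% of a day's births are boys, assuming each birth is independently a boy with probability .5.

```python
from math import comb

def prob_more_than_60pct_boys(n_births: int, p_boy: float = 0.5) -> float:
    """Exact binomial probability that strictly more than 60% of the day's births are boys."""
    return sum(
        comb(n_births, k) * p_boy**k * (1 - p_boy) ** (n_births - k)
        for k in range(n_births + 1)
        if 5 * k > 3 * n_births  # k / n_births > 0.6, kept in exact integer arithmetic
    )

# Hypothetical hospitals from Kahneman and Tversky (1972)
print(f"Small hospital (15 births/day): {prob_more_than_60pct_boys(15):.3f}")  # about 0.15
print(f"Large hospital (45 births/day): {prob_more_than_60pct_boys(45):.3f}")  # about 0.07
```

With 15 births the probability of this extreme outcome is about .15, whereas with 45 births it is about .07, which is why a small sample is more likely to produce an extreme, unrepresentative proportion.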
We often commit the small-­sample fallacy in social situations, as well as in relatively abstract statistics problems. For example, we may draw unwarranted stereotypes about a group of people on the basis of a small number of group members (Hamilton & Sherman, 1994). One effective way of combating inap- propriate stereotypes is to become acquainted with a large number of people from the target group—­for example, through exchange programs with groups of people from other countries. Base Rate and Representativeness Representativeness is such a compelling heuristic that people often ignore the base rate, or how often the item occurs in the population. Be sure you have tried Demonstration 12.3 before reading further. Demonstration 12.3 Base Rates and Representativeness Imagine that a psychologist wrote the following description of Tom likelihood that Tom W is now a student in that program. Write 1 for W, when Tom was a senior in high school. This description was “most likely” and 7 for “least likely.” based on some psychological tests that had uncertain validity. _____ business administration Tom W is highly intelligent, but he is not genuinely creative. Tom needs everything to be orderly and clear, and he likes every _____ computer science detail to be in its appropriate place. His writing is quite dull and _____ engineering mechanical, although he loves corny puns. He sometimes makes up _____ humanities and education plots about science fiction. Tom has a strong drive for competence. _____ law He seems to have little feeling for other people, and he has little sympathy for their problems. He does not actually like interacting _____ medicine with others. Although he is self-­centered, he does have a deep moral _____ library science sense (based on a description by Kahneman, 2011, p. 147). _____ physical and life sciences Now suppose that Tom W is a graduate student at a large univer- _____ social sciences and social work sity. Rank the following nine fields of specialization, in terms of the 248 Deductive Reasoning and Decision Making Using problems such as the ones in Demonstration 12.3, Kahneman and Tversky (1973) demonstrated that people rely on representativeness when they are asked to judge category membership. In other words, we focus on whether a description is representative of members of each category. When we emphasize representativeness, we commit the base-­rate fallacy, paying too little attention to important information about base rate (Kahneman, 2011; Levy, 2010; Swinkels, 2003). If people pay appropriate attention to the base rate in this demonstration, they should select graduate programs that have a relatively high enrollment (base rate). These would include the two options “humani- ties and education” and “social science and social work.” However, most students in this study used the representativeness heuristic, and they most frequently guessed that Tom W was a graduate student in either computer science or engineering (Kahneman, 2011; Kahneman & Tversky, 1973). The description of Tom W was highly similar to (i.e., representative of) the stereotype of a computer scientist or an engineer. You might argue, however, that the Tom W study was unfair. After all, the base rates of the various graduate programs were not even mentioned in the problem. Maybe the students failed to consider that there are more graduate students in the “social sciences and social work” category than in the “computer science” category. 
However, when Kahneman and Tversky’s (1973) study included this base-­rate informa- tion, most people ignored it. Instead, they judged mostly on the basis of representativeness. In fact, this description for Tom W is highly representative of our stereotype for students in computer science. As a result, people tend to select this particular answer. We should emphasize, however, that the representativeness heuristic—­like all heuristics—­frequently helps us make a correct decision (Levy, 2010; Newell et al., 2007; Shepperd & Koch, 2005). Heuristics are also relatively simple to use (Hogarth & Karelaia, 2007). In addition, some problems—­and some alterna- tive wording of problems—­produce more accurate decisions (Gigerenzer, 1998; Shafir & LeBoeuf, 2002). Incidentally, research on this kind of “base-­rate” task provides support for the dual-­process approach. Specifically, different parts of the brain are activated when people use automatic, Type 1 processing, rather than slow, Type 2 processing (De Neys & Goel, 2011). Furthermore, training sessions can encourage stu- dents to use base-­rate information appropriately (Krynski & Tenenbaum, 2007; Shepperd & Koch, 2005). Training would make people more aware that they should pause and use Type 2 processing to examine the question more carefully. Be sure to try Demonstration 12.4 before you read further. The Conjunction Fallacy and Representativeness After completing Demonstration 12.4, inspect your answers, and compare which of these two choices you ranked more likely: (1) Linda is a bank teller or (2) Linda is a bank teller and is active in the feminist movement. Tversky and Kahneman (1983) presented the “Linda” problem and another similar problem to three groups of people. One was a “statistically naïve” group of undergraduates. The “intermediate-­knowledge” group consisted of first-­year graduate students who had taken one or more courses in statistics. The “sta- tistically sophisticated” group consisted of doctoral students in a decision science program who had taken Demonstration 12.4 The Conjunction Fallacy Read the following paragraph: ____ Linda is active in the feminist movement. ____ Linda is a psychiatric social worker. Linda is 31 years old, single, outspoken, and very bright. ____ Linda is a member of the League of Women Voters. She majored in philosophy. As a student, she was deeply concerned with issues of discrimination and social justice, ____ Linda is a bank teller. and she also participated in antinuclear demonstrations. ____ Linda is an insurance salesperson. ____ Linda is a bank teller and is active in the feminist movement. Now rank the following options in terms of the probability of Source: Tversky, A., & Kahneman, D. (1983). Extensional versus intuitive rea- their describing Linda. Give a ranking of 1 to the most likely option soning: The conjunction fallacy in probability judgment. Psychological Review, and a ranking of 8 to the least likely option: 90, 293–315. ____ Linda is a teacher at an elementary school. ____ Linda works in a bookstore and takes yoga classes. Heuristics and Decision Making 249 several advanced courses in statistics. In each case, the participants were asked to rank all eight statements according to their probability, with the rank of 1 assigned to the most likely statement. 
Figure 12.1 shows the average rank for each of the three groups for the two critical statements: (1) "Linda is a bank teller" and (2) "Linda is a bank teller and is active in the feminist movement." Notice that the people in all three groups believed—incorrectly—that the second statement would be more likely than the first.

Think for a moment about why this conclusion is mathematically impossible. According to the conjunction rule, the probability of the conjunction of two events cannot be larger than the probability of either of its constituent events (Newell et al., 2007). In the Linda problem, the conjunction of the two events—bank teller and feminist—cannot occur more often than either event by itself. Consider another situation where the conjunction rule operates: The number of murders last year in Detroit cannot be greater than the number of murders last year in Michigan (Kahneman & Frederick, 2005).

As we saw earlier in this section, representativeness is such a powerful heuristic that people may ignore useful statistical information, such as sample size and base rate. Apparently, they also ignore the mathematical implications of the conjunction rule (Kahneman, 2011; Kahneman & Frederick, 2005). Specifically, when most people try the "Linda problem," they commit the conjunction fallacy. When people commit the conjunction fallacy, they judge the probability of the conjunction of two events to be greater than the probability of either constituent event.

Tversky and Kahneman (1983) traced the conjunction fallacy to the representativeness heuristic. They argued that people judge the conjunction of "bank teller" and "feminist" to be more likely than the simple event "bank teller." After all, "feminist" is a characteristic that is very representative of (i.e., similar to) someone who is single, outspoken, bright, a philosophy major, concerned about social justice, and an antinuclear activist. A person with these characteristics doesn't seem likely to become a bank teller, but seems instead highly likely to be a feminist. By adding the extra detail of "feminist" to "bank teller," the description seems more representative and also more plausible—even though this description is statistically less likely (Swoyer, 2002).

Psychologists are intrigued with the conjunction fallacy, especially because it demonstrates that people can ignore one of the most basic principles of probability theory. Furthermore, research by Keith Stanovich (2011) shows that college students with high SAT scores are actually more likely than other students to demonstrate this conjunction fallacy.

The results for the conjunction fallacy have been replicated many times, with generally consistent findings (Fisk, 2004; Kahneman & Frederick, 2005; Stanovich, 2009). For example, the probability of "spilling hot coffee" seems greater than the probability of "spilling coffee" (Moldoveanu & Langer, 2002)... until you identify the conjunction fallacy.

FIGURE 12.1 The influence of the type of statement and level of statistical sophistication on likelihood rankings. Low numbers on the ranking indicate that people think the event is very likely. (The plot shows the average likelihood ranking, from 1 = very likely to 7 = very unlikely, for "bank teller" and "bank teller and feminist" across statistically naive, intermediate-knowledge, and statistically sophisticated groups.) Source: Adapted from Tversky, A., & Kahneman, D. (1983). Extensional versus intuitive reasoning: The conjunction fallacy in probability judgment. Psychological Review, 90, 293–315.
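The conjunction rule itself is easy to verify numerically. In the toy simulation below (our sketch; the base rates are arbitrary assumptions, not figures from Tversky and Kahneman), every simulated person either is or is not a bank teller and either is or is not a feminist. However the two attributes are related, the proportion of people who are both can never exceed the proportion who are bank tellers.

```python
import random

random.seed(1)
N = 100_000

count_teller = 0
count_both = 0
for _ in range(N):
    # All probabilities below are arbitrary, illustrative assumptions.
    is_teller = random.random() < 0.05                       # assumed base rate of bank tellers
    is_feminist = random.random() < (0.30 if is_teller else 0.60)
    count_teller += is_teller
    count_both += is_teller and is_feminist

print(f"P(bank teller)              = {count_teller / N:.4f}")
print(f"P(bank teller AND feminist) = {count_both / N:.4f}")

# The conjunction rule: the conjunction can never be more probable
# than either of its constituent events.
assert count_both <= count_teller
```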
Availability Heuristic

A second important heuristic that people use in making decisions is availability. You use the availability heuristic when you estimate frequency or probability in terms of how easy it is to think of relevant examples of something (Hertwig et al., 2005; Kahneman, 2011; Reber, 2004; Tversky & Kahneman, 1973). In other words, people judge frequency by assessing whether they can easily retrieve relevant examples from memory or whether this memory retrieval is difficult.

The availability heuristic is generally helpful in everyday life. For example, suppose that someone asked you whether your college had more students from Illinois or more from Idaho. You haven't memorized these geography statistics, so you would be likely to answer the question in terms of the relative availability of examples of Illinois students and Idaho students. Let's also say that your memory has stored the names of dozens of Illinois students, and so you can easily retrieve their names ("Jessica, Akiko, Bob..."). Let's also say that your memory has stored only one name of an Idaho student, so you cannot think of many examples of this category. Because examples of Illinois students were relatively easy to retrieve, you conclude that your college has more Illinois students. In general, then, this availability heuristic can be a relatively accurate method for making decisions about frequency (Kahneman, 2011).

A heuristic is a general strategy that is typically accurate. The availability heuristic is accurate as long as availability is correlated with true, objective frequency—and it usually is. However, the availability heuristic can lead to errors (Levy, 2010; Thaler & Sunstein, 2008). As we will see in a moment, several factors can influence memory retrieval, even though they are not correlated with true, objective frequency. These factors can bias availability, and so they may decrease the accuracy of our decisions. We will see that recency and familiarity—both factors that influence memory—can potentially distort availability. Figure 12.2 illustrates how these two factors can contaminate the relationship between true frequency and availability.

Before exploring the research about availability, let's briefly review how representativeness—the first decision-making heuristic—differs from availability. When we use the representativeness heuristic, we are given a specific example (such as T H H T H T or Linda the bank teller). We then make judgments about whether the specific example is similar to the general category that it is supposed to represent (such as coin tosses or philosophy majors concerned about social justice). In contrast, when we use the availability heuristic, we are given a general category, and we must recall the specific examples (such as examples of Illinois students). We then make decisions based on whether the specific examples come easily to mind. So here is a way to remember the two heuristics:

1. If the problem is based on a judgment about similarity, you are dealing with the representativeness heuristic.
2. If the problem requires you to remember examples, you are dealing with the availability heuristic.
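One way to make the "contamination" idea in Figure 12.2 concrete is with a toy model of biased retrieval. The sketch below is purely illustrative (the retrieval probabilities and boost factors are invented assumptions, not parameters from the research described here): a frequency estimate is modeled as the number of stored examples that come to mind, and recency and media familiarity multiply the chance that any given example is retrieved.

```python
import random

random.seed(42)

def examples_retrieved(true_count: int, recency_boost: float, familiarity_boost: float) -> int:
    """Toy model: the judged frequency of a category is the number of stored examples
    that come to mind, and retrieval is inflated by recency and media familiarity.
    All parameter values are invented for illustration."""
    base_retrieval = 0.2
    p = min(1.0, base_retrieval * recency_boost * familiarity_boost)
    return sum(random.random() < p for _ in range(true_count))

# Category A is five times as common as category B, but B has been in the news lately.
estimate_a = examples_retrieved(true_count=300, recency_boost=1.0, familiarity_boost=1.0)
estimate_b = examples_retrieved(true_count=60, recency_boost=2.5, familiarity_boost=2.0)

print(f"Category A: true count 300, examples that come to mind: {estimate_a}")
print(f"Category B: true count  60, examples that come to mind: {estimate_b}")
# With no boosts, the retrieved counts would preserve the 5:1 ratio on average;
# with the boosts, the rarer category can seem roughly as frequent as the common one.
```

When the boosts are equal, retrieval tracks true frequency and the heuristic works well; when they favor the rarer category, the two categories can look about equally frequent even though one is five times as common.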
Recency and Familiarity Effects

As you know from Chapters 4–6, your memory is better for items that you've recently seen, compared to items you saw long ago. In other words, those more recent items are more available. As a result, we judge recent items to be more likely than they really are.

FIGURE 12.2 The relationship between true frequency and estimated frequency, with recency and familiarity as "contaminating" factors. (In the diagram, true frequency feeds availability, which in turn feeds estimated frequency; the contaminants, recency and familiarity, act on availability.)

The familiarity of the examples—as well as their recency—can also produce a distortion in frequency estimation (Kahneman, 2011). Norman Brown and his colleagues conducted research on this topic in Canada, the United States, and China (Brown, Cui, & Gordon, 2002; Brown & Siegler, 1992). They discovered that the media can distort people's estimates of a country's population. Brown and Siegler (1992), for example, conducted a study during an era when El Salvador was frequently mentioned in the news because of U.S. intervention in Latin America. In contrast, Indonesia was seldom mentioned. Brown and Siegler found that the students' estimates for the population of these two countries were similar, even though the population of Indonesia was about 35 times as large as the population of El Salvador. The media can also influence viewers' ideas about the prevalence of different points of view. For instance, the media often give equal coverage to several thousand protesters and to several dozen counterprotesters. Notice whether you can spot the same tendency in current news broadcasts. Does the media coverage create our cognitive realities?

How can we counteract Type 1 processing, which happens when we first encounter some information? Kahneman (2011) suggests that we can overcome that initial reaction by using critical thinking and shifting to Type 2 processing. For example, someone might analyze a friend's use of the availability heuristic and argue, "He underestimates the risks of indoor pollution because there are few media stories on them. That's an availability effect. He should look at the statistics." (p. 136).

The Recognition Heuristic

We have emphasized that the decision-making heuristics are generally helpful and accurate. However, most of the examples have emphasized that judgment accuracy is hindered by factors such as recency and familiarity. Let's discuss a special case of the availability heuristic, which often leads to an accurate decision (Goldstein & Gigerenzer, 2002; Kahneman, 2011; Volz et al., 2006).

Suppose that someone asks you which of two Italian cities has the larger population, Milan or Modena. Most U.S. students have heard of Milan, but they may not recognize the name of a nearby city called Modena. The recognition heuristic typically operates when you must compare the relative frequency of two categories; if you recognize one category, but not the other, you conclude that the recognized category has the higher frequency. In this case, you would correctly respond that Milan has the greater population (Volz et al., 2006). Keep this example of correct decision making in mind as you read the remainder of this chapter.

Anchoring and Adjustment Heuristic

You've probably experienced several incidents like this one. A friend asks you, "Can you meet me at the library in 15 minutes?" You know that it takes longer than 15 minutes to get there, so you make a modest adjustment and agree to meet in 20 minutes.
However, you didn’t count on needing to find your coat, or your cell phone ringing, or stopping to tie a shoelace, or several other trivial events. Basically, you could have arrived in 20 minutes (well, maybe 25), if everything had gone smoothly. In retrospect, you failed to make large enough adjustments to account for the inevitable delays. (Try Demonstration 12.5 when it’s convenient, but complete Demonstration 12.6 before you read further.) According to the anchoring and adjustment heuristic—­also known as the anchoring effect—­we begin with a first approximation, which serves as an anchor; then we make adjustments to that number based on additional information (Mussweiler et al., 2004; Thaler & Sunstein, 2008; Tversky & Kahneman, 1982). This heuristic often leads to a reasonable answer, just as the representativeness and availability Demonstration 12.5 The Anchoring and Adjustment Heuristic Copy the two multiplication problems listed below on separate Now, tally the answers separately for the two problems, pieces of paper. Show Problem A to at least five friends, and show listing the answers from smallest to largest. Calculate the Problem B to at least five other friends. In each case, ask the partic- median for each problem. (If you have an uneven number ipants to estimate the answer within five seconds. of participants, the median is the answer in the middle of the distribution—­with half larger and half smaller. If you have A. 8 × 7 × 6 × 5 × 4 × 3 × 2 × 1 an even number of participants, take the average between the B. 1 × 2 × 3 × 4 × 5 × 6 × 7 × 8 two answers in the middle of the distribution.) 252 Deductive Reasoning and Decision Making Demonstration 12.6 Estimating Confidence Intervals For each of the following questions, answer in terms of a range, 4. In 2009, what was the average life expectancy in Canada? rather than a single number. Specifically, you should supply a 98% 5. How many dollars did the United States spend for military confidence interval. A confidence interval is the range within which expenditures in 2010? you expect the correct answer to fall. For example, suppose you 6. In what year did New Zealand give women the right to vote? answer a question by supplying a 98% confidence interval that is 2,000 to 7,000. This means that you think there is only a 2% chance 7. What was the median salary of a U.S. male college graduate that the real answer is either less than 2,000 or more than 7,000. The in 2009? correct answers can be found at the end of the chapter. 8. What is the total area of Canada (in either square kilometers or 1. How many full-­time students were enrolled in U.S. colleges and square miles)? universities in 2011? 9. What was the estimated population of France in 2010? 2. According to the official count, how many people died in the 10. Of the residents of Canada, what percentage report that they are 2011 earthquake and tsunami in Japan? Roman Catholics? 3. In what year did Martin Van Buren begin his term as the presi- Source: All questions are based on information from “Countries of the World,” dent of the United States? 2012; “Student Demographics,” 2011; and Statistics Canada (2012b). heuristics often lead to reasonable answers. However, people typically rely too heavily on the anchor, such that their adjustments are too small (Kahneman, 2011). The anchoring and adjustment heuristic illustrates once more that people tend to endorse their current hypotheses or beliefs, rather than trying to question them (Baron, 2000; Kida, 2006). 
That is, they empha- size top-­down processing, consistent with Theme 5. We’ve seen several other examples of this tendency in the present chapter: 1. The belief-­bias effect: We rely too heavily on our established beliefs. 2. The confirmation bias: We prefer to confirm a current hypothesis, rather than to reject it. 3. The illusory correlation: We rely too strongly on one well-­known cell in a 2 × 2 data matrix, and we fail to seek information about the other three cells. Let’s begin by considering some research on the anchoring and adjustment heuristic. Then, we will see how this heuristic can be applied to estimating confidence intervals. Research on the Anchoring and Adjustment Heuristic Demonstration 12.5 illustrates the anchoring and adjustment heuristic. In a classic study, high school students were asked to estimate the answers to these two multiplication problems (Tversky & Kahneman, 1982). The students were allowed only five seconds to respond. The results showed that the two prob- lems generated widely different answers. If the first number in this sequence was 8, a relatively large number, the median of their estimates was 2,250. (That is, half the students estimated higher than 2,250, and half estimated lower.) In contrast, if the first number was 1, a small number, their median estimate was only 512. Furthermore, both groups anchored too heavily on the initial impression that every number in the prob- lem was only a single digit, because both estimates were far too low. The correct answer for both problems is 40,320. Did the anchoring and adjustment heuristic influence the people you tested? The anchoring and adjustment heuristic is so powerful that it operates even when the anchor is obvi- ously arbitrary or impossibly extreme, such as a person living to the age of 140. It also operates for both novices and experts (Herbert, 2010; Kahneman, 2011; Mussweiler et al., 2004; Tversky & Kahneman, 1974). Researchers have not developed precise explanations for the anchoring and adjustment heuristic. However, one likely mechanism is that the anchor restricts the search for relevant information in memory. Specifically, people concentrate their search on information relatively close to the anchor, even if this anchor is not a realistic number (Kahneman, 2011; Pohl et al., 2003). The anchoring and adjustment heuristic has many applications in everyday life (Janiszewski, 2011; Mussweiler et al., 2004; Newell et al., 2007). For example, Englich and Mussweiler (2001) studied anchor- ing effects in courtroom sentencing. Trial judges with an average of 15 years of experience listened to a Heuristics and Decision Making 253 typical legal case. The role of the prosecutor was played by a person who was introduced as a computer science student. This student was obviously a novice in terms of legal experience, so the judges should not take him seriously. However, when the “prosecutor” demanded a sentence of 12 months, these experienced judges recommended 28 months. In contrast, when the “prosecutor” demanded a sentence of 34 months, these judges recommended a sentence of 36 months. Estimating Confidence Intervals We use anchoring and adjustment when we estimate a single number. We also use this heuristic when we estimate a confidence interval. A confidence interval is the range within which we expect a number to fall a certain percentage of the time. For example, you might guess that the 98% confidence interval for the number of students at a particular college is 3,000 to 5,000. 
Demonstration 12.6 tested the accuracy of your estimates for various kinds of numerical information. Turn to the end of this chapter to see how many of your confidence-interval estimates included the correct answer. Suppose that many people were instructed to provide a confidence interval for each of these 10 questions. Then, we would expect their confidence intervals to include the correct answer about 98% of the time, assuming that their estimation techniques had been correct. Studies have shown, however, that people provide 98% confidence intervals that actually include the correct answer only about 60% of the time (Hoffrage, 2004). In other words, our estimates for these confidence intervals are definitely too narrow.

The research by Tversky and Kahneman (1974) pointed out how the anchoring and adjustment heuristic is relevant when we make confidence-interval estimates. We first provide a best estimate, and we use this figure as an anchor. Next, we make adjustments upward and downward from this anchor to construct the confidence-interval estimate. However, our adjustments are typically too small. Consider, for example, Question 1 in Demonstration 12.6. Perhaps you initially guessed that the United States currently has eight million full-time students in college. You might then say that your 98% confidence interval was between six million and 10 million. This interval would be too narrow, because you had made a large error in your original estimate. Check the correct answers at the end of this chapter. Again, we establish our anchor, and we do not wander far from it in the adjustment process (Kahneman, 2011; Kruglanski, 2004). When we shut our minds to new possibilities, we rely too heavily on top-down processing.

An additional problem is that most people don't really understand confidence intervals. For instance, when you estimated the confidence intervals in Demonstration 12.6, did you emphasize to yourself that each confidence interval should be so wide that there was only a 2% chance of the actual number being either larger or smaller than this interval? Teigen and Jørgensen (2005) found that college students tend to misinterpret these confidence intervals. In their study, the students' 90% confidence intervals were associated with an actual certainty of only about 50%.

You can overcome potential biases from the anchoring and adjustment heuristic. First, think carefully about your initial estimate. Then, ask yourself whether you are paying enough attention to the features of this specific situation that might require you to change your anchor, or else to make large adjustments away from your initial anchor.

Current Status of Heuristics and Decision Making

Some researchers have argued that the heuristic approach—developed by Kahneman and Tversky—may underestimate people's decision-making skills. For example, research by Adam Harris and his colleagues found that people make fairly realistic judgments about future events (Harris et al., 2009; Harris & Hahn, 2011). Gerd Gigerenzer and his colleagues agree that people are not perfectly rational decision makers, especially under time pressure. They emphasize that people can, however, do relatively well when they are given a fair chance on decision-making tasks. For instance, we saw that the recognition heuristic is reasonably accurate.
Other research shows that people answer questions more accurately in naturalistic settings, especially if the questions focus on frequencies, rather than probabilities (e.g., Gigerenzer, 2006a, 2006b, 2008; Todd & Gigerenzer, 2007).

Peter Todd and Gerd Gigerenzer (2007) devised a term called ecological rationality to describe how people create a wide variety of heuristics to help themselves make useful, adaptive decisions in the real world. For example, only 28% of U.S. residents become potential organ donors, in contrast to 99.9% of French residents. Gigerenzer (2008) suggests that both groups are using a simple default heuristic; specifically, if there is a standard option—which happens if people do nothing—then people will choose it. In the United States, you typically need to sign up to become an organ donor. Therefore, the majority of U.S. residents—using the default heuristic—remain in the nondonor category. In France, you are an organ donor unless you specifically opt out of the donor program. Therefore, the majority of French residents—using the default heuristic—remain in the donor category.

Furthermore, people bring their world knowledge into the research laboratory, where researchers often design the tasks to specifically contradict their schemas. For example, do you really believe that Linda wouldn't be a feminist given her long-time commitment to social justice?

The two approaches—one proposed by Kahneman and one by Gigerenzer—may seem fairly different. However, both approaches suggest that decision-making heuristics generally serve us well in the real world. Furthermore, we can become more effective decision makers by realizing the limitations of these important strategies (Kahneman & Tversky, 2000).

Applications of Decision-Making Research

Decision making is an interdisciplinary field that includes research in all the social sciences, including psychology, economics, political science, and sociology (LeBoeuf & Shafir, 2012; Mosier & Fischer, 2011). It also includes other areas such as statistics, philosophy, medicine, education, and law (Reif, 2008; Mosier & Fischer, 2011; Schoenfeld, 2011). Within the discipline of psychology, decision making inspires numerous books and articles each year. For example, many books provide a general overview of decision making (e.g., Bennett & Gibson, 2006; Hallinan, 2009; Herbert, 2010; Holyoak & Morrison, 2012; Kahneman, 2011; Kida, 2006; Lehrer, 2009; Schoenfeld, 2011; Stanovich, 2009, 2011). Other recent books consider decision-making approaches, such as critical thinking (Levy, 2010). And, many other books consider decision making in specific areas, such as business (Henderson & Hooper, 2006; Mosier & Fischer, 2011; Useem, 2006); politics (Thaler & Sunstein, 2008; Weinberg, 2012); the neurological correlates of decision making (Delgado et al., 2011; Vartanian & Mandel, 2011); healthcare (Groopman, 2007; Mosier & Fischer, 2011); and education (Reif, 2008; Schoenfeld, 2011).

In general, the research on decision making examines concrete, realistic scenarios, rather than the kind of abstract situations used in research on deductive reasoning. Research on decision making can be particularly useful with respect to helping us develop strategies to make better decisions in real-life situations. In this section, we focus more squarely on the applied nature of decision-making research.
Framing Effect

The framing effect demonstrates that the outcome of your decision can be influenced by two factors: (1) the background context of the choice and (2) the way in which a question is worded—or, framed (LeBoeuf & Shafir, 2012; McGraw et al., 2010). However, before we discuss these two factors, be sure you have tried Demonstration 12.7, which appears below.

Demonstration 12.7 The Framing Effect and Background Information

Try the following two problems:

Problem 1
Imagine that you decided to see a concert, and you paid $20 for the admission price of one ticket. You are about to enter the theater, when you discover that you cannot find your ticket. The theater doesn't keep a record of ticket purchases, so you cannot simply get another ticket. You have $60 in your wallet. Would you pay $20 for another ticket for the concert?

Problem 2
Imagine that you decided to buy a ticket for a concert; the ticket will cost $20. You go to the theater box office. Then you open your wallet and discover that a $20 bill is missing. (Fortunately, you still have $40 left in your wallet.) Would you pay $20 for a ticket for the concert?

Source: Based on Tversky and Kahneman (1981).

Take a moment to read Demonstration 12.7 once more. Notice that the amount of money is $20 in both cases. If decision makers were perfectly "rational," they would respond identically to both problems (Kahneman, 2011; LeBoeuf & Shafir, 2012; Moran & Ritov, 2011). However, the decision frame differs for these two situations, so they seem psychologically different from each other.

We frequently organize our mental expense accounts according to topics. Specifically, we view going to a concert as a transaction in which the cost of the ticket is exchanged for the experience of seeing a concert. If you buy another ticket, the cost of seeing that concert has increased to a level that many people find unacceptable. When Kahneman and Tversky (1984) asked people what they would do in the case of Problem 1, only 46% said that they would pay for another ticket. In contrast, in Problem 2, people did not tally the lost $20 bill in the same account as the cost of a ticket. In this second case, people viewed the lost $20 as being generally irrelevant to the ticket. In Kahneman and Tversky's (1984) study, 88% of the participants said that they would purchase the ticket in Problem 2. In other words, the background information provides different frames for the two problems, and the specific frame strongly influences the decision.

The Wording of a Question and the Framing Effect

In Chapter 11, we saw that people often fail to realize that two problems may share the same deep structure, for instance in algebra problems. In other words, people are distracted by the differences in the surface structure of the problems. When people make decisions, they are also distracted by differences in surface structure. For example, people who conduct surveys have found that the exact wording of a question can have a major effect on the answers that respondents provide (Bruine de Bruin, 2011). Complete Demonstration 12.8 before reading further.

Demonstration 12.8 The Framing Effect and the Wording of a Question

Try the following two problems:

Problem 1
Imagine that a country in Europe is preparing for the outbreak of an unusual disease, which is expected to kill 600 people. The public health officials have proposed two alternative programs to combat the disease. Assume that these officials have scientifically estimated the consequences of the programs, as follows:

If they adopt Program A, 200 people will be saved.
If they adopt Program B, there is a one-third probability that 600 people will be saved, and a two-thirds probability that zero people will be saved.

Which of these two programs would you choose?

Problem 2
Now imagine the same situation, but with these two alternatives:

If Program C is adopted, 400 people will die.
If Program D is adopted, there is a one-third probability that no one will die, and a two-thirds probability that 600 people will die.

Which of these two programs would you choose?

Source: Based on Tversky and Kahneman (1981).

Tversky and Kahneman (1981) tested college students in both Canada and the United States, using Problem 1 in Demonstration 12.8. Notice that both choices emphasize the number of lives that would be saved. They found that 72% of their participants chose Program A, and only 28% chose Program B. Notice that the participants in this group were "risk averse." That is, they preferred the certainty of saving 200 lives, rather than the risky prospect of a one-in-three possibility of saving 600 lives. Notice, however, that the benefits of Programs A and B in Problem 1 are statistically identical.
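The claim that the two programs are statistically identical amounts to a one-line expected-value calculation, written here in LaTeX notation; the same arithmetic applies to Programs C and D in Problem 2, discussed next.

E[\text{lives saved} \mid \text{Program A}] = 200, \qquad
E[\text{lives saved} \mid \text{Program B}] = \tfrac{1}{3}(600) + \tfrac{2}{3}(0) = 200.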
Now inspect your answer to Problem 2, in which both choices emphasize the number of lives that would be lost (i.e., the number of deaths). Tversky and Kahneman (1981) presented this problem to a different group of students from the same colleges that they had tested with Problem 1. Only 22% favored Program C, but 78% favored Program D. Here the participants were "risk taking"; they preferred the two-in-three chance that 600 would die, rather than the guaranteed death of 400 people. Again, however, the benefits of the two programs are statistically equal.

Furthermore, notice that Problems 1 and 2 have identical deep structures. The only difference is that the outcomes are described in Problem 1 in terms of the lives saved, but in Problem 2 in terms of the lives lost. The way that a question is framed—lives saved or lives lost—has an important effect on people's decisions (Hardman, 2009; Moran & Ritov, 2011; Stanovich, 2009). This framing changes people from focusing on the possible gains (lives saved) to focusing on the possible losses (lives lost). In the case of Problem 1, we tend to prefer the certainty of having 200 lives saved, so we avoid the option where it's possible that no lives will be saved. In the case of Problem 2, however, we tend to prefer the risk that nobody will die (even though there is a good chance that 600 will die); we avoid the option where 400 face certain death.

Tversky and Kahneman (1981) chose the name prospect theory to refer to people's tendencies to think that possible gains are different from possible losses. Specifically:

1. When dealing with possible gains (e.g., lives saved), people tend to avoid risks.
2. When dealing with possible losses (e.g., lives lost), people tend to seek risks.

Numerous studies have replicated the general framing effect, and the effect is typically strong (Kahneman, 2011; LeBoeuf & Shafir, 2012). Furthermore, the framing effect is common among statistically sophisticated people as well as statistically naive people, and the magnitude of the effect is relatively large. In addition, Mayhorn and his colleagues (2002) found framing effects with both students in their 20s and with older adults.
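Prospect theory is often formalized with a value function that is concave for gains and convex, and steeper, for losses. The Python sketch below is only an illustration: it borrows the functional form and parameter estimates from Tversky and Kahneman's later (1992) work rather than from this chapter, and it ignores probability weighting. Even so, it reproduces the two tendencies listed above for the disease problem.

def value(x, alpha=0.88, lam=2.25):
    # Prospect-theory-style value function: concave for gains,
    # convex and steeper (loss aversion) for losses.
    return x ** alpha if x >= 0 else -lam * ((-x) ** alpha)

# Gain frame (Problem 1): 200 saved for sure vs. a 1/3 chance of saving 600.
sure_gain = value(200)
risky_gain = (1 / 3) * value(600)

# Loss frame (Problem 2): 400 die for sure vs. a 2/3 chance that 600 die.
sure_loss = value(-400)
risky_loss = (2 / 3) * value(-600)

print("Gain frame: prefer the sure option?", sure_gain > risky_gain)  # True -> risk averse
print("Loss frame: prefer the gamble?", risky_loss > sure_loss)       # True -> risk seeking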
The research on framing suggests some practical advice: When you are making an important decision, try rewording the description of this decision. For example, suppose that you need to decide whether to accept a particular job offer. Ask yourself how you would feel about having this job, and then ask yourself how you would feel about not having this job. This kind of Type 2 processing can help you make wiser decisions (Kahneman, 2011).

Overconfidence About Decisions

In the previous section, we saw that decisions can be influenced by three decision-making heuristics: the representativeness heuristic, the availability heuristic, and the anchoring and adjustment heuristic. Furthermore, the framing effect—discussed in this section—demonstrates that both the background information and the wording of a statement can encourage us to make unwise decisions. Given these sources of error, people should realize that their decision-making skills are nothing to boast about. Unfortunately, however, the research shows that people are frequently overconfident (Kahneman, 2011; Krizan & Windschitl, 2007; Moore & Healy, 2008). Overconfidence means that your confidence judgments are higher than they should be, based on your actual performance on the task.

We have already discussed two examples of overconfidence in decision making in this chapter. In an illusory correlation, people are confident that two variables are related, when in fact the relationship is either weak or nonexistent. In anchoring and adjustment, people are so confident in their estimation abilities that they supply very narrow confidence intervals for these estimates. Let's now consider research on several aspects of overconfidence before considering several factors that help to create overconfidence.

General Studies on Overconfidence

A variety of studies show that humans are overconfident in many decision-making situations. For example, people are overconfident about how long a person with a fatal disease will live, which firms will go bankrupt, and whether the defendant is guilty in a court trial (Kahneman & Tversky, 1995). People typically have more confidence in their own decisions than in predictions that are based on statistically objective measurements. In addition, people tend to overestimate their own social skills, creativity, leadership abilities, and a wide range of academic skills (Kahneman & Renshon, 2007; Matlin, 2004; Matlin & Stang, 1978; Moore & Healy, 2008). In addition, physicists, economists, and other researchers are overconfident that their theories are correct (Trout, 2002).

We need to emphasize, however, that individuals differ widely with respect to overconfidence (Oreg & Bayazit, 2009; Steel, 2007). For example, a large-scale study showed that 77% of the student participants were overconfident about their accuracy in answering general-knowledge questions such as those in Demonstration 12.6. Still, these results tell us that 23% were either on target or underconfident (Stanovich, 1999). Furthermore, people from different countries may differ with respect to their confidence (Weber & Morris, 2010). For example, a cross-cultural study in three countries reported that Chinese residents showed greater overconfidence, and the U.S. residents were intermediate. However, the least-confident group was Japanese residents, who also took the longest to make their decisions (Yates, 2010).
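Overconfidence of this kind is usually quantified by comparing average confidence with actual accuracy on the same items. The sketch below shows that comparison using made-up responses to general-knowledge questions; a positive gap indicates overconfidence.

# Hypothetical quiz data: (confidence that the answer is correct, answered correctly?)
responses = [
    (0.95, True), (0.90, False), (0.80, True), (0.99, False),
    (0.70, True), (0.85, False), (0.75, True), (0.90, True),
]

mean_confidence = sum(conf for conf, _ in responses) / len(responses)
accuracy = sum(correct for _, correct in responses) / len(responses)

# Overconfidence = mean confidence minus proportion correct; a value above
# zero means confidence judgments are higher than actual performance warrants.
print(f"Mean confidence: {mean_confidence:.2f}")
print(f"Accuracy:        {accuracy:.2f}")
print(f"Overconfidence:  {mean_confidence - accuracy:+.2f}")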
Overconfidence About Completing Projects on Time

Are you surprised to learn that students are frequently overly optimistic about how quickly they can complete a project? In reality, this overconfidence applies to most people. Even Daniel Kahneman (2011) describes examples of his own failure in completing projects on time. According to the planning fallacy, people typically underestimate the amount of time (or money) required to complete a project; they also estimate that the task will be relatively easy to complete (Buehler et al., 2002, 2012; Kahneman, 2011; Peetz et al., 2010; Sanna et al., 2009). Notice why this fallacy is related to overconfidence. Suppose that you are overconfident when you make decisions. You will then estimate that your paper for cognitive psychology will take only 10 hours to complete, and you can easily finish it on time if you start next Tuesday.

Researchers certainly have not discovered a method for eliminating the planning fallacy. However, research suggests several strategies that can help you make more realistic estimates about the amount of time a large project will require.

1. Divide your project into several parts, and estimate how long each part will take. This process will provide a more realistic estimate of the time you will need to complete the project (Forsyth & Burt, 2008).
2. Envision each step in the process of completing your project, such as gathering the materials, organizing the project's basic structure, and so forth. Each day, rehearse these components (Taylor et al., 1998).
3. Try thinking about some person other than yourself, and visualize how long this person took to complete the project; be sure to visualize the potential obstacles in your imagery (Buehler et al., 2012).

The planning fallacy has been replicated in several studies in the United States, Canada, and Japan. How can we explain people's overconfidence that they will complete a task on time? One factor is that people create an optimistic scenario that represents the ideal way in which they will make progress on a project. This scenario fails to consider the large number of problems that can arise (Buehler et al., 2002). People also recall that they completed similar tasks relatively quickly in the past (Roy & Christenfeld, 2007; Roy et al., 2005). In addition, they estimate that they will have more free time in the future, compared to the free time they have right now (Zauberman & Lynch, 2005). In other words, people use the anchoring and adjustment heuristic, and they do not make large enough adjustments to their original scenario, based on other useful information.
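As a purely hypothetical illustration of the first strategy listed above, combined with consulting your past record rather than an optimistic scenario, the sketch below breaks a course paper into parts, sums the per-part estimates, and then inflates the total by how much similar projects ran over in the past. The task names, hours, and overrun factor are all invented for the example.

# Hypothetical per-part time estimates (in hours) for a course paper.
parts = {
    "gather sources": 4,
    "outline": 2,
    "first draft": 8,
    "revise and edit": 4,
    "format references": 1,
}

optimistic_total = sum(parts.values())

# Suppose your past projects took roughly 40% longer than planned; scheduling
# from that record, instead of the ideal scenario, counteracts the fallacy.
past_overrun_factor = 1.4
realistic_total = optimistic_total * past_overrun_factor

print(f"Optimistic estimate: {optimistic_total} hours")
print(f"Adjusted for past overruns: {realistic_total:.0f} hours")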
Reasons for Overconfidence

We have seen many examples demonstrating that people tend to be overconfident about the correctness of their decisions. This overconfidence arises from errors during many different stages in the decision-making process:

1. People are often unaware that their knowledge is based on very tenuous, uncertain assumptions and on information from unreliable or inappropriate sources (Bishop & Trout, 2002).
2. Examples that confirm our hypotheses are readily available, but we resist searching for counterexamples (Hardman, 2009; Lilienfeld et al., 2009; Mercier & Sperber, 2011). You'll recall from the discussion of deductive reasoning that people also persist in confirming their current hypothesis, rather than looking for negative evidence.
3. People have difficulty recalling the other possible hypotheses, and decision making depends on memory (Theme 4). If you cannot recall the competing hypotheses, you will be overly confident about the hypothesis you have endorsed (Trout, 2002).
4. Even if people manage to recall the other possible hypotheses, they do not treat them seriously. The choice once seemed ambiguous, but the alternatives now seem trivial (Kida, 2006; Simon et al., 2001).
5. Researchers do not educate the public about the overconfidence problem (Lilienfeld et al., 2009). As a result, we typically do not pause—on the brink of making a decision—and ask ourselves, "Am I relying only on Type 1 thinking? I need to switch over to Type 2 thinking!"

When people are overconfident in a risky situation, the outcome can often produce disasters, deaths, and widespread destruction. The term my-side bias describes the overconfidence that your own view is correct in a confrontational situation (Stanovich, 2009; Toplak & Stanovich, 2002). Conflict often arises when individuals (or groups or national leaders) each fall victim to my-side bias. People are so confident that their position is correct that they cannot even consider the possibility that their opponent's position may be at least partially correct. If you find yourself in conflict with someone, try to overcome my-side bias. Could some part of the other people's position be worth considering?

More generally, try to reduce the overconfidence bias when you face an important decision. Emphasize Type 2 processing, and review the five points listed above. Are you perhaps overconfident that this decision will have a good outcome?

Hindsight Bias

People are overconfident about predicting events that will happen in the future. In contrast, hindsight refers to our judgments about events that already happened in the past. The hindsight bias occurs when an event has happened, and we say that the event had been inevitable; we had actually "known it all along" (Hastie & Dawes, 2010). In other words, the hindsight bias reflects our overconfidence that we could have accurately predicted a particular outcome at some point in the past (Hardt et al., 2010; Pezzo & Beckstead, 2008; Pohl, 2004; Sanna & Schwarz, 2006). The hindsight bias demonstrates that we often reconstruct the past so that it matches our present knowledge (Schacter, 2001).

The hindsight bias can operate for the judgments we make about people. In a thought-provoking study, Linda Carli (1999) asked students to read a two-page story about a young woman named Barbara and her relationship with Jack, a man she had met in graduate school. The story, told from Barbara's viewpoint, provided background information about Barbara and her growing relationship with Jack. Half of the students read a version that had a tragic ending, in which Jack rapes Barbara. The other half read a version that was identical except that it had a happy ending, in which Jack proposes marriage to Barbara. After reading the story, each student then completed a true/false memory test. This test examined recall for the facts of the story, but it also included questions about information that had not been mentioned in the story.
Some of these questions were consistent with a stereotyped version of a rape scenario, such as "Barbara met many men at parties." Other questions were consistent with a marriage-proposal scenario, such as "Barbara wanted a family very much."

The results of Carli's (1999) study demonstrated the hindsight bias. People who read the version about the rape responded that they could have predicted Barbara would be raped. Furthermore, people who read the marriage-proposal version responded that they could have predicted Jack would propose to Barbara. (Remember that the two versions were actually identical, except for the final ending.) Furthermore, each group committed systematic errors on the memory test. Each group recalled items that were consistent with the ending they had read, even though this information had not appeared in the story.

Carli's (1999) study is especially important because it helps us understand why many people "blame the victim" following a tragic event such as a rape. In reality, this person's earlier actions may have been perfectly appropriate. However, people often search the past for reasons why a victim deserved that outcome. As we've seen in Carli's research, people may even "reconstruct" some reasons that did not actually occur.

The hindsight bias has been de
