Chapter 7 / The Trait Approach
Although we would expect extraverts to initiate more social contacts than introverts, the researchers found a nonsignificant correlation between any one day's total of social contacts and extraversion scores. However, when the researchers looked at the relation between the scale score and the student's 2-week total of initiated social contacts, they found an impressive correlation of .52. Another investigation looked at trait measures of aggression and the number of aggressive acts students performed (e.g., getting into an argument, yelling at someone) over the course of 2 weeks (Wu & Clark, 2003). The researchers found a correlation of .51 between the aggregated aggression measure and the trait score.

Another reason personality trait measures often fail to break the .30 to .40 barrier is that researchers may be looking at the wrong traits. Recall Allport's distinction between central and secondary traits. A trait is more likely to predict a person's behavior if that trait is important, or central, for the person. Suppose you were interested in the trait independence. You might give an independence scale to a large number of people and then correlate the scores with how independently people acted in some subsequent situation. But in doing this, you probably would group together those people for whom independence is an important (central) trait and those for whom it is a relatively unimportant (secondary) trait. You undoubtedly would do a better job of predicting independent behavior by limiting your sample to people who consider independence an important personality dimension. By including people for whom the trait is only secondary, you are likely to dilute the correlation between the trait score and the behavior (Britt & Shepperd, 1999). Indeed, when researchers limit their samples to people for whom the trait is central, they find significantly higher correlations between trait scores and behavior (Baumeister, 1991; Baumeister & Tice, 1988; Bem & Allen, 1974; Britt, 1993; Reise & Waller, 1993; Siem, 1998).

The Importance of 10% of the Variance

But even if we accept that personality measures often can account for only about 10% of the variance in measures of behavior, we need to ask a further question: how high does a correlation have to be before it is considered important? One team of researchers responded to this question by looking at several social-psychological (situation-focused) investigations often cited for their "important" findings (Funder & Ozer, 1983). The researchers converted the data from these studies into correlation coefficients and found they ranged from .36 to .42. In other words, the "important" effects of situational variables are, statistically speaking, no more important than the effects deemed weak by critics of personality traits.
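Where does the "10% of the variance" figure come from? The proportion of variance accounted for is simply the square of the correlation coefficient, so a correlation in the .30 to .40 range explains roughly 9% to 16% of the variance. A minimal sketch in Python of that conversion, using the correlations cited in this section:

```python
# The proportion of variance accounted for is the square of the correlation (r^2).
# The correlations below are the values cited in this section.
correlations = {
    "typical trait-behavior correlation": 0.30,
    "upper end of the '.30 to .40 barrier'": 0.40,
    "aggregated 2-week extraversion correlation": 0.52,
}

for label, r in correlations.items():
    print(f"{label}: r = {r:.2f}, r^2 = {r ** 2:.2f} ({r ** 2:.0%} of the variance)")
```

Squaring also shows why the aggregated 2-week correlation of .52 is such a jump: it accounts for roughly a quarter of the variance rather than a tenth.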
Yet another way to examine the importance of these correlations is to compare the amount of variance accounted for in personality research with results from other sciences. One psychologist looked at some highly acclaimed research in the field of medicine (Rosenthal, 1990). One large medical study he examined made headlines when researchers found that aspirin significantly reduced the risk of heart attacks. In fact, the investigators ended the experiment earlier than planned because the results were so clear. To continue to give one group of patients placebo pills instead of aspirin would have been unethical. Obviously, the researchers considered this an important finding. Yet when we examine the data, we find that the medical researchers were dealing with a correlation of around .03, which accounted for less than 1% of the variance!

Another team of investigators looked at the relation between personality measures and important life events like mortality, divorce, and success at work (Roberts, Kuncel, Shiner, Caspi, & Goldberg, 2007). Not only were personality traits significant predictors of these events, but they also accounted for as much or more variance than either socioeconomic status or cognitive ability (e.g., IQ), two concepts typically considered to be important determinants of behavior.

The point is that importance is a subjective judgment. When dealing with medical treatments, reliably saving a small number of lives is important. When trying to predict behavior from personality test scores, we must remember that most of the behaviors we are interested in are determined by a large number of causes. No one will ever discover a single cause for why students do well in school or why consumers buy one product over another. Rather, the goal of most studies is to account for some of the variance. When we think about all the complex influences on our behavior, we probably should be impressed that personality psychologists can explain even 10%.

Application: The Big Five in the Workplace

Imagine you own your own business and have to make a quick hiring decision. You have five applications on your desk, all nearly identical. You notice that each applicant's file includes some personality test scores. Specifically, you have scores for each job candidate on each of the Big Five personality dimensions. A quick glance through the applications tells you that each applicant has one score that distinguishes him or her from the rest of the pack. One applicant is high in Extraversion, another scored very low on Neuroticism, and one is notably high in Openness. Another applicant is especially high in Agreeableness, whereas the final applicant's distinguishing score is a high level of Conscientiousness. Time is running out, and you have to make your decision based on this information alone. Looking back at the descriptions of the Big Five factors on page 143, which of these five people do you suppose you will hire?

Of course, the answer to the question depends on the kind of job and many other important variables. But if you had to make a quick decision based on this limited amount of information, you might consider a growing body of research that points to the best answer. Employers have used scores from personality tests to make hiring and promotion decisions for many years (Roberts & Hogan, 2001). And for just about as long, critics have complained that employers misuse and misinterpret personality test scores when making these decisions.
Just as Mischel criticized clinical psychologists for relying too heavily on test scores to make diagnoses about psychological disorders, these critics pointed to research indicating low correlations between test scores and job performance (Reilly & Chao, 1982; Schmidt, Gooding, Noe, & Kirsch, 1984). But the debate about using personality tests to predict success in the workplace changed with the development of the Big Five model (Goldberg, 1993; Landy, Shankster, & Kohler, 1994). Rather than examining a large number of personality variables that may or may not be related to how well people perform their jobs, researchers addressed the question of personality and job performance by using the five larger personality dimensions. The findings from that research provided much stronger evidence for the relationship between personality and job performance than had been previously demonstrated (Tett, Jackson, & Rothstein, 1991).

So, which of the five applicants is likely to make the best employee? Although a case can be made for each of the five, a great deal of research indicates that, of the Big Five factors, Conscientiousness may be the best predictor of job performance (Barrick & Mount, 1991; Barrick, Mount, & Judge, 2001; Hurtz & Donovan, 2000; Sackett & Walmsley, 2014). To understand why, we need only look at some of the characteristics that make up this personality dimension. People who score high in Conscientiousness are said to be careful, thorough, and dependable. That is, they don't rush through a job but take time to do it correctly and completely. Highly conscientious people tend to be organized and to lay out plans before starting a big project. These individuals also are hardworking, persistent, and achievement oriented. It's not difficult to see why people who exhibit this combination of traits make great employees.

Researchers in one study looked at the way sales representatives for a large appliance manufacturer did their jobs (Barrick, Mount, & Strauss, 1993). As in other studies, the investigators found that Conscientiousness scores were fairly good predictors of how many appliances the employees sold. A closer examination of the work styles of these salespeople helped to explain their success. Highly conscientious workers set higher goals for themselves than did the other employees. From the beginning, they had their eyes on ambitious end-of-the-year sales figures. In addition, these highly conscientious salespeople were more committed to reaching their goals than were other workers. That is, they were more likely to expend extra effort to hit their targets and were more persistent when faced with the inevitable obstacles and downturns that got in their way. In other words, there are many reasons a person high in Conscientiousness would make an excellent employee. As one team of investigators put it, "It is difficult to conceive of a job in which the traits associated with the Conscientiousness dimension would not contribute to job success" (Barrick & Mount, 1991, pp. 21–22).
And these efforts do not go unnoticed. Highly conscientious employees typically receive higher evaluations from their supervisors (Barrick et al., 1993). They also are more likely to be promoted and to receive higher salaries (Judge, Higgins, Thoresen, & Barrick, 1999). Moreover, one study found that workers who scored high on Conscientiousness were among the least likely to lose their jobs when companies were forced to lay off employees (Barrick, Mount, & Strauss, 1994). This connection between Conscientiousness and achievement shows up even before people enter the workforce. Compared to people low on this dimension, highly conscientious individuals do better in high school classes (Ivcevic & Brackett, 2014; Spengler, Ludtke, Martin, & Brunner, 2013; Trautwein et al., 2015) and in college (Kappe & van der Flier, 2010; Kling, Noftle, & Robins, 2013; McAbee & Oswald, 2013; Poropat, 2009; Richardson, Abraham, & Bond, 2012).

This is not to say that Conscientiousness is the only Big Five dimension related to job performance. On the contrary, a strong case can be made for hiring people high in Agreeableness (Sackett & Walmsley, 2014). These individuals are trusting, cooperative, and helpful. They are pleasant to have around the office and probably work especially well in jobs calling for teamwork. Other studies indicate that extraverts often have an edge in the business world over introverts and that openness to experience can be beneficial in some job settings (Barrick & Mount, 1991; Caldwell & Burger, 1998; Mount, Barrick, & Strauss, 1994; Tett et al., 1991).

Although knowing where an applicant falls on the Big Five personality dimensions may be useful when making hiring decisions, one caveat is in order. It would be an egregious oversimplification to conclude that employers should always hire the person highest in Conscientiousness. Personality may account for a significant proportion of variance in job performance, but it is only one of many important variables that contribute to how well an individual performs his or her job. Just as it is inappropriate to base decisions about mental health or education solely on personality test scores, making hiring and promotion decisions on test score data alone is unwise and unfair.

Assessment: Self-Report Inventories

It is unlikely you have reached college age without completing a number of self-report inventories. You may have received interest and abilities tests from a counselor, achievement and aptitude tests from a teacher, or personality and diagnostic inventories from a therapist. You may even have tried a few of those magazine quizzes for your own entertainment. Self-report inventories are the most widely used form of personality assessment. Typically, these tests ask people to respond to a series of questions about themselves. Relatively simple scoring procedures allow the tester to generate a score or a set of scores that can be compared with others along a trait continuum.
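To make the idea of "relatively simple scoring" concrete, here is a minimal, hypothetical sketch: it sums a respondent's item ratings into a raw scale score and converts that score to a z score against made-up norm-group values so it can be placed on a trait continuum. The ratings, norm mean, and norm standard deviation are all invented for illustration.

```python
# Hypothetical scoring of a short self-report scale (all values invented).
# Each item is rated 1 (strongly disagree) through 5 (strongly agree).
responses = [4, 5, 3, 4, 2, 5, 4, 3]         # one respondent's ratings

raw_score = sum(responses)                   # simple sum of the item ratings

# Invented norm-group values; real norms come from a test manual.
norm_mean, norm_sd = 24.0, 5.0

z_score = (raw_score - norm_mean) / norm_sd  # standing relative to the norm group
print(f"raw score = {raw_score}, z = {z_score:+.2f}")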
Over the years, hundreds if not thousands of self-report inventories have been created by psychologists, some carefully constructed with attention to reliability and validity, others not. Self-report inventories are popular among professional psychologists for several reasons. They can be given in groups or even online and can be administered quickly and easily by someone with relatively little training. Contrast this with the Rorschach inkblot test, which must be administered and interpreted by a trained psychologist one test at a time. Scoring a self-report inventory is also relatively easy and objective. Researchers typically count matched items or total response values. Self-report measures are also popular because they usually have greater face validity than other instruments. For example, we can be reasonably confident from looking at the items on a self-esteem test that they probably measure self-esteem. Although face validity alone does not establish the value of a test (Chapter 2), psychologists are less likely to disagree about what a test is measuring when the intent of the items is so obvious.

Self-report inventories come in all forms and sizes. Some have fewer than 10 items, others more than 500. Some provide detailed computer analyses on a number of subscales and comparison groups, others a single score for a specific trait dimension. Self-report inventories are used by researchers investigating individual differences, personnel managers making hiring decisions, and clinical psychologists putting together a quick profile of a client's personality to aid in making diagnoses.

The Minnesota Multiphasic Personality Inventory

The prototypic self-report inventory used by clinical psychologists is the Minnesota Multiphasic Personality Inventory (MMPI). The original MMPI was developed in the late 1930s. A revised version of the scale, the MMPI-2, was published in 1989. A shorter version of the revised scale based on different statistical procedures, the MMPI-2-RF, was published in 2008, although most practitioners continue to use the MMPI-2 rather than the revision (Williams & Lally, 2017). A version of the test specifically for use with adolescents, the MMPI-A, has also been created. The MMPI-2 contains 567 true-false items; the MMPI-2-RF has 338 items. These items generate several scale scores that are combined to form an overall profile of the test taker. The original scales were designed to measure psychological disorders; thus psychologists obtain scores for such dimensions as depression, hysteria, paranoia, and schizophrenia. However, most psychologists look at the overall pattern of scores rather than any one specific scale when making their assessments. Of particular interest are scores that are significantly higher or lower than those obtained by most test takers. A sample profile from the MMPI-2 is shown in Figure 7.2.

[Figure 7.2: Sample MMPI Profile. The scales identified by the numbers 1 through 0 are Hypochondriasis, Depression, Hysteria, Psychopathic Deviancy, Masculinity-Femininity, Paranoia, Psychasthenia (anxiety), Schizophrenia, Mania, and Social Introversion. Source: Minnesota Multiphasic Personality Inventory Profile Form; reprinted by permission of the University of Minnesota Press.]

Many additional scales have been developed since the original MMPI scales were presented. Researchers interested in a particular disorder or concept usually determine which items separate a normal population from the group they are interested in. For example, to develop a creativity scale, you would identify test items that highly creative people tend to answer differently from people who are not very creative.
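A minimal sketch of that item-selection logic, with entirely made-up numbers: for each candidate true-false item, compare how often the criterion group and the comparison group endorse it, and keep the items with the largest differences. The endorsement rates and cutoff below are invented; real scale construction also requires large samples and cross-validation.

```python
# Criterion-keyed item selection (toy illustration; endorsement rates invented).
# Each tuple: (item id, proportion endorsing it in the criterion group,
#              proportion endorsing it in the comparison group)
candidate_items = [
    ("item_01", 0.80, 0.35),
    ("item_02", 0.55, 0.50),
    ("item_03", 0.20, 0.65),
    ("item_04", 0.70, 0.30),
]

CUTOFF = 0.25  # arbitrary difference required to keep an item (illustration only)

selected = [(item, p_criterion - p_comparison)
            for item, p_criterion, p_comparison in candidate_items
            if abs(p_criterion - p_comparison) >= CUTOFF]

for item, difference in selected:
    print(f"{item}: endorsement difference = {difference:+.2f}")
```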
For many years, the MMPI and its revised versions have ranked among the most widely used clinical assessment tools (Wright et al., 2017). The scale is included as part of graduate training in most clinical psychology programs (Childs & Eyde, 2002). The MMPI has also been used in an enormous amount of research (Butcher, 2006). However, this does not mean the MMPI is without its critics. Psychologists continue to debate the validity of some scales, the appropriateness of some of the norm data provided by the test makers, and the nature of some of the constructs the test is designed to measure, among other issues. As you will see in the following section, scores from self-report inventories are not as easy to interpret as the seemingly precise and objective numbers generated from these tests sometimes suggest.

Problems with Self-Report Inventories

Despite their widespread use, self-report inventories have several limitations that need to be considered when constructing a scale or interpreting test scores. Researchers who use self-report inventories still must depend on participants' ability and willingness to provide accurate information about themselves. Sometimes these inaccuracies can be identified and the test scores discarded, but more often the misinformation probably goes undetected. Clinical psychologists who rely too heavily on self-report measures run the risk of making inaccurate assessments of their clients' mental health (Shedler, Mayman, & Manis, 1993).

Faking

Sometimes test takers intentionally give misleading information on self-report inventories. Some people "fake good" when taking a test. This means they try to present themselves as better than they really are.
This strategy is not uncommon when scales are used to make employment decisions (Rosse, Stecher, Miller, & Levin, 1998). Why would applicants admit something negative about themselves if an employer is using that information to decide whom to hire? On the other hand, sometimes people are motivated to "fake bad." These test takers want to make themselves look worse than they really are. A person who wants to escape to a "safe" hospital environment might try to come across as someone with psychological problems.

What can a tester do in these cases? To start, important decisions probably should not be made on test data alone. An employer would be foolish to promote a worker who scores high on a leadership measure if that person has never shown leadership qualities in several years of employment. Beyond this, test makers sometimes build safeguards into tests to reduce faking. If possible, the purpose of a test can be made less obvious, and filler items can be added to throw the test taker off track. However, these efforts are probably, at most, only partially successful. Another option is to test for faking directly (Nelson, Sweet, & Demakis, 2006; Sellbom & Bagby, 2008; Sellbom, Wygant, & Bagby, 2012). Like many large personality inventories, the MMPI contains scales designed to detect faking. To create these scales, test makers compare the responses of people instructed to fake good or fake bad with the responses of other populations. Test makers find that certain items distinguish between fakers and, for example, people who actually have schizophrenia. People trying to look schizophrenic tend to check these items, thinking they indicate a psychological disorder, but patients with schizophrenia do not. When testers detect faking, they can either throw out the results or adjust the scores to account for the faking tendency. However, some psychologists challenge the usefulness of relying on these methods to obtain accurate scores (Piedmont, McCrae, Riemann, & Angleitner, 2000).

Carelessness and Sabotage

Although the person administering a test usually approaches the session very seriously, the same cannot always be said for the test taker. Participants in experiments and newly admitted patients can get bored with long tests and not bother to read the test items carefully. Sometimes they don't want to admit to poor reading skills or to their failure to fully understand the instructions. As a result, responses may be selected randomly or after only briefly skimming the question. Moreover, this problem is not limited to poorly educated individuals. Researchers in one study allowed university students taking a standard personality test to indicate when they did not know the meaning of a word (Graziano, Jensen-Campbell, Steele, & Hair, 1998). The investigators found that some test questions were not understood by as many as 32% of the students.

Even worse, test takers sometimes report frivolous or intentionally incorrect information to sabotage a research project or diagnosis. I once found a test answer booklet that appeared normal at first glance, but on a second look discovered that the test taker had spent the hour-long research session filling in answer spaces to form obscene words. A similar lack of cooperation is not uncommon among those who resent medical personnel or law enforcement officials. The best defense against this problem may be to explain instructions thoroughly, stress the importance of the test, and maintain some kind of surveillance throughout the testing session.
Beyond this, tests can be constructed to detect carelessness. For example, some tests present items more than once. The tester examines the repeated items to determine whether the test taker is answering consistently. A person who responds A one time and B the next when answering two identical items might not be reading the items, or might be sabotaging the test.
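A minimal sketch of that consistency check, using invented responses: each pair holds a respondent's answers to an item and to its repeated twin, and respondents with too many mismatched pairs are flagged. The two-mismatch cutoff is arbitrary and chosen only for illustration.

```python
# Flag inconsistent answers to repeated true-false items (invented responses).
# Each pair holds a respondent's answer to an item and to its repeated twin.
repeated_item_pairs = {
    "respondent_A": [("T", "T"), ("F", "F"), ("T", "T")],
    "respondent_B": [("T", "F"), ("F", "T"), ("T", "T")],
}

for person, pairs in repeated_item_pairs.items():
    mismatches = sum(first != second for first, second in pairs)
    verdict = "flag for review" if mismatches >= 2 else "looks consistent"  # arbitrary cutoff
    print(f"{person}: {mismatches} mismatched pair(s) -> {verdict}")
```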
In the News
Gao Kao: The World's Largest Test

The use of entrance exams in admission decisions by American universities and colleges has been the subject of debate for decades. At the heart of this issue is the question of validity; that is, what do exams like the SAT and the ACT really measure? Are they valid indicators of a student's academic potential? Critics also raise questions about fairness. They point to test score differences based on gender, ethnicity, parents' income, and parents' education. These concerns have led several prominent universities to no longer consider entrance exam scores when evaluating prospective students (Balf, 2014; Wilner, 2013).

This trend stands in sharp contrast to the situation in China. Each summer more than nine million students take China's National College Entrance Examination, known as the gao kao (the "big" or "high" test). Scores on the 9-hour test are the single determinant of which students are admitted to Chinese universities. Higher scores earn admission to more prestigious institutions. Regardless of other achievements or skills, students whose scores fall in the bottom 25% have to take the test again if they want to go to college in China. Because a college degree is the only hope most Chinese have for obtaining a good-paying, white-collar job, the pressure to do well is intense (Tan, 2016).

The gao kao is offered only once a year, but students spend months and sometimes years preparing for the test. In large cities, police cordon off streets near test sites so that test takers are not disturbed by traffic noise, and airplanes that might fly overhead are rerouted (McDonald, 2012). In some areas, police are barred from using their sirens during testing hours, and some cities halt construction projects at night so that test takers can get a good night's sleep (Siegel, 2007). Parents often stand outside keeping vigil during the test. And any student caught cheating on the gao kao can face up to seven years in prison and is banned from taking the test in the future (Tan, 2016).

Although the competition for admission into the best Chinese universities is intense, the situation is better now than when the test was reinstated in 1977. That year nearly six million students competed for only 220,000 university spots (Siegel, 2007). As in the United States, the Chinese entrance exam has its critics. Some Chinese educators complain that the test emphasizes memorization over problem solving and creativity (Siegel, 2007). Issues of fairness have also been raised. Students from rural areas tend to perform more poorly on the gao kao than urban students, who typically receive a superior education (Wong, 2012). Despite these concerns, no one expects the system to change any time soon. On the morning of the gao kao, after countless hours of preparation, many test takers eat a special breakfast: a bread stick next to two eggs. The meal is said to symbolize 100%, the hoped-for score on the test.

Response Tendencies

Before reading this section, you may want to take the test presented on page 162. This test is designed to measure a response tendency called social desirability: the extent to which people present themselves in a favorable light. This is not the same as faking, in which people answer test items in a manner they know is inaccurate. People high in social desirability unintentionally present themselves in a way that is slightly more favorable than the truth. A look at the items on the scale illustrates the point. Few of us can say we have never covered up our mistakes. Yet someone who only rarely covers up mistakes might exaggerate the truth slightly and indicate that this statement is true for him or her.

What can be done about this mild deception? By measuring social desirability tendencies directly, a tester can adjust the interpretation of other scores accordingly. When social desirability scores are especially high, researchers sometimes drop participants from the study. However, the extent to which social desirability undermines the validity of psychological tests, and whether adjusting scores actually improves the validity of the test, remain matters of debate (Connelly & Chang, 2016; McGrath, Mitchell, Kim, & Hough, 2010; Paunonen & LeBel, 2012).

Social desirability scores are also useful when testing the discriminant validity of a new personality scale (Chapter 2). Suppose you developed a self-report inventory to measure the trait friendliness. Most of your items would be fairly straightforward, such as "Do you make a good friend?" High scores on this test might reflect an underlying trait of friendliness, but they might also reflect the test takers' desire to present themselves as nice people. For this reason, test makers often compare scores on their new inventory with scores on a social desirability measure. If the two are highly correlated, test makers have no way to know which of the two traits their test is measuring. However, if scores on your new friendliness inventory do not correlate highly with social desirability scores, you would have more confidence that high scorers are genuinely friendly people and not just those who want to be seen that way.
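A minimal sketch of that comparison, with invented scores: compute the correlation between scores on the hypothetical new friendliness scale and scores on a social desirability measure. A correlation near zero is the pattern that would support discriminant validity.

```python
# Correlate a hypothetical new friendliness scale with a social desirability
# measure to check discriminant validity. All scores below are invented.
from statistics import correlation  # available in Python 3.10+

friendliness_scores = [22, 30, 18, 27, 25, 16, 29, 21]
social_desirability_scores = [12, 15, 14, 11, 16, 13, 12, 15]

r = correlation(friendliness_scores, social_desirability_scores)
print(f"r = {r:.2f}")  # a value near zero would support discriminant validity
```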
But presenting oneself in a favorable light is not the only response tendency testers have to worry about. Some people are more likely than others to agree with test questions (Danner, Aichholzer, & Rammstedt, 2015). If you ask these people "Do you work a little harder when given a difficult task?" they probably will say "Yes." If you ask them a little later "Do you usually give up when you find a task difficult?" they are likely to say "Yes" again. This acquiescence (or agreement) response can translate into a problem on some self-report scales. If the score for the trait is simply the number of "true" or "agree" answers on a scale, someone with a strong acquiescence tendency would score high on the scale regardless of the content of the items. Moreover, people susceptible to an acquiescence response tendency tend to differ from typical test takers on several demographic and personality variables (Knowles & Nathan, 1997; Lechner & Rammstedt, 2015; Rammstedt & Farmer, 2013), and the response tendency may affect some personality scales more than others (Rammstedt & Kemper, 2011). Thus, if not accounted for, the tendency for some people to agree with test items could distort the meaning of test scores.

Just how seriously acquiescence response tendencies affect test scores is still a matter of debate (Paulhus, 1991). However, to be safe, many test makers word half the items in the opposite manner. That is, sometimes "agree" is indicative of the trait, and sometimes "disagree" is. In this case, any tendency to agree or disagree with statements should not affect the final score.
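A minimal sketch of that balanced-keying idea, using invented items and answers: reverse-keyed items are flipped before scoring, so a respondent who simply agrees with everything lands at the middle of the scale rather than at the top.

```python
# Scoring a balanced scale that mixes regular and reverse-keyed items.
# Ratings run 1-5; the ratings below imagine a respondent who agrees with everything.
items = [  # (rating given, is the item reverse keyed?) -- invented data
    (5, False), (5, True), (5, False), (5, True), (5, False), (5, True),
]

def keyed_score(rating: int, reverse: bool) -> int:
    """Flip the rating for reverse-keyed items on a 1-to-5 scale."""
    return 6 - rating if reverse else rating

total = sum(keyed_score(rating, reverse) for rating, reverse in items)
print(f"acquiescent respondent's total: {total} (midpoint of this 6-30 scale is 18)")
```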