Methods in the Study of Personality

Summary

This document outlines methods used in the study of personality, such as case studies and experience sampling. It also covers the concept of generality in personality research and the distinction between correlational and experimental ways of establishing relationships between variables.

Full Transcript

Chapter 2 Methods in the Study of Personality Learning Objectives 2.1 Define case study, experience sampling, and the concept of generality 2.2 Examine the process of establishing two kinds of relationships between variables Sam and Dave are taking a break from studying. Sam says, "My roommate's girl at home broke up with him. Chicks here better watch out, 'cause he's gonna be doin' some serious partying to forget her." "What makes you think so?" "What kind of dumb question is that? It's obvious. That's what I'd do." "Huh. I know guys whose hometown girls dumped them, and none of them did that. It was exactly the opposite. They laid around moping. I don't think you know anything about how people react to breakups." When people try to understand personality, where do they start? How do they form theories? How do they test them? How do psychologists decide what to believe? These are all questions about the methods of science. They exist in all kinds of science, from astronomy to zoology. They're particularly challenging, though, when applied to personality. 2.1: Gathering Information 2.1 Define case study, experience sampling, and the concept of generality There are two main sources of information about personality. One of them is your own personal experience of the world. The other is other people and how they react to the world. Each is useful but neither is perfect. 2.1.1: Observe Yourself and Observe Others One way to get information about personality is to look to your own experience---a process called introspection. This technique (what Sam in the opening example did) is open to everyone. Try it. You have a personality. If you want to understand personality, take a look at yours. Sit back and think about your life. Think about what you did in various situations and how you felt. Pull from those recollections a thread of continuity. From this you might even start to form a theory---a set of ideas to explain your thoughts, feelings, and actions. 
Looking at your own experience is an easy beginning, but it has a problem. Specifically, your consciousness has a special relationship to your memories because they're yours. It's hard to be sure this special relationship doesn't distort what you're seeing. For instance, you can misremember something you experienced, yet feel sure your memory is correct. This problem goes away when you look at someone else instead of yourself (like Dave in the opening example). That's the second method of gathering information: Observe someone else. This method also has a problem, though---the opposite of introspection's problem. Specifically, it's impossible to be "inside another person's head," to really know what that person is thinking and feeling. This difference in perspective can create vast differences in understanding. It can also lead to misinterpretation. Which is better? Each has a place in the search for truth. Neither is perfect, but they sometimes can be used to complement one another. 2.1.2: Depth Through Case Studies These two starting points lead in several directions. Personality psychologists sometimes try to understand an entire person at once, rather than just part of the person. Henry Murray (1938), who emphasized the need to study the person as a coherent entity, coined the term personology to refer to that effort. This view led to a technique called the case study. A case study is an in-depth study of one person, usually involving a long period of observation and typically some unstructured interviews. Sometimes, it involves spending a day or two being around the person to see how he or she interacts with others. Repeated observations let the observer confirm initial impressions or correct wrong ones. Confirming or disconfirming an impression can't happen if you make only one observation. The depth of probing in a case study can reveal detail that otherwise wouldn't be apparent. This, in turn, can yield insights. 
Case studies are rich in detail and can create vivid descriptions of the people under study. Particularly compelling incidents or examples may illustrate broader themes in the person's life. Because case studies see the person in his or her life situation instead of settings created by the researcher, the information pertains to normal life. Because they're open ended, the observer can follow whatever leads seem interesting, not just ask questions chosen ahead of time. 2.1.3: Depth from Experience Sampling Another kind of depth is provided by what are called experience sampling studies, or diary studies (Kamarck, Shiffman, & Wethington, 2011; Laurenceau & Bolger, 2005; Smyth & Heron, 2013). These studies are also conducted across longish periods of time, like case studies. Instead of an external observer, though, experience sampling involves repeatedly prompting the person to stop and report on some aspect of his or her current experience. The prompt often is in the form of a signal from a cell phone. Sometimes these studies are very intensive, with reports made several times a day. Sometimes they are less so (e.g., morning and evening reports). An important advantage of experience sampling methods is that they don't require the person to think back very far in time (maybe a half-day, maybe only an hour or so, maybe not at all). This yields less distortion in recall of what the experiences actually were. Unfortunately, people often don't do a very good job of remembering details of an event many hours later (Kamarck, Muldoon, Shiffman, & Sutton-Tyrrell, 2007; Stone, Kennedy-Moore, & Neale, 1995). Experience sampling methods let you get the events more "on line" than other methods. Experience sampling shares with case studies the fact that it gets a lot of information about each person being studied. In both cases, it's possible to search within this information for patterns within a given person across many situations and points in time. 
This is referred to as an idiographic method (Conner, Tennen, Fleeson, & Barrett, 2009; Molenaar & Campbell, 2009), because the focus is on the individual. (The word idiographic has the same source as idiosyncratic.) The generality of a conclusion can be established only by studying a mix of people from different backgrounds. Case studies can provide insights about life. They provide useful information for researchers and often are an important source of ideas. But case studies aren't the main source of information about personality today. In large part, this is because a case study, no matter how good, has an important problem: It deals with just one person. When you're forming theories or drawing conclusions, you want them to apply to many people---if possible, to all people. How widely a conclusion can be applied is called its generality or its generalizability. For a conclusion to be generalizable, it must be based on many people, not one or two. The more people studied, the more convinced you can be that what you see is true of people in general, instead of only a few people. In most research on personality, researchers look at tens---even hundreds---of people to increase the generality of their conclusions. To truly ensure generality, researchers should study people of many ages and from all walks of life---indeed, from all cultures. For various reasons, this isn't always done, though it's becoming more common. As a matter of convenience, a lot of research on personality is done on college students. Do college students provide a good picture of what's important in personality? Maybe yes, maybe no. College students differ from older people in several ways. For one, they have a sense of self that may be more rapidly changing. This may affect the findings. It's not really clear how different college students are from everyone else. 
It does seem clear, though, that we should be cautious in assuming that conclusions drawn from research on college students always apply to "people in general." Most observations on personality also come from the United States and western Europe. Most research is done with middle- to upper-middle-class people. Some of it uses only one sex. We must be careful not to assume that conclusions from such studies apply to people from other cultures, other socioeconomic groups, and (sometimes) both sexes. Generalizability, then, is a continuum. Rarely does any study range broadly enough to ensure total generalizability. Some are better than others. How far a conclusion can be generalized is an issue that must always be kept in mind in evaluating research results. The desire for generality and the desire for in-depth understanding of a person are competing pressures. They force a trade-off. That is, given the same investment of time and energy, you can know a great deal about one person (or a very few people), or you can know a little bit about a much larger number of people. It's nearly impossible to do both at once. As a result, researchers tend to choose one path or the other, according to which pressure they find more important. 2.2: Establishing Relationships among Variables 2.2 Examine the process of establishing two kinds of relationships between variables Insights from introspection or observation suggest relationships between variables. A variable is a dimension along which there are at least two values or levels, although some variables have an infinite number of values. For example, sex is a variable with values of male and female. Self-esteem is a variable that has a virtually limitless number of values (from very low to very high) as you make smaller discriminations among people. It's important to distinguish between a variable and its values. Conclusions about relationships always involve the whole dimension, not just one end of it. 
Thus, researchers always study at least two levels of the variable they're interested in. You can't understand the effects of low self-esteem by looking only at people with low self-esteem. If there's a relationship between self-esteem and academic performance, for example, the only way to find it out is to look at people with different levels of self-esteem (see Figure 2.1). If there's a relationship, people with low self-esteem should have low grades and people with higher self-esteem should have higher grades. Figure 2.1 Whether a relationship exists between variables can be determined only by looking at more than one value on each variable. For instance, knowing that people low in self-esteem have poor academic performances leaves open the question of whether everyone else's performances are just as poor. This question is critically important in establishing a relationship between the two variables. Figure 2.1 Full Alternative Text The last part of that statement is just as important as the first part. Knowing that people low in self-esteem have low grades tells you nothing, if people high in self-esteem also have low grades. It can be hard to keep this in mind. In fact, people often fail to realize how important this issue is. If you don't keep it in mind, though, you can draw seriously wrong conclusions (for illustrations, see Chapman, 1967; Crocker, 1981). The need to examine people who form a range of levels of a given variable is a second reason why it's important to go beyond case studies (the issue of generality was the first one). The need to examine a range of variability underlies several research methods. 2.2.1: Correlation between Variables Two kinds of relationship can be established between variables. The first is called correlation. A correlation between two variables means that as you measure the variables across many people or instances, the values on one tend to go together with values on the other in a systematic way. 
There are two aspects of this relationship, which are separate from each other. They are called the direction of the correlation and the strength of the correlation. To clarify what these terms mean, let's return to the example of self-esteem and academic performance. Suppose you've decided to study whether these two variables really do go together. You've gone out and recruited 40 students. They've completed a measure of self-esteem and given you their grade point average (GPA). You now have two pieces of information for each person (Figure 2.2, A). You can organize this information visually in what's called a scatterplot (Figure 2.2, B). In a scatterplot, the variables are represented by the axes of the graph. The point where the lines meet is zero for both variables. Being farther away from zero on each line means having a larger value on that variable. Because the lines are at right angles, the combination of any score on one variable and any score on the other one can be portrayed as a point in two-dimensional space. For example, in Figure 2.2, Tim has a self-esteem score of 42 (and is toward the right side on the horizontal line) and a GPA of 3.8 (and is toward the top on the vertical line). The scatterplot for your study would be the points that represent the combinations of self-esteem scores and GPAs for all 40 people in the study. Figure 2.2 Thinking about the meaning of correlation (with hypothetical data): (A) For each person (subject), there are two pieces of information: a self-esteem score and a grade-point average (GPA). (B) The data can be arranged to form a scatterplot by plotting each person's self-esteem score along the horizontal dimension and his or her GPA along the vertical dimension, thereby locating the combination in a two-dimensional space. 
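The logic of the scatterplot can be sketched in a few lines of code. In this hypothetical fragment, each person contributes one pair of scores, and the scatterplot is simply that collection of pairs treated as points. Tim's scores come from the text; the other names and numbers are invented for illustration.

```python
# Each person contributes two pieces of information: a self-esteem
# score and a GPA. Together they locate one point in 2-D space.
people = {
    "Tim":  (42, 3.8),   # from the text: high self-esteem, high GPA
    "Ann":  (25, 2.9),   # remaining names and scores are invented
    "Lee":  (10, 2.1),
    "Rosa": (35, 3.4),
}

# The scatterplot is simply this collection of (x, y) points.
scatter_points = list(people.values())

# Tim's self-esteem score of 42 places him toward the right on the
# horizontal axis; his GPA of 3.8 places him toward the top.
x, y = people["Tim"]
print(x, y)  # 42 3.8
```

Plotting those points with any graphing tool reproduces the kind of display shown in Figure 2.2, B.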
Figure 2.2 Full Alternative Text To ask whether the two variables are correlated means (essentially) asking this question about the scatterplot: When you look at points at low versus high values on the horizontal dimension, do they differ in how they line up on the vertical dimension? If low values tend to go with low values and high values tend to go with high values (as in Figure 2.3, A), the variables are said to be positively correlated. If people low in self-esteem tend to have low GPAs and people high in self-esteem tend to have high GPAs, you would say that self-esteem correlates positively with GPA. Figure 2.3 (A) If high numbers on one dimension tend to go with high numbers on the other dimension (and low with low), there is a positive correlation. (B) If high numbers on one dimension tend to go with low numbers on the other dimension, there is an inverse, or negative, correlation. Figure 2.3 Full Alternative Text Sometimes, however, a different kind of pattern occurs. 
Sometimes, high values on one dimension tend to go with low values on the other dimension (and vice versa). When this happens (Figure 2.3, B), the correlation is termed inverse or negative. You might get this kind of correlation if you studied the relationship between GPA and the frequency of going to parties. That is, you might find that students who party the most tend to have lower GPAs, whereas those who party the least tend to have higher GPAs. The direction of the association between variables (positive vs. negative) is one aspect of correlation. The second aspect---which is entirely separate from the first---is the strength of the correlation. Think of strength as the "sloppiness" of the association between the variables. More formally, it refers to the degree of accuracy with which you can predict values on one dimension from values on the other one. For example, assume a positive correlation between self-esteem and GPA. Suppose that you know that Barbie has the second-highest score on self-esteem in your study. How well could you guess her GPA? The answer to this question depends on how strong the correlation is. Because the correlation is positive, knowing that Barbie is on the high end of the self-esteem dimension would lead you to predict she has a high GPA. If the correlation is also strong, you're very likely to be right. If the correlation is weaker, you're less likely to be right. A perfect positive correlation---the strongest possible---means that the person who has the very highest value on one variable also has the very highest value on the other, the person next highest on one is also next highest on the other, and so on throughout the list (see Figure 2.4, A). Figure 2.4 Six correlations: (A) Perfect positive correlation, (B) moderate positive correlation, (C) weak positive correlation, (D) perfect inverse correlation, (E) moderate inverse correlation, and (F) weak inverse correlation. 
The weaker the correlation, the more "scatter" in the scatterplot. Figure 2.4 Full Alternative Text The strength of a correlation is expressed by a number called a correlation coefficient (often labeled with a lowercase r). An absolutely perfect positive correlation (as in Figure 2.4, A) is expressed by the number 1.0. This is the largest numerical value a correlation can have. It indicates totally accurate prediction from one dimension to the other. If you know the person's score on one variable, you can predict with complete confidence where he or she is on the other. The scatterplot of a somewhat weaker correlation is shown in Figure 2.4, B. As you can see, there's more "scatter" among the points than in the first case. There's still a strong tendency for higher values on one dimension to match up with higher ones on the other and for lows to match up with lows, but the tendency is less exact. As the correlation becomes weaker, the number expressing it becomes smaller (thus, virtually all correlations are decimal values). Correlations of 0.6 to 0.8 are strong. Correlations of 0.3 to 0.5 are moderately strong. Below 0.3 or 0.2, the prediction from one variable to the other is getting poorer. As you can see in Figure 2.4, C, weak correlations have even more scatter. The tendency toward a positive relation is there, but prediction definitely isn't good. A correlation of 0.0 means the two variables aren't related at all. The scatterplot of a zero correlation is random dots. As we said before, a correlation's strength is entirely separate from its direction. Strength refers only to degree of accuracy. Thus, it's eminently sensible to talk about a perfect negative correlation as well as a perfect positive correlation. 
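The chapter treats r as a given; for readers who want to see where the number comes from, here is a minimal implementation of the standard Pearson formula, run on invented data (the specific values are not from the text). The sign carries the direction of the association, and the absolute size carries its strength.

```python
def pearson_r(xs, ys):
    """Pearson correlation coefficient r: the sign gives the direction
    of the association; the absolute size gives its strength."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

# A perfectly ordered positive pattern (as in Figure 2.4, A) gives r = 1.0:
print(pearson_r([1, 2, 3, 4], [2.0, 2.5, 3.0, 3.5]))  # 1.0
# Reversing one variable flips only the sign, not the strength:
print(pearson_r([1, 2, 3, 4], [3.5, 3.0, 2.5, 2.0]))  # -1.0
# Scrambling the pairing weakens the relationship but keeps it positive:
print(pearson_r([1, 2, 3, 4], [2.0, 1.0, 4.0, 3.0]))  # 0.6
```

Notice that the middle example has exactly the same strength as the first; only the direction differs.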
A perfect negative correlation (see Figure 2.4, D) means that the person who had the highest value on one variable also had the very lowest value on the other variable, the person with the next-highest value on one had the next-to-lowest value on the other, and so on. Just as with positive correlations, weaker inverse correlations have more "scatter" among the points (Figure 2.4, E and F). Negative correlations are also expressed in numbers, just like positive correlations. But to show that the relationship is inverse, a minus sign is placed in front. Thus, an r value of -0.75 is precisely as strong as an r value of 0.75. The first expresses an inverse correlation, though, whereas the second expresses a positive correlation. 2.2.2: Two Kinds of Significance We've been describing the strength of correlations in terms of the size of the numbers that represent them. Although the size of the number gives information about its strength, the size of the number by itself doesn't tell you whether the correlation is believable. Maybe it's a fluke. This is a problem for all kinds of statistics. You can't tell just by looking at the number, or looking at a graph, whether the result is real. One further kind of information that bears on this is whether the result is statistically significant. Significant in this context has a very specific meaning: It means that the correlation would have been that large or larger only very rarely if no true relationship exists. When the probability of that happening is small enough (less than 5%), the correlation (or whatever statistic it is) is said to be statistically significant. At that point, we conclude that the relationship is a real one. The word significant is also used in another way in psychology, which more closely resembles its use in day-to-day language. 
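The chapter doesn't say how that "only very rarely" probability is computed; in practice it usually comes from a significance test. One intuitive way to get the flavor is a permutation test, sketched below with invented data: re-pair the scores at random many times and ask how often a correlation as large as the observed one appears by chance.

```python
import random

def pearson_r(xs, ys):
    """Standard Pearson correlation coefficient."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

def permutation_p(xs, ys, trials=2000, seed=1):
    """Estimate how often a correlation at least this large (in absolute
    value) would arise if no true relationship existed, by re-pairing
    the scores at random."""
    rng = random.Random(seed)
    observed = abs(pearson_r(xs, ys))
    shuffled = list(ys)
    hits = 0
    for _ in range(trials):
        rng.shuffle(shuffled)
        if abs(pearson_r(xs, shuffled)) >= observed:
            hits += 1
    return hits / trials

# Ten hypothetical people whose two scores track each other closely:
xs = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]
ys = [1.2, 2.1, 2.9, 4.3, 4.8, 6.1, 7.2, 7.9, 9.4, 9.8]
print(permutation_p(xs, ys) < 0.05)  # True: very unlikely to be a fluke
```

This is only a sketch of the underlying logic, not the t-based procedure most statistics packages actually use, but the conclusion it supports is the same: a result is called significant when chance alone would produce it less than 5% of the time.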
An association is said to be clinically significant or practically significant if the effect is both statistically significant (so it's believable) and large enough to have some practical importance. How large is large enough varies from case to case. It's possible, though, for an association to be statistically significant but to account for only a tiny fraction of the behavior, in which case its practical significance generally isn't very great. 2.2.3: Causality and a Limitation on Inference Correlations tell us whether two variables go together (and in what direction and how strongly). But they don't tell us why they go together. The why question takes us past correlation to a second kind of relationship. This one is called causality---the relationship between a cause and an effect. Correlational research cannot provide evidence on this second relationship. A correlational study often gives people strong intuitions about causality, but no more. Why? The answer is shown in Figure 2.5. Each arrow there represents a possible path of causality. What this figure shows is that there are always three ways to account for a correlation. Consider the correlation between self-esteem and academic performance. What causes it? Your intuition may say bad academic outcomes cause people to get lower self-esteem, whereas good outcomes cause people to feel good about themselves (arrow 1 in Figure 2.5). Or maybe you think that having low self-esteem causes people not to try, resulting in poorer performance (arrow 2). Both of these explanations, though they go in the opposite causal directions, are plausible. Figure 2.5 Correlation does not imply cause and effect, because there are always three possibilities: (1) Variations in one variable (academic performance) may be causing variations in the second (self-esteem), (2) variations in the second may be causing variations in the first, or (3) a third variable may actually be causing both observed effects. 
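The third of those possibilities can be made concrete with a tiny simulation. In the sketch below (all numbers invented), a hidden "intelligence" variable causes both self-esteem and GPA, and no causal arrow runs between the two measured variables at all; yet they still correlate.

```python
import random

def pearson_r(xs, ys):
    """Standard Pearson correlation coefficient."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

rng = random.Random(42)

# A hidden third variable (call it intelligence) influences BOTH
# measured variables; neither measure influences the other.
intelligence = [rng.gauss(100, 15) for _ in range(500)]
self_esteem = [i / 3 + rng.gauss(0, 4) for i in intelligence]
gpa = [i / 40 + rng.gauss(0, 0.3) for i in intelligence]

# Yet the two "effects" are substantially correlated with each other:
print(pearson_r(self_esteem, gpa) > 0.4)  # True
```

A researcher who measured only self-esteem and GPA would see a clear correlation and could easily be tempted toward a causal story, even though, by construction, neither variable causes the other.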
Knowing only the single correlation between self-esteem and GPA doesn't allow you to distinguish among these possibilities. Figure 2.5 Full Alternative Text It could also be, however, that a third variable---not measured, perhaps not even thought of---actually has a causal influence over both variables that were measured (the pair of arrows labeled 3). Perhaps having a high level of intelligence causes a positive sense of self-esteem and also causes better academic performance. In this scenario, both self-esteem and academic performance are effects, and something else is the cause. The possible involvement of another variable in a correlation is sometimes called the third-variable problem. It's a problem that can't be handled by correlational research. Correlations cannot tell which of the three possibilities in Figure 2.5 is actually right. 2.2.4: Experimental Research There is a method that can show cause and effect, however. It's called the experimental method. It has two defining characteristics. First, in an experiment, the researcher manipulates one variable---creates the existence of at least two levels of it. The one the researcher is manipulating is called the independent variable. This is the one the researcher is testing as the possible cause in a cause--effect relationship. When we say the researcher is "creating" two (or more) levels of this variable, we mean exactly that. The researcher actively creates a difference between the experience of some people and the experience of other people. Sometimes psychologists do experiments in order to better understand what they've seen in correlational studies. Let's illustrate the experimental method by doing just that. Let's look closer at the example just discussed. Suppose you have a hunch that variations in academic performance have a causal effect on self-esteem. To study this possibility, you do an experiment, in which you hypothesize (predict) that academic outcomes cause effects on self-esteem. 
You're not going to be able to manipulate something like GPA in this experiment, but it's fairly easy to manipulate other things with overtones of academic performance. For instance, you could arrange to have some people experience a success and others a failure on a task (using one rigged to be easy or impossible). By arranging this, you would create the difference between success and failure. You'd manipulate it---not measure it. You're sure that a difference now exists between the two sets of people in your experiment, because you made it exist. As in all research, you'd do your best to treat every participant in your experiment exactly the same in all ways besides that one. Treating everyone the same---making everything exactly the same except for what you manipulate---is called experimental control. Exerting a high degree of control is important to the logic of the experimental method, as you'll see momentarily. Control is important, but you can't control everything. It's rarely possible to have everyone do the experiment at the same time of day or the same day of the week. More obviously, perhaps, it's impossible to be sure the people in the experiment are exactly alike. One of the main themes of this book, after all, is that people differ. Some people in the experiment are just naturally going to have higher self-esteem than other people when they walk in the door. How can these differences be handled? This question takes us to the second defining characteristic of the experimental method: Any variable that can't be controlled---such as personality---is treated by random assignment. In your experiment, you would randomly assign each participant to have either the success or the failure. Random assignment is often done by such means as tossing a coin or using a list of random numbers. Random assignment is an important hallmark of the experimental method. 
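A small simulation (all numbers invented) illustrates why this assumption about random assignment is reasonable when samples are fairly large: pre-existing differences spread themselves roughly evenly across the conditions.

```python
import random

rng = random.Random(7)

def mean(xs):
    return sum(xs) / len(xs)

# 400 hypothetical participants who differ in self-esteem when they
# walk in the door (mean 50, SD 10) -- a variable the experimenter
# cannot control.
baseline = [rng.gauss(50, 10) for _ in range(400)]

# Random assignment: a "coin toss" decides each person's condition.
success_group, failure_group = [], []
for score in baseline:
    (success_group if rng.random() < 0.5 else failure_group).append(score)

# Pre-existing differences tend to balance out: the gap between the
# groups' average baseline self-esteem is tiny relative to the
# 10-point spread among individuals.
gap = abs(mean(success_group) - mean(failure_group))
print(round(gap, 2))
```

With only a handful of participants, the same coin tosses could easily leave the groups unbalanced; the balancing works because the sample is reasonably large, which is exactly the text's point about studying "enough people."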
The experimenter randomly assigns participants to a condition, much as a roulette wheel randomly catches the ball in a black or red slot. The use of random assignment rests on an assumption: that if you study enough people in the experiment, any important differences due to personality (and other sources as well) will balance out between the groups. Each group is likely to have as many tall people, fat people, depressed people, and confident people as the other group---if you study a fairly large number of participants and use random assignment. Anything that matters should balance out. So, you've brought people to your research laboratory one at a time, randomly assigned them to the two conditions, manipulated the independent variable, and exerted experimental control over everything else. At some point, you would then measure the variable you think is the effect in the cause--effect relationship. This one is termed the dependent variable. In this experiment, your hypothesis was that differences in success and failure on academic tasks cause people to differ in self-esteem. Thus, the dependent measure would be a measure of self-esteem (e.g., self-report items asking people how they feel about themselves). After getting this measure for each person in the experiment, you would compare the groups to each other (by statistical procedures that don't concern us here). If the difference between groups was statistically significant, you could conclude that the experience of success versus failure causes people to differ in self-esteem. What would make you so confident in that cause-and-effect conclusion? The answer, despite all the detail, is quite simple. The logic is displayed graphically in Figure 2.6. At the start of the experiment, you separated people into two groups. (By the way, the reasoning applies even if the independent variable has more than two levels or groups.) 
If the assumption about the effect of random assignment is correct, then the two groups don't differ from each other at this point. Because you exercise experimental control, the groups still don't differ as the experiment unfolds. Figure 2.6 The logic of the experimental method: (A) Because of random assignment and experimental control, there is no systematic difference between groups at first. (B) The experimental manipulation creates---for the first time---a specific difference. (C) If the groups then are found to differ in another fashion, the manipulation must have caused this difference. Figure 2.6 Full Alternative Text At one point, however, a difference between groups is introduced---when you manipulate the independent variable. As we said before, you know there's a difference now, and you know what the difference is, because you created it yourself. For this reason, if you find the groups differ from each other on the dependent measure at the end, you know there's only one thing that could have caused the difference (see Figure 2.6). It had to come from the manipulation of the independent variable. That was the only place where a difference was introduced. It was the only thing that could have been responsible for causing the effect. This reasoning is straightforward. We should note, however, that this method isn't entirely perfect. Its problem is this: When you do an experiment, you show that the manipulation causes the difference on the dependent measure---but you can't always be completely sure what it was about the manipulation that did the causing. Maybe it was the aspect of the manipulation that you focused on, but maybe it was something else. For example, in the experiment we've been considering, low self-esteem may have been caused by the failure and the self-doubt to which it led. But it might have been caused instead by other things about the manipulation. 
Maybe the people who failed were worried that they had spoiled your experiment by not solving the problems. They didn't feel a sense of failure but were angry with themselves for creating a problem for you. This interpretation of the result wouldn't mean quite the same thing as your first interpretation of it. This issue requires us always to be a bit cautious in how we view results, even from experiments. 2.2.5: Recognizing Types of Studies When you read about correlational studies and experiments in this book, how easy will it be to tell them apart? It seems simple. An experiment makes a comparison between groups, and a correlational study gives you a correlation, right? Well, no. Results of correlational studies don't always show up as correlations. Sometimes the study compares groups with each other on a dependent measure, and the word correlation is never even mentioned. Suppose you studied some people who were 40% overweight and some who were 40% underweight. You interviewed them individually and judged how sociable they were, and you found that heavy people were more sociable than thin people. Is this an experiment or a correlational study? To tell, recall the two defining characteristics of the experiment: manipulation of the independent variable and random assignment of people to groups. You didn't randomly assign people to be heavy or thin (and you didn't create these differences). Therefore, this is a correlational study. The limitation on correlational research (the inability to conclude cause and effect) applies to it (see Box 2.1). Box 2.1 Correlations in the News The fact that a correlation cannot establish causality is ignored to an amazing degree, pretty much every place you look. Here's an example, straight from a recent report on the national evening news. 
An article had just been published that day in a medical journal reporting that people who retired earlier from their jobs showed greater cognitive decline (poorer mental function) compared with people who hadn't retired. This was exciting news (so exciting that the news was announced in breathless terms by the network's medical correspondent, a physician). The description of the result was followed by comments about how beneficial it is for people to keep working, that staying active in your job keeps you mentally fit, that we should all think twice about retiring. There's just one problem. The finding, despite being described in terms of groups, was in fact correlational in nature. The finding was that retiring earlier was associated with poorer cognitive function. That doesn't mean that retiring caused poorer cognitive function. It is entirely possible that poorer cognitive function led people to retire earlier, a possibility that was never mentioned in the report. A good rule of thumb is that any time groups represent naturally occurring differences or are formed on the basis of some characteristic that you measure, the study is correlational. This means that all studies of personality differences are, by definition, correlational. Why do personality researchers make their correlational studies look like experiments? Often it's because they study people from categories, such as cultural groups or genders. It has the side effect, however, of making it hard to express the finding as a correlation. The result is correlational studies that look at first glance like experiments. 2.2.6: What Kind of Research Is Best? Which kind of research is better: experiments or correlational studies? Both have advantages, and the advantage of one is the disadvantage of the other. The advantage of the experiment, of course, is its ability to show cause and effect, which correlational methods cannot do. But experiments also have drawbacks. 
One drawback (as noted) is that it's sometimes unclear which aspect of the manipulation was important. Another drawback is that experiments on people usually involve events of relatively short duration, in carefully controlled conditions. The correlational method, in contrast, lets you examine events that take place over long periods (even decades) and events that are much more elaborate. Correlational studies also let you get information about events in which experimental manipulation would be unethical---for example, how being raised by divorced parents affects people's personality. Personality psychologists sometimes also criticize experiments on the grounds that the kinds of relationships they obtain often have little to do with the central issues of personality. Even experiments that seem to bear on important issues in personality may tell less than they seem to. Consider the hypothetical experiment earlier, in which you manipulated academic success versus failure and measured self-esteem. Assume for the moment that those with a failure had lower self-esteem afterward than those with a success. You might be tempted to conclude from this that having poor academic outcomes over one's life course causes people to develop low self-esteem. This conclusion, however, may not be justified. The experiment dealt with a brief task outcome, manipulated in a particular way. The broader conclusion you're tempted to reach deals with a basic, ingrained quality of personality. This latter quality may differ in many ways from the momentary state you manipulated. The "reasoning by analogy" that you're tempted to engage in can be misleading. To many personality psychologists, the only way to really understand personality is to look at naturally occurring differences between people (Underwood, 1975). Many are willing to accept the limitation on causal inference that's inherent in correlations; they regard it as an acceptable price to pay. 
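Before moving on, a point from Section 2.2.5 is worth making concrete: a comparison of naturally occurring groups is a correlation in disguise. Coding group membership as 0/1 and correlating it with the measured variable yields an ordinary Pearson r. The numbers below are invented for the weight-and-sociability example; they are not real data:

```python
# Hypothetical sample: 0 = underweight group, 1 = overweight group,
# paired with interview-based sociability ratings (invented values).
weight_group = [0, 0, 0, 0, 0, 1, 1, 1, 1, 1]
sociability  = [3, 4, 2, 3, 3, 6, 5, 7, 6, 5]

def pearson_r(xs, ys):
    """Pearson correlation: direction (sign) and strength (magnitude)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    var_x = sum((x - mx) ** 2 for x in xs)
    var_y = sum((y - my) ** 2 for y in ys)
    return cov / (var_x * var_y) ** 0.5

r = pearson_r(weight_group, sociability)
print(round(r, 2))  # prints 0.9 for these invented numbers
```

The positive r says that heavier people were rated as more sociable in this made-up sample; it says nothing about why, which is exactly the limitation on causal inference discussed above.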
On the other hand, many of these psychologists are comfortable combining the correlational strategy with experimental techniques, as described next. 2.2.7: Experimental Personality Research and Multifactor Studies We've been describing studies as though they always involve predicting a dependent variable from a single predictor variable (an experimental manipulation or an individual difference). In reality, however, studies often look at several predictors at once using multifactor designs. In a multifactor study, two (or more) variables are varied separately, which means creating all combinations of the various levels of the predictor variables. The study shown in Figure 2.7 has two factors, but more than two can be used. The more factors in a study, of course, the larger the resulting array of combinations, and the harder it is to keep track of things. Figure 2.7 Diagram of a hypothetical two-factor study. Each square represents the combination of the value listed above it and the value listed to the left. In multifactor studies, all combinations of values of the predictor variables are created in this fashion. Sometimes the factors are all experimental manipulations. Sometimes they're all personality variables. Often, though, experimental manipulations are crossed with individual-difference variables. The example shown in Figure 2.7 is such a design. The self-esteem factor is the level of self-esteem the people had when they came to the study. This is a personality dimension (thus correlational). The success--failure factor is an experimental manipulation, which takes place during the session. In this particular experiment, the dependent measure is performance on a second task, which the participants attempt after the success--failure manipulation. These designs allow researchers to examine how different types of people respond to situations. 
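The crossing of factors just described can be sketched in a few lines. The factor names below follow the hypothetical self-esteem study; they are illustrative, not taken from any actual dataset:

```python
from itertools import product

# One measured personality factor crossed with one manipulated factor.
self_esteem = ["low", "high"]          # measured when people arrive
outcome     = ["success", "failure"]   # manipulated during the session

# product() generates every combination, one per cell of the design.
cells = list(product(self_esteem, outcome))

for cell in cells:
    print(cell)
# ('low', 'success')
# ('low', 'failure')
# ('high', 'success')
# ('high', 'failure')
```

Adding a third factor just means another argument to product(), which is why the array of combinations grows so quickly as factors are added.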
They thus offer a glimpse into the underlying dynamics of the individual-difference variable. Because this type of study combines experimental procedures and individual differences, it's often referred to as experimental personality research. 2.2.8: Reading Figures from Multifactor Research Because multifactor designs are more complex than single-factor studies, what they can tell you is also potentially more complex. Indeed, people who do experimental personality research use these designs precisely for this reason. You don't always get a complex result from a multifactor study. Sometimes you find only the same outcomes you would have found if you had studied each predictor separately. When you find that a predictor variable is linked to the outcome in a systematic way, completely separate from the other predictor, the finding is called a main effect. For example, the study outlined in Figure 2.7 might find simply that people of both initial self-esteem levels perform worse after a failure than after a success, but they don't differ in how much worse. Complexity emerges when a study finds what's termed an interaction. Figure 2.8 portrays two interactions, each a possible outcome of the hypothetical study of Figure 2.7. In each case, the vertical dimension portrays the dependent measure: performance on the second task. The two marks on the horizontal line represent the two values of the manipulated variable: initial success versus failure. The color of the line depicts the other predictor variable: the colored line represents people high in self-esteem, and the black line represents those low in self-esteem. Figure 2.8 Two hypothetical outcomes of a two-factor study looking at self-esteem and an initial success-versus-failure experience as predictors of performance on a second task. 
(A) This graph indicates that experiencing a failure causes people low in self-esteem to perform worse later on than if they had experienced a success, but that experiencing a failure does not have any effect at all on people high in self-esteem. (B) This graph indicates that experiencing a failure causes people low in self-esteem to perform worse later on, but that experiencing a failure causes people high in self-esteem to perform better later on. Thus, the failure influences both kinds of people but does so in opposite ways. We emphasize that these graphs show hypothetical outcomes. They are intended only to give you a clearer understanding of what an interaction means. Figure 2.8, A, portrays a finding that people who are low in self-esteem perform worse after an initial failure than after a success. Among people high in self-esteem, however, this doesn't occur. Failure apparently has no effect on them. Thus, the effect of one variable (success vs. failure) differs across the two levels of the other variable (degree of self-esteem). That is the meaning of the term interaction. In the case in Figure 2.8, A, a failure has an effect at one level of the second variable (the low self-esteem group) but has no effect at the other level of the second variable (the high self-esteem group). Two more points about interactions: First, to find an interaction, it's absolutely necessary to study more than one factor at a time. It's impossible to find an interaction unless both variables in it are studied at once. This is one reason researchers often use multifactor designs: They allow the possibility for interactions to emerge. The second point is revealed by comparing Figure 2.8, A, with 2.8, B. This point is that interactions can take many forms. In contrast to the interaction just described, the graph in panel B says that failure has effects on both kinds of people---but opposite effects. 
People low in self-esteem perform worse after failure (as in the first graph), but people high in self-esteem actually perform better after a failure, perhaps because the failure motivates them to try harder. These two graphs aren't the only forms interactions can take. Exactly what an interaction means always depends on its form. Thus, exploring interactions always requires checking to see exactly how each group was influenced by the other variable under study. Summary: Methods in the Study of Personality Research in personality relies on observations of both the self and others. The desire to understand a person as an integrated whole led to case studies: in-depth examinations of specific persons. The desire for generalizability---conclusions that would apply to many rather than to just a few people---led to studies involving examination of many people. Gathering information is only the first step toward examining relationships between and among variables. Relationships among variables are examined in two ways, corresponding to two kinds of relationships. Correlational research determines the degree to which two variables tend to go together in a predictable way when measured at different levels along the dimensions. This technique determines two aspects of the relationship: its direction and its strength. The special relationship of cause and effect cannot be determined by this kind of study, however. A second technique, called the experimental method, is a test for cause and effect. In an experiment, an independent variable is manipulated, other variables are controlled (made constant), and anything that cannot be controlled is treated by random assignment. An effect caused by the manipulation is measured in the dependent variable. Experimental and correlational techniques are often combined in multifactor studies. When the study contains a personality variable and an experimental manipulation, it's termed experimental personality research. 
Multifactor studies permit the emergence of interactions.
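Finally, the main-effect/interaction distinction from Section 2.2.8 can be restated as simple arithmetic. The cell means below are invented to mimic the crossover pattern of Figure 2.8, B:

```python
# Hypothetical mean performance on the second task in each cell
# of the two-factor (self-esteem x success/failure) design.
means = {
    ("low",  "success"): 70, ("low",  "failure"): 50,
    ("high", "success"): 70, ("high", "failure"): 80,
}

# Simple effect of failure (vs. success) at each self-esteem level.
effect_low  = means[("low", "failure")]  - means[("low", "success")]
effect_high = means[("high", "failure")] - means[("high", "success")]

# An interaction exists when the effect of one factor differs across
# levels of the other; here the two simple effects differ even in sign.
print(effect_low, effect_high)  # -20 10
print("interaction" if effect_low != effect_high else "main effects only")
```

If the two simple effects had been equal (say, both -20), there would be only a main effect of failure; the fact that they differ is what the term interaction refers to.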
