PSY1101 Chapter 2: Methods PDF

Summary

This chapter from a psychology textbook describes methods used in psychology to investigate behavior and mental processes. It explains how psychology uses the scientific method, compares different research approaches (descriptive, correlational, experimental), and discusses research validity, ethics, and statistics. The chapter starts by examining the historical evolution of psychological thought, emphasizing the transition from rationalism to experimental methods.

Full Transcript

10/9/24, 1:10 AM OneNote Chapter 2 Wednesday, October 09, 2024 1:09 AM

Chapter 2: Methods

Figure 2.1: Sensors attached to a translucent model skull are used to measure explosive shock velocity. Data recorded by these sensors are used to study traumatic brain injuries.

2.0 Learning Objectives

After completing this chapter, you should be able to:

1. Explain how psychology uses the scientific method to investigate behavior and mental processes.
2. Compare and contrast the advantages and disadvantages of descriptive, correlational, and experimental methods of research.
3. Understand how and why the validity of research may be reduced, whether due to experimenter error (e.g., not using random assignment or including confounding variables) or participant bias (e.g., reactivity or social desirability bias).
4. Describe how research with human participants is conducted ethically through the application of specific ethical principles.
5. Explain what measures of central tendency and variability (“spread”) tell us about data, and how psychologists draw conclusions using inferential statistics.

2.1 Introduction: How Do We Know?

“Psychology has a long past but a short history.” These are the famous words that Hermann Ebbinghaus used to begin his influential textbook on psychology (Ebbinghaus, 1908). But what does this mean? Ebbinghaus’s quote signified a revolution in psychology as a science. Asking questions about the mind was nothing new; philosophers had been asking questions about the mind, thought, and reasoning for literally thousands of years. However, the way they answered questions about how the mind worked relied upon rationalism: the view that reason and logical argument, not experience, are most important for how we acquire knowledge. Aristotle (fourth century BCE) used rationalism to reason that human thoughts, perceptions, and emotions were products of the heart rather than the brain.
Aristotle recognized the heart as a central part of our being, both literally and figuratively. Our heart is positioned in the center of our bodies, is connected by blood to all the other organs of our bodies (specifically our senses), and its beating is affected by our emotional state. Therefore, the heart must be the seat of our senses and emotion (Gross, 1995). The functions of other organs, like the brain, were not to be forgotten, but rather were reasoned to belong to a group of secondary organs (including the lungs) that existed to cool the blood and, in doing so, help maintain a tempered and rational state of mind (see Figure 2.2). Much has changed since Aristotle’s time, but even now we acknowledge this history when we describe people as kindhearted, openhearted, fainthearted, or heartless.

Figure 2.2: Aristotle reasoned that the heart was the origin of all emotion. The brain, not to be forgotten, served to cool the blood.

As a discipline, psychology and our understanding of behavior were wrapped up in rationalism and philosophical reasoning until the middle of the nineteenth century (Hatfield, 2009). Until then, psychologists had few methods, other than logic and reasoning, to substantiate their claims. It was widely believed that it was not even possible to conduct experiments on the mind. But the flaw in rationalism is clear: what we “think” is true about behavior is often different from how we actually behave. We can even demonstrate this to you right now.

Consider a simple question about how you would react to the following situation: Would you notice if a person you are talking to is replaced by another person? Most of us would like to think we would.
However, 53% of participants in one study and 67% of participants in a second study failed to notice that the person changed midway through their conversation (Simons & Levin, 1998). We don’t realize just how little attention we pay to the identity of a person. Take a look at the video below on the “door” study to see “obvious” changes to which many people are completely oblivious in the moment.

The differences between how we think we might act and how we actually behave highlight the limitations of rationalism in explaining behavior. Our reasoning about behavior can be contradicted when put to the test. When Ebbinghaus referred to the “short history” of psychology, he meant that it had moved beyond rationalism to using experimental methods (or simply experiments) to collect data, test theories, and allow experience and observation to be the primary sources of knowledge. Using experimental methods, researchers gather facts and observations of phenomena to form scientific theories: rational explanations to describe and predict future behavior.

Mini-lecture by Dr. Kousaie

2.2 Psychology as a Science: The Scientific Method

Psychology uses experience-driven approaches to understand behavior. The scientific method is a common approach in which researchers methodically answer questions. The steps of the scientific method are as follows (see Figure 2.3):

1. Identify the problem
2. Gather information
3. Generate a hypothesis
4. Design and conduct experiments
5. Analyze data and formulate conclusions
6. Restart the process

Figure 2.3: Steps of the scientific method.

Let’s use an example of a teaching problem to see how the scientific method can be applied.

Identify the problem: The first step in the process is to identify the problem of interest, which may be based on observation, previous research, established theory, or intuition.
Consider a professor who has a question: How do I get students to come to class prepared, having already completed their pre-class activities? The professor thinks students will do more preparation for a class if she gives them the right motivation for doing so. Can she find support for her ideas? To do so, she would have to do an experiment.

Gather information: Once the topic of interest is identified, it is important to review the scientific literature and examine existing theories of behavior. After doing a database search of scientific journal articles, the professor finds a number of related articles. Research indicates that most students (about 70–80%) do not prepare before class for a variety of reasons, including that many students do not see the link between doing the pre-class readings, their learning, and the effect it has on their course grades (Heiner et al., 2014). The implication is that if reading is made a part of student assessments, students will be more likely to complete the assigned readings.

Generate a hypothesis: After evaluating available information about the area of investigation, researchers develop a hypothesis, or an educated prediction, about the outcome of the experiment. Based on her research, the professor hypothesizes that students who have completed a graded assessment will be more likely to prepare for class than students who did not complete a graded assessment.

Design and conduct an experiment: The next step is to develop an experiment to test the hypothesis and collect data.
The professor has to compare how assigning grades affects the likelihood that the reading will be completed, so she needs to know how much reading students typically complete and how adding an assessment changes the chances that students will complete more of the readings. To test her hypothesis, she gives one section of a class a pre-class quiz that earns course credit; a second section completes the same quiz before class without receiving course credit. The key is that there are two different groups under study, and the only difference between the groups is that one section gets credit and the other does not. To measure how likely students are to complete their reading, the professor could assess how well each group performed on the quiz, giving her an indirect measure that should be correlated with studying.

Analyze the data and draw conclusions: This step involves determining whether the findings support the experimenter’s predictions. The professor hypothesized that giving an incentive (course credit) would make students more likely to do the pre-class readings and come to class better prepared than students who are not given incentives. Once the experiment is complete, the professor needs to determine whether the data support that hypothesis. Are those students given course credit more likely to report having completed the assigned reading? Do these same students also perform better on quizzes? These pieces of evidence help to form opinions and provide some insight into the problem the professor started with. If the data analysis indicates that students who completed a graded assessment are more prepared for class, the professor could conclude that her hypothesis was supported by the evidence collected. However, she must be clear that this does not mean she has definitively “proven” her hypothesis to be “true” in any absolute sense. Rather, she has drawn conclusions based on the data available.
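One way the professor's analysis step could look in practice is sketched below. This is not from the textbook: the quiz scores are invented for illustration, and Welch's t statistic is one standard inferential statistic of the kind previewed in learning objective 5.

```python
import math
import statistics as st

# Hypothetical quiz scores (out of 10) for the two course sections.
# These numbers are invented for illustration only.
credit    = [8, 9, 7, 9, 8, 10, 7, 9, 8, 9]   # quiz completed for course credit
no_credit = [6, 7, 5, 8, 6, 7, 6, 8, 5, 7]    # same quiz, no course credit

m1, m2 = st.mean(credit), st.mean(no_credit)
s1, s2 = st.stdev(credit), st.stdev(no_credit)
n1, n2 = len(credit), len(no_credit)

# Welch's t statistic: how large is the difference between group means
# relative to the variability within each group?
t = (m1 - m2) / math.sqrt(s1 ** 2 / n1 + s2 ** 2 / n2)

print(f"credit mean = {m1:.1f}, no-credit mean = {m2:.1f}, t = {t:.2f}")
```

A large t (well above 2 for samples of this size) is conventionally taken as evidence that the difference between sections is unlikely to be due to chance alone, although, exactly as the passage cautions, the hypothesis is never "proven" in any absolute sense.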
The professor must always leave open the possibility that new evidence may come along to refute her hypothesis.

Restart the process: The process starts over again at the point where the researcher reconsiders the original question/problem and may choose to either replicate (or redo) the same experiment, conduct a similar experiment with some modifications (replication with extension), or move on to an entirely new research topic. The professor could continue to investigate how best to motivate students to come to class prepared by replicating the results in another class or a different context. She could also extend the research to other ways she can motivate student behavior, which can become part of a programmatic research study, which is simply a continued area of inquiry.

Mini-lecture by Dr. Kousaie

2.3 Descriptive Methods

Descriptive methods are any means to capture, report, record, or otherwise describe a group. Descriptive research is usually interested in identifying “what is” without necessarily understanding “why it is.” There are four popular methods used to describe groups: naturalistic observation, participant observation, case studies, and surveys.

2.3.1 Naturalistic Observation

Observational research (or field research) is a type of non-experimental research of behavior. Naturalistic observation is best described as observation of behavior as it happens in a natural environment, without an attempt to manipulate or control the conditions of the observation.
The difference is similar to that between observing the behavior of animals in a zoo and observing animals in their natural habitat. The lack of manipulation is a key distinction from other approaches in natural settings, like field experiments, in which a researcher manipulates and controls the conditions of the behavior under observation. Observations can be captured either qualitatively (by collecting opinions, notes, or general observations of behavior) or quantitatively (any attempt to measure or count specific behaviors). The benefit of naturalistic observation is that it can often help us generate new ideas about an observed phenomenon.

In the video below demonstrating a naturalistic observation, an actor drops his wallet or money on the ground around one or more people. What behaviors will you look for? Make sure the behavior can be counted (frowning, attempting to say something to the person who dropped the wallet, etc.). How do you expect people to react? Can you think of a situation in which people might be more or less likely to return the money? For instance, do you expect this behavior might be different if the wallet/money is dropped in front of more than one person? Can you create a hypothesis to predict when people might be more likely to keep the money rather than return it?

Naturalistic observation allows us to better understand behavior exactly as it happens in the real world. This kind of description of behavior is said to be ecologically valid because the observations are a product of genuine reactions. When observing others, it is important to stay as unobtrusive as possible so people don’t realize they are being watched. In many cases, people and animals alike reactively change their behavior once they become aware they are being observed. This reactivity is known as the Hawthorne effect (Chiesa & Hobbs, 2008).
In the video below, Damon Brown talks in more detail about the studies that led to the discovery of the Hawthorne effect.

In some instances, naturalistic observation might also be the only way to observe behavior, as in the case of natural disasters or any other condition that would be deemed unethical to create in a controlled setting (e.g., inciting a riot).

2.3.1.1 Disadvantages of Naturalistic Observation

While naturalistic observation is a powerful approach to collecting insight into behavior, it has a few disadvantages. When conducting naturalistic observation, researchers lack control over the environment and the many different factors that can affect behavior. Therefore, we may not always be sure of what is influencing behavior. With regard to the observation of a wallet being dropped, we are not aware of all the circumstances that may lead a person to keep the money instead of returning it. For example, can you be absolutely sure that the person who picked up the money actually saw who dropped it in the first place? This lack of control over the environment may weaken the conclusions we can draw. It may also make it difficult for another researcher to repeat the exact same experiment.

Researchers’ perspectives and biases may also influence which behaviors they find relevant and how they interpret them. In fact, two observers might take away different observations from the same event (see Figure 2.4). As such, it is important to train researchers how to count observations, and to compare their results with those of other raters to see if they are making similar observations. It is important for researchers to share results to ensure the validity of the data they collect and to ensure interrater reliability.
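Interrater reliability can be quantified. The sketch below is not from the chapter; the wallet-study codes are invented for illustration. Two observers code the same ten wallet-drop trials, and we compute their percent agreement along with Cohen's kappa, a standard statistic that corrects agreement for the amount expected by chance.

```python
from collections import Counter

def percent_agreement(r1, r2):
    """Proportion of trials on which two raters gave the same code."""
    assert len(r1) == len(r2)
    return sum(a == b for a, b in zip(r1, r2)) / len(r1)

def cohens_kappa(r1, r2):
    """Cohen's kappa: observed agreement corrected for chance agreement."""
    n = len(r1)
    p_obs = percent_agreement(r1, r2)
    c1, c2 = Counter(r1), Counter(r2)
    # Chance agreement expected from each rater's marginal code frequencies
    p_exp = sum((c1[k] / n) * (c2[k] / n) for k in set(r1) | set(r2))
    return (p_obs - p_exp) / (1 - p_exp)

# Two observers coding the same 10 wallet-drop trials (invented data):
# "R" = bystander returned the wallet, "K" = kept it, "I" = ignored it
rater1 = ["R", "R", "K", "I", "R", "K", "R", "I", "R", "K"]
rater2 = ["R", "R", "K", "R", "R", "K", "R", "I", "R", "I"]

print(round(percent_agreement(rater1, rater2), 2))  # 0.8
print(round(cohens_kappa(rater1, rater2), 2))       # 0.67
```

Here the raters agree on 80% of trials, but kappa is lower (about 0.67) because some of that agreement would occur by chance alone; this is why trained raters compare results rather than relying on raw agreement.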
Figure 2.4: Two people, looking at the same thing, may observe the situation from different perspectives.

When conducting naturalistic observation, the researcher should be as unobtrusive as possible to avoid influencing the findings. However, in some situations, the only way to gain access to an environment or group is by participation.

2.3.2 Participant Observation

Participant observation is a research method in which a researcher becomes part of the group under investigation. Sometimes this is the only way to gain access to a group. As you will see in Chapter 13: Social Psychology, to understand doomsday cults, scientists had to pose as new cult members. Historically, this form of research has been associated with researchers temporarily living in small communities. Being part of the group can provide a more enriching experience and afford greater access to the daily life and activities of group members. However, there are some limitations to this method. Having the observer immersed in the experience can increase reactivity, as their mere presence may inherently change behavior (remember the Hawthorne effect?). As observers spend time and interact with group members, they may become biased and “see” only those things that fit the initial hypothesis.

At times, though, participant observation does offer unique clues about a group and its culture. In the late 1960s and early 1970s, David Rosenhan, a professor at Stanford University, was highly skeptical of the diagnostic abilities of clinicians and questioned the accuracy of diagnostic techniques. At the heart of his doubt was whether clinicians could reliably distinguish the sane from the insane; this led to a three-year investigation unlike any other (Rosenhan, 1973). Eight healthy researchers (five men and three women, including Rosenhan himself) tested the notion that psychiatrists were incapable of making accurate distinctions between sanity and insanity.
The participants tested 12 hospitals in five different states, representing a wide range of psychiatric care facilities. At the intake interview, all participants (using false identities) reported hearing voices that said words such as “empty,” “hollow,” and “thud.” The words themselves have little meaning or intention associated with them and are not inherently dangerous or commonly spoken by those with schizophrenia. But those three simple words were all it took to admit all of the researchers to the psychiatric care facilities. Once inside, the participants (or pseudopatients) behaved “normally” and made no further indication of hearing voices in their heads. The participants became part of the psychiatric ward, taking notes about their experiences and perceptions of clinician attitudes from the perspective of a patient.

Eleven of the 12 admissions resulted in a diagnosis of schizophrenia, while the remaining pseudopatient was diagnosed with manic-depressive psychosis. While clinicians could not see that these pseudopatients were sane, patients in the hospital routinely suspected that the researchers were “faking it.” Other patients described the note-taking as a sign of “checking up on the hospital,” whereas psychologists, nurses, and other staff members saw the note-taking as an aspect of the pseudopatients’ supposed illness. Rosenhan suggested that physicians operate with a strong bias toward what statisticians call a “false positive”: in this case, the inclination to call a healthy person sick. Participant observation was used to demonstrate that clinicians, at the time, could not reliably tell the difference between people who are sane and those who are insane. Fortunately, our means of clinical assessment have developed considerably since this study, establishing more checks and balances to minimize patient misdiagnosis. The following video describes the Rosenhan study.
As you can see, participant observation can be valuable because the researcher is privy to new perspectives and insights that would not be obtainable through naturalistic observation. However, there are some important drawbacks that lead critics of participant observation to question the method’s validity. Clearly, a researcher’s views and biases can affect the interpretation of events. In some cases, a researcher can become so involved with and sympathetic to the group that it interferes with research objectivity. Furthermore, because the observer is a participant in the ongoing activities, the researcher can, knowingly or not, influence participants’ behavior, thereby creating the problem of reactivity and affecting the behavior being observed. Another potential disadvantage of participant observation is a low degree of reliability (the consistency or repeatability of research findings). The observations made are highly dependent on the unique conditions of participation, and what may be true for one person’s experience may not be readily shared by others.

2.3.3 Case Studies

A case study is an in-depth analysis of a unique circumstance or individual. For instance, what happens when a child is raised without human contact? How does an accident that destroys part of the brain affect our personality? Fortunately, these events rarely occur. But when they do happen, they present an incredible learning opportunity. Case studies were made popular in medicine, whereby clinicians observe an unusual patient and investigate the patient's condition in more detail to provide a broader understanding of a phenomenon.
In clinical neuroscience, Henry Molaison, usually referred to as “H.M.” in the scientific literature, is an excellent example of how a case study can be used to gain insight into behavior (Squire, 2009). As a young boy, Henry started to experience mild seizures after falling off his bike and hitting his head (bicycle helmets were rarely used in the 1930s). While manageable at first, his seizures became progressively worse as he aged and could not be treated by conventional means. By the time Henry reached his late 20s, he could no longer live a normal life because of the frequency and severity of his seizures (Corkin, 1984). On the advice of his neurosurgeon, his last resort was bilateral ablation (surgical damage) of his ventral medial temporal lobes (which include the hippocampus and the entorhinal cortex), as this brain tissue was believed to be the point of origin of Henry’s seizures (see Figure 2.5).

Figure 2.5: The brain that changed everything: After surgical removal of both hippocampi to treat uncontrollable epileptic seizures, Henry Molaison was unable to form new memories.

The surgery did effectively treat Henry’s seizures; however, there was an unforeseen, disturbing side effect: he could no longer form new memories. In the decades that followed, H.M. would become the most studied person in the history of psychology (Squire, 2009). While the details of his case are unique, it gave researchers an opportunity to explore the role of the hippocampus in the formation of memory (Penfield & Milner, 1958), which ultimately led to the identification of different types of memories, like episodic, semantic, and procedural memories (Cohen & Squire, 1980; see Chapter 8: Memory for more details). The challenge of case studies is to generalize findings from a unique case.
Because a case study is focused on only one person, group, or event, we can never be sure that the conclusions drawn from this particular case can be broadly generalized to other cases. For example, one person’s experience, otherwise known as an anecdote, likely cannot be easily or fairly applied to a broader population of people. Even in Henry’s case, an autopsy revealed damage to his frontal lobes that may also have contributed to his poor memory (The Brain Observatory, n.d.).

In summary, naturalistic observation, participant observation, and case studies allow researchers to study small groups (or even individuals) to produce rich descriptive data about behavior. Although a powerful perspective, this approach is unlikely to capture a representative picture of an entire population of people. As such, researchers often turn to surveys to describe larger patterns of behavior.

2.3.4 Surveys

Surveys are an efficient way to quickly collect information and gather an understanding of the current state of people’s opinions or attitudes. Do you want to predict the outcome of an election? Are consumers planning to spend more this Christmas? Are educational leaders focused on providing degree experiences that lead to good jobs? Finding the answer is simple: go out and ask people. Surveys offer a quick way of collecting lots of information about the current state of people’s opinions, perspectives, and experiences, and they can be administered in a variety of ways, including online surveys, mailed questionnaires, person-to-person interviews, and phone interviews. For example, end-of-term course evaluations are a popular way to capture your perceptions of a course. What makes for a good course? Did you learn a lot? Was your instructor effective? Were you fairly assessed?
All of these questions create a picture of your experiences within the course. It may be impossible, or simply too time consuming, to survey every single member of a group (called a population), but surveys can be administered to a smaller subset of the population, called a sample. It is vital that the sample selected be representative of the broader population you wish to study. For example, if course evaluations are administered only to the top students in a class, we might expect those students to provide more favorable reviews (on average) than if we also polled students who were struggling with the course. Sampling error, or sampling bias, occurs when the selected sample differs from the entire population in meaningful ways. This might arise, for example, from a sample of only females, if we have reason to believe that female-identified students outperform male-identified students in the subject. When sampling errors occur, the results and conclusions of the experiment cannot be applied back to the entire population. In this case, the instructor may not be as effective as the survey reports.

Surveys are also prone to potential disadvantages that must be carefully considered. The questions asked in surveys must be carefully worded to avoid biasing the outcome in either a positive or negative way (Borgers & Sikke, 2004). These are known as wording effects. For instance, how might you respond to the following two questions?

[Interactive item: two versions of the same question, one phrased with the word “allow” and one with the word “forbid.”]

Notice that in the statements above, only one word is different. But incorporating the word forbid or allow can have a powerful influence on our opinion. In response to the first question (allow), the majority of respondents tend to say “Yes,” but when presented with the second phrasing (forbid), even more students respond “No.” Both responses are consistent with the overall theme of the question, but people react more strongly to the word forbid, which increases the tendency to respond one way (Adams, 1956).
Consider another classic example of how judgment/opinion depends on the specifics of the question. In 1986, a British Gallup poll asked Britons whether their country’s nuclear weapons made them feel safe (Lelyveld, 1986). In this case, 40% of those surveyed agreed. Another poll modified the question to use the word safer, and 50% of respondents agreed. In context, that slight change in wording (one letter, to be exact) shifted the responses of about five and a half million people. These results indicate how important the smallest detail of wording can be to a question.

Surveys should also account for response bias from the participants themselves: the tendency for people to answer questions the way they feel they are expected to answer, or in systematic ways that are otherwise inaccurate. In the simplest instance, the validity of surveys can be influenced by the acquiescent response bias (otherwise called “yea-saying”). Acquiescence refers to a tendency for participants to indiscriminately “agree” with most if not all items on a survey, regardless of their actual opinion (Krosnick, 1991). The social desirability bias is another systematic approach to answering questions (van de Mortel, 2008). In this case, the bias is not indiscriminate; rather, participants respond in specific ways that they believe will be seen as acceptable by others. For example, many people would be hesitant to admit to illegal or immoral behavior, especially if the survey results are not kept confidential. Finally, it seems that we all have biased perceptions of our own behavior. For instance, consider the following:

Figure 2.6: Are you a better-than-average driver? Most of us think we are.

Did you rate yourself better than average? Probably at around 70%, right?
If this describes you, you wouldn’t be alone; in fact, 50% of people think that they are better than 70% of the driving population. This, of course, can’t be true, as only 30% of people can be better than 70% of the population (Roy & Liersch, 2013). The tendency to rate our own behavior as better than average is called the better-than-average effect, or illusory superiority (Hoorens, 1993). The effect is not limited to our driving abilities but generalizes across a range of personal attributes. In a classic 1977 study, 94% of professors rated themselves as above average relative to their peers (Cross, 1977). As famously noted by the author, Patricia Cross (1977), “When more than 90 percent of faculty members rate themselves as above-average teachers, and two-thirds rate themselves among the top quarter, the outlook for improvement in teaching seems less than promising” (p. 1). Knowing all this, do you think you are susceptible to response bias?

(Answer: The answer to this can be funny. Despite being exposed to the response bias effect, people will, on average, continue to identify their own behavior as better than average. This again affirms the response bias effect.)

Generally speaking, the response or return rate of surveys can vary dramatically depending on the size of the survey and the motivation of the participants. On average, most researchers receive responses from 30–50% of the people surveyed (Baruch & Holtom, 2008). We must also consider that some participants who complete surveys may not carefully consider their responses or provide accurate information, as shown in the cartoon below.

Figure 2.7: Some participants may not answer surveys accurately.

However, even with these deficiencies, surveys can be tremendously powerful. In a classic example, Alfred Kinsey (1894–1956) revolutionized our understanding of people’s sexual attitudes and behaviors by collecting surveys from more than 18,000 people.
Kinsey compiled the surveys into two publications known as the Kinsey reports: Sexual Behavior in the Human Male (1948) and Sexual Behavior in the Human Female (1953). These publications provided a comprehensive and unprecedented insight into people's sexual attitudes, preferences, and orientation, and highlighted the differences between social attitudes toward sexuality and actual sexual practice. Both books quickly became best sellers, and Kinsey’s contributions helped spark a massive cultural and social upheaval: the sexual revolution of the 1960s.

However, statisticians who evaluated Kinsey’s methods felt that his findings may have been subject to a key survey-related bias: who was willing to participate in this survey, and were those who did representative of the rest of the population? Amid the conservative sexual attitudes of the 1950s, many Americans were reluctant to discuss their sex lives publicly. The assumption was that the people who do volunteer to be interviewed about taboo subjects like sex may not be representative of the rest of the population. This is known as volunteer bias; that is, the small few who were willing and ready to talk about their sex lives were likely overrepresented in the survey (Strassberg & Lowe, 1995). Aspects of the survey also required adult respondents to think back over several years, even decades, about early sexual experiences. However, despite the cautions presented here, the power of surveys to inform and shape our perspectives is clear.

Mini-lecture by Dr. Kousaie
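The sampling and volunteer-bias concerns running through this section can be illustrated with a short simulation (a sketch under invented assumptions, not part of the textbook). Drawing respondents at random tracks the population's true average opinion, while surveying only the most satisfied respondents, as volunteer bias tends to do, badly overestimates it.

```python
import random

random.seed(42)

# Hypothetical population: 10,000 students' course-satisfaction scores (0-100),
# invented for illustration.
population = [random.gauss(65, 15) for _ in range(10_000)]

def mean(xs):
    return sum(xs) / len(xs)

# Representative sample: every student is equally likely to be chosen.
random_sample = random.sample(population, 200)

# Biased sample: only the most satisfied 200 students respond,
# mimicking a survey answered mainly by enthusiastic volunteers.
biased_sample = sorted(population)[-200:]

print(f"population mean:    {mean(population):.1f}")
print(f"random-sample mean: {mean(random_sample):.1f}")  # close to the population mean
print(f"biased-sample mean: {mean(biased_sample):.1f}")  # far too high
```

The biased estimate is wrong not because the sample is small but because of how it was selected, which is why results from an unrepresentative sample cannot be applied back to the population.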
2.4 Research Ethics for Human Participants

Now that we have established how psychology research is designed and carried out, we must consider the set of principles that psychologists are expected to follow when they carry out a research study. These standards of behavior are called research ethics. When most people think about the meaning of ethics, they think about how one person should treat another, much like the golden rule (“Treat others in the same way you would want to be treated”). In research, ethics is a set of general principles that outline how people should be educated, treated, and respected when participating in any study. Unfortunately, there are historical examples (such as the Tuskegee syphilis study discussed below) in which participants were not treated ethically; these failures are remembered as a driving force in the creation of formal ethical processes in research.

2.4.1 The Tuskegee Syphilis Study

Watch the following video on the Tuskegee syphilis study (1932–1972). The mistreatment of participants in this study was a critical reflection point in developing a consensus of guidelines for the treatment of all research participants in the United States.

The Tuskegee syphilis study was intended to follow the natural progression of syphilis, a contagious disease spread primarily through sexual contact. Over 600 African-American men, including 400 known to have already acquired syphilis, were recruited to participate in the study with the promise of free meals, medical treatment for “bad blood” (a generic term for a variety of ailments), and burial insurance (Reverby & Foster, 2010).
Unfortunately, the researchers’ only goal was to follow the time course of the disease; they had no intention of treating participants for their “bad blood.” Over the 40-year span of the study, researchers misled participants about the actual purpose of the study and denied them medical treatment, despite numerous medical advances in the treatment of syphilis during this time. This negligence ultimately led to the preventable deaths of hundreds of participants and needlessly contributed to the spread of syphilis. In 1972, the New York Times released a story about the Tuskegee syphilis study, and the public reacted in shock. Shortly thereafter, the government moved to establish federal ethical principles and guidelines outlining how all researchers should conduct research studies (Heller, 1972).

2.4.2 General Ethical Principles of Psychologists

The American Psychological Association (APA) has developed a series of five ethical principles to help psychologists develop their research practice, which are discussed in detail below (American Psychological Association, 2002).

2.4.2.1 Principle A: Beneficence and Non-maleficence

This principle states that research should strive to do good (beneficence) and avoid creating experiments that can intentionally harm participants (maleficence). Psychologists must carefully weigh the benefits of the research against the costs that participants may experience and put in place safeguards to protect the mental and physical well-being of research participants.

2.4.2.2 Principle B: Fidelity and Responsibility

When people agree to participate in experiments, they are entrusting themselves to the researcher. In turn, the principle of fidelity and responsibility requires researchers to maintain that trust. The word fidelity is often associated with the meaning “loyal” or “faithful.” In the context of research, this means that researchers should be honest and reliable in their dealings with participants, in their handling of data, and in the reporting of their findings.
For example, if a study is known to include potential risks of participation, like making participants feel embarrassed or upset, the psychologist should let people know ahead of time so they are prepared for what to expect and can make an informed decision about whether to participate. Psychologists also have a responsibility to hold themselves and their colleagues to high standards of conduct, and to protect the well-being of participants by intervening if they see any situation that may harm participants.

2.4.2.3 Principle C: Integrity

The principle of integrity states that psychologists should engage in accurate, honest, and unbiased practices in the science, teaching, and practice of psychology. For example, psychologists should always strive to communicate results to colleagues and the public accurately, without making up data (fabrication) or manipulating research data (falsification).

2.4.2.4 Principle D: Justice

The concept of justice strives to establish equality in the research process. Specifically, the people who participate in the research process should also be the people who stand to benefit from the research outcomes. Justice is explicitly stated as a principle because researchers have historically included or excluded certain populations from research for reasons unrelated to the science. For example, women and children have historically been treated as vulnerable populations and unreasonably excluded from participation in clinical research. As a result, important research on the effects of medical treatments has often been collected from male-only populations and then generalized to women and children (Dresser, 1992). This is problematic because there are fundamental differences between the sexes, and between people of different ages, that may affect the efficacy and safety of treatments. For reasons like this, researchers should not include or exclude any group from participation for reasons that are unrelated to the study.
There are sometimes practical reasons to limit participation in a research activity. For instance, a study on child development might only include children within a small range of ages because it captures how children perceive, react, or behave at that particular time. In this case, age is an inclusion criterion: a participant attribute that is essential to answer the research question. Conversely, exclusion criteria are attributes that disqualify a person from participating because they would prevent the study from addressing the research question. For example, adults would not be included as participants in a child development study because they are not part of the age range being studied. The combination of inclusion and exclusion criteria forms a study's eligibility criteria, a set of characteristics shared by all participants that ensures that those participating will meaningfully help to address the research question.

2.4.2.5 Principle E: Respect for People’s Rights and Dignity

This principle states that each person is valued in the research process and that researchers should take measures to respect and protect participants' rights, privacy, and welfare. In the practice of research, this means that researchers should communicate openly and honestly about the details of the study before asking for participants' consent to participate in the research process. This also includes a requirement to respect the privacy and confidentiality of all participants. It is important to ensure that data are kept private and even made anonymous so that identifying information cannot be traced back to an individual.
Respect for people's dignity also includes understanding the vulnerabilities of participant populations (e.g., socioeconomic status, religion, race, disability) and taking measures to ensure that participants are not coerced into participating in an experiment that they otherwise might not feel comfortable doing. For example, compensating research participants with money or course credit is a common practice in psychology research, but the amount of compensation should be reasonable and not so excessive that it would motivate people to participate in activities they would not otherwise feel comfortable with (Grant & Sugarman, 2004).

2.4.3 The Practice of Ethical Research

In the practice of research, federally funded institutions are required to have safeguards in place to ensure the general ethical principles are being upheld. Before any study can begin, research projects conducted in the United States must be reviewed by a research ethics board, called an Institutional Review Board, or IRB. The IRB is a committee of independent people who review and assess whether the research project will be carried out in a way that is consistent with the general ethical principles (Ethical Principles of Psychologists and Code of Conduct, 2017). This includes ensuring the following:

1. The proposed study will use sound research design.
2. Risks associated with participation in the study are minimized and reasonable.
3. The benefits of the research outweigh any potential risks.
4. All participants can make an informed decision to participate in the study, and that decision may be withdrawn at any time without consequence to the participant.
5. Safeguards are in place to protect the well-being of participants.
6. All data collected will be kept private and confidential.

Once a study receives IRB approval, researchers can begin to recruit participants for the study. It is not just a simple matter of people saying “yes,” though.
Rather, potential participants should have a complete understanding of what they are agreeing to. Researchers must obtain informed consent from all participants. Informed consent is the process whereby researchers work with participants to describe essential details of the study. These details include the experimental procedures, the risks and benefits associated with participation in the study, how personal information will be protected, and the rights of participants. For example, if a participant is about to take part in a social stress experiment that might make them feel uncomfortable around others, they should be told in advance, not only so that they do not feel “tricked” or deceived by the researcher, but more importantly because the individual may otherwise experience negative and unexpected consequences. For instance, a participant who has a history of anxiety problems might not want to take part in a study designed to produce stress (Seedat et al., 2004). Participants can only make an informed decision to consent to their participation once this process has taken place.

2.4.3.1 The Facebook Emotional Contagion Experiment: A Question of Informed Consent

Figure 2.8: Hundreds of thousands of Facebook users have participated in psychology research without informed consent.

Research in controlled laboratory settings has examined how emotional states, like joy or sadness, can be affected and transferred from one person to another through the process of “emotional contagion” (kind of like a behavioral version of “catching a cold” from a close friend). We have a pretty good understanding of this process in a laboratory setting, but researchers wanted to know if this would extend through virtual social networks, like Facebook.
For one week in January 2012, researchers changed (or manipulated) the amount of positive or negative information in 689,003 users' news feeds (Kramer et al., 2014). Some people saw more posts with positive emotional words, while others saw more negative emotional content on their news feeds. People who saw more positive content in their news feeds went on to create slightly more positive posts of their own, whereas those who were exposed to more negative posts created more negative posts. So, just like in the lab, the contagiousness of emotions extends to social media.

The study was not received well by the public, however, mainly because of the lack of informed consent. In essence, 689,003 users unknowingly participated in a study (Flick, 2015). Participants were not informed of the study or given the “choice” to participate. Editors of the journal that published the study argued that data collection via Facebook did not require the same level of consent as research conducted in federally funded institutions (“Editorial Expression of Concern: Experimental Evidence of Massive-Scale Emotional Contagion through Social Networks”, 2014). The study continued to be a source of ethical debate for years after publication and highlights a case in which research was conducted in an ethical gray area.

2.4.4 Special Ethical Considerations

2.4.4.1 Vulnerable populations

When considering the rights of participants in psychology research, it is important to highlight some ethical situations in which potential research participants may not be able to provide free and informed consent. These vulnerable populations include any individual or group of individuals meeting either of the following two criteria:

1. Decisional impairment: This refers to any instance when a potential participant has a diminished capacity to provide informed consent.
Typical examples include children and people with cognitive impairments, who may not be able to understand their rights as participants or the risks associated with their participation.

2. Situational vulnerability: This refers to instances when the freedom of “choice” to participate in research is compromised as a result of undue influence from another source. Common examples include military personnel and prisoners, who may feel coerced or obligated to participate in research out of fear of being punished if they do not. Other examples include people in economically disadvantaged situations who may be inclined to participate with the expectation of benefits, like money or medical care, that would not be provided if they do not agree to participate.

It might seem simple to say that these populations should simply not participate in research studies. However, the principle of justice suggests that no person be denied the possible benefits associated with participating in a research study. For example, research on the health and well-being of military personnel can be an insightful tool to provide appropriate mental health programming and improve the conditions in which these men and women serve (e.g., Hoge et al., 2006). In these (and similar) instances, the researcher must construct experiments with additional safeguards in place to ensure the protection of vulnerable populations during the research process. Specifically, researchers should consider the following:

1. No study should ever be conducted on vulnerable populations if the research question could reasonably be addressed using participants without these vulnerabilities.
2. When research is carried out with vulnerable populations, researchers should be responsive to the needs, conditions, and priorities of these individuals.
3. IRB committees should include members with expertise on these populations.
In instances of decisional impairment, two types of consent must be acquired: parents or guardians must provide informed consent on behalf of the participant, and the participant must provide assent (affirmative permission to take part in the study). In this case, both parties must agree before participation can begin. In cases of situational vulnerability, additional safeguards should be put in place to prevent exploitation. For instance, a study may include an impartial third party to advocate on behalf of individuals who might not otherwise feel comfortable doing so.

2.4.4.2 Deception

Researchers may feel that informing participants of the real intent of their research may change the way participants react during the experiment, thereby affecting the outcome of the study. For example, imagine a researcher is interested in understanding how the presence of others can influence your behavior and make you do things that you would not normally do (e.g., Latané & Rodin, 1969). If the experimenter brought this to your attention before the experiment, this information would likely influence the decisions you make in key moments during the experiment. In this case, the researcher would be unable to capture the true nature of your behavior. As a result, some research experiments may seek IRB approval to engage in participant deception, the act of withholding information about the purpose and procedures of the study during the informed consent process. To approve the use of deception, an IRB must be satisfied that four criteria are met:

1. The research poses no more than a minimal risk to participants. This means that the research is unlikely to cause emotional or physical discomfort to participants.

2.
The deception does not affect the well-being and the rights of the participants throughout the study.

3. Researchers must provide justification that using deception is the only way to conduct the study. There should be no other reasonable alternative approach to addressing the research questions.

4. After the participant’s role in the study is finished, participants should be debriefed by researchers and provided with information about what the researcher was investigating and how their participation will contribute to the research question.

On the rare occasions when deception is used in a study, participants must be told about the deception and given reasons why it was necessary to answer the research question. Participants should also be allowed to ask questions and seek clarification about any part of the study. The goal of this process is not only to provide information to participants, but also to help them leave the study in a mental state similar to the one in which they entered it.

2.4.4.3 Milgram’s Obedience Experiment: An Example of Research Deception

Stanley Milgram, a psychologist at Yale University, was fascinated by the post–World War II Nuremberg war criminal trials. Many defendants justified killing countless people by claiming that they were merely following the commands of their superiors (Barajas, 2016). That is, the soldiers were simply doing what they were told to do. For a moment, consider how you might react to such an order. Most people would intuitively answer with a full-stop “NO.” Milgram, however, wanted to know if there could be any truth to this “obedience” defense. In 1961, Milgram created an experiment to test how far people would go to obey an instruction from an authority figure (watch the video below to see how the experiment was done). Before the experiment, experts estimated that fewer than 1% of the population would willingly participate in the death of others.
Milgram recruited 40 male participants through a local newspaper advertisement to participate in a study of memory (Milgram, 1963). During the experiment, participants were led to believe that they would help to teach a list of word pairings to another research participant, who was actually a confederate (another researcher acting like a participant). The learner was instructed to study a list of word pairs, followed by a memory test in which the “teacher” (the real participant) would name a word and ask the learner to recall its match from a list of four choices. The teacher-participant was instructed to administer an electric shock for every mistake, increasing the magnitude of the shock each time (i.e., the shock would get more painful each time). To do so, Milgram created a mock shock generator with 30 switches, marked in 15-volt increments from 15 volts all the way up to 450 volts. The switches were also labeled with terms like “slight shock,” “moderate shock,” and “danger: severe shock” to imply something about how painful each shock might be. During the experiment, the confederate, who was hooked up to the mock shock generator, gave several wrong answers, and for each one the teacher-participant believed that they were delivering a stronger and stronger electric shock (in fact, no shocks were ever applied). The confederate began to complain of the pain associated with the shocks and his desire to end the experiment, at one point even yelling that he had a heart condition. At the 300-volt shock, the confederate would bang on the walls and demand to be released. At some point during these shocks, many teacher-participants began to object to the experimenter.
In response, the experimenter would deliver the following commands to prompt the participant into continuing to deliver shocks:

Prompt 1: “Please continue.”
Prompt 2: “The experiment requires you to continue.”
Prompt 3: “It is absolutely necessary that you continue.”
Prompt 4: “You have no other choice but to continue.”

Beyond 300 volts (there were still 10 more levels of shocks), the confederate stopped responding altogether (implying that he was dead). Milgram's study surprisingly revealed that 100% of participants delivered shocks up to 300 volts, and 65% continued to deliver shocks up to the maximum 450 volts. Milgram noted that most of the participants strongly objected to delivering the shocks, to the point of showing significant distress, but continued to follow orders all the way to the end. Milgram's research gave support to the “obedience” defense by demonstrating that most people are likely to follow orders from an authority figure, even if it means killing another person.

The social psychology of this century reveals a major lesson: often it is not so much the kind of person a man is as the kind of situation in which he finds himself that determines how he will act. (Milgram, 1974, p. 205)

In this experiment, deception was both a powerful tool and an unsettling example of the potential harm that research participation can have on the mental well-being of participants (Perry, 2013). Deception was a necessary tool in this experiment to generate an authentic measure of participants' obedience to authority, and the results directly contradicted how most people would “think” they would respond in the same situation. Unfortunately, answering the research question came at a cost to all who participated in the experiment.
Several participants experienced emotional discomfort, guilt, and psychological trauma (maleficence) as a result of believing they were seriously harming another person. Furthermore, when several participants protested against delivering electrical shocks, the experimenter's “prompts” failed to respect an individual's freedom to participate (Principle E: Respect for People’s Rights and Dignity). As a result of studies like this, IRBs today are unlikely to approve similar studies unless experimenters make significant provisions that protect participants from distress and make it easy for participants to withdraw from the study at any time, without penalty or persuasion to keep going.

Question 2.24

The Milgram Obedience study has received considerable criticism for lack of consideration of which of the following ethical principles? Select all that apply.

a. Beneficence and Nonmaleficence (correct)
b. Fidelity and Responsibility
c. Integrity
d. Respect for People’s Rights and Dignity (correct)

Explanation: Participants didn't know that they would be administering shocks to a learner, and an authority figure kept telling participants to keep going even when they seemed distressed (deception, and therefore a disrespect for rights and dignity). The reason for doing that, of course, was to simulate on a much smaller and milder scale what the SS guards were doing and to potentially see how regular people could commit atrocious acts against others. Participants also thought they had just killed a person when they administered the XXX level of shock for incorrect answers, which violates beneficence and nonmaleficence. They weren't directed to counseling services if they felt distress, even after it was explained that the learner did not receive shocks.
Mini-lecture by Dr. Kousaie:

2.5 Correlation

Following the successful (and ethical) collection of data, researchers must decide how they want to analyze and explore this information. Their approach will depend on both the study design and the research question. When researchers conduct observations, case studies, and surveys, they are often looking to identify relationships that exist between two or more variables. One way to quantify such a relationship is through correlation. Here, we are looking for some relationship showing that as one variable changes, so does another.

One way to represent the relationship between two variables is to create a scatterplot (Figure 2.9). A scatterplot is a type of graph that has one variable on the x-axis (the horizontal axis) and the other variable on the y-axis (the vertical axis) and provides a visual representation of relationships between variables. If the relationship is strong, the points on the graph cluster tightly together in a linear relationship (the word “linear” implies that the points would fall around a straight line). This relationship can not only be seen, it can also be described using a simple statistic called a correlation coefficient (denoted as r) that captures the direction and strength of a relationship between variables.

Figure 2.9: This scatterplot shows the relationship between high school and college GPA.

2.5.1 Direction of Correlation

Correlations can have positive, negative, or zero directionality. It is important to understand that positive directionality does not imply inherent "goodness," and negative directionality does not denote "badness."
When two variables are positively correlated, the variables change in the same direction; that is, as one variable increases, the other variable also increases, and as one variable decreases, the other variable also decreases. For instance, height and weight are positively correlated: in general, as height increases, so does weight. Conversely, when variables are negatively correlated, an increase in one variable is associated with a decrease in the other. For example, did you know that using computers during lectures could hurt your grades (Sana et al., 2013)? In this case, doing more than one thing (such as shopping, looking at social media, etc.) on your laptop during a lecture is associated with lower test scores. In this negative correlation, more of one thing is correlated with less of another thing. A zero correlation indicates that there is no apparent relationship between variables. For example, there is no correlation between vaccination early in life and the development of autism (Jain et al., 2015).

One of the best ways to show directionality is with a scatterplot. Each data point on a scatterplot represents the intersection of scores on two variables (one on the x-axis and the other on the y-axis). When creating a scatterplot with positively correlated variables, the data points appear to cluster around a line from the bottom left to the top right, whereas a plot of negatively correlated variables would cluster around a line extending from the top left to the bottom right. When the correlation is zero, the variables are unrelated and the data points on a scatterplot appear random (Figure 2.10).

Figure 2.10: Visual representations of different correlations.

Let’s consider absenteeism and exam scores. You would likely expect that high absenteeism would correlate with lower exam scores. Table 2.1 shows the number of absences and exam scores in a small class of 10 students. Please click here to view the text version of Table 2.1.
Table 2.1: Tabular Representations of Different Test Scores

Even in the small group of 10 students shown here, it is hard to interpret the relationship between scores, and you can imagine that this would become even more difficult to see in a larger class of hundreds of students! However, a visual representation of the data makes this correlation easier to conceptualize. The most common way to do so is with a scatterplot, where one variable is represented on each axis of the graph. Looking at the scatterplot in Figure 2.11, is the relationship between the variables more apparent? We hope so. The variables are negatively correlated. If you drew a line through the points that best represents the scores around them, also called the line of best fit, it is easy to see the direction even when the data points are not perfectly aligned.

Figure 2.11: This plot visualizes the relationship between grades and absenteeism. A line of best fit is drawn in red to represent the pattern between variables. Note that as rates of absenteeism increase, exam scores tend to decrease.

Let’s consider the relationship between income and years of education using a scatterplot (Figure 2.12). These variables are positively correlated.

Figure 2.12: A positive correlation is shown between income and years of education.

Visually, you can see that as years of education increase, there is a tendency for income to also increase. Finally, zero correlations occur when one variable is completely unrelated to the second variable. Let’s consider the relationship between weight and GPA; these variables are unrelated. The scatterplot in Figure 2.13 indicates a zero correlation. There is no order to the data points; it looks as if someone threw paint on the wall.
Figure 2.13: There is no (or zero) correlation between weight and GPA. Visually, there does not appear to be any order to the data points.

2.5.2 Strength of Correlation

Positive and negative values convey the direction of a correlation, but these descriptions do not indicate how closely the two variables are related. The strength of a correlation is determined by a second metric. When looking at the examples above, you may have noticed that the negative correlation between absenteeism and grades in Figure 2.11 seemed to more closely resemble a straight line than did the positive correlation between education and salary in Figure 2.12. As a rough estimate, the closer the data points are to the line of best fit, the stronger the correlation. Using a little math, we can numerically represent the strength of the relationship with a correlation coefficient. The value of a correlation coefficient ranges from –1 to +1. Keep in mind that the positive and negative signs indicate the direction of the relationship, whereas the absolute value of the correlation (regardless of the +/– sign) is the magnitude or strength of the correlation; directionality is unrelated to strength. In a perfect positive (r = +1.0) or perfect negative (r = –1.0) correlation, all points fall on a straight line. Thus, as the correlation gets stronger, the coefficient gets closer to +1 or –1, and the data points cluster more tightly around the line of best fit (discussed above). When r is 0 or close to zero, there is little or no linear relationship between the variables.

2.5.3 Correlations Can Be Misleading

The most problematic element of correlation is that people assume there is a cause-and-effect relationship between variables. Just because two variables are related doesn’t mean we know why.
Read the following statement aloud for emphasis: Correlation is not causation. You will find a version of this statement in any text regarding correlations and cause-and-effect relationships. Correlation is merely a relationship between two variables. For instance, did you know that when ice cream sales rise, so do homicides (Peters, 2013)? Are we to think that eating ice cream causes people to kill other people, or that killing people causes ice cream cravings? Of course not. The correlation between ice cream sales and homicides illustrates the importance of understanding the potential effect of a confounding variable—that is, another variable that may influence one or both of the variables we are measuring, thereby influencing the correlation coefficient. Although ice cream consumption and homicide rates are positively correlated, a causal relationship between them sounds preposterous; it is not likely that eating ice cream creates an urge to kill. Rather, it is more relevant to consider a confounding third variable, such as temperature, that influences both ice cream consumption (Staff & Mariott, 1993) and homicides (Tiihonen et al., 2017). Research shows that both variables rise as a result of hot weather: Ice cream is an appealing way to cool down, and people are more likely to be outside and in greater contact with one another when it is hot. Simply knowing that there is a relationship between two variables doesn’t tell us why that relationship exists. There are many other odd, if not funny, correlations that can be found by searching the internet:

Eating more margarine lowers divorce rates.
More people drown when Nicolas Cage appears in films.
(Vigen, 2015)

Just imagine the media campaign based on these correlations: “Margarine: bad for your health, great for your marriage,” or warning labels on Nicolas Cage films: “Do not watch while in close proximity to water.” The caution here is that when a relationship between two variables is appealing, we have a tendency to attribute causality, the notion that one variable directly affects another variable. As we just discussed, however, correlation is not causation. In practice, correlations may miss other factors relevant to describing the relationship between variables. You might be left wondering, “What good is a correlation if we can’t draw causation?” Well, correlations are excellent clues for exploring relationships and making predictions about behavior. Furthermore, knowing that a correlation exists may lead us to more systematic approaches to determine what the causal relationships between factors are. The most effective way to determine causation is through a controlled experiment, which we will discuss in the next section.

Mini-lecture by Dr. Kousaie:

2.6 Experimental Methods

2.6.1 The Hypothesis

With observational approaches and correlations, we can describe events and behavior and help formulate hypotheses. Experimental research allows us to understand how separate pieces of facts and information are related. A hypothesis is a prediction about what will happen in research. The aim of conducting experimental research is to explain cause-and-effect relationships.
Using the scientific method, we can find support for or modify an existing theory, as well as accumulate evidence through replication to develop new theories. The experiment is used to directly link ideas within a cause-and-effect relationship. In its simplest form, predicting cause-and-effect relationships can be framed in the following way: If _[I do this]_, then _[this]_ will happen. From the above statement, you can see that we are making a prediction about the outcome of an event because of something that we are doing. Formally, we call this prediction a hypothesis. A hypothesis should have the following characteristics:

It should be consistent with prior observations or an existing theory. A hypothesis is not simply a “shot in the dark” (wild guess). Rather, a hypothesis should be an educated prediction based on what you have already learned from descriptive methods (such as observations, case studies, or surveys) that provide an understanding of “what is.” A hypothesis builds upon those observations to address “why it is.”

It should be as simple as possible. The goal in an experiment is to identify a cause-and-effect relationship between two variables. Adding more than two variables complicates the relationship. As such, answering complex research questions may require multiple experiments, each with its own simple hypothesis.

It should be specific. A good hypothesis provides all the details about who we are measuring, what changes will be made during the experiment, and what effect we predict those changes will have on the outcome of the experiment.

It should be testable. A specific hypothesis states what evidence will be measured and used as a point of comparison.

It should be falsifiable. Are there clear conditions or outcomes that could prove your hypothesis false? At first glance, you may be wondering why falsifiability is a necessary component of a hypothesis.
After all, if we have a theory, why would we create an experiment that might not work? If you’re scratching your head, consider what would happen if all predictions were only found to be true; inevitably, two theories will contradict one another (e.g., “the world is flat” versus “the world is round”). If we can’t falsify either of these theories in experiments, we would never be able to distinguish fact from fiction.

Let’s consider an idea that many seem to subscribe to: Playing violent video games makes people more aggressive. If this is true, we could conduct an experiment by controlling how much exposure people have to violent video games and then measuring whether it has an effect on aggressive behavior. In this case, a testable hypothesis that predicts a specific outcome might be “People who play violent video games will exhibit more aggressive behavior than people who do not play violent video games.” While more descriptive, this hypothesis still lacks specificity and has not yet defined what is meant by “aggressive behavior.” An operational definition is how a researcher decides to measure a variable. Aggression could include behavior like physical aggressiveness (the number of hits, kicks, bites, or pushes) or verbal intimidation (how many swear words are directed at others). How a variable like aggression is operationally defined might depend on the ease of measurement, the strengths and weaknesses of each measure, and even what measures other researchers have used. For instance, physical aggressiveness may be easier to measure and has been previously used in similar types of studies (e.g., Andersen et al., 2008). A more precise hypothesis would then be “People who play violent video games will hit, kick, bite, or push others more frequently than people who do not play violent video games.” This is our experimental hypothesis. It is what we expect to find if this idea is correct. This statement is consistent with prior observation and is simple and specific.
The hypothesis can be both tested (measured) and falsified if we do not see more aggressive behavior. Next, we have to design an experiment that manipulates people’s exposure to violent video games and measures the effect it has on people’s physical aggression to establish a cause-and-effect relationship.

2.6.2 Experimental Variables

An independent variable (IV) is the variable that the experimenter will manipulate, and it must contain at least two levels. In the example above, the two groups are violent video game players and a control group that only plays non-violent video games. In this case, the independent variable is the type of video games people play (violent or non-violent). It’s easy to remember that the independent variable always comes first, before any measurement is taken, and is what we think will cause a change in our experiment. Without an independent variable, this would not be an experimental measure.

The dependent variable (DV), or outcome measure, is the variable(s) the experimenter counts or measures. In our experiment, we expect that aggressive behavior will be measurably greater in the group that plays violent video games. If the independent variable is the cause of change, then the dependent variable is the effect. Since the effect depends on the cause, what is measured is always called the dependent variable.

Extraneous variables (also known as confounding variables) are any variables that are not the focus of study, but that may influence the outcome of research if not controlled. For example, when manipulating violent video game playing, it is important that gender is spread evenly across the violent and non-violent gaming conditions.
Quite simply, males tend to be more physically aggressive than females (Hyde, 2005), so we would expect males to be more physically aggressive regardless of the type of video game they play. If we unequally distributed males and females between groups, we would be introducing an extraneous factor that could affect our measurement of aggressive behavior. One solution is to equally distribute males and females between the groups to ensure there is no gender bias. By controlling as many extraneous variables as possible, we can be confident that changes we observe in the dependent variable (aggressive behavior) are caused by the effects of our independent variable (the type of video game played).

2.6.3 Sample Selection

When planning an experiment, researchers will need to select experimental and control groups. Remember from our section on surveys that the goal of any selection process is to create groups that are fair (i.e., we are not biasing one group over another) and representative of the bigger population. By creating fair groups, we can be confident that any difference we see between groups in our experiment is caused by the independent variable. In this section we discuss some common sampling techniques researchers use to build research groups.

A simple random sample is a type of sampling where every individual in the population has an equal chance of participating. The advantage of a simple random sample is that, if large enough, it should approximate the larger population we wish to study.

A stratified random sample is a more careful approach to random sampling and is particularly useful when there are two or more identifiable subgroups in the population. Stratification divides the population into subgroups first, and then samples randomly from each subgroup in proportion to the population of interest.
For example, in a class of 100 students with equal numbers of male and female students, our stratification would be to first separate students by gender and then select an equal number from each gender group. This process ensures equal numbers of males and females are represented in our sample.

Finally, a non-random sample can take many forms but generally follows the rule that not all individuals are equally likely to participate. For instance, if a researcher wants to study the effects of concussions in sports on mental health, it would not be ethically possible to randomly assign athletes to a concussion or non-concussion group. Instead, the researchers would have to recruit athletes with pre-existing conditions who are available to participate in the research. They might decide to create a convenience sample: a group of individuals selected because of a pre-existing condition, convenience, or easy access to participation.

2.6.4 Experimental and Control Groups

Once participants have been selected, we need to establish a basis by which we measure changes and compare behavior. For example, let’s say we’re interested in a new drug that claims to enhance memory (e.g., Scott et al., 2002). To test this claim, we could compare memory scores between two random samples of people, where one group takes the hypothesized memory-enhancing drug and the second group does not. The memory drug is our independent variable. In this case, we are creating two groups as a basis for our comparison. In research, the group that receives the treatment of interest (the memory drug) is called the experimental group. The other group, called the control group, is treated nearly identically to the experimental group but does not receive the drug of interest. Our dependent variable and point of comparison between groups is how well each group performs on a memory test.
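The sampling and assignment logic described above can be sketched in a few lines of plain Python. Everything here is hypothetical (the roster, the group sizes, the helper function name); the sketch simply mirrors stratifying a class by gender and then randomly assigning the sampled participants to experimental and control groups.

```python
# Minimal sketch: stratified sampling by gender, then random assignment
# of the sample into experimental and control groups.
import random

random.seed(1)  # fixed seed so the draw is reproducible

# Hypothetical roster: 100 students, 50 male and 50 female.
roster = [("student%03d" % i, "male" if i < 50 else "female")
          for i in range(100)]

def stratified_sample(population, key, n_per_stratum):
    """Split the population into strata, then draw an equal random
    sample from each stratum."""
    strata = {}
    for person in population:
        strata.setdefault(key(person), []).append(person)
    sample = []
    for members in strata.values():
        sample.extend(random.sample(members, n_per_stratum))
    return sample

# 10 males + 10 females, mirroring the 50/50 split in the class.
sample = stratified_sample(roster, key=lambda p: p[1], n_per_stratum=10)

# Random assignment: shuffle the sample, then split it in half, so every
# participant has an equal chance of ending up in either group.
random.shuffle(sample)
experimental = sample[:10]   # e.g., receives the memory drug
control = sample[10:]        # e.g., receives a placebo

print(len(experimental), len(control))  # 10 10
```

Note that the stratification guarantees gender balance in the overall sample; the shuffle-and-split step is what keeps group membership itself unbiased.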
We must also take into account the power of the mind and how it can influence our feelings and behavior. The mere thought of taking a drug that enhances our memory may lead to subtle psychological effects that make us pay closer attention to what we are memorizing, be slightly more alert, and, most importantly, perform a little better on the memory test. This effect is real, we are all susceptible to it, and it is called the placebo effect. To feel the power of the placebo effect for yourself, take a look at Eric Mead’s demonstration in the following video (Mead, 2009).

To account for the placebo effect, researchers sometimes create a placebo group, using “dummy” treatments to control for our expectations. In the case of a memory experiment, we might give participants a pill that contains no medicine or an injection of only salt and water. In this way, the placebo group controls for the psychological beliefs and expectations that may consciously or unconsciously influence our behavior.

2.6.5 Internal Validity/External Validity

When research is designed to study causal relationships through direct manipulation of one variable, we must take measures to control any unrelated factors that might affect the outcome of our experiment. A control group represents a researcher’s efforts to remove or control the influence of any extraneous variables that might have an effect on a dependent variable. The goal here is to be assured that the only differences between our groups are those related to the independent variable. We want to avoid any systematic difference in attitudes or abilities that might bias our results.
By controlling for factors that might bias the outcome of an experiment, we are addressing internal validity, the degree to which results may be attributed to the independent variable rather than some other effect of our experiment. Basically, was the experiment done “right”? If we control for internal threats to validity, we should be able to repeat the experiment again and again and come to the same conclusion. How relevant that conclusion is, and how generalizable it may be, is another concern. The external validity of a finding speaks to the degree to which a result can be applied beyond the scope of the experiment. Do the results of the experiment apply in the real world? Generalization refers to the degree to which the results of an experiment apply to other settings, other people, and other time periods. Replication not only serves to establish internal validity but can also play a key role in establishing external validity. We can be more confident in the external validity of an experiment if the same experiment can be replicated in different settings, populations, and contexts.

2.6.6 An Example Experiment

Have you been wondering why you see so many questions presented throughout this text? While questions, quizzes, and tests are normally used by teachers to assess your knowledge, cognitive psychologists have long theorized that the act of recalling or recognizing material in a test actually leads to better retention of information than if that information had not been recalled (Gates, 1917). While this “testing effect” was first described about 100 years ago, its application seemed to be limited to cognitive psychology experiments, and it is only recently that psychologists have begun to apply and test this theory in a practical setting, like your class.
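Whatever the experiment, the analysis ultimately comes down to comparing the dependent variable across levels of the independent variable. As a minimal sketch, with invented recall scores (proportion of ideas recalled) for a hypothetical tested group and restudy group:

```python
# Toy comparison of a dependent variable (recall) across two levels of an
# independent variable (tested vs. restudy). All scores are invented.
import statistics

tested = [0.61, 0.68, 0.55, 0.64, 0.59]    # proportion recalled per person
restudy = [0.42, 0.44, 0.50, 0.39, 0.47]

mean_tested = statistics.mean(tested)
mean_restudy = statistics.mean(restudy)

print(f"tested:  {mean_tested:.3f}")
print(f"restudy: {mean_restudy:.3f}")
print(f"difference: {mean_tested - mean_restudy:.3f}")
```

A descriptive comparison like this is only the first step; as discussed later in the chapter, inferential statistics are needed to judge whether such a difference is real rather than due to chance.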
In 2006, Roediger and Karpicke conducted a practical experiment on the effect that tests (the independent variable) had on memory recall (the dependent variable) of relevant educational material. They hypothesized that repeated testing of educational material should lead to greater recall of information than simply studying. To test this hypothesis, the researchers recruited a random sample of undergraduate participants and asked them to read a passage of text. Participants were then randomly assigned to two different groups. The first group, our experimental group, was asked to recall facts and details of the passage. That is, our experimental group was tested. The second group, our control group, was not tested but was asked to re-read the passage for an additional study period. At this point, we have imposed a difference between the groups (our independent variable): One group was asked to recall information while the other was not, exposing the groups to two different types of learning experiences. If the “testing effect” hypothesis is real, we should see an improvement in recall on later tests. If the hypothesis is false, we should see no difference between groups, or perhaps a decrease in recall compared to our control group. The experimenters then measured students’ ability to recall the passage either five minutes, two days, or one week later. The passage of time is important because it helps us understand the short- and long-term effects of taking a test. Figure 2.14 highlights the results of the study in a graph. Notice on the graph that our two independent groups are made distinct by different colored bars, with yellow bars for our experimental group and red bars for our control group.
It is expected that “memory” or “recall” of information will depend on whether students are placed in the control or experimental group.

Figure 2.14: This graph shows the mean proportion of idea units recalled on the final test after a five-minute, two-day, or one-week retention interval as a function of learning condition (additional studying versus initial testing). From Roediger and Karpicke (2006).

As you can see in Figure 2.14, participants in the control group, who were given an additional five minutes of studying, seemed to have a slight advantage in the immediate recall of information. However, when recall of information was delayed by a couple of days or even a week, the participants in the experimental group recalled around 15% more information. That is, performance in the long term is better after testing than after studying. In this case, researchers manipulated two experiences that led to a change in behavior, thus allowing them to conclude that testing caused better memory retention in the long term. Using an experiment, we are finally able to draw a causal relationship between variables. The conclusion of this experiment received a great deal of attention, and other researchers began to ask even more questions about the testing effect. In this text, you are frequently asked to recall information not because we want to assess you, but because we want to improve your learning.

Mini-lecture by Dr. Kousaie:

2.7 Making Sense of the Data

Once data are collected in an experiment, we still have to make sense of the findings. We want to know whether the experimental and control conditions differed with regard to the outcome measure (DV). There are questions to be answered, such as: Is there a difference between groups? Do the results support the hypothesis? Can we conclude there is a cause-and-effect relationship?

2.7.1 Describing Data: Central Tendency

Statistics allow researchers to explain and describe the data.
These types of summaries are known as descriptive statistics. This includes information like the mean, median, and mode, and the frequency of certain demographics (e.g., 51% of participants were male). In addition to describing the data, we want to determine whether there are real differences between the independent variable conditions so that we can make inferences about the causal relationship between the IV and DV; this is referred to as inferential statistics. Let’s explore the simple ways we can represent and draw conclusions from statistics.

Descriptive statistics are a collection of ways to describe the data in the simplest way possible, which involves the use of quantitative values. You are likely already familiar with ways to represent data using descriptive statistics. For instance, if we were to ask you for one piece of information that defines your undergraduate performance, what would you say? Most students will likely say grade point average (or GPA). The average, also known as the mean, is a relevant and frequently used measure of the central tendency of the data. In any data set, the central tendency is the score that best represents the others.

Figure 2.15: What defines your academic abilities?

There are three types of central tendency: the mean (the average score), the median (the middle score in an ordered set of data), and the mode (the most frequently occurring number in a data set). Using a form of central tendency, such as the mean, allows a researcher to summarize large sets of data in an objective way. Although the mean is the most commonly used form of central tendency, it has one main disadvantage: It can be significantly affected by extreme values (known as outliers). For example, consider the nine household incomes given in Table 2.2.
Table 2.2: A set of households and their incomes.

The mean household income in this group is $73,444. However, take a closer look at the data: You can see th
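The pull an outlier exerts on the mean is easy to check directly. The incomes below are hypothetical (they are not the Table 2.2 data), but they show the same pattern: one extreme household drags the mean far above the median.

```python
# Hypothetical incomes illustrating how a single outlier inflates the mean
# while leaving the median (the middle score) untouched.
import statistics

incomes = [38_000, 42_000, 45_000, 47_000, 50_000,
           52_000, 55_000, 58_000, 900_000]  # one extreme outlier

mean = statistics.mean(incomes)      # 143,000: pulled up by the outlier
median = statistics.median(incomes)  # 50,000: the middle of the sorted list

print(f"mean   = ${mean:,.0f}")    # $143,000
print(f"median = ${median:,.0f}")  # $50,000
```

Here eight of the nine households earn well under the mean, which is why the median is often reported alongside (or instead of) the mean for skewed data like incomes.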
