Research Methods in Psychology PDF
John Crane & Jette Hannibal
Summary
This document is a set of lecture notes on psychology research methods, including essential questions, pop vs. scientific psychology, activities, and critical thinking about commonly held beliefs.
Full Transcript
Research Methods in Psychology

Essential Questions
- How do we acquire knowledge in Psychology?
- To what extent is Psychology a science?
- What ethical and methodological considerations must a psychologist make when setting up a research study?
- What are the strengths and limitations of an experimental approach to studying human behaviour?
- What do you want to know about human behaviour? What would you do to scientifically study these questions?

Pop psychology
What is pop psychology?
- Unfounded opinions based on popular beliefs, often called "urban legend."
- Pseudo-science, e.g. psychics.
Why is it so popular?
- Gives simple answers to complex issues.
- Based on beliefs: looking for ways to "help yourself," looking for reasons for our failures and validation of our successes/good traits.
- Confirms our existing beliefs and prejudices.

Scientific Psychology
- Scientific psychology challenges our existing beliefs and seeks to deepen our understanding of human behaviour.
- Investigates assumptions and theories using empirical methods (data collection and analysis).
- Verification of evidence and reliability of research.
- Publication of research results in journals and peer review.

Activity: You are now going to see some commonly held beliefs. Your task is to guess whether each one is supported by science or not. Is there scientific evidence for this? If you think that there is, what would be the evidence?

"Cold weather causes you to become sick." FALSE
In studies of cold transmission, people who are chilled are no more likely to get sick than those who were not. Cold weather itself doesn't make you sick, but it can contribute to conditions that make it easier for illnesses to spread. During colder months, people tend to stay indoors in close proximity to each other, which increases the likelihood of spreading viruses like the cold and flu. Additionally, cold air can dry out nasal passages, reducing their ability to trap viruses and bacteria, which may also increase the risk of infection. So while cold weather doesn't directly cause illness, it creates an environment where getting sick is more likely.

"Sugar makes kids hyperactive." FALSE
Despite the common belief, scientific studies have found no strong link between sugar and hyperactivity in children. Often, excitement from events like parties, where sugary foods are consumed, is mistaken for a sugar-induced energy boost. Parental expectations can also influence perceptions of hyperactivity. In one study, parents were told their kids had sugar and they were more likely to report problem behaviour, but in reality the kids had consumed a sugar-free drink.

"You only use 10% of your brain." FALSE
In reality, nearly all parts of the brain are active at different times, even when we are at rest or performing simple tasks. Brain imaging studies show that we use much more than 10%, and various regions are responsible for different functions like movement, memory, and decision-making. The brain is highly efficient, and all areas have a purpose.

"Your brain works better under pressure." MAYBE
The idea that the brain works better under pressure can be true for some people and situations.
Pressure and stress can sometimes enhance focus and performance, especially in short-term or high-stakes scenarios. However, chronic stress or excessive pressure can negatively affect cognitive function, decision-making, and overall health. It's a balance: moderate pressure might improve performance, but too much stress can be detrimental.

ATL: Inquiry
How can you use research to improve real-life situations? The British Psychological Society's Research Digest contains many brief descriptions of empirical studies in psychology. You can find these studies here.
1. Find an empirical study on this site that interests you.
2. Follow the instructions on this Google Slides - Evidence-based suggestions to improve real-world issues.

The Perils and Promise of Praise (Dweck 2007)
1. What is meant by a "fixed mindset"? What is meant by a "growth mindset"?
2. What is Dweck's theory about the effect of praise?
3. How does Dweck carry out research to test her theory? What was her procedure?
4. What were Dweck's findings? What were the results of her study?
5. Implications for your school:
   a. What kind of praise do you think is most common in this school (or your previous school, if you're new)? This question is asking you to think about your experiences and what you have witnessed.
   b. Do you think we should do something differently, based on this research? What/how?
6. Broader implications: Do the findings of this study matter only for school? In what other contexts could we apply these findings?

Introduction to Research Methods: Experiments in Psychology (QUANTITATIVE METHODOLOGIES)

Experiments - Key Vocabulary
- Research question/aim: the purpose of the study - what questions does this study try to answer?
- Independent variable: the variable that is manipulated by the researcher.
- Dependent variable: the variable that is measured by the researcher. It is assumed that this variable changes as a result of the manipulation of the independent variable.
- Controlled variables: variables that are kept constant in order to avoid influencing the relationship between the IV and the DV.
- Standardized procedure: the idea that directions given to participants during an experiment are exactly the same. This is the most basic form of "control" for a study.

Variables: Independent vs. dependent?
1) How does the kind of praise given by teachers affect the students' mindset (growth vs. fixed)?
2) How do teachers' expectations of their students affect the students' intellectual performance?
3) How does the kind of therapy affect the patients' outcome, in cases of depression?
4) How does the sex education curriculum applied in schools affect the students' health?
Now, think of the following:
a. For the IV: how can a researcher manipulate this variable? How can the researcher create a change in this variable?
b. For the DV: how can a researcher measure this variable?

Hypothesis writing in steps
Follow these steps when writing hypotheses:
1) Identify the conceptual independent variable (IV) and dependent variable (DV).
2) Identify the operationalized independent variable (IV) and dependent variable (DV).
3) Identify the population to be studied.
4) Write the null hypothesis, using the operationalized independent and dependent variables and identifying the population to be studied.
5) Write the research hypothesis, using the operationalized independent and dependent variables and identifying the population to be studied.
Hypothesis writing in steps
Steps #1 and #2: Identify the conceptual and operationalized independent and dependent variables.
- What does "conceptual" mean? It means the concept/big idea that you are going to manipulate (IV) or measure (DV).
- What does "operationalized" mean? It means how exactly you are going to manipulate (IV) or measure (DV) this variable.
Research question: How does praise given by teachers affect the students' mindset (growth vs. fixed)?
- Conceptual IV: praise given by teachers. Conceptual DV: students' mindset.
- Operationalized IV: whether the teacher praised the student for their intelligence or for their effort. Operationalized DV: students' willingness to take on a more challenging task.

Step #3: Identify the population to be studied.
- What does the "population" mean? It means who you are studying; what group of people are you trying to draw conclusions about?
- Research question: How does praise given by teachers affect the students' mindset (growth vs. fixed)? Population: students.

Steps #4 and #5: Write the null hypothesis and the research hypothesis, using the operationalized IV and DV and identifying the population to be studied.
Research question: How does praise given by teachers affect the students' mindset (growth vs. fixed)?
- Operationalized IV: whether the teacher praised the student for their intelligence or for their effort.
- Operationalized DV: students' willingness to take on a more challenging task.
- Population: students.
- Null hypothesis: states that the independent variable will NOT affect the dependent variable. "There will be no significant difference in the students' willingness to take on a more challenging task, based on whether the teacher praised the student for their intelligence or for their effort."
- Research hypothesis: states that the independent variable will affect the dependent variable. "Students who are praised for their intelligence will be less willing to take on a challenging task, compared to students who were praised for their effort."
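To make the null/research hypothesis distinction concrete, here is a minimal sketch (not part of the original notes) of how the praise example might be analysed. It uses Python and entirely hypothetical counts of how many students in each praise condition chose the more challenging task; the .05 cut-off is the conventional significance level, not something prescribed by the notes.

```python
# Minimal sketch: testing the null hypothesis for the praise example with hypothetical data.
from scipy.stats import chi2_contingency

# Hypothetical counts: rows = praise condition, columns = [chose challenging task, chose easy task]
observed = [
    [12, 28],   # praised for intelligence
    [31,  9],   # praised for effort
]

chi2, p_value, dof, expected = chi2_contingency(observed)
print(f"chi-square = {chi2:.2f}, p = {p_value:.4f}")

# If p < .05, we reject the null hypothesis ("no significant difference in willingness
# to take on a more challenging task") in favour of the research hypothesis.
if p_value < 0.05:
    print("Reject the null hypothesis: praise type is associated with task choice.")
else:
    print("Fail to reject the null hypothesis.")
```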
Hypothesis Writing Activity
1. On Toddle, find the activity titled "Hypothesis Writing Activity".
2. Read the instructions.
3. Complete all exercises and turn it in when you're done.

Sampling Techniques
- Population: the group of people that the psychologist/researcher wants to understand or describe. Examples: "I want to understand how stress affects people's ability to make rational choices." "I want to understand the eating habits of working-class white Americans." "I want to understand the coping strategies of middle-aged men who have cancer." "I want to understand the effects of stress from the IB program on our school's current group of grade 11 students."
- Sample: the group selected from the population to take part in the research and represent the population.
- Goal: to generalize the results obtained from the sample to the target population as a whole. To do this, the sample needs to be representative of the target population.

Self-selected sampling (or "volunteer sample"): people volunteer for the study. Example: putting an ad in the newspaper and asking people to sign up.
- Pro: motivated and less likely to quit the research.
- Con: may not be representative of the population.

Opportunity sampling (or "convenience sample"): when you use a pre-existing group. Example: you select class (parallel) A to be your sample when studying grade 11.
- Pro: easy to access; time-saving.
- Con: the sample may be too homogeneous and not representative of the population.

Random sampling: when the sample is randomly selected. Example: drawing names out of a hat when studying grade 11.
- Pro: less prone to sampling bias, because everyone in the population has the same chance of being selected.
- Con: if the population is too large, then random sampling may be impossible.

Snowball sampling (or "network sample"): when you find a sample by drawing on the network of someone who may know potential participants. Example: you are looking for former drug addicts, and you ask a former drug addict to direct you to some friends who have the same history.
- Pro: time-saving, especially when looking for a very specific group of people.
- Con: may not be representative of the population.

Stratified sampling: when a sample attempts to reflect sub-groups within the target population. Example: you are studying academic achievement in students at Einstein, so you make sure that your sample includes students who have A's, students who have B's, and so forth.
- Pro: has the potential of being very representative of the population.
- Con: caution with labeling groups within a community.

Sampling and external validity
The extent to which a study can be generalized to the target population is referred to as external validity. If the sample IS representative of the population (or as much as possible!), we say that the study has high external validity. If the sample is NOT representative of the population, then we say that the research has low external validity because it has sampling bias. BUT... when we say a study has sampling bias, we need to explain why this may have made a difference in the results of the study.
So, why would a sample of students be biased in the following studies?
1. A study to find out if eating chocolate may affect memory.
2. A study to find out if levels of social media use are linked to levels of depression.
3. A study to find out if watching a funny movie changes one's testosterone levels.
4. A study to find out if childhood trauma has an effect on one's problem-solving skills.

Research scenario: You have been hired by your local government as a health psychologist with the goal of increasing exercise in the local community. You decide to carry out interviews at the local fitness center to learn more about people's motivation to engage in exercise.
1. What type of sample is this?
2. Your study may be criticized for having a sampling bias. Which group of people may be over-represented? Which group may be under-represented?
3. How do you think you could get a more representative sample for your study?
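The contrast between random and stratified sampling can be illustrated with a short sketch (not from the original notes). It uses Python's standard library and a purely hypothetical grade 11 roster in which each student belongs to a grade band; the roster, band labels, and sample size of 20 are all invented for illustration.

```python
# Minimal sketch: random sampling vs. stratified sampling from a hypothetical roster.
import random

random.seed(42)

# Hypothetical population: (name, grade band) pairs
population = [(f"student_{i}", random.choice(["A", "B", "C", "D"])) for i in range(200)]

# Random sampling: every student has the same chance of being selected.
random_sample = random.sample(population, 20)

# Stratified sampling: draw from each grade-band sub-group in proportion to its size.
strata = {}
for student in population:
    strata.setdefault(student[1], []).append(student)

stratified_sample = []
for band, members in strata.items():
    n = round(20 * len(members) / len(population))  # keep sub-group proportions (rounding is approximate)
    stratified_sample.extend(random.sample(members, n))

print("Random sample size:", len(random_sample))
print("Stratified sample size:", len(stratified_sample))
```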
Types of experiments
There are two ways to categorize experiments:
1. By how the independent variable occurs: a. true experiments, b. quasi-experiments, c. natural experiments.
2. By where the experiment takes place: a. lab experiments, b. field experiments.

Types of experiments - true experiment
True experiment: the researcher actively manipulates the independent variable.
- Participants are randomly assigned* into conditions, or they participate in all conditions.
- The procedure is very standardized and there is an attempt to control extraneous variables*.
- Example: the Stroop Effect experiment, where the researcher controls whether the words match the ink color or not.
*Extraneous variables: all variables that are NOT the independent variable and that could affect the results of the experiment. In experiments, researchers try their best to control these extraneous variables (i.e. to make sure they're the same for all participants), so that they can be sure that the changes in the dependent variable are because of the independent variable.
*Random assignment into conditions: the participants in the experiment are assigned to the different conditions at random. For example, in a study about the effect of stress on memory, one participant is randomly assigned to the high-stress condition, the other is assigned to the low-stress condition.

True Experiment Demonstration
"Thank you for consenting to participate in this experiment. I will now proceed to show you a list of the names of colors. For example, you may see the word 'yellow'. I want you to tell me the font color, which is the color that the word is written in. For example, if the word BLUE is written in blue ink, you should say 'blue'; if the word YELLOW is written in blue ink, you should also say 'blue'. Go through the entire list as quickly as possible, without making mistakes. Are you ready to begin now?"
- Congruent condition (word and ink color match)
- Incongruent condition (word and ink color do not match)

Types of experiments - quasi-experiment
Quasi-experiment: the researcher does NOT manipulate the independent variable because it occurs naturally and distinguishes the participants from one another (e.g. gender, profession, age). There is no random assignment into conditions, because the allocation occurs naturally (for example, women vs men vs non-binary people, or psychologists vs. lawyers).
Quasi-Experiment Demonstration - Research question: Does a person's ______ affect how long they can hold a plank for?

Types of experiments - natural experiment
Natural experiments are conducted in the everyday (i.e. real-life) environment of the participants, but here the experimenter has no control over the independent variable, as it occurs naturally in real life. Example: a study on the effect of television on children's attitudes - television was introduced naturally; the researchers did not manipulate this.

Types of experiments - lab experiment
- Controlled environment: takes place in a highly controlled, artificial setting (e.g., a laboratory).
- Variables manipulated: the independent variable is generally deliberately manipulated by the researcher. Lab experiments are often (though not always!) true experiments, too. The control over variables allows for better determination of cause and effect.
- Example: a memory experiment where participants recall words in a lab setting with controlled conditions (lighting, time, etc.).
Advantages: control over potential extraneous variables; easier replication of the study; high precision in measuring variables.
Disadvantages: the artificial setting might not reflect real-life situations; potential for participants to alter their behavior because they know they are being studied.
Types of experiments - field experiment
Key characteristics:
- Natural environment: conducted in real-world settings (e.g., schools, workplaces).
- Less control over variables: control over variables is limited, potentially including the IV.
- Findings are more likely to reflect real-world behaviors.
- Example: an experiment where a researcher studies the effects of background music on concentration in a classroom.
Advantages: the natural setting might make behavior more natural; participants may not know they are being observed, making their behavior more natural.
Disadvantages: harder to control extraneous variables; more difficult to replicate due to the uniqueness of the environment.

Experimental designs
Experimental designs: what strategy was used for the experiment? This depends on the purpose of the experiment.
- Repeated measures design (within subjects): each participant receives every condition of the experiment. Example: if we were testing the effect of music on learning, the same participants would memorize a list of words with music, and then again without music.
- Independent samples design (between subjects): each participant is randomly assigned to only one condition of the experiment. Example: I show participants a video of a car crash and then ask them to estimate the car's speed, but ask some participants "how fast were the cars going when they smashed into each other?", and others "how fast were the cars going when they hit each other?"
- Matched pairs design (between subjects): pairs of participants are matched in terms of key variables, such as age and IQ. One member of each pair is then placed into one condition and the other member into the other condition. This allows the groups to be as similar as possible.
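As a small illustration of how random assignment works in an independent samples design, here is a minimal sketch (not part of the notes). It uses Python's standard library and a hypothetical list of 20 participant codes; the "smashed"/"hit" condition labels simply echo the car-crash example above.

```python
# Minimal sketch: random assignment of hypothetical participants to one of two conditions.
import random

random.seed(1)

participants = [f"P{i:02d}" for i in range(1, 21)]
random.shuffle(participants)  # shuffle so assignment does not depend on sign-up order

# Split the shuffled list in half: each person ends up in exactly one condition.
half = len(participants) // 2
condition_1 = participants[:half]   # asked the "smashed into each other" question
condition_2 = participants[half:]   # asked the "hit each other" question

print("Condition 1 (smashed):", condition_1)
print("Condition 2 (hit):", condition_2)
```

In a repeated measures design, by contrast, the same list of participants would complete both conditions (ideally in a counterbalanced order), so no split is needed.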
Hypothesis Writing Activity Continuation
1. Unsubmit your previous hypothesis writing activity on Toddle.
2. Gather in pairs to review and discuss each of the research scenarios from your original task. Work on a single document moving forward. Write both of your names on the document.
3. For each scenario: revisit what you initially identified as the Independent Variable (IV), Dependent Variable (DV), and the Population; discuss the best way to operationalize these variables (you may adjust your original definitions based on your discussion); choose the best operationalization of the IV and DV for each scenario, and use this choice to guide the next steps.
4. Determine the sampling technique: what sampling technique should or could be used for each scenario? Explain your reasoning.
5. Identify the type of experiment: is the experiment true, quasi, or natural? Is it conducted in a lab or a field setting? Provide your reasoning.
6. Determine the experimental design: what experimental design is being used (e.g., repeated measures, independent samples, or matched pairs)? Explain your choice.
7. Add these points to your original work, building upon your previous responses.
8. Resubmit the updated task on Toddle.

Evaluating Methodology - Experiments

Evaluating Methodology - Internal Validity
Internal validity is a measure of how well a study is conducted (its structure) and how accurately its results reflect the studied group. Internal validity is not a "yes or no" concept. Instead, we consider how confident we can be in the study's findings based on whether the research avoids traps that may make those findings questionable. In short, you can only be confident that a study is internally valid if you can rule out alternative explanations for the findings.

To assess internal validity, you can ask yourself a series of questions:
- Were the participants randomly assigned to groups? Random assignment helps ensure that any differences between groups are not due to pre-existing differences among participants.
- Were the groups equivalent at the beginning of the study? Before the manipulation (independent variable) occurs, it's important to ensure that the experimental and control groups are similar in all relevant aspects.
- Was there a control group? Having a control group allows you to compare the experimental group's results with those of a group that did not receive the IV, helping you attribute any observed effects to the independent variable.
- Was the experiment conducted in a controlled environment? Minimizing extraneous variables by conducting the experiment in a controlled environment helps ensure that changes in the dependent variable are most likely due to the independent variable.
- Was the experiment double-blind or single-blind? In a double-blind study, both participants and experimenters are unaware of who is in the experimental group, which reduces the potential for bias.
- Were any confounding (extraneous) variables controlled for? Identify and account for any variables that might have influenced the results and were not part of the experimental design.
- Was the procedure standardized and consistent for all participants? Ensure that the experimental procedure is the same for all participants to eliminate potential sources of bias.
- Was there a clear operational definition of the independent and dependent variables? Precisely define the variables being measured or manipulated to ensure clarity and consistency. Were these definitions appropriate? Did the researcher actually manipulate what they wanted to manipulate, and measure what they wanted to measure? (i.e. construct validity)
- Were there any demand characteristics or experimenter biases? Consider whether participants' behavior may have been influenced by their awareness of the experiment's purpose or by the experimenter's cues.
- Was there a sufficient sample size? Ensure that the sample size is large enough to detect meaningful effects.
- Were there any order effects? If the experiment involves repeated measurements or tasks, check for order effects (e.g., practice or fatigue effects) and control for them if necessary.
- Were there multiple researchers comparing their observations? Comparing observations among multiple researchers (researcher triangulation) helps ensure the accuracy of those observations.

Evaluating Methodology - External Validity
External validity relates to how applicable the findings are in the real world. It means that the findings can be generalized to similar individuals or populations. External validity affirmatively answers the question: do the findings apply to similar people, settings, situations, and time periods? Population validity and ecological validity are two types of external validity.
- Population validity refers to whether you can generalize the research outcomes to other populations or groups.
- Ecological validity refers to whether a study's findings can be generalized to real-life situations or settings.

To assess external validity, you can ask yourself the following questions:
- Were the participants representative of the target population? Consider whether the participants in the study accurately reflect the characteristics of the larger population you want to generalize to.
- Was the sample size adequate for generalization? Ensure that the sample size is sufficiently large to make meaningful generalizations to the target population.
- Was the study conducted in a natural or artificial setting (i.e. how was its ecological validity)? Consider whether the experimental environment accurately represents the real-world context where the phenomenon of interest typically occurs.
- Were the procedures and manipulations similar to real-life situations? Assess whether the experimental conditions and manipulations closely resemble the situations and conditions you want to generalize the results to.
- Were the results consistent across different contexts or populations? If the study involved variations in context or populations, examine whether the effects were consistent across these different conditions.
- Was the study conducted over a long enough duration to capture real-world dynamics? Some phenomena may change or develop over time, so consider whether the study's duration was appropriate for the research question.
- Were there any situational or temporal factors that might limit generalization? Consider whether specific factors in the study's context or timing could restrict the applicability of the findings to different situations or time periods.
- Did the study use a single method or multiple methods of data collection? Consider whether using multiple methods (e.g., surveys, observations, interviews) enhances the generalizability of the findings.

Ethics in Psychological Research
How can we treat participants ethically?
- Ethics: the code of conduct necessary when carrying out research.
- Why should psychologists treat participants ethically? Moral responsibility to protect participants; public trust, so that they can obtain participants for future experiments; avoiding lawsuits.
- Main ethical guidelines in psychological research: 1. informed consent, 2. use of deception, 3. protection from harm and undue stress, 4. right to withdraw, 5. debriefing, 6. anonymity.

Informed consent
- Participants must be told about the nature of the study and agree to participate; they must know which rights they have and must be able to understand the research.
- Parents/guardians must give consent for underage participants.
- Usually a signed document, and usually the first thing in an experiment or any study.
- But... does the experiment work if the researcher tells participants everything about the experiment?

The use of deception
- Deception can either be misinformation or not telling the participant the complete goal of the study.
- Ideally, participants shouldn't be deceived; slight deception (which does not cause stress or harm to the participants) may be used, if justified appropriately.
Protection from undue stress and harm
- The most basic ethical standard: no humiliation, force, coercion, violence, or anything that may harm the participant's physical or psychological health.
- But is it always possible to prevent any stress? "Undue" stress is a higher level of stress than an individual may experience on a day-to-day basis.

Right to withdraw
- Participants have the right to withdraw from the experiment at any time.
- They should not be pressured, coerced, or forced to remain in the study, even if they gave informed consent at the beginning.

Debriefing
- At the end of the study, the true purpose of the research must be revealed to participants.
- If they were deceived, that must be explained.
- They must be given access to the results, if they want them.
- When the study deals with sensitive subjects, the debriefing may include references to additional support/resources.

Participant anonymity
- All the information obtained in the study must be anonymous.
- Names or any identifying information should not be revealed in the publication of the study.

Correlational Studies (QUANTITATIVE METHODOLOGIES)
A correlational study in psychology is a quantitative research method used to examine the relationship between two or more variables without manipulating them. The goal is to determine whether and to what degree the variables are related. However, this type of study does not establish a cause-and-effect relationship, meaning it cannot show that one variable causes changes in another; it only shows whether they are associated and the direction of their relationship (positive, negative, or no correlation).
Sometimes, an experiment is not possible, but the researcher can still collect quantitative data that show a relationship between variables. When is it not possible or ethical to do an experiment? Come up with an example.

Correlation: when one variable changes, the other is likely to change too. There can be a positive correlation, a negative correlation, or no correlation. But correlation is NOT causation!
- Positive correlation: when variable x increases, variable y is likely to increase too; or when x decreases, y is likely to decrease too. Example: the more hours you spend studying, the more likely you are to do well on your exam; the fewer hours you spend studying, the less likely you are to do well on your exam.
- Negative correlation: when x decreases, y is likely to increase; or when x increases, y is likely to decrease. Example: the more hours you spend on videogames, the lower your exam score might be; the fewer hours you spend on videogames, the higher your exam score might be.
- No correlation: changes in variable x do not relate to any changes in variable y. Example: your height has nothing to do with the number of romantic partners you have; the number of hours you exercise has nothing to do with your mother's age.

A scatter plot is a graphical display that shows the relationships or associations between two numerical variables, which are represented as points (or dots) for each pair of scores. It indicates the strength and direction of the correlation between the two variables. When you draw a scatter plot, it doesn't matter which variable goes on the x-axis and which goes on the y-axis.

Instead of drawing a scatter plot, a statistical calculation can be used to express the correlation numerically as a coefficient, ranging from -1 to +1: -1 would be a very strong negative correlation, 0 would be no correlation, and +1 would be a very strong positive correlation.
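Here is a minimal sketch (not from the notes) of how a scatter plot and a correlation coefficient might be produced for the studying example above. It assumes Python with scipy and matplotlib installed, and the hours/score values are entirely hypothetical.

```python
# Minimal sketch: Pearson correlation coefficient and scatter plot for hypothetical data.
import matplotlib.pyplot as plt
from scipy.stats import pearsonr

hours_studied = [1, 2, 3, 4, 5, 6, 7, 8]
exam_score    = [52, 55, 61, 58, 70, 75, 74, 83]

r, p_value = pearsonr(hours_studied, exam_score)
print(f"r = {r:.2f}, p = {p_value:.4f}")  # r close to +1 suggests a strong positive correlation

plt.scatter(hours_studied, exam_score)
plt.xlabel("Hours studied")
plt.ylabel("Exam score")
plt.title(f"Scatter plot (r = {r:.2f})")
plt.show()
```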
A key limitation of correlational research: bidirectional ambiguity. Since no independent variable is manipulated, we cannot say that variable x causes variable y. Maybe y causes x instead? Maybe they interact to cause behaviour? Maybe their correlation is just coincidental and the results are actually due to another variable?

Spurious correlations: sometimes we might find very strong correlations between variables that seem completely unrelated. Do you think that there is actually a relationship between these two? No - this is a spurious correlation! This is another reason why we should be critical when interpreting correlational studies.

ATL: Thinking Critically
For each of the following news headlines, do you think that the findings are causal (cause and effect - the result of an experiment) or simply correlational in nature? How do you think that the research would have been done?
1. Cell phones disrupt teen sleep. This study could be either. If it were an experiment, then teens would have to be asked to record their quality of sleep, either by self-reporting or by using some technology to monitor their sleep. They would do that first with their regular phone use before bedtime and then again after two weeks with no phone use. The study could also be correlational, where there is no before-and-after comparison, but a survey would be given asking them about their phone use and the quality of their sleep.
2. Using "Wash your Hands" signs in public bathrooms increases the number of hand washers! This study would have to be experimental. It would involve monitoring the number of hand washers, either by having a "bathroom attendant" count the percentage of those that leave without washing their hands or by filming the sinks. They would then add signs to see whether hand-washing increased. Causation could then be determined.
3. Men who are distracted by a beautiful woman are more likely to take financial risks. This study would also have to be experimental. But the limitation is that there were most likely two separate conditions - the group with a beautiful woman and the group with no beautiful woman. So, although causality may be inferred, one would have to be cautious because personality differences and experience in financial decision-making may be very different for the participants in the two conditions.
4. Study suggests attending religious services sharply cuts the risk of death. It would be rather difficult to set this up as an experiment. The study would have to be a correlational study. It would look at different age groups - e.g. 45-55, 56-65, and 66-75. For each group, they would look at their attendance at religious services and then, perhaps two years later, see how many of each group are still alive.

Task: Interpreting Correlation Coefficients
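For the interpretation task, a tiny sketch (not part of the notes) can make the direction/strength reading of a coefficient explicit. The cut-offs of 0.3 and 0.7 are common rules of thumb rather than values given in the notes, and the coefficients listed are hypothetical.

```python
# Minimal sketch: describing the direction and rough strength of correlation coefficients.
def describe(r):
    direction = "positive" if r > 0 else "negative" if r < 0 else "no"
    strength = "strong" if abs(r) >= 0.7 else "moderate" if abs(r) >= 0.3 else "weak"
    return f"r = {r:+.2f}: {strength} {direction} correlation"

for r in (-0.92, -0.41, 0.03, 0.38, 0.85):
    print(describe(r))
```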
Correlational Studies: Strengths and Limitations
Strengths:
1. Correlation allows the researcher to investigate naturally occurring variables that may be unethical or impractical to test experimentally. For example, it would be unethical to conduct an experiment on whether smoking causes lung cancer.
2. Correlation allows the researcher to clearly and easily see if there is a relationship between variables. This can then be displayed in graphical form.
Limitations:
1. Correlation is not and cannot be taken to imply causation. Even if there is a very strong association between two variables, we cannot assume that one causes the other. There is bidirectional ambiguity!

Qualitative Research

Qualitative Research: Intro
The method chosen depends on the purpose, the participants, and the researcher's beliefs about the nature of knowledge and how it can be acquired. How is knowledge gathered?
- Natural sciences: rely on deductive processes, hypothesis testing, and evidence to support a conclusion. Focus on causal relationships.
- Social sciences: often rely on inductive processes, evidence used to reach a conclusion, with a focus on understanding the complexity of social processes.
Some consider qualitative research not to be scientific because it does not follow the scientific method of the natural sciences. But qualitative researchers say science should rather be defined as a systematic, rigorous, empirical task that produces trustworthy and reliable information, so qualitative research is science.

Qualitative research methods:
- Case study
- Interview (including focus group interview)
- Observation
- Questionnaire (NOT a survey)

Characteristics of qualitative research:
- Interested in meaning - how people make sense of the world or an experience.
- The objective is to describe and possibly explain events and experiences.
- Often studies people in their own environment.
- The data are gathered through direct interaction with the participants, or observations.
- The data consist of texts, rich in description.
- The data are extensive and open to interpretation; they are not easy to analyze as there is no single approach.
- The data can generate theory.

Qualitative Research: Strengths & Limitations
Strengths: rich data; useful for explaining complex and sensitive issues; can explain phenomena beyond mere observation; can identify and evaluate factors that contribute to solving a problem; can generate new ideas and theories for problem-solving.
Limitations: can be very time-consuming; data analysis can be difficult; interpretation may be subjective; cannot establish cause-and-effect relationships; high risk of researcher and participant biases.

Qualitative Research: Credibility
Credibility (or "trustworthiness") in qualitative research is the equivalent of internal validity in quantitative research. We ask the question: to what extent do the findings reflect reality? If the research presents a true picture of the phenomenon being studied, then the study is credible.
How can we increase the credibility of a study? Triangulation, reflexivity, establishing a rapport, credibility checks, iterative questioning, rich descriptions.
***DO NOT USE THE WORD "validity" in any context when analyzing qualitative research. It only applies to quantitative research.***
Increasing credibility: triangulation
A combination of different approaches to collecting and interpreting data; three main types:
- Method triangulation: use various methods, e.g. an observation and an interview.
- Data triangulation: use data from various sources, e.g. interviewing students, teachers, and cleaning/maintenance staff at a school about the same topic.
- Researcher triangulation: combining the interpretations/observations of different researchers, e.g. if two people see the same thing, this increases credibility.

Increasing credibility: establishing a rapport
Rapport: a friendly, harmonious relationship. The researcher should ensure that the participants are honest by creating a "friendly" research environment, e.g. the researcher should remind the participants that participation is voluntary, and explain that there are no right or wrong answers in an interview.

Increasing credibility: iterative questioning
Iterative: doing something again and again, usually to improve it. In research projects, especially those involving sensitive issues, participants may distort data either intentionally or unintentionally. Researchers should identify those ambiguous answers and return to the same topic, but rephrase the question.

Increasing credibility: reflexivity
It is important for researchers to be aware of their own contribution to the construction of meaning in the research and to reflect on ways in which bias can occur.
- Personal reflexivity: reflect on factors such as values, beliefs, experiences, interests, and ideology. Example: "I noticed that overcoming trauma was particularly emphasized in their conversations; however, since I myself have a history of overcoming childhood trauma, this observation could have been influenced by my personal beliefs and should be cross-checked by an independent interviewer."
- Epistemological reflexivity: thinking about the ways knowledge has been generated, and the assumptions made. Example: "The following behaviors were observed... however, they should be interpreted with caution because participants were aware they were being observed and hence might have modified their behavior."

Increasing credibility: credibility checks
Checking the accuracy of data by asking participants themselves to read interview transcripts or field notes of observations, to confirm that these are accurate representations of what they said or did.

Increasing credibility: thick (or "rich") descriptions
Explaining the behavior itself AND the context in which it occurred, so it can be understood. Example: "he smiled" vs "when asked if he had lied about having done his homework, he smiled."

Qualitative Research: Researcher Biases
- Confirmation bias (same as in quantitative research). Avoid by: very difficult to avoid in qualitative research, but researchers can be trained to recognize it and take it into account (i.e. reflexivity).
- Leading question bias: when participants in an interview are inclined to answer a certain way because the wording of the question encourages them to (e.g. "What do you remember about the traumatic events of 9/11?"). Avoid by: asking neutral questions (e.g. "What do you remember about 9/11?").
- Question order bias: when the response to one question influences the answers to another question, e.g. a participant answered x in one question, so they think they must also answer the same way in another question, to be consistent. Avoid by: difficult to avoid, but minimized by asking general questions before specific ones.
- Sampling bias (same as in quantitative research). Avoid by: very difficult to avoid in qualitative research.
- Biased reporting: when the findings are not equally represented in the research report, e.g. the researcher might only briefly mention pieces of evidence that don't "fit". Avoid by: reflexivity, researcher triangulation.

Qualitative Research: Participant Biases
- Social desirability (same as in quantitative research). Avoid by: non-judgemental questions, good rapport.
- Dominant respondent bias: in group interviews, when one participant influences the behaviors and responses of the others. Avoid by: researchers should be ready to "mediate" and give everyone opportunities to talk.
- Sensitivity bias: when participants respond honestly to regular questions, but distort their responses on sensitive ones. Avoid by: good rapport, reinforcing ethical considerations like confidentiality, increasing the sensitivity of the questions gradually.
- Reactivity (same as in quantitative research): when participants alter their behavior or responses because they are aware they are being observed or studied. Avoid by: good rapport, using unobtrusive observation methods when possible.

Qualitative Research: Ethics in Conducting the Research
Overall, the same ethical considerations apply to quantitative and qualitative research. Ethical considerations in conducting the research:
- Informed consent: should always be obtained, but sometimes it is not possible (e.g. covert observations).
- Protecting participants from undue stress or harm: participants should be informed of the topics that are going to be addressed; sensitive issues should be handled professionally and with sympathy, and if needed the interview could be stopped; the researcher should not provide advice to participants, but they can give them guidance regarding where to find support.
- Anonymity and confidentiality: the identity of participants should not be known outside the research team; data like video tapes should be destroyed after the research to preserve confidentiality; if non-anonymized data is needed, the participant needs to explicitly consent to this.
- Right to withdraw: may not be possible if participants don't know they are being studied.
- Deception: may be justified, depending on the case.
- Debriefing: always necessary! But sometimes participants never know that they were observed.

Qualitative Research: Ethics in Reporting and Applying the Results
***Important note: these considerations also apply to quantitative research.***
Aside from the ethical considerations involved in conducting the study, there are also ethical considerations involved in reporting and applying the results of a study. There are three areas of consideration:
1. reporting the individual results to participants
2. publishing the findings
3. applying the findings

Ethical considerations in reporting the individual results to participants:
- All findings in a research study may be divided into primary findings (those discovered as part of the main aim of the research) and incidental findings (those discovered unintentionally). Example: what if you're using brain scans to study emotion, and discover that the participant has a brain tumor?
- Risks and benefits: consider the risks and benefits of disclosing the findings to participants.
- Including information in informed consent: for primary findings, the consent should state how the results will be shared with participants; for incidental findings, researchers should have a plan for dealing with these and share the plan in the consent process. The plan may be to not share incidental findings, but participants must agree to that.
- Unanticipated incidental findings: if the findings are incidental, unanticipated, and no plans were made regarding how to handle them, then an ethics committee would review whether or not to share them with the participants.
Ethical considerations in publishing the findings:
- Confidentiality: there should be no way to identify the participants, especially in case studies.
- Data fabrication: the published research results must correspond to the actual, true results.
- Transparency: transparency about the funding of the research; researchers must reflect on why the funders have funded their research and how the funders may use any findings of the research.
- Reflexivity: when publishing the findings, researchers should disclose any personal connection to the research matter.
- Justice and equitable treatment: prevent publishing an idea that leads to prejudice against a group.
- Sharing research data for verification (peer review): researchers should securely store their raw data and, if another researcher requests it, must be willing to share it for verification purposes.

Ethical considerations in applying the findings:
When the findings of psychological research are published, they can be the basis for treatments and interventions. When applying the results of a research study to develop such interventions, the following must be considered:
a. the extent of generalizability of the study to other contexts or populations - the researchers must give enough information about their sample, population, sampling technique, procedure, etc., to allow informed judgements about generalizability;
b. the limitations of the study in terms of credibility;
c. findings should be replicated by independent researchers to ensure credibility.

Qualitative Research: Generalizability
Sample-to-population generalization: the findings can be generalized from the sample to the population from which the sample was drawn. In experimental research, this is called "population validity" and is part of "external validity". This type of generalization is hard because the samples used in qualitative research are generally biased. What should we consider? Does the sample accurately represent the population? Is anyone overrepresented or underrepresented? What was the sampling technique?

Case-to-case generalization (also known as transferability): the findings are generalized to a different group of people or a different setting or context. This is the responsibility of both the researcher and the reader of the research: the researcher needs to provide sufficiently rich descriptions, so that the reader has enough information about the study and its original population and context, and the reader decides whether or not the context described in the research is similar to a new situation (a different population or a different context). In experimental research, its closest equivalent (not exactly the same) would be "ecological validity", which is another part of "external validity".
Theoretical generalization: the findings or particular observations can be generalized to a broader theory. In experimental research, its closest equivalent (not exactly the same) is "construct validity"; it is the "jump" we make from directly observable operationalizations to unobservable constructs. We can achieve this type of generalization if we have extensive and rich qualitative data, the analysis was in-depth and free of biases, and so on: Is the original study well done? Can we trust it? Have many studies observed similar results? Can we see very clear patterns? Can we transfer these findings to different populations and different contexts?