Approaches to Researching Behavior

Summary

This document provides an overview of research approaches in psychology, emphasizing experiments and qualitative/quantitative methods. It includes examples and activities that are likely aimed at secondary school students. The document is intended as a teaching resource, not a self-contained research paper.

Full Transcript

Approaches to researching behavior

The study of psychology is evidence based and has evolved through a variety of different research approaches, both qualitative and quantitative. An understanding of approaches to research is also important for the internal assessment task in order to design, conduct, analyze, draw conclusions from and evaluate an experiment. This applies to both SL and HL students.

Task I. Thought books

Take your notebook and reflect on the following two questions with regard to the Rosenthal and Jacobson (1966) study. 1. To what extent do you feel that Rosenthal and Jacobson's study reflects your own school experience? 2. Do you think that this study only applies to young children in school, or could the study be applied to something else? After writing, we then open the class to discussion for about ten minutes.

A study on 'Cute aggression' https://youtu.be/ApoYwEeDNrc

After completing the video, take time to discuss with a partner whether you think that the experiment is "gold" or "rubbish." What do you think about the study? What type of research is this? What are the limitations of the study? How could they have conducted this experiment differently? Other possible responses could be: The data was quantitative data – the number of bubbles popped. This made it easy to compare the participants. The study may lack validity. Popping bubble wrap may not be measuring levels of aggression - but instead boredom, anxiety because they are in an experiment, or personal traits like being "fidgety." The procedure was not completely standardized. One of the participants was twisting the bubble wrap rather than popping the bubbles individually like everyone else. Maybe they were bored and popped the bubbles. They should have mixed it up - with one group starting with the cute images and the other group starting with the "boring" images. The study should have been counterbalanced. The sample size was too small. They may have reacted differently because they thought that they were supposed to pop the bubbles. This is the expectancy effect. They may have reacted differently because they knew they were being recorded. This is an example of reactivity.

Evaluation of the theory of cute aggression: The theory appears to be testable and there is some evidence, as we see in this study. There does not seem to be a lot of use (application) of this theory. The construct of "aggression" is not well defined. In addition, we wonder if "cuteness" is a universal construct. Do all cultures find animals "cute?" The use of a Western sample may have biased the results.

Research Methods: Quantitative and Qualitative

Experiments

One of the most widely used methods in the study of behaviour has been the experiment. The goal of an experiment is to determine whether a cause-and-effect relationship exists between two variables. The experiment is an example of quantitative research, which generates numerical data. These data can be statistically tested for significance in order to rule out the role of chance in the results. Let us say that a researcher wants to find out if noise affects one's ability to recall information. The aim of the study is to see if one variable (noise) has an effect on another variable (recall of information). The variable that causes a change in the other variable is called the independent variable (IV). This is the variable that the researcher deliberately manipulates. The variable that is measured after the manipulation of the independent variable is called the dependent variable (DV).
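To make the idea of an IV and a DV concrete, here is a minimal sketch (in Python, not part of the original material) of how data from the noise and recall experiment described above might be recorded and summarized. The condition labels and scores are invented purely for illustration.

```python
# Minimal sketch (illustrative only) of the noise/recall experiment as quantitative data.
# IV: the condition each participant was assigned to ("music" or "silence").
# DV: the number of words recalled from the list. All scores below are made up.
results = {
    "music":   [12, 9, 11, 10, 8],    # words recalled while music played
    "silence": [14, 13, 15, 12, 16],  # words recalled in a quiet room
}

for condition, scores in results.items():
    mean_recall = sum(scores) / len(scores)
    print(f"{condition}: mean words recalled = {mean_recall:.1f}")
```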
A key characteristic of an experiment is the use of controls. The idea of "control" is that when the researcher manipulates the independent variable, all other possible variables stay the same. In other words, the procedure must be exactly the same in both groups; the only difference should be the manipulation of the independent variable. For example, if we are testing the role of noise in one's ability to recall a list of words, one group would read the list while listening to music. Another group would read the list in silence. Otherwise, there should be no other difference between the groups. Some examples of controls for this study would include: The list of words would be the same - the same words, the same font, the same size font, the same order of the words. The conditions of the room should be the same. If one room has a lot of posters on the wall with information, while the other room has bare walls, this could theoretically influence the results. The temperature of the rooms should be the same. The time of day when the test is taken should be the same.

Research Methodology: Operationalizing the variables

When writing a hypothesis, it is important that you clearly state the independent variable and the dependent variable. The independent variable is what the researcher changes or manipulates. The dependent variable is what the researcher measures. Often, however, the IV and DV are vague. You need to make sure that you have clearly defined both variables. For the IV you need to define the term and indicate how the IV will be manipulated. For the DV, you need to define the term and indicate how the DV will be measured. This is called operationalization. For example, the following hypothesis is not operationalized: The lower an athlete's self-esteem, the poorer their performance. There is clearly an IV and DV. The IV is the athlete's self-esteem and the DV is the athlete's performance. But the hypothesis is very vague and does not give us a good sense of what is really being tested. To operationalize the IV we need to indicate how self-esteem is lowered. For example, negative feedback from a coach during a game could lower an athlete's self-esteem. Since we are probably not going to give the athlete a test of self-esteem before and after the coach's comments, it might be better to simply focus on negative feedback. To operationalize the DV, we need to indicate how "athlete's performance" is defined and measured. If we limit this to football players, it really is not a good idea to use "goals scored," as there are often not many goals in a football match. But we could look at "attempts on goal." Then, regardless of the success, we can look at whether lowering self-esteem results in an athlete taking fewer risks or being less assertive. A better hypothesis - one that operationalizes its variables - would look like this: High school football players who receive negative feedback from their coach make fewer attempts on goal than high school football players who do not receive feedback. With that hypothesis, the reader has a much better sense of what is being studied and how we will know if the hypothesis is actually supported or not.

Activity

The following exercise asks you to write an operationalized hypothesis for a proposed research question. There is no "correct answer," as there are many potential ways to write the hypothesis. After you have written your operationalized hypothesis, click to see the responses that are provided.
How do they compare to the hypotheses that you have written? 1. A researcher wants to see if stress can increase aggression in small children. What is the independent variable? What is the dependent variable? How would you write a well-operationalized hypothesis? 2. A researcher wants to see if a person’s academic skills can be affected by their level of self-esteem. What is the independent variable? What is the dependent variable? How would you write a well-operationalized hypothesis? 3. A researcher wants to see if exercise has an effect on mood. What is the independent variable? What is the dependent variable? How would you write a well-operationalized hypothesis? 4. A researcher wants to see if a person’s level of happiness can affect their problem-solving skills. What is the independent variable? What is the dependent variable? How would you write a well-operationalized hypothesis? 5. A researcher wants to see if anxiety can affect an actor's performance. What is the independent variable? What is the dependent variable? 1. A researcher wants to see if stress can increase aggression in small children. What is the independent variable? Stress What is the dependent variable? Aggression How would you write a well-operationalized hypothesis? Example: Three-year-olds who are taken away from their parent and left with a stranger demonstrate an increased frequency of beating on a drum that is in the room 2. A researcher wants to see if a person’s academic skills can be affected by their level of self-esteem. What is the independent variable? The level of self-esteem What is the dependent variable? Academic skills How would you write a well-operationalized hypothesis? Example: High school students who are praised for their work up until an examination will get higher scores on content knowledge than students who do not receive praise. 3. A researcher wants to see if exercise has an effect on mood. What is the independent variable? Exercise What is the dependent variable? Mood How would you write a well-operationalized hypothesis? Example: Men who run for twenty minutes on a treadmill will report a happier mood than men who have not. 4. A researcher wants to see if a person’s level of happiness can affect their problem-solving skills. What is the independent variable? Level of happiness What is the dependent variable? Problem-solving skills How would you write a well-operationalized hypothesis? Example: Teenagers who are given free tickets to the cinema will be able to solve more anagrams correctly than teenagers who are not given any gift prior to the experiment. 5. A researcher wants to see if anxiety can affect an actor's performance. What is the independent variable? Anxiety What is the dependent variable? Actor's performance How would you write a well-operationalized hypothesis? Example: High school actors who are told that there is someone important in the audience judging their performance will make more errors with their lines than actors who are not given this information. Both the independent and dependent variables must be operationalized. In other words, they need to be written in such a way that it is clear what is being measured. In the example used above, noise is the independent variable. This could be operationalized as dissonant rock music played at a volume of 100 decibels. An operationalized dependent variable could be the number of words remembered from a list of 30 words. 
Now we know exactly what the IV is and what you are going to measure in order to support your research question. Simply stating that your dependent variable is "the results of the study" is not enough, as it does not say anything about what is actually being measured. Another characteristic of experiments is that they are highly standardized. This means that they have procedures that are written in enough detail that they can be easily replicated by another researcher. Finally, a true experiment randomly allocates participants to conditions. With random allocation, participants have the same chance of being assigned to the experimental or the control condition. This lessens the potential for characteristics of the individuals influencing the results. (A brief sketch of random allocation appears at the end of this section.)

Activity

Experiment 1: A study of stress on one's level of aggressive behaviour. Identify the IV and DV. Experiment 2: A study on how one's level of self-esteem affects their problem-solving skills. Identify the IV and DV.

Experiment 1: The independent variable (IV) is stress. The dependent variable (DV) is the level of aggression. Experiment 2: The independent variable (IV) is the level of self-esteem. The dependent variable (DV) is problem-solving skills.

Activity: How will you operationalize the variables in Experiment 1 and Experiment 2?

Experiment 1: A study of stress on one's level of aggressive behaviour. The independent variable (IV) is stress. How could you change the level of stress in the different conditions? (Remember, this must meet ethical standards). The dependent variable (DV) is the level of aggression. What would you actually measure? Experiment 2: A study on how one's level of self-esteem affects their problem-solving skills. The independent variable (IV) is the level of self-esteem. How could you change the level of self-esteem in the different conditions? The dependent variable (DV) is problem-solving skills. What would you actually measure?

Experiment 1: Students will give several suggestions. The problem is that not all of them are ethical or practical. Typical ways that psychologists increase stress are giving the participants a puzzle that they cannot solve, manipulating their stress hormones through medication, or having them give an oral presentation in front of a group of strangers who give negative non-verbal feedback. Some ethical suggestions would be to ask them to respond to a theoretical situation where one of the options would be aggression, playing a video game, using VR, putting them in a situation where they may push others out of their way, or having them play a game with others to observe mild aggression.

Experiment 2: When doing this question, many students talk about lowering one's self-esteem, but raising one's self-esteem is the more ethical option. They could raise a person's self-esteem by asking them to write a short response to a question and then praising them for their thinking and writing - setting up a situation in which they would be complimented - or giving them a score on a test and then saying that they have scored among the highest that the researcher had seen. Problem-solving is very broad, so this would need to focus on something specific - e.g. a set of math problems (the typical student response!), a set of anagrams, a set of word problems, etc.

Points to remember: The experimental method

The researcher manipulates one variable to see the changes in another. To begin an experiment, the researcher first should identify his or her variables. A variable is anything that can vary or change.
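Referring back to the random allocation described above, here is a minimal Python sketch (not part of the original material) showing how participants could be randomly split between an experimental and a control condition; the participant codes and condition labels are invented for illustration.

```python
import random

# Minimal sketch (illustrative only) of random allocation: every participant
# has the same chance of ending up in either condition.
participants = ["P01", "P02", "P03", "P04", "P05", "P06", "P07", "P08"]

random.shuffle(participants)                   # randomize the order
midpoint = len(participants) // 2
experimental_group = participants[:midpoint]   # e.g. recall with music
control_group = participants[midpoint:]        # e.g. recall in silence

print("Experimental (music):", experimental_group)
print("Control (silence):   ", control_group)
```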
There are two types of variables. Independent variable: a treatment variable that is manipulated by the psychologist to represent the cause of an outcome. Dependent variable: represents the outcome that the psychologist is measuring. It is important to define your variables clearly (the intensity - high or low, happiness, sadness).

Points to remember: A variable is anything that can change, vary, and be measured. The IV is manipulated by the researcher. The DV is measured after making changes in the IV. It is important to define the variables properly and it is important to measure, or operationalize, the variables. For example, to measure whether stress leads to aggression, it is important to narrow it down and clearly operationalize the variable. Example: DP 2 students who are unable to meet their submission deadline tend to binge eat.

Confounding variables: We usually say that the IV is what changes the DV, but there can be other variables that change the result while we assume it is the IV. Factors other than the IV that can influence the findings are called confounding variables. There are many types of confounding variables. 1. Experimenter bias: the researcher treats the participants in the experimental and control groups differently in order to increase the chances of confirming the hypothesis. 2. Demand characteristics: cues the researcher might give participants about the purpose of the study. 3. Hawthorne effect: participants might behave differently because they know they are being observed.

Important points to remember: Experimental/alternative hypothesis - when the IV is said to have an effect on the DV. Null hypothesis - when the IV does not affect the DV.

Hypothesis

After the aim of the study is decided, the researcher formulates a hypothesis. The hypothesis predicts how the independent variable affects the dependent variable. An experimental hypothesis predicts the relationship between the IV and the DV - that is, what we expect will come out of the manipulation of the independent variable. In this case, we will have two conditions: one condition where participants have to recall words with loud music, and one where the participants recall words with no music. In the second condition, there is no noise. This is called the control condition, because we compare the two conditions—that is, one with noise and one with no noise—in order to see if there is a difference. An example of an experimental hypothesis could be: Listening to dissonant rock music played at 100 decibels will decrease the number of words that adolescent girls are able to recall from a list of 30 words. In an experimental hypothesis, the IV (listening to dissonant rock music played at 100 decibels) is predicted to have an effect on the DV (the number of words recalled from a list of 30 words). In experimental research, it is conventional to formulate both a null hypothesis and an experimental hypothesis. The null hypothesis states that the IV will have no effect on the DV, or that any change in the DV will be due to chance. An example of a null hypothesis could be: Listening to dissonant rock music played at 100 decibels will have no significant effect on adolescent girls' ability to recall words from a list of 30 words; any change in the individual's ability to recall a list of words is due to chance. You may find it strange to make a null hypothesis, but in fact, it makes sense.
The researcher wants to reject the null hypothesis to show that the predicted cause-and-effect relationship between the IV and the DV actually exists. Sometimes, however, we have to accept the null hypothesis. This would happen if the results showed no relationship between music and the recall of words. It is important to recognize that psychologists never prove anything - they can only disprove. Our goal is either to accept the null hypothesis, which means that we have found that there is no relationship between two variables, or reject the null hypothesis, which means that there is some type of relationship between the two variables.

There are two types of experimental (alternative) hypotheses, alongside the null and correlational hypotheses: 1. Directional (one-tailed) hypothesis: predicts that a significant difference exists between the conditions in a particular direction. 2. Non-directional (two-tailed) hypothesis: predicts that there is a difference between the two conditions, but does not specify the direction of that difference. 3. Null hypothesis: there is no relationship; any differences that exist are due to chance. 4. Correlational hypothesis: there is a relationship between two variables - for example, between stress and aggression.

Hypothesis

The experimental method is based on hypothesis testing. Inferential testing asks the researcher to choose between a null and an alternative (research) hypothesis. You are actually calculating the probability of the result occurring if the null hypothesis is true. The first step is to state the null hypothesis. The null hypothesis assumes that there will be no significant difference for a given population under two different conditions. For example: H0: There is no significant difference in the mean number of words that students recall from a list of 40 unrelated words when in a quiet room compared to when listening to music. The IV and the DV are clearly stated. The IV is whether the individual is in a quiet room or listening to music. The DV is the number of words recalled from a list of 40 unrelated words. First, notice that the variables are operationalized - that is, they are very clearly defined for us so we know exactly what is being manipulated and what is being measured. Also notice that the null hypothesis states that there will be no significant difference. This is important because we are not saying that the mean recall must be exactly the same in order to retain the null hypothesis - it is only important that there not be a "significant difference" - that is, a large enough difference to say that the results were not due to chance. The alternative hypothesis is your guess about what will happen. For the above experiment, the research hypothesis could be: H1: Participants will memorize more words from a list of 40 unrelated words when they are in a quiet room than when listening to the radio. Notice that there is a clear guess as to the outcome of the experiment. What if we find that they remember fewer words? If that is true, then the alternative hypothesis is not supported, but neither is the null hypothesis, because there is a difference in the number of words memorized. If the null hypothesis is rejected but there is no supported alternative hypothesis, the experiment is technically invalid. It is not possible to change any hypothesis retrospectively. The way to solve this is to state a two-tailed hypothesis instead of a one-tailed hypothesis.
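To make "calculating the probability of the result occurring if the null hypothesis is true" concrete, here is a minimal sketch (not part of the original material) using an independent-samples t-test in Python with SciPy. The recall scores are invented, and a t-test is only one of several inferential tests that could be used; it simply illustrates how a two-tailed and a one-tailed test return p-values against which the null hypothesis is judged.

```python
from scipy import stats  # assumes SciPy 1.6+ (for the `alternative` argument)

# Illustrative (invented) data: words recalled from a list of 40 unrelated words.
quiet = [32, 30, 35, 31, 33, 29, 34, 30]   # quiet-room condition
music = [27, 25, 30, 26, 28, 24, 29, 27]   # listening-to-music condition

# Two-tailed test: is there *any* significant difference between the conditions?
two_tailed = stats.ttest_ind(quiet, music, alternative="two-sided")

# One-tailed test: do participants recall *more* words in the quiet condition?
one_tailed = stats.ttest_ind(quiet, music, alternative="greater")

print(f"two-tailed p = {two_tailed.pvalue:.4f}")
print(f"one-tailed p = {one_tailed.pvalue:.4f}")
# If p is below the chosen significance level (commonly 0.05),
# the null hypothesis is rejected.
```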
A two-tailed hypothesis does not state the direction: H1: There will be a difference in the number of words that participants memorize from a list of 40 unrelated words when they are in a quiet room compared to when listening to the radio. The two-tailed test gets you away from the trap of getting results that do not support your alternative hypothesis but clearly are significant. It is not acceptable practice to change your hypothesis after you get your results to match your findings. A two-tailed hypothesis is usually used in psychology to investigate a question. A one-tailed hypothesis is often used to test the reliability of a study's findings.

Hypothesis

You must be able to write two types of hypotheses: a null hypothesis, which states that there will be no relationship between the independent and dependent variable, and the alternative hypothesis (aka the research hypothesis), which clearly predicts the relationship between the independent and dependent variable. The purpose of this activity is to have you practice writing null and research hypotheses. There is a worksheet attached. https://www.thinkib.net/media/ib/psychology/files/hypotheses-worksheet.pdf

Operationalizing the variables means stating them in a form that can be measured. Experimental group: the one which receives the treatment. Control group: the one which does not receive the treatment. Placebo effect: a response to a treatment caused by a person's expectations and not the treatment itself (controlled using a blind technique). It is very important for a researcher to have a control group, without which it is not possible to interpret the findings. It is important to know that all variables in the experiment must remain the same for all participants (the instructions, materials, survey questions, questionnaires). A summary of what has been covered so far: https://youtu.be/B2GcTIJYIZI

There are three types of quantitative research: 1. Experimental studies. 2. Correlational studies. 3. Descriptive studies.

Research Methods

1. QUANTITATIVE RESEARCH METHODS: Laboratory experiments, Field experiments, Quasi-experiments, Natural experiments, True experiments. 2. QUALITATIVE RESEARCH METHODS: Interviews, Observations, Case studies, Content analysis, Questionnaires.

Types of quantitative research methods: Lab experiment: an experiment done under highly controlled conditions. Field experiment: an experiment done in a natural setting. There is less control over variables. A true experiment: an IV is manipulated and a DV measured under controlled conditions. Participants are randomly allocated to conditions. A quasi experiment: like the "experiment" by Derren Brown - no IV is manipulated and participants are not randomly allocated to conditions. Instead, it is their traits that set them apart - a fish seller, a hot dog vendor and a jeweler. A natural experiment: an experiment that is the result of a "naturally occurring event." We will address these later in the course, but a natural experiment might address a question like: Did stress increase at our school after the introduction of the IB? Or: Did aggression increase in rural Canada after the introduction of television?
Field & Natural Experiments Quasi Experiments The term quasi means mostly.Here we take participants who have similar traits or characteristics.(eg-IB students,Grade 5 boys) In this kind of experiments,it is difficult to establish cause and effect relationship,because IV is not manipulated or you believe that the pre existing factor is the similarity between them,but may be there are other factors among that group which might influence the DV.But we might conclude saying the change in the DV IS DUE TO IV. In true experiment-The IV is manipulated,and then the DV is measured in order to get the actual result Observations Case studies. Involves detailed, in-depth investigation of one person or a small group. It involves biographical details, behavior and experience of interest, with qualitative data about feelings, experiences etc. generally recorded, though quantitative data can be generated too. Content Analysis Case study Questionnaires It is a self-report where participants answer written questions involving opinions, attitudes beliefs and behavior. Observational Method.-Gathering data to collect the findings.The types are as follows: 1.Behavioural categories. 2.Field Notes. 3.Sampling procedures. Inter-observer reality. Co-relational analysis. Co-relation- means to see if there is relation between each other? Are they equal?, or they are opposite to each other? In co-relation-the researcher cannot manipulate the variables, he has to measure it the way it is. Examples:Age,reaction,time,back account balance etc. Zero Co-relation-Means there is no relationship between the two co-variables. Co-relation co-efficient. It is depicted numerically to understand whether the variables has positive co-relation or negative co- relation. Positive co-relation is depicted as +1.0 Negative co-relation is depicted as -1.0 Zero co-relation is depicted as 0 https://www.youtube.com/watch?v=M4Dn-GORAc8 Sampling Techniques. Sampling-The selecting of participants to get a wider population. Several sampling techniques exists, which are as follows: 1.Random sampling- 2.Opportunity sampling. 3.Self-selected sampling. 4.Purposive sampling. 5.Snowball sampling. 6.Stratified sampling Random sampling Disadvantages of Stratified Sampling 1. It can be difficult to split the population of interest into individual, homogeneous strata, especially if some of the groups have overlapping characteristics. 2. If the strata are wrongly selected, it can lead to research outcomes that do not reflect the population accurately. 3. Categorizing and interpreting results from stratified sampling is difficult compared to other types of sampling. The following activity practices identify the sampling technique used in a study. This is one of the standard questions that may be asked on Paper 3. For each of the scenarios below, first, identify the sampling technique that was used and then explain why this technique would be appropriate for the study. ACTIVITY(Scenarios) 1. Your school wants to do a study of the attitudes of the parent community toward the IB program. The school is looking for parents who have had older children study in the IB program in a different school, in order to see if they feel that the program is better, equal, or worse than the program at the other school. What type of sample is this? 2. A psychologist puts an advertisement into the paper because he is looking for both married and single people to talk about their sex lives. What type of sample is this? 3. 
A sports psychologist is looking for professional male gymnasts who had hip injuries and have returned to the sport. Which sampling technique would be most likely used by this researcher? Why? 4. An industrial psychologist is looking at job satisfaction and burnout in teachers. In order to do this and represent a large range of teachers, what would be the most effective sampling technique? 5. Your school is doing a study of attitudes toward the school lunch. In choosing a sample, what would be the most appropriate if you want to get a good sense of how all members of your community feel about the lunches? 1.Purposive - in order to be in the sample, you need to have a certain characteristic. In this case, you have to have an older child who attended an IB program at a different school. 2.Self-selected-Since the topic is known before they sign up, they are willing to talk about it. If the volunteers were unaware of the topic, it is possible that they would withdraw from the study or not be willing to talk a lot about the topic because of its personal nature. The sample may not represent the average male. It may be only those that are comfortable with talking about their sex lives; the sample may also be limited in diversity - meaning, the age and culture of the men who respond to the advertisement. 3.This is a purposive sample in that the participants have to have a similar characteristic - in this case, recovering from a hip injury - in order to be in the sample. However, since this may be difficult to obtain, a snowball sample may be used, where athletes who have recovered from this type of injury may recommend others that they know that have also recovered. They may know them because of working together on a team, or they may have a common physiotherapist. 4.The psychologist would use a sample of opportunity - that is, a sample that already exists within a school. From each school - especially if the school is large and it is impractical to have everyone fill in the survey - she may use a random sample, choosing people through a random number generator or through pulling names out of a hat. If she studies a large number of schools - perhaps from a school district - then she has a sample made up of several samples of opportunity that have a similar characteristic. This is known as a cluster sample. 5.Many schools use self-selected sampling, but then this opens the results up to distortion. If the school wants to get a sense of how teachers and students feel about the service, then a stratified sample may be appropriate - that is, if teachers make up 20% of the school, then they should represent 20% of the sample. They should then be randomly selected from all of the teachers in the school. Sampling Bias When discussing limitations of studies, it is not enough for students to say: This study has sampling bias because only students were used in the study. To demonstrate critical thinking, students need to think about why this may make a difference to the results of the study. Think about how a student sample may bias the findings. Below are some suggested examples of why this bias is significant. Other answers could definitely be proposed, as long as you justify your responses. One of the ways that we evaluate a study is to consider the strengths and limitations of the sample that was used. When evaluating a sample, it is important to consider two things. First, is it representative of the population from which it was drawn? 
If it is a study of university students, does the sample fairly represent university students? Remember, sampling is not meant to represent "the whole world." Secondly, within a sample, whose voices may be missing? If a group is excluded or missing, then the sample may be biased. However, it is not enough simply to say that.

Talking about bias

You have been hired by your local government as a health psychologist with the goal of increasing exercise in the local community. You decide to carry out interviews at the local fitness center to learn more about people's motivation to engage in exercise. Your study may be criticized for having a sampling bias. Which group of people may be over-represented? Which group may be underrepresented? When considering this example, the following voices may be missing from the sample: The sample may be very middle/upper class. This excludes lower socioeconomic classes. The time of day that the experiment is done may mean that people who work full time may be excluded. People who do not exercise. A certain age group may be predominant at the gym. People that do other sports - e.g. tennis - would be excluded.

Simply saying that a sample is "biased" is not good critical thinking. Unless the study uses a random sample drawn from the general population, the sample will have a bias. The key is to identify not only which voices are not represented in the sample, but what the impact may be on the findings of the study. For example, not having people who may be fully employed means that the interviews will not tell us much about work-exercise balance. A lack of people who are in a lower socio-economic level means that we may not find out about the obstacles that a lack of resources creates for one's choice to exercise.

Your task

Read the examples of research that are below. For each study, list three "voices" that are missing in this research that could lead to research bias. For each "missing voice", identify one way that this may have an effect on the findings of the study.

Study 1: A clinical psychologist wanted to study if people living with schizophrenia score higher on creativity tasks than people that do not have the disorder. To obtain a sample, the researcher chose fifteen individuals from an in-patient clinic in a capital city. All participants were asked to carry out The Alternative Uses Task. Participants were asked to list novel uses of a common object -- such as a paper clip. Raters then judge the proposed uses based on their originality (how rare they are relative to other people's responses) or flexibility (how many different categories of uses are listed). The researcher found that, on average, people living with schizophrenia scored above average on the task.

Study 2: A psychologist wants to investigate the role of communication in relationships. The researcher obtained a sample of 30 married couples, all of whom were seeking help for their relationship at a counseling center run by a local church. All couples had been together for at least 10 years. The couples were filmed as they had a conversation about a topic that was known to lead to disagreement and argument. Their body language and use of language were recorded to see what patterns may lead to dissatisfaction in their relationship. The researcher found that couples in danger of divorce were more likely to show signs of disgust or contempt when talking to their partner about the contentious topic.

Study 3: An educational psychologist wants to study the effects of online learning.
Your school is asked to be part of the study. Only the IB students will take part in the study. Students are randomly allocated to either the "online" or "in person" condition for IB English and IB mathematics. Each student will have three weeks of instruction and then be tested on their understanding of the unit. The researcher found that there was no significant difference in the performance on the end-of-unit assessment in either condition.

Study 4: A cognitive psychologist wants to see if people are more willing to spend money if it is cash or credit card. The sample is made up of students from a competitive university. Each student is in one of two conditions. In the first condition, they are given 300 USD cash. In the second condition, they are given a credit card and told that it has a 300 USD limit. They are then directed to a website that sells items that are often purchased for students' dormitory rooms. They are told that they can spend the money as they wish. The researchers found that those that used cash tended to buy more items that were lower priced; those that had the credit card bought fewer items that were higher priced.

Experimental Design

When we talk about experiments, we talk about the design that is used - in other words, what strategy was used for the experiment? The design of an experiment should effectively address the research problem that is being investigated. In the IB psychology course, we usually discuss three designs: A within-subjects design (repeated measures). A between-subjects design (independent samples). A matched pairs design.

Experimental design: 1. Repeated measures design (RMD): the same group of participants is tested under different conditions. 2. Matched pairs design (MPD): different groups of participants are matched on similar characteristics and tested under different conditions. 3. Independent groups design (IGD): different groups of participants are tested under different conditions.

Example: The intake of chocolate improves concentration. (RMD): the same group of participants will be tested in two different conditions (1. given chocolate, 2. not given chocolate). (MPD): participants of similar age are matched in pairs; within each pair, one is given chocolate and the other is not. (IGD): two groups of participants are selected; one group is given chocolate and the other is not. https://youtu.be/I6qarsVb3RU

Matched pairs design

A matched pairs design is an experimental design that is used when an experiment only has two treatment conditions. The subjects in the experiment are grouped together into pairs based on some variable they "match" on, such as age or gender. Then, within each pair, subjects are randomly assigned to different treatments.

Example of a matched pairs design: Suppose researchers want to know how a new diet affects weight loss compared to a standard diet. Since this experiment only has two treatment conditions (new diet and standard diet), they can use a matched pairs design. They recruit 100 subjects, then group the subjects into 50 pairs based on their age and gender. For example: A 25-year-old male will be paired with another 25-year-old male, since they "match" in terms of age and gender. A 30-year-old female will be paired with another 30-year-old female since they also match on age and gender, and so on. Then, within each pair, one subject will randomly be assigned to follow the new diet for 30 days and the other subject will be assigned to follow the standard diet for 30 days.
At the end of the 30 days, researchers will measure the total weight loss for each subject.

Activity

Break into groups of 2-3. Look at the following research question: Does raising one's level of self-esteem affect behaviour? Answer the following questions: 1: An operationalized research question. 2: Statement of a null and research hypothesis. 3: Operationalization of the independent variable. 4: Operationalization of the dependent variable. 5: A description of the procedure. https://www.thinkib.net/media/ib/psychology/files/self-esteem-experiment.pptx

1. IV: Self-esteem. 2. DV: Behaviour. Participants are asked to rate their self-esteem on a scale of one to 10; the scores are revealed to them, and they are then given a questionnaire consisting of questions about how they might react to or handle situations, to see whether the revealed self-esteem scores are reflected in the way they answer the questions (that is, whether the self-esteem test scores reflect the way they behave). Null hypothesis: raising or lowering self-esteem does not have an effect on behaviour. Research hypotheses: raising self-esteem has an effect on behaviour; lowering self-esteem has an effect on behaviour. This will be an independent samples design. Participants in the experimental group will get both worksheets; participants in the control group will get only the behaviour questionnaire. Condition 1: the experimental group will receive the self-esteem rating questionnaire and, once the score is revealed, they will be given a behaviour worksheet to answer. Condition 2: the control group will only get the behaviour worksheet. The scores of each group will be analyzed to see if the self-esteem questionnaire made a difference in the way participants answered the questions compared to the control group.

Ethical Considerations

Ethical issues in conducting the research. Example of ethical considerations in research: https://youtu.be/9hBfnXACsOI Ethical considerations in reporting the results. Example of ethics in results.

Ethical Issues

If deception is unavoidable, there are measures that can be taken. Why do we follow ethical procedures? It is often thought to prevent lawsuits! However, the bigger issue is the public trust, so that we can obtain participants for future experiments. There are several classic studies that are rather unethical that could be discussed. Watch the Milgram study video. It is a modern version. Pay attention to the behaviour of the participants. After the video, let us have a discussion about the ethics of this experiment. https://youtu.be/Xxq4QtK3j0Y

Questions: 1. Do you think that there is value in Derren Brown replicating the original Milgram experiment? Why or why not? 2. Psychologists predicted that Milgram would only have a very small number of people actually obey orders. Do you think that based on that prediction they should have stopped the experiment? 3. Which ethical protocols were broken in this experiment? Of the ethical protocols that were violated, which do you consider the most significant? Why?

Activity: https://www.thinkib.net/media/ib/psychology/files/ethics-proposals-research-rev.pdf Write down the key words of ethical consideration. Answers to the activity.

Point to remember (Paper 3 and IA): In the second set of questions, one of the two questions is always asked. Students will be asked one of the following questions: Describe the ethical considerations that were applied in the study and explain if further ethical considerations could be applied.
Describe the ethical considerations in reporting the results and explain ethical considerations that could be taken into account when applying the findings of the study.

Things to remember: For the first question, all ethical considerations that were stated in the text should be described. In addition, other considerations that have not been mentioned should be addressed. It would be best to think of the Magic 6: Consent, Anonymity, Right to withdraw, Debriefing, Undue stress or harm, Deception. It has a stupid acronym: CAR DUD! (Or Dud Car, if you like!) The second question has two parts. When describing the reporting of the results, it is important to consider anonymity and communicating the results of the study effectively to the readers. The second part of the question asks students to think about what concerns there may be in applying the findings. This is a hypothetical exercise, but could focus on its potential effects on different groups, leading to stress or discrimination. You can see more on how to approach this second question in the next slide.

Generalizability refers to the extent to which the results of the study can be applied beyond the sample and the settings used in the study itself. The factors of generalizability in quantitative research are as follows: External validity: refers to the extent to which the conclusions from your research study can be generalized to people outside of your study. Mundane realism: describes the degree to which the materials and procedures involved in an experiment are similar to events that occur in the real world. Ecological validity: refers to the extent to which the findings of a research study can be generalized to real-life settings. Temporal validity: refers to the extent to which the findings and conclusions of a study are valid when we consider the differences and progressions that come with time. Population validity: describes how well the sample used can be extrapolated to a population as a whole. Construct validity: refers to how well a test or tool measures the construct that it was designed to measure, e.g. how well can the BDI measure depression?

The factors of generalizability in qualitative research are as follows: Sample-to-population generalization: making conclusions about a population larger than your sample based on your research findings. Theoretical generalization: when researchers attempt to expand the quality and applicability of theory by generalizing the findings of the study to existing theory. Case-to-case generalization (transferability): the extent to which findings from one case can be transferred to other, similar cases or contexts.

Credibility: the quality of being believable or trustworthy. The factors of credibility in quantitative research are as follows: 1. Internal validity: are the researchers testing what they say they are testing? 2. External validity: the extent to which the results of a study can be generalized to other situations and to other people. 3. Population validity: the degree to which the study results can be generalized to and across the people in the target population. 4. Ecological validity: how the experimental environment may have affected the behaviour of the participants.
How to write about credibility in quantitative research: Describe the seesaw relationship between internal and external validity (and how the researcher has to weigh one against the other). Discuss controls that could have been placed to increase internal validity. Discuss threats to internal validity. Discuss potential population validity based on the sampling technique. Discuss the ecological validity and mundane realism of the study.

The factors of credibility in qualitative research are as follows: 1. Phenomenological approach: the research is only credible to the degree that the participants agree that the findings accurately reflect their reality. 2. Data triangulation: using more than one source of data to enhance the validity and credibility of your results. (a) Method triangulation: the use of multiple methods of data collection to study the same phenomenon. (b) Researcher triangulation: the use of several observers, interviewers, or researchers to compare and check data collection and interpretation. 3. Member checking: where data and interpretations are checked with the participants. This allows participants to clarify their intentions, correct errors and provide additional information if needed. 4. Aggregate research (grounded theory & external reliability): research is more credible when it reflects grounded theory and previous research - that is, theory that has been supported by empirical evidence. When other researchers confirm the findings of the study, this increases its potential credibility (external reliability).

How to write about credibility for a qualitative research stimulus: Define credibility. 1. Phenomenological approach. 2. Discuss triangulation: method, data, researcher. 3. Discuss member checking. 4. Aggregate research (grounded theory & external reliability).

What is bias in qualitative research? Sources of bias in qualitative research can be associated both with the researcher and the participant. Hence there are two groups of biases: participant bias and researcher bias.

Types of participant bias: 1. Acquiescence bias: a tendency to give positive answers whatever the question. It may occur due to the participant's natural agreeableness or because the participant feels uncomfortable disagreeing with something in the research situation. Ways to overcome the bias: researchers should be careful not to ask leading questions. Questions should be open-ended and neutral. It should be clear that there are no "right" or "wrong" answers. 2. Social desirability bias: participants' tendency to respond or behave in a way that they think will make them more liked or more accepted. Intentionally or unintentionally, participants may be trying to produce a certain impression instead of behaving naturally, and this is especially true for sensitive topics. Ways to overcome the bias: questions should be phrased in a non-judgemental way. Good rapport should be established. Questions can be asked about a third person ("What do your friends think about this?"). 3. Dominant respondent bias: occurs in a group interview setting when one of the participants influences the behavior of the others.
Other participants may be intimidated by such people or feel like they will be compared to the dominant respondent. Ways to overcome the bias: researchers should be trained to keep dominant respondents in check and try to provide everyone with equal opportunities to speak. 4. Sensitivity bias: a tendency of participants to answer regular questions honestly but distort their responses to questions on sensitive topics. Ways to overcome the bias: building good rapport and creating trust. Reinforcing ethical considerations such as confidentiality. Increasing the sensitivity of the questions gradually.

Types of researcher bias: 1. Confirmation bias: occurs when the researcher has a prior belief and uses the research (intentionally or unintentionally) to confirm this belief. It may manifest itself in such things as selectivity of attention or tiny differences in non-verbal behavior that may influence the participants. Ways to overcome the bias: strictly speaking, this is unavoidable because in qualitative research the human observer is an integral part of the process. However, this bias can be recognized and taken into account through the process of reflexivity. 2. Leading questions bias: occurs when the questions in an interview are worded in a way that encourages a certain answer. For example, "When did you last have angry thoughts about your classmates?" Ways to overcome the bias: interviewers should be trained in asking open-ended, neutral questions. 3. Question order bias: occurs when the response to one question influences the participant's responses to subsequent questions. Ways to overcome the bias: this bias cannot be avoided but can be minimized by asking general questions before specific ones, positive questions before negative ones, and behavior-related questions before attitude-related questions. 4. Biased reporting: occurs when some findings of the study are not equally represented in the research report. Ways to overcome the bias: reflexivity. Also, independent researchers may be asked to review the results (researcher triangulation).

Bias in quantitative research: 1. Demand characteristics: occur when participants understand the true aim of the experiment and then alter their behavior (intentionally or unintentionally). How to avoid demand characteristics: deception, post-experimental questions, repeated measures. 2. Experimenter bias: occurs when the researcher unintentionally influences participants' behavior and the results of the study. How to avoid: double blind study, researcher triangulation. 3. Selection (participant variability): for some reason the groups are not entirely equivalent at the start of the experiment, and the way in which they differ affects the relationship between the IV and the DV. How to avoid: random allocation; make sure you are allocating by subgroups (gender, mentality, etc.). 4. Testing effect: the first measurement of the DV may affect the second measurement. How to avoid: use a control group, counterbalance. 5. Instrumentation: occurs when the instrument measuring the DV changes slightly between measurements, compromising standardization of the measurement process. It is often a human observer. How to avoid: standardize the measurement conditions, train the researchers, and use researcher triangulation. 6. History: outside events that happen to participants in the course of the experiment.
How to avoid: eliminate confounding variables, replication, triangulation. 7. Maturation: the natural changes that participants go through in the course of the experiment, such as fatigue or growth. How to avoid: use a control group at the same time and with the same measurements, but no treatment. 8. Experimental mortality: occurs when some participants drop out of the experiment. It only becomes a problem when the rate of dropping out is not the same in every experimental condition. How to avoid: design the study to keep participants, use protection from harm.

Sampling in quantitative and qualitative research

Sampling techniques in quantitative research are as follows: Stratified. Random. Self-selected. Opportunity.

Sampling techniques in qualitative research are as follows: Purposive. Quota: quota sampling is defined as a non-probability sampling method in which researchers create a sample involving individuals that represent a population. Researchers choose these individuals according to specific traits or qualities. They decide and create quotas so that the market research samples can be useful in collecting data. These samples can be generalized to the entire population. The final subset will be decided only according to the interviewer's or researcher's knowledge of the population. For example, a cigarette company wants to find out what age group prefers what brand of cigarettes in a particular city. They apply survey quotas to the age groups of 21-30, 31-40, 41-50, and 51+. From this information, the researcher gauges the smoking trend among the population of the city. Snowball. Theoretical: theoretical sampling is a process of data collection for generating theory whereby the analyst jointly collects, codes, and analyses data and decides what data to collect next and where to find them, in order to develop a theory as it emerges. Convenience: convenience sampling is a type of non-probability sampling that involves selecting participants for a study from those who are readily available and willing to participate. This type of sampling is often used in field studies or when conducting research with hard-to-reach populations.

Reliability and validity

Reliability: the result should be consistent even if it is tested many times. Validity: the test should measure exactly what it intends to measure.

Limitations of the experiment

This topic will be useful for the critical thinking aspect of evaluation. One of the limitations of an experiment could be that a variable was not controlled. A variable that influences the results of an experiment is called an extraneous variable. An example of an extraneous variable could be a trait of a participant that was not controlled for. If I am testing to see if music has an effect on one's ability to recall a list of 20 words, but I didn't check to see if all participants were native English speakers, then the fact that in one group I had significantly more non-native speakers could be a confounding variable. Extraneous variables can also be found in the materials of the study. For example, it may be that the words were all one-syllable, which may have made them more easily recallable. This could then be seen as an extraneous variable that may have affected the results, rather than the IV that I was manipulating - that is, listening to music.
Failure to control for extraneous (confounding) variables means that the internal validity of a study is compromised - that is, we cannot be sure that the study actually tested what it claims to test, and the results may not actually demonstrate a link between the independent and dependent variable.

Methodological considerations: another limitation

Methodological considerations have to do with the design and procedure of the experiment. Discussing methodological considerations is one of the key evaluative strategies when discussing research. One concern that researchers have is what is called participant biases - or demand characteristics. This is when participants form an interpretation of the aim of the researcher's study and either subconsciously or consciously change their behavior to fit that interpretation. This is more common in a repeated measures design where the participants are asked to take part in more than one condition of the independent variable. However, it may also occur in an independent samples design. Participant biases are also a problem in observations and interviews. There are at least four different types of participant biases.

Expectancy effect is when a participant acts a certain way because he wants to do what the researcher asks. This is a form of compliance - the participant is doing what he or she is expected to do. Often, simply knowing that you are in an experiment makes you more likely to do something that you would never do in normal life. And this is where experiments can be problematic. Orne (1962) carried out a simple study to test the effect of demand characteristics. Participants were asked to solve addition problems made up of random numbers. There were 224 numbers per page. Each time they completed a page, they were asked to tear up the sheet into at least 32 pieces and then move on to the next page in the pile. There were 2000 sheets of paper in the pile. The researcher told them to keep working until they were told to stop. In spite of how boring the task was - as well as how useless the task obviously was - the participants continued for several hours because they were "doing an experiment."

Not everyone who "figures out" an experiment will try to please a researcher. The screw you effect occurs when a participant attempts to figure out the researcher's hypotheses, but only in order to destroy the credibility of the study. Although this is not so common, there are certain types of sampling techniques that are more likely to lead to this demand characteristic. For example, if you are using an opportunity sample made up of students who give consent, but feel that there was undue pressure on them to take part, then the screw you effect is more likely. This may be the case when professors at university require students to take part in studies to meet course requirements - or even in your internal assessment... The screw you effect may also happen when the researcher comes across as arrogant or condescending in some way and the participants then decide to mess up the results.

Participants usually act in a way that protects their sense of self-esteem. This may lead to the social desirability effect. This is when participants react in a certain way because they feel that this is the "socially acceptable" thing to do - and they know that they are being observed.
This may make helping behaviour more likely; or, in an interview to determine levels of prejudice and stereotyping, participants may give "ideal" answers to look good rather than express their actual opinions.

Finally, sometimes participants simply act differently because they are being observed. This is a phenomenon known as reactivity. The change may be positive or negative depending on the situation. In an experiment on problem solving, a participant may be very anxious, knowing that he is being watched, and then make more mistakes than he usually would under normal circumstances. You can probably imagine that reactivity plays a significant role in interviews - especially clinical interviews - where the interviewee may demonstrate anxiety, overconfidence or paranoia as a result of being observed.

Controlling for demand characteristics

1. Use an independent samples design. By not being exposed to both conditions, participants are less likely to figure out the goal of the experiment.
2. During the debriefing, be sure to ask the participants if they know what was being tested. If they answer yes, this may have had an influence on the results.
3. Deception is often used in experiments in order to avoid demand characteristics; however, this may lead to ethical problems if the deception causes undue stress or harm to the participant.

Another limitation is order effects

Order effects are changes in participants' responses that result from the order (e.g. first, second, third) in which the experimental conditions are presented to them. This is a limitation of a repeated measures design - for example, testing whether music affects one's ability to memorize a list of words, where the participants are exposed to a series of different types of music. There are three common order effects that affect the results of a study.

First, there are fatigue effects. This is simply the fact that when asked to take part in several conditions of the same experiment, participants may get tired or bored. In either case, they may lose motivation to try their best or their concentration may be impaired, influencing the results.

Another limitation of the proposed "effects of different types of music on learning a list of words" study is called interference effects. This is when the fact that you have taken part in one condition affects your ability to take part in the next condition. For example, in an experiment you are asked to memorize a list of twenty words in silence. Now, with music playing, you are asked to memorize a different list of words. When you recall the second list, the researcher may find that some of the words you produce are actually from the first list - your memory of the first list is interfering with your learning of the second. This is an example of an interference effect influencing the final results.

Finally, when we ask participants to do a task repeatedly, we may see that they improve as a result of practice effects. For example, if I want to see how long it takes participants to solve a Sudoku puzzle under different conditions, it is possible that they are faster in later conditions simply because they are getting better at doing the puzzles - the practice effect.

Controlling for order effects

1. One control is called counterbalancing. This is when you vary the order in which the conditions are tested. For example, in condition A, participants are asked to recall a list of twenty words without music; in condition B, they are tested with music. Participants are randomly allocated to group 1 or group 2.
Although this is still a repeated measures design, group 1 is tested first with condition A and then with condition B, while group 2 is tested first with condition B and then with condition A. If order effects did not play a role in the research, then the results should be the same for both groups.
2. There needs to be a long enough pause between conditions.
3. Often researchers use a filler task in order to clear the "mental palette" of the participants. This controls for interference effects. For example, after being shown the first list of words without music, the participants are asked to recall as many words as possible. The researcher then has the participants count backwards by 3s from 100. Then the second condition is administered.

Another limitation is researcher bias

Researcher bias is when the beliefs or opinions of the researcher influence the outcomes or conclusions of the research. There are several ways that this may occur.

First, there is the problem of confirmation bias. Confirmation bias is when a researcher searches for or interprets information in a way that confirms a pre-existing belief or hypothesis. For example, if a researcher is observing children on a playground and believes that boys are more aggressive than girls, the researcher may record more examples of male aggression and not pay attention to aggressive female behaviour.

Another form of researcher bias is the questionable practice of p-hacking. This is when a researcher tries to find patterns in their collected data that can be presented as statistically significant, without first positing a specific hypothesis. For example, I am doing a study of the effect of music on the ability to memorize a list of words. The original plan was that I would have participants listen to different types of music and compare them to silence. If I find no significant difference but keep re-analysing the data - for example, splitting it by age or gender until something comes out as significant - and then report that result as if it had been predicted all along, I am p-hacking.

Controlling for researcher biases

1. Researchers should decide on a hypothesis before carrying out their research. They should not go back and adjust their research hypothesis in response to their results, but instead run another experiment to test the new hypothesis.
2. To control for confirmation bias, researchers can use researcher triangulation to improve inter-rater reliability. As a researcher, I work as part of a team where we are all observing the children on the playground. If we all observe the same level of aggression in males and females, it is likely that we have avoided confirmation bias.
3. A double-blind control is the standard control for researcher bias. In a double-blind control, the participants are randomly allocated to an experimental and a control group. The participants are not aware of which group they are in. In addition, only a third party knows which participants received which treatment - so the researcher who examines and interprets the data does not know who received which treatment. (A short illustrative sketch of random allocation and counterbalancing appears below, after the discussion of construct validity.)

Finally, there are other ways that the validity of a study may be compromised, besides the effect of confounding variables. Internal validity may also be affected by the construct validity of a study - that is, whether the measure really is measuring the theoretical construct it is supposed to measure. This has to do with the operationalization of the variables. If you are doing a study of European attitudes towards Americans, asking people whether they watch US films, own an iPhone, watch CNN or wear American designer clothes would not be a good measure of "pro-American attitudes." There are several problematic constructs in psychology - including intelligence, communication, love and aggression.
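To make the controls described above more concrete, here is a minimal sketch of random allocation with counterbalancing, using hypothetical participant labels and the no-music/music conditions from the running example. It illustrates the general idea only; it is not a procedure taken from any particular study.

```python
import random

random.seed(42)

# Hypothetical participant labels
participants = ["P01", "P02", "P03", "P04", "P05", "P06", "P07", "P08"]
random.shuffle(participants)                     # random allocation
group_1, group_2 = participants[:4], participants[4:]

# Counterbalancing: group 1 does condition A then B, group 2 does B then A.
orders = {p: ("A: no music", "B: music") for p in group_1}
orders.update({p: ("B: music", "A: no music") for p in group_2})

for participant, order in sorted(orders.items()):
    print(participant, "->", " then ".join(order))

# For a double-blind style control, a third party would hold the key linking
# anonymous codes to participants and conditions, so that the researcher who
# scores the recall sheets only ever sees the codes.
```

If order effects play no role, the two groups should produce comparable results; a systematic difference between them would suggest that the order of the conditions matters.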
It is also important to discuss the external validity of a study. External validity is the extent to which the results of a study can be generalized to other situations and to other people. There are two key ways of assessing external validity.

One is to determine whether the sample is biased. If the sample is not representative of the population that it is drawn from, then the results are not generalizable to that population and the study lacks external validity. This is also known as population validity.

The second way in which external validity can be assessed is to consider the ecological validity of a study. The basic question that ecological validity asks is: can the results of this study be generalized beyond this situation? Often the situation in a laboratory is so highly controlled that it does not reflect what happens in real life - so we cannot say that it would predict what would happen under normal circumstances, and the study lacks ecological validity. It can also be that the situation was so artificial that it does not represent what usually happens in real life - for example, being asked to shock a stranger in a lab, or watching a video of a car crash rather than actually witnessing one in real life.

Finally, whenever we discuss research we should always consider the ethics of the experiment. When researchers do not follow ethical protocols, the research cannot (or should not) be replicated. This means that the results cannot be shown to be reliable.

Task I. Evaluating research

Priming is defined as activating particular representations or associations in memory just before carrying out an action or task. Having people think about a positive experience can influence how they experience a follow-up experience. Research has even shown that we can be primed by the look of the food on our plate: when the food is served in a way that is more artistic, people tend to enjoy the food more than if the same ingredients are just dumped on the plate.

Watch the following video clip taken from Bang Goes the Theory. You will be shown one study at a time. After each study, answer the following questions: How valid do you think the results of this study are? What are the limitations of the study?

https://youtu.be/x1pLHVMO4ho

Working with the vocabulary

The goal of this worksheet is to see how much you have been learning from the lessons and the text. Work in pairs to see how many of the terms you can define.

https://www.thinkib.net/media/ib/psychology/files/limitations-of-exp-vocab-rev.pdf

Data analysis (qualitative & quantitative)

Quantitative data is in numerical form. Qualitative data is in non-numerical form. It gives insight into feelings, thoughts and emotions, but the analysis can be affected by researcher bias; therefore, it can be converted into quantitative data through content analysis and inductive content analysis.

1. Meta-analysis - the results of several studies on the same question are combined and analyzed together.
2. Content analysis - data are converted into coding units and tested.
3. Inductive content analysis - themes are identified from the data and then used to analyze the results.
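As a rough illustration of how content analysis can turn qualitative data into quantitative data, the sketch below counts how often a few coding units appear in some made-up interview answers. The answers and coding units are invented for the example; real content analysis would involve a carefully developed coding scheme rather than simple word counting.

```python
# Minimal sketch: converting qualitative responses into quantitative data
# by counting occurrences of predefined coding units (hypothetical data).
answers = [
    "I felt anxious before the test but proud afterwards.",
    "Mostly I was bored, and a bit anxious about the time limit.",
    "I was proud of my score and not anxious at all.",
]

coding_units = ["anxious", "proud", "bored"]

counts = {unit: 0 for unit in coding_units}
for answer in answers:
    text = answer.lower()
    for unit in coding_units:
        counts[unit] += text.count(unit)

print(counts)   # {'anxious': 3, 'proud': 2, 'bored': 1}
```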
