Questions and Answers
What is the primary purpose of operationalizing variables in research?
- To eliminate the need for a hypothesis.
- To increase the subjective interpretation of data.
- To define how variables will be measured or manipulated. (correct)
- To make the variables more abstract.
A null hypothesis (H0) assumes there is a significant difference between the conditions or variables being studied from the outset.
False
What is the key difference between an aim and a hypothesis in research?
An aim is a broad statement of intent, while a hypothesis is a specific, testable prediction.
In experimental research, the variable that is manipulated by the researcher is known as the ______ variable.
independent
Match the following sampling methods with their descriptions:
What design involves participants completing all conditions of the experiment?
Repeated measures design
Field experiments typically have higher internal validity compared to lab experiments due to the natural setting.
False
Define what is meant by 'demand characteristics' and explain how they could affect a study.
Demand characteristics are cues that influence participant behavior during a study; if participants guess the aims and change how they act, the accuracy and validity of the results are reduced.
The method of controlling for order effects in a repeated measures design by splitting the sample into groups experiencing different condition orders is called ______.
counterbalancing
Which of the following is a key advantage of natural experiments?
Open questions in self-report techniques primarily yield quantitative data that is easily analyzed statistically.
False
Describe one strategy researchers can use to address social desirability bias in self-report techniques.
In interviews, asking a pre-prepared list of questions in a fixed order is characteristic of a ______ interview.
structured
Which of the following is a primary limitation of case studies?
Overt observations guarantee more natural behavior from participants since they are aware of being observed.
False
What is inter-rater reliability, and why is it important?
Inter-rater reliability is the extent to which multiple observers watching the same behavior record similar results; it is important because it shows the behavioral categories are being applied consistently, making the observations reliable.
A ______ correlation indicates that as one variable increases, the other variable decreases.
negative
What does informed consent entail?
Participants agree to take part while being aware of the study's aims, purpose, and consequences.
Which measure of central tendency is most sensitive to extreme scores?
The mean
Histograms are used to display the frequency of nominal data, where the bars do not touch each other.
False
Flashcards
Hypothesis
A precise, testable statement about how study parts interact, specifying the levels of the independent variable and the dependent variable in experiments.
Operationalization
Stating specifically how variables are measured in the study.
Null Hypothesis (H0)
States there is no difference between the variables being studied.
Alternate Hypothesis (H1)
Suggests there is a difference between the conditions being studied; acts as the research hypothesis.
Extraneous Variables
Variables other than the independent variable that, if uncontrolled, can impact internal validity by providing alternative explanations for study findings.
Demand Characteristics
Cues that influence participant behavior, potentially affecting the accuracy of results.
Counterbalancing
Controlling for order effects in a repeated measures design by splitting the sample into groups that experience the conditions in different orders.
Standardized Procedures
Giving every participant the same experience aside from changes in the independent variable.
Sample
The subset of the target population selected to take part in a study.
Generalization
Applying results from a sample back to its broader target population.
Systematic Sampling
Selecting every nth member from a list of the population.
Opportunity Sampling
Selecting participants who are easily accessible to the researcher.
Independent Groups Design
Splitting the sample into separate groups, each completing a different condition.
Repeated Measures Design
A design in which each participant completes all conditions of the experiment.
Matched Pairs Design
Participants are matched on an important variable, and each member of a pair completes a different condition.
Field Experiments
Experiments conducted in natural, real-world settings such as shopping centers, workplaces, or schools.
Natural Experiment
A study in which the levels of the independent variable have already occurred naturally and the researcher measures the dependent variable.
Questionnaires
A list of pre-prepared questions sent to participants to answer and return.
Observation
Watching and recording behavior as it happens.
Inter-rater reliability
The extent to which multiple observers record similar results; assessed by correlating their data sets.
Study Notes
Research Methods Overview
- Covering hypothesis formation, variables, sampling, research design, correlation, procedures, planning, ethics, and data handling.
- Data handling includes quantitative and qualitative data, primary and secondary data, computation, descriptive statistics, and data interpretation.
Formulation of Testable Hypotheses
- An aim narrows down a general area of interest into a clear statement of intent, including the purpose of the study (e.g., replicating previous research).
- A hypothesis is a precise, testable statement about how study parts interact, specifying independent variable levels and the dependent variable in experiments.
- Variables need to be operationalized, stating how they are measured (e.g., "number of words recalled" instead of "recall").
- A null hypothesis (H0) begins with the assumption of no difference between conditions or variables.
- An alternate hypothesis (H1) suggests a difference between conditions, acting as a research hypothesis.
- Evidence from statistical tests determines the acceptance or rejection of the null hypothesis in favor of the alternate hypothesis.
Types of Variables
- Correlational studies measure co-variables to find relationships (positive or negative), but cannot establish causation.
- Experimental setups manipulate an independent variable to measure changes in the dependent variable.
- Operationalization is crucial for dependent variables to specify how they are measured (e.g., reduction in a hostility questionnaire score).
- Extraneous variables, if uncontrolled, can impact internal validity by providing alternate explanations for study findings.
- Demand characteristics are cues that influence participant behavior, potentially affecting the accuracy of results.
Control of Extraneous Variables
- Random allocation and matched pairs designs are used to control for participant variables.
- Counterbalancing is used to control for order effects by splitting the sample, reducing the impact of practice or fatigue.
- Standardized procedures control situational variables, giving each participant the same experience aside from changes in the independent variable.
- Single and double-blind trials control for demand characteristics and investigator effects to prevent bias.
- Pilot studies and peer review help identify extraneous variables before the main study.
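As a rough illustration of random allocation and counterbalancing from the list above, here is a minimal Python sketch; the participant IDs, group sizes, and condition labels are hypothetical.

```python
import random

# Hypothetical participant IDs.
participants = ["P01", "P02", "P03", "P04", "P05", "P06", "P07", "P08"]

# Random allocation: shuffle, then split in half, so chance rather than
# the researcher decides who ends up in each condition.
random.shuffle(participants)
half = len(participants) // 2
group_a, group_b = participants[:half], participants[half:]

# Counterbalancing: half the sample completes the conditions in one order,
# the other half in the reverse order, spreading practice and fatigue
# effects evenly across conditions.
orders = {
    "group_a": ["condition_1", "condition_2"],
    "group_b": ["condition_2", "condition_1"],
}

print("Group A:", group_a, "order:", orders["group_a"])
print("Group B:", group_b, "order:", orders["group_b"])
```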
Sampling Methods
- Target population are individuals that form the broader group for study, while a sample is a selected subset.
- Generalization is applying results from a sample back to its broader target population.
- Random sampling gives every individual in the target population an equal chance of being selected for the sample.
- Systematic sampling selects every nth member from a population list.
- Opportunity sampling includes participants easily accessible to the researcher.
- Volunteer sampling relies on self-selecting participants through advertisements or announcements.
- Stratified sampling involves identifying subgroups within a population and sampling proportionally from each.
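A minimal sketch of how random and systematic sampling could be carried out on a population list; the population, sample size, and sampling interval are hypothetical.

```python
import random

# Hypothetical population list of 100 students.
population = [f"student_{i:03d}" for i in range(1, 101)]

# Random sampling: every individual has the same chance of selection.
random_sample = random.sample(population, 10)

# Systematic sampling: select every nth member from the population list.
n = 10
systematic_sample = population[::n]

# Opportunity sampling has no algorithm: it is simply whoever happens to
# be conveniently available to the researcher.
print(random_sample)
print(systematic_sample)
```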
Designing Research: The Experimental Method
- Repeated measures design involves participants completing all conditions of the experiment.
- Independent groups design splits the participant sample into groups completing different conditions.
- Matched pairs design matches participants based on characteristics to balance variables across conditions.
Independent Groups Design
- Different people participate in each condition and data is considered unrelated.
- Prone to participant variables influencing results if groups are not well-balanced.
Repeated Measures Design
- Participants complete all conditions, allowing for comparison of each individual's performance in each condition.
- Data is related, but it is subject to order effects, where performance changes due to practice or fatigue; counterbalancing helps control this.
- There is likely a higher chance of demand characteristics.
Matched Pairs Design
- Aims to reduce participant and order effects.
- Participants are assessed on an important variable.
- They are then ranked so that the most similar participants form pairs.
- Each member of a pair then completes a different condition of the testing.
- The results are then compared as related data.
- Takes longer to set up than other designs.
Laboratory Experiments
- A lab experiment involves the experimenter having full control over the experimental environment.
- Environmental factors, such as noise, temperature, and instructions, are highly controlled and standardized.
- The experimenter manipulates one factor, the independent variable, and keeps all other variables constant.
- The experimenter measures how changes in the independent variable affect the dependent variable.
- Lab experiments allow researchers to suggest cause-and-effect relationships due to control over variables.
- Lab experiments have high internal validity because the observed effect is likely due to the independent variable.
- Lab experiments are highly replicable due to standardized procedures.
- Lab experiments may lack external validity, as behaviors observed in labs may not generalize to real-world settings.
- Lab tasks may lack mundane realism, meaning they are not like real-world tasks.
- Participants in lab experiments may alter their behavior due to awareness of being studied, leading to demand characteristics.
Field Experiments
- Field experiments are conducted in real-world settings to address weaknesses of lab studies.
- Field experiments take place in natural settings like shopping centers, workplaces, or schools.
- A strength of field experiments is increased external validity.
- Participants are expected to show more natural behavior in their natural environment, enhancing ecological validity.
- Tasks used in field experiments are more likely to be real-world tasks, increasing mundane realism.
- Demand characteristics are less of a problem if participants are unaware of being studied.
- A weakness of field experiments is the lack of control compared to lab experiments.
- The real world is chaotic, and controlling every possible variable affecting the dependent variable is difficult.
- Extraneous variables can influence measurements.
- Researchers in field studies are often unable to randomly assign participants to conditions.
- Effects observed in field experiments may be due to factors other than the independent variable, reducing internal validity.
Natural Experiments
- In a natural experiment, the levels of the independent variable have already happened naturally.
- The researcher measures the change in the dependent variable.
- Natural experiments allow research into areas that cannot be studied otherwise due to ethical or cost reasons.
- Natural experiments have high external validity because changes have happened naturally in real life.
- Changes in behavior cannot be the result of demand characteristics.
- Researchers have no control over the experiment, such as randomizing participants or controlling extraneous variables.
- Other factors might have influenced the dependent variable.
- Researchers are less certain of a cause-and-effect relationship between the IV and DV than in lab studies.
- Situations occur naturally and are often rare, making replication difficult.
Self-Report Techniques
- Self-report techniques involve participants knowingly responding to questions, revealing personal information.
- Interviews involve real-time conversations with a researcher, either face-to-face or remotely.
- Questionnaires involve sending a list of pre-prepared questions to participants for them to answer and return.
- Both questionnaires and interviews can use open or closed questions.
- Open questions allow participants to answer in any way they want, providing qualitative data in the form of words.
- Closed questions provide a limited number of response options, giving quantitative data in the form of numbers.
- Closed questions yield quantitative data, so researchers can easily compare responses and use data analysis.
- Open questions, though harder to analyze, may lead to more valid, truthful answers.
- Questionnaires and interviews often combine open and closed questions.
- Researchers need to ensure questions are clear.
- Researchers should avoid complex terminology unfamiliar to participants.
- Interviewers can reword questions for clarity.
- Researchers need to avoid biased or leading questions.
- Researchers might consider piloting a questionnaire or interview to check for problems.
- Filler questions can be used to put participants at ease in interviews or hide the true aims of the study in questionnaires.
Structured, Unstructured, and Semi-Structured Interviews
- Interviews involve a back-and-forth series of questions in real time.
- Questions can be open, closed, or a mix.
- Interviews are often recorded for later review.
- Types of interviews include structured, unstructured, and semistructured.
- Structured interviews involve asking a full list of questions in order.
- The advantage of structured interviews is that a trained interviewer is not needed.
- It is easier to compare structured interviews because all interviewees have had the same experience.
- In structured interviews, follow-up questions cannot be asked if the interviewee says something interesting.
- Unstructured interviews occur when the interviewer does not have a set list of questions.
- Unstructured interviews are free-flowing, informal conversations with a general topic.
- An advantage of unstructured interviews is that the interviewer is likely to develop a rapport with participants.
- With unstructured interviews, you can develop a point if the interviewee says something interesting.
- With unstructured interviews, you need a highly trained interviewer.
- Because every unstructured interview is different, it is hard to compare multiple interviews.
- Semi-structured interviews combine prepared questions with the ability to ask new ones.
- The interviewer is highly trained to think of the right questions to ask.
- Semi-structured interviews still have fixed questions for every participant.
Evaluating Self-Report Techniques
- Self-report techniques are easy to replicate and allow detailed information from participants.
- They suffer from bias, especially social desirability bias.
- Questionnaires do not require trained interviewers, so they are very cheap to give to large numbers of people.
- Problematic questions in questionnaires cannot be dealt with in the moment.
- Participants might not take questionnaires seriously, leading to acquiescence bias.
- Use a similar question later in the questionnaire, but phrase it in the opposite way, to check for acquiescence bias.
- Interviews can rephrase questions and build rapport, but they need a highly trained interviewer.
- Interviews may have smaller numbers of participants and a higher cost.
- An additional problem of interviews is interviewer effects.
- Teenagers might give different responses about sex, drugs, or their opinions of older people depending on whether the interviewer is the same age and gender as them or much older and of the opposite gender.
Case Studies
- Involves gathering information on an individual, group, or organization.
- Researchers can include interviews, observations, experimental findings, and content analysis.
- Case studies have a high level of detail about the individual or group.
- Case studies tend to be investigations of psychologically unusual individuals.
- Case studies can be done on events or organizations.
- Case studies can be done on a group of typical members of a demographic.
- The type of data collected is usually qualitative information in the form of words, and data in the form of numbers can be used to back up qualitative findings.
- The duration of a case study can be short or long, and is called a snapshot or longitudinal study, respectively.
- Case studies have been used significantly in clinical psychology.
- Freud used a number of case studies to develop psychodynamic theory.
- Case studies of children with abnormal upbringings can be used to test theories of childhood development.
Evaluations of Case Studies
- There is no other method that collects as much in-depth and rich information about individuals.
- This leads to a high level of realism that can be argued to be highly valid.
- Case studies often look at the behavior of very rare individuals.
- Case studies are often the only way to study certain behaviors.
- Just one unusual case study can show a pre-existing psychological theory is incorrect or maybe just not yet complete.
- Most of the critical evaluations question the scientific nature of case studies.
- Case studies are often completed long after events and depend heavily on memory, so what's recorded is often inaccurate.
- Interview problems, such as social desirability bias, are also present.
- Findings from case studies cannot be generalized to wider populations.
- Exact replication of case studies is impossible.
- Researchers decide what to include in the report and might only include data that supports their ideas.
- Researchers may lose objectivity when interpreting behavior.
- While case studies shouldn't be generalized and lack the scientific credibility of experimental methods, they can generate hypotheses that can be tested empirically and then ultimately accepted.
- Over 100 years after the deaths of Paul Broca and his patient Tan, fMRI scans can now confirm the existence of the region of the brain associated with speech production.
Designing Observation Studies
- Observation is when researchers watch and record behavior as it happens.
- Researchers have choices to make about the type of observation they want to conduct.
- One choice they need to make is between a controlled and a naturalistic observation.
- A controlled observation is when we control the situation the participants experience and record their behaviors.
- This is done in a lab which helps to control as many variables as possible.
- This gives the participants the same experience.
- An advantage of this approach is that we can reduce the effects of extraneous variables on participants' behavior and can repeat the observation to get reliable results.
- A big weakness is that the environment itself is artificial, and we may not see the same behavior in the participants' natural environment.
- Our other option is a naturalistic observation.
- The participants are observed in their normal environment and this has the advantage of high realism.
- The participants should behave as they normally would, so we can claim that our findings have external validity, in this case ecological validity.
- However, the lack of control means that there might be unknown extraneous variables causing the behavior.
- Another choice is between overt or covert observation.
- In an overt observation, the participants can see the observer and, critically, know they are being observed.
Observational Techniques
- Participants must consent to research participation, aligning with ethical guidelines.
- Awareness of being observed can alter participant behavior, known as demand characteristics.
- Covert observations mitigate demand characteristics by observing natural behavior.
- Covert observations may be unethical due to the lack of informed consent.
- Participant observation involves the researcher becoming part of the studied group.
- Participant observation allows firsthand knowledge of participant situations and promotes rapport.
- Participant observations risk researcher objectivity due to potential bias.
- Nonparticipant observation involves the researcher observing from a distance.
- Nonparticipant observation increases objectivity but may miss important findings.
Observational Design
- Operationalized behavioral categories are crucial in observation design for clear variable definition.
- Operationalizing involves defining a variable so that it can be objectively measured.
- For example, defining aggression through specific actions like punches, pushes, and kicks.
- Behavioral categories are listed, observable behaviors related to a target behavior.
- Frequency charts can help record data.
- Observational reliability can be assessed with a test of inter-rater reliability.
- Inter-rater reliability involves multiple researchers observing and comparing results for similarity.
- Data sets are compared using a correlation test such as Spearman's rho.
- A correlation coefficient of +0.8 or stronger indicates reliable results.
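A minimal sketch of the inter-rater check described above, assuming SciPy is available; the two observers' tallies are made-up numbers.

```python
from scipy.stats import spearmanr

# Hypothetical tallies of aggressive acts recorded by two observers
# watching the same ten children.
rater_1 = [3, 7, 2, 9, 5, 4, 8, 1, 6, 2]
rater_2 = [4, 6, 2, 9, 5, 3, 7, 1, 6, 3]

rho, p_value = spearmanr(rater_1, rater_2)

# A coefficient of +0.8 or stronger is conventionally taken to show that
# the observers are applying the behavioral categories consistently.
print(f"Spearman's rho = {rho:.2f}")
print("Reliable" if rho >= 0.8 else "Not reliable")
```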
Correlation Studies
- Correlational studies differ from experiments as they measure variables without manipulation.
- Correlation involves measuring and comparing two co-variables, such as age and IQ.
- Data from correlation studies is displayed on a scattergram.
- In a scattergram, either variable can be placed on the x- or y-axis.
- Positive correlation means both co-variables increase together.
- Negative correlation means one co-variable increases as the other decreases.
- Zero correlation means that there is no relationship between the co-variables.
Correlation Evaluations
- A critical evaluation to consider is that correlation does not equal causation.
- Correlation cannot determine which variable influences the other, and there may be a third variable.
- Measuring pre-existing co-variables provides no control over extraneous variables.
- Correlations can highlight potential causal relationships for further investigation.
- Correlational research often poses fewer ethical problems due to measuring pre-existing variables.
- The correlation coefficient is a useful tool in describing the strength of a correlation.
Ethical Considerations
- The American Psychological Association and the British Psychological Society (BPS) set ethical guidelines.
- Psychologists can bend ethical guidelines, but serious violations risk expulsion from the BPS.
Ethical Issues
- Participants should give informed consent, being aware of the study's aims, purpose, and consequences.
- If participants can't give consent, guardians may provide it.
- Participants have the right to withdraw at any stage and have their data destroyed.
- Researchers must consider potential risks to participants' well-being, health, values, and dignity.
- Confidentiality ensures personal records are secure and identities are protected in published results.
- Confidentiality may be broken if participants or others are in danger.
- Debriefing involves explaining the reasons and outcomes of the research and checking for harm.
Balancing Ethics and Validity
- Researchers balance ethical rights with gaining valid data and investigating sensitive topics.
- Alternatives to informed consent include prior general consent, retroactive consent, and presumptive consent.
- Prior general consent means participants agree in advance to a long list of things that could happen in a study.
- Retroactive consent is getting consent after the participants have taken part.
- Presumptive consent is asking a group similar to the participants whether they would agree to take part.
- These alternatives avoid demand characteristics by omitting the goal of the study.
- A cost-benefit analysis weighs costs to participants against benefits to society.
- An ethics committee evaluates research proposals based on ethical principles and may use cost-benefit analysis.
- Debriefing involves revealing any deception, explaining the other conditions or groups, reminding participants that they can withdraw their data, checking for harm, and offering support.
Data Types
- Quantitative data: data in the form of numbers.
- Qualitative data: data in the form of words, meaning descriptions of behavior, thoughts, and feelings.
- Recordings of observations and interviews can be coded and tallied to turn qualitative data into quantitative data.
- Content analysis can be used to turn qualitative data into quantitative data.
- Quantitative data is often collected in experiments and closed questionnaires.
- Qualitative data is suited to interviews, observations, open questionnaires, and case studies.
- Quantitative data is objective and less biased, while qualitative data is subjective and open to interpretation.
- Quantitative data techniques tend to be more reliable.
- A weakness of quantitative data is its lack of depth and detail.
- An advantage of qualitative data is that it provides a more detailed and valid measurement of human experience.
Primary vs. Secondary Data
- Primary data is first-hand data collected by the researchers themselves.
- Primary data is focused on the demands of the researcher's hypothesis.
- Secondary data is data that has already been collected and published.
- Secondary data was not collected to meet the specific demands of the researcher's hypothesis.
- Secondary data can be government or business statistics and records or previously published data from other studies.
- Primary data collection can be costly and time-consuming.
- Secondary data saves costs, as it is already collected, freely accessible, and ready to analyze.
- Primary data is more valid because its collection is shaped to the demands of the research question.
- When using secondary data, researchers must trust that the original researchers collected valid data.
Mode
- The mode is the most common or frequent score in a data set.
- A criticism is that in small data sets there are likely to be multiple modes, or no mode if every score is different, which doesn't give a clear average value.
- The mode doesn't include all the values in its calculation, making it not as sensitive as the mean.
Median
- The median is calculated by ordering the values from lowest to highest and using the middle number as the average.
- If there are an even number of data points, then the median is calculated as halfway between the two middle scores.
- Strengths: not affected by extreme outlier scores and easy to calculate.
- Weakness: not all the raw scores are used in its calculation, making it not as sensitive as the mean.
Mean
- The mean is calculated by adding all the scores together and dividing by the number of scores.
- Strengths: uses and represents all of the raw data in the calculation, making it sensitive.
- Weakness: easily shifted towards extreme values because of its sensitivity.
- The correct measure of central tendency should be used for the right situations due to their contrasting strengths and weaknesses.
Range
- The range is calculated by subtracting the smallest value from the largest.
- Adding one to this difference is also an acceptable way of stating the range.
- Strength: is very easy to calculate.
- Weakness: extreme scores easily distort its value, and it doesn't show whether the scores are spread out or clustered around the mean.
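A minimal sketch of these calculations using Python's built-in statistics module; the scores are hypothetical, with one deliberate outlier to show the mean's sensitivity.

```python
import statistics

# Hypothetical recall scores; 41 is a deliberate extreme outlier.
scores = [12, 15, 15, 17, 18, 20, 41]

mean = statistics.mean(scores)      # uses every score, so the outlier pulls it upward
median = statistics.median(scores)  # middle value; unaffected by the extreme score
mode = statistics.mode(scores)      # most frequent value
value_range = max(scores) - min(scores)

print(mean, median, mode, value_range)
```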
Computation of Percentages
- In exams, explain what a percentage means rather than simply stating what the percentage is.
- Percentages can also be expressed as a fraction or a decimal.
- To express one number as a percentage of another, write the fraction, convert the fraction into a decimal by dividing, and turn the decimal into a percentage by multiplying by 100.
- To find a percentage of a number, turn the percentage into a decimal and then multiply the decimal by that number (e.g., the number of participants).
- To work out the percentage change between two numbers, subtract the old number from the new number, divide this difference by the old number, and then multiply by 100.
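The three calculations above, written as a short Python sketch; all of the numbers are hypothetical.

```python
# 1. Express one number as a percentage of another: e.g. 12 out of 40 participants.
fraction_as_percent = (12 / 40) * 100      # 30.0%

# 2. Find a percentage of a number: e.g. 25% of 80 participants.
percent_of_number = 0.25 * 80              # 20.0 participants

# 3. Percentage change: subtract old from new, divide by old, multiply by 100.
#    E.g. a score rising from 40 to 50.
percentage_change = (50 - 40) / 40 * 100   # 25.0% increase

print(fraction_as_percent, percent_of_number, percentage_change)
```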
Interpretation and Display of Quantitative Data
- Raw data table: Quantitative results are gathered together into a raw data table when first collected before summarizing.
- Frequency table: Researchers also collect their raw data in the form of a frequency table, also known as a tally chart.
- A frequency table generally has three columns: a column with a behavior category or value of a variable, a column for a tally mark for each observation or record, and a final column for a total of the tally.
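A frequency table of this kind can be built directly from raw observations; here is a minimal sketch using Python's collections.Counter, with hypothetical "favorite pet" responses.

```python
from collections import Counter

# Hypothetical raw observations: each participant's favorite pet (nominal data).
responses = ["dog", "cat", "dog", "fish", "dog", "cat", "rabbit", "dog", "cat"]

frequency_table = Counter(responses)

# One row per category: category, tally marks, and the total.
for category, total in frequency_table.most_common():
    print(f"{category:<8} {'|' * total:<6} {total}")
```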
Bar Charts
- A bar chart is used to summarize the frequency of categorical data, also known as nominal data, which is data in distinct categories such as favorite pet.
- The categorical variable is usually placed on the x or horizontal axis, with the frequency or value on the y-axis, the vertical axis.
- The height of the bar shows the value or frequency of that category.
- Each bar is a distinct category and could be arranged in any order.
- The bars do not touch; touching bars suggest continuous data, which makes the chart a histogram.
- A bar chart can show more than one variable at the same time, in which case the chart will have a legend or will be appropriately labeled.
Scattergrams
- Scattergrams are used to show the relationship between two co-variables in correlational research.
- Because they are co-variables, it doesn't matter which variable goes on the x-axis and which on the y-axis.
- Each point on the scattergram represents two measurements of one participant.
- Common correlational relationships are positive and negative.
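A minimal sketch of plotting a scattergram, assuming matplotlib is installed; the co-variable data are made up.

```python
import matplotlib.pyplot as plt

# Hypothetical co-variables: hours of sleep and test score for ten participants.
hours_of_sleep = [4, 5, 5, 6, 6, 7, 7, 8, 8, 9]
test_score = [45, 50, 55, 58, 62, 64, 70, 72, 78, 80]

# Each point represents the two measurements taken from one participant;
# as co-variables, either measure could go on either axis.
plt.scatter(hours_of_sleep, test_score)
plt.xlabel("Hours of sleep")
plt.ylabel("Test score")
plt.title("Scattergram of sleep and test performance")
plt.show()
```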
Histograms
- Histograms show frequency.
- The bars are touching to show that the scale on the x-axis is continuous.
- Common examples are test scores, age group, and time scales like months and years.
- The bars have to be displayed in order along the scale for the data to make sense.
Normal Distributions
- After collecting data about the frequency of some continuous factor, it can be displayed on a graph that shows its frequency.
- This is known as a histogram, with the y-axis representing frequency and the x-axis showing the value of the scores.
- Sets of data that show this distinctive bell curve are what's known as a normal distribution.
- The two sides of the curve are symmetrical, i.e., the same on both sides.
- The mean, the median, and the mode are all at the top in the center of the curve.
- The sides of the curve don't touch the x-axis as there always theoretically could be extremely small or large scores.
- The mode will always be at the top of the curve as the mode is the most common or frequent score.
- 50% of the scores are on each side of the highest point with the same amount of area under each side.
- The extreme scores of each side balance out, keeping the mean in the center.
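A minimal sketch illustrating these properties with simulated data, assuming NumPy is available; the mean, standard deviation, and sample size are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(seed=1)

# Simulate 10,000 test scores drawn from a normal distribution
# (hypothetical mean of 100 and standard deviation of 15).
scores = rng.normal(loc=100, scale=15, size=10_000)

# For normally distributed data, the mean and median sit together at the
# center of the bell curve, with roughly 50% of scores on either side.
print("mean  :", round(float(scores.mean()), 1))
print("median:", round(float(np.median(scores)), 1))
print("proportion below the mean:", round(float(np.mean(scores < scores.mean())), 3))
```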