Psychology Chapter 2: Research Methods

Questions and Answers

What is the primary reason for using good research designs?

  • To minimize the impact of biases and ensure objective evaluation of evidence. (correct)
  • To obtain results that are easily generalizable and applicable.
  • To eliminate the need for statistical analysis.
  • To confirm pre-existing beliefs regardless of contrary evidence.

How does System 2 thinking differ from System 1 thinking?

  • System 2 requires little cognitive effort, while System 1 requires significant cognitive effort.
  • System 2 is fast and intuitive, while System 1 is slow and analytical.
  • System 2 relies on feelings and gut reactions, while System 1 involves careful evaluation of evidence.
  • System 2 is analytical and deliberate, while System 1 is intuitive and quick. (correct)

Why is the scientific method described as a 'toolbox of skills'?

  • Because it depends heavily on the researcher's personal intuition and creativity.
  • Because it offers a variety of techniques and approaches that can be applied flexibly depending on the specific circumstances. (correct)
  • Because it requires specialized equipment and laboratory settings.
  • Because it involves using a fixed set of procedures applicable to all situations.

What is the main benefit of random selection in research?

  • It increases the generalizability of the results to the larger population. (correct)

In the context of research, what does 'reliability' refer to?

  • The consistency of a measurement across different times or raters. (correct)

What is the 'replicability crisis' in psychology, and how has the 'open science movement' responded to it?

  • The 'replicability crisis' refers to the challenge of repeating psychological findings, addressed by promoting transparency and data sharing. (correct)

Which research method involves observing behavior in its natural setting without manipulating the environment?

  • Naturalistic observation. (correct)

What is a primary advantage of naturalistic observation?

  • High external validity. (correct)

What is a key disadvantage of naturalistic observation?

  • It is low in internal validity, making it difficult to draw cause-and-effect conclusions. (correct)

Which research method involves an in-depth analysis of an individual, group, or event?

  • Case study. (correct)

What is a primary advantage of using case studies in research?

  • They offer the ability to investigate rare phenomena. (correct)

What is a major disadvantage of case study research?

  • Generalization of results may be limited. (correct)

What is the primary purpose of self-report measures like questionnaires and surveys?

  • To gather specific information about a person's behaviors, attitudes, and feelings. (correct)

Which of the following is a disadvantage of using self-report measures?

  • The wording of questions can significantly influence results. (correct)

What is the 'Halo Effect' in rating data?

  • The tendency for a high rating in one positive characteristic to spill over and influence the ratings of other characteristics. (correct)

Which research design is best suited for determining if a relationship exists between two or more variables?

  • Correlational design. (correct)

What is a key limitation of correlational research designs?

  • They cannot establish cause-and-effect relationships. (correct)

What does the 'third-variable problem' refer to in correlational research?

  • The possibility that an unmeasured variable is responsible for the observed correlation. (correct)

How is the strength of a correlation measured?

  • Using a correlation coefficient. (correct)

What do scatterplots primarily display?

  • Grouping of data points in two dimensions. (correct)

What are 'illusory correlations'?

  • Correlations that people perceive to exist when they actually do not. (correct)

What is a key characteristic of an experimental design?

  • Manipulation of at least one independent variable and random assignment of participants to conditions. (correct)

What is the purpose of random assignment in experimental designs?

  • To ensure that participant characteristics are evenly distributed across all experimental conditions. (correct)

In an experimental design, what is the role of the 'independent variable'?

  • It is the variable that the experimenter manipulates. (correct)

What is the difference between within-subjects and between-subjects experimental designs?

  • Within-subjects designs use the same participants in all conditions, while between-subjects designs use different participants in each condition. (correct)

What are 'confounding variables' in experimental research?

  • Variables that differ between the experimental and control group and may be responsible for observed differences. (correct)

What is the 'placebo effect'?

  • Improvement resulting from the mere expectation of improvement. (correct)

What is the purpose of a 'double-blind' procedure in experimental research?

  • To prevent both the participants and the researchers from knowing the study condition. (correct)

What are 'demand characteristics' in experimental research?

  • Cues that might indicate the study's purpose to participants. (correct)

Flashcards

System 1 Thinking

Intuitive thinking that is fast and relies on feelings.

System 2 Thinking

Analytical thinking that is slow and relies on careful evaluation.

Random Selection

A technique where everyone in a population has an equal chance of being chosen for a study.

Reliability

Consistency of measurement.

Validity

Extent to which a measure assesses what it intends to measure.

Open Science Movement

The movement for open and transparent science to ensure replicable and reproducible findings.

Naturalistic Observation

Observing behavior in its natural context without manipulation.

External Validity

Extent to which findings can be generalized to real-world situations.

Internal Validity

Extent to which cause and effect conclusions can be drawn.

Reactivity

Participants behave differently when being observed.

Observer Bias

When a researcher's own expectations and opinions influence what they observe or record.

Case Study

In-depth analysis of an individual, group, or event.

Questionnaire

Tool measuring characteristics related to a person through self-report.

Survey

Tool measuring opinions and attitudes through self-report.

Response sets

The tendency to distort answers to present oneself positively.

Malingering

The tendency to intentionally appear psychologically disturbed.

Rating Data

Self-report measure where someone else comments on a person's behavior.

Halo Effect

Tendency for a high rating in one trait to influence ratings of other traits.

Horns Effect

Tendency for a negative rating in one trait to influence others.

Correlational Designs

Research measuring variables to find relationships between them.

Variable

Anything that can vary across or within people.

Causation

A direct cause-and-effect relationship between variables; correlational designs cannot establish it.

Illusory correlations

People think two variables are correlated when they really aren't.

Experimental Design

Manipulating at least one independent variable.

Independent variable

A variable that is manipulated by the experimenter.

Dependent variable

A variable that is measured by the experimenter.

Random Assignment

Ensuring equal chance of being assigned to experimental or control.

Blind Study

Participants are blind to the study condition.

Double-blind

Neither participant nor researcher knows the study condition.

Placebo Effect

Improvement from the mere expectation of improvement.

Study Notes

  • Chapter 2 discusses the different research methods used in psychology, ethical considerations, and the use of statistics

Good Research Designs

  • Highlighted by the example of frontal lobotomies, which lacked systematic research. In 1935, Carlyle Jacobsen presented research on the behavioral effects of frontal lobe lesions in chimpanzees
  • Egas Moniz developed frontal lobotomy procedures based on this research, which were used to treat humans with psychological and behavioral problems
  • Moniz was awarded a Nobel Prize in 1949, and Walter Freeman popularized the practice in the USA
  • Systematic research on the effects of frontal lobotomies was not available until the 1960s
  • Naive realism and confirmation bias contributed to the practice

Two Modes of Thinking

  • System 1 Thinking: intuitive, fast, and based on gut reactions
  • Relies on heuristics (mental shortcuts or rules of thumb)
  • Requires little cognitive effort
  • System 2 Thinking: analytical, slow, and based on careful evaluation of evidence
  • Used for reasoning through a problem

Scientific Method

  • The scientific method is not a one-size-fits-all process
  • It is a toolbox of skills applicable in specific ways based on the context

Random Selection

  • Random selection is a technique ensuring everyone in a population has an equal chance to participate in a study
  • It increases generalizability of results
  • A smaller, randomly selected (representative) sample is more informative than a larger, non-representative one (see the sketch below)
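
A minimal sketch of random selection in Python, assuming a hypothetical sampling frame of 10,000 students (the names, sample size, and seed are illustrative only):

```python
import random

# Hypothetical sampling frame: every member of the population of interest.
population = [f"student_{i}" for i in range(10_000)]

# Random selection: each member has the same chance of entering the sample,
# which is what allows results to generalize back to the population.
random.seed(42)                     # fixed seed only so the example is reproducible
sample = random.sample(population, k=100)

print(len(sample), sample[:3])
```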

Evaluating Measures

  • Reliability is the consistency of a measurement, assessed through test-retest or interrater reliability
  • Validity is the extent to which a measure assesses what it claims to measure
  • Reliability is required for validity, but validity is not required for reliability

Openness in Science

  • Open and transparent science ensures findings are replicable and reproducible
  • The open science movement was created in response to a replicability crisis in psychology

Replication Crisis

  • To resolve the replication crisis, data and materials should be shared publicly
  • Conduct replications of original work
  • Journals should publish all methodologically sound research, including preregistered studies, not just flashy findings

Major Research Methods

  • Descriptive
  • Correlational
  • Experimental

Naturalistic Observation

  • Focuses on observing behavior naturally without manipulation and may involve observing subjects like students or animals in their natural habitat
  • Major advantages: high external validity, so findings generalize well, and behavior is captured in natural environments
  • Major disadvantages: low internal validity, making cause-and-effect conclusions difficult because of reactivity, observer bias, and lack of control over variables

Case Studies

  • Case studies involve an in-depth analysis of an individual, group, or event
  • Examples include Patient DF (Mel Goodale, David Milner), Patient HM (Brenda Milner and colleagues), and Patient MC (Striemer et al., 2019)
  • Major advantages: Allows investigation of rare phenomena, provides existence proofs, is good for hypothesis generation
  • Major disadvantages: cannot definitively determine cause and effect; generalization and observer bias may be issues

Self-Report Measures

  • Research uses interviews, questionnaires, or surveys to gather information related to feelings and attitudes
  • Questionnaires measure a variety of characteristics through self-report
  • Surveys measure opinions and attitudes through self-report
  • Major Advantages: Easy to administer and gather data, cost effective, measures feelings and opinions
  • Major Disadvantages: question wording affects results, respondents need accurate self-insight, and response sets and malingering can distort results

Rating Data

  • Rating data are a type of self-report measure where someone else is asked to comment on a person’s behavior, assuming familiarity
  • It can avoid malingering and response set bias in self-reporting
  • Halo or Horns effects are disadvantages, as well as susceptibility to stereotypes

Correlational Designs

  • Correlational research designs measure different variables to see if there is a relationship between them
  • They look to see if things co-relate or correlate
  • A variable is anything that can vary across or within people such as impulsivity or creativity
  • Advantage: flexible, easier to conduct, can determine a relationship between two variables
  • Disadvantage: does not explain causation
  • It is hard to determine the direction of a causal relationship, and the third-variable problem may apply
  • Strength of correlation is measured using a coefficient ranging from -1 to +1: 0 indicates no relationship, and ±1 a perfect relationship (see the computation sketch below)
  • Positive correlations mean variables move in the same direction; negative correlations mean variables move in opposite directions
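
As an illustration, here is a minimal Python sketch that computes a Pearson correlation coefficient by hand; the sleep and mood scores are invented for the example:

```python
import math

def pearson_r(x, y):
    """Pearson correlation: covariance scaled by both standard deviations."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Hypothetical scores: hours of sleep and mood ratings for eight participants.
sleep = [5, 6, 6, 7, 7, 8, 8, 9]
mood  = [4, 5, 6, 6, 7, 7, 8, 9]

print(f"r = {pearson_r(sleep, mood):.2f}")  # close to +1: a strong positive correlation
# Even a strong r says nothing about causation or its direction (third-variable problem).
```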

Correlations and Scatterplots

  • Scatterplots show a grouping of data points in two dimensions, with each dot representing data from a single participant

Illusory Correlations

  • These are variables thought to be related, when they are not, forming the basis of superstitious beliefs
  • Can involve lunar-lunacy effects, sports superstitions, and confirmation bias

Experimental Designs

  • Defined by random assignment of participants to conditions, and manipulation of at least one independent variable.
  • Allows for causal relationship to be established between variables
  • Independent variable is manipulated by experimenter
  • Dependent variable is measured by the experimenter

Experimental Designs: Key Principles

  • Random assignment ensures each participant has an equal chance of being assigned to the experimental or control group (see the sketch below)
  • Between-subjects designs assign different participants to different experimental conditions
  • Within-subjects designs have each participant serve in both the experimental and control conditions
  • For example, Treatment A can be a placebo and Treatment B a new experimental drug
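
A minimal Python sketch of random assignment for a between-subjects design; the participant labels, group sizes, and seed are hypothetical:

```python
import random

# Hypothetical participant pool for a between-subjects design.
participants = [f"P{i:02d}" for i in range(1, 21)]

# Random assignment: shuffle, then split, so every participant has an equal
# chance of landing in the experimental or the control condition.
random.seed(7)                      # fixed seed only so the example is reproducible
random.shuffle(participants)
half = len(participants) // 2
experimental_group = participants[:half]   # e.g., receives Treatment B (new drug)
control_group      = participants[half:]   # e.g., receives Treatment A (placebo)

print("Experimental:", experimental_group)
print("Control:     ", control_group)
```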

Extraneous Variables

  • Researchers try to control for extraneous/confounding variables, i.e., any differences between the control and experimental groups other than the manipulated independent variable

Experimental Design Example

  • Hypothesis: the drug "SadBeGone" will improve participants' mood
  • Independent variable: "SadBeGone" versus placebo
  • Dependent variable: mood, measured through a wellness survey

Placebo Effect

  • Improvement based on the mere expectation of improvement
  • Placebo effects are short-lived
  • Control the placebo effect by ensuring participants are blind to their condition
  • Double-blind experimentation means that neither the researcher nor the participant knows which condition the participant is in

Nocebo Effect

  • Harm from the mere expectation of harm
  • Can induce headaches (Morse, 1999) or feelings of pain

Experimenter Expectancy

  • Hypotheses can unintentionally bias experiments
  • Can protect against bias through double-blind procedures
  • E.g., Clever Hans, a horse that appeared to do arithmetic but was actually responding to unintentional cues from its handlers

Demand Characteristics

  • When participants try to guess the purpose of a study, they may change their behavior accordingly
  • Researchers can disguise the purpose of a study to prevent demand characteristics from having an impact
  • Examples of this involve socially desirable responding

Research Ethics

  • Researchers must adhere to strict guidelines
  • Ethical questions are difficult, but there are clear cases where past research has not been ethical

Shameful Sciences

  • In the Tuskegee Syphilis Study (1932-1972), the US Public Health Service followed ~400 African American men in Alabama diagnosed with syphilis who were never offered treatment
  • Even after penicillin was proven effective in the 1940s
  • During WW2, the Nazis conducted cold water, sterilization, and bone/muscle transplantation experiments on humans

Belmont Report

  • The 1979 Belmont Report followed studies in Tuskegee and elsewhere
  • Research should:
    • Allow people to be informed and make their own decisions (respect for persons)
    • Maximize benefits and minimize risks (beneficence)
    • Distribute benefits and risks fairly across participants (justice)

Research Ethics Boards

  • All North American research colleges and universities have at least one research ethics board (REB)
  • REBs review research and protect participants from harm
  • REBs contain members of the institution with expertise in ethics and community members
  • In Canada the Tri-Council Policy Statement (TCPS) is followed

Guidelines for Experiments

  • Research must provide informed consent, protection from harm, freedom from coercion, a risk-benefit analysis, justification for any deception, debriefing, and confidentiality

Animal Research

  • Institutions engaged in animal research must have an animal research ethics board
  • AREBs review programs with animal subjects
  • AREBs include faculty, experts, veterinarians, and community members
  • The Canadian Council on Animal Care (CCAC) regulates humane use of animals in research

Statistics

  • It's an important part of research
  • It is the application of mathematics to describe and analyze data
  • It is a tool to draw conclusions from data

Descriptive Statistics

  • Numerical characteristics of data
  • Two main types: Measures of central tendency and measures of variability
  • Central tendency: Includes mean (average), median (middle), and mode (most frequent)
  • Variability: How loosely or tightly bunched scores are
  • Range: Difference between lowest and highest scores
  • Standard deviation: how far, on average, each data point is from the mean

Descriptive Statistics: Mean, Median and Mode

  • Sample IQ scores: 100, 90, 80, 120, 120 (Mean: 102; Median: 100)
  • Sample IQ scores: 80, 85, 95, 95, 220 (Mean: 115; Median: 95); the single outlier (220) inflates the mean while barely moving the median (see the computation below)
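
A quick check of these figures with Python's statistics module, using the two samples above:

```python
from statistics import mean, median, mode

sample_1 = [100, 90, 80, 120, 120]
sample_2 = [80, 85, 95, 95, 220]

for name, scores in [("Sample 1", sample_1), ("Sample 2", sample_2)]:
    print(name, "mean:", mean(scores), "median:", median(scores), "mode:", mode(scores))

# Sample 1 -> mean 102, median 100, mode 120
# Sample 2 -> mean 115, median 95,  mode 95
# The single outlier (220) pulls the mean upward; the median barely moves.
```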

Central Tendency

  • The mean tends to provide the best measure of central tendency for normally distributed data sets
  • If data are skewed, the median or mode works better

Variability: Range and Standard Deviation

  • Variability measures how loosely/tightly bunched scores are in the data set: two main methods
    • Range: difference between lowest and highest scores
    • Standard deviation: how far, on average, each data point is from the mean (see the sketch below)
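
A short Python sketch of range and standard deviation, reusing the first IQ sample from above:

```python
from statistics import pstdev, stdev

scores = [100, 90, 80, 120, 120]          # the first IQ sample from above

# Range: difference between the highest and lowest scores.
score_range = max(scores) - min(scores)   # 120 - 80 = 40

# Standard deviation: roughly, the average distance of each score from the mean.
print("range:", score_range)
print("population SD:", round(pstdev(scores), 2))   # 16.0
print("sample SD:", round(stdev(scores), 2))         # 17.89
```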

Inferential Statistics

  • Mathematical methods to generalize findings from a sample to the general population
  • Determine if results are likely to have occurred simply due to chance

Statistical Significance

  • Statistical significance reflects the probability that the findings are due to chance
  • A result is considered statistically significant when p < .05, i.e., less than a 1-in-20 (5%) probability that the observed difference was due to chance alone
  • Significance is influenced by sample size, effect size, and variability (see the sketch below)
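
As an illustration of how significance might be tested, here is a sketch of an independent-samples t-test; it assumes SciPy is available, and the wellness scores are invented, loosely following the "SadBeGone" example:

```python
from scipy import stats

# Hypothetical wellness-survey scores (higher = better mood) from a
# between-subjects "SadBeGone" vs. placebo study; numbers are invented.
drug_group    = [7, 8, 6, 9, 7, 8, 8, 7, 9, 6]
placebo_group = [5, 6, 6, 7, 5, 6, 7, 5, 6, 6]

# Independent-samples t-test: is the difference in mean mood between groups
# larger than we would expect from chance alone?
t_stat, p_value = stats.ttest_ind(drug_group, placebo_group)

print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
if p_value < .05:
    print("Statistically significant: unlikely (<5%) to be due to chance alone.")
else:
    print("Not statistically significant at the p < .05 threshold.")
```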
