Research in Psychology: Methods & Design - Lecture Notes
University of Guelph
Summary
These lecture notes cover research in psychology, focusing on methods and design: experimental methods, subject variables, validity, control problems, and ethical guidelines. The notes include examples and figures and are appropriate for an undergraduate psychology course.
Full Transcript
# RESEARCH IN PSYCHOLOGY: METHODS & DESIGN

## Eighth Edition

## Chapter 5. Introduction to Experimental Research

### Chapter Objectives

* Describe the impact of Woodworth's 1938 Experimental Psychology on the way psychologists define an experiment
* Define a manipulated independent variable and identify examples that are situational, task, and instructional variables
* Distinguish between experimental and control groups
* Describe John Stuart Mill's rules of inductive logic and apply them to the concepts of experimental and control groups
* Recognize the presence of confounding variables in an experiment and understand why confounding creates serious problems for interpreting the results of an experiment
* Identify independent and dependent variables, given a brief description of any experiment
* Distinguish between manipulated independent variables and those that are subject variables
* Understand the interpretation problems that accompany the use of subject variables
* Recognize the factors that can reduce the statistical conclusion validity of an experiment
* Describe how construct validity applies to the design of an experiment
* Distinguish between the internal and external validity of a study
* Describe the various factors affecting an experiment's external validity
* Describe and be able to recognize the various threats to an experiment's internal validity
* Recognize that external validity might not be important for all research but that internal validity is essential
* Understand the ethical guidelines for running a "subject pool"

### Essential Features of Experimental Research

* Mill's inductive logic
  * Method of agreement
    * If X, then Y (sufficiency - X is sufficient for Y)
  * Method of difference
    * If not X, then not Y (necessity - X is necessary for Y)
  * Together - X is necessary & sufficient for producing Y
  * Agreement - analogous to experimental group
  * Difference - analogous to control group
* Establishing independent variables (IVs)
  * Manipulated IVs
    * Situational
    * Task
    * Instructional
  * Experimental groups
    * Given treatment (Research Example 6 - given a golf ball and told it was a "lucky" ball)
  * Control groups
    * Treatment withheld (Research Example 6 - given a golf ball and not told it was a "lucky" ball)
* Controlling extraneous variables
  * Confounds
    * Any uncontrolled extraneous variable
    * Results could be due to IV or to confound
    * Distributed practice example (Table 5.1)

Table 5.1. Confounding in a Hypothetical Distribution of Practice Experiment

|         | Monday | Tuesday | Wednesday | Thursday | Friday |
|---------|--------|---------|-----------|----------|--------|
| Group 1 | 3      | -       | -         | -        | Exam   |
| Group 2 | 3      | 3       | -         | -        | Exam   |
| Group 3 | 3      | 3       | 3         | -        | Exam   |

Note: The 3s in each column equal the number of hours spent studying a chapter of a general psychology text.

* Measuring dependent variables (DVs)
  * DVs are any behaviors measured in an experiment
  * Review scales of measurement (Ch. 4)
  * Problems
    * Ceiling effects - task is too easy, all scores very high, disguising any differences
    * Floor effects - task is too difficult, all scores very low, disguising any differences
  * Solution
    * Task of moderate difficulty, determined through pilot testing (a pilot-check sketch appears at the end of this chapter outline)

## Subject Variables

* Subject variables
  * Already-existing attributes of subjects in a study
  * Examples - gender, age, personality characteristics
* Anxiety example
  * As a manipulated variable - induce different degrees of anxiety in participants
  * As a subject variable - choose participants who have different degrees of their typical anxiety
* Research Example 7
  * Subject variable #1 - culture (European Americans and East Asians); subject variable #2 - gender (women and men)
  * Result - greater field dependence for East Asians and for females (Figure 5.1)
* Drawing conclusions when using subject variables
  * With a manipulated IV - assuming no confounds, IV causes DV
  * With a subject variable
    * Groups may differ in several ways
    * IV cannot be said to cause DV
    * All that can be said - the groups differ from each other
* Using both manipulated and subject IVs
  * Bandura's Bobo study (Box 5.2)
    * Manipulated - type of exposure to violence; subject - gender

## The Validity of Experimental Research

* Statistical conclusion validity
  * Proper statistical analyses and conclusions
* Construct validity
  * Well-chosen and well-defined IVs and DVs
* External validity
* Internal validity

### External Validity

* The extent to which research findings generalize to contexts other than those of the experiment

### Externally valid studies...

1. Generalize to other populations
2. Generalize to other environments (also, ecological validity)
3. Generalize to other times

### Internal Validity

* Does my study actually answer the research question I proposed and designed to answer?

### Internally valid studies...

1. Have valid operational definitions
2. Have valid measurements
3. Have no confounds

## Threats to Internal Validity

* Studies extending over time (pretests & posttests)
  * History
  * Maturation
  * Regression to the mean
  * Testing and instrumentation
  * Importance of using a control group
* Participant problems
  * Subject selection - the Brady study - ulcers in executive monkeys
  * Attrition

## Creating Ethical Subject Pools

* (Box 5.3)
* Must be part of educational process for students
* Should follow APA guidelines
  * Post requirement, describe it thoroughly
  * Provide reasonable alternative activities
  * Get IRB approval for all projects
  * Treat students well; provide mechanism for complaint
  * No penalties for no-shows
  * Follow all other aspects of the APA ethics code
  * Have a mechanism in place to assess the subject pool

## Summary

* Experimental research involves independent and dependent variables, in an effort to test the effects of the IV on the DV.
* We attempt to control for confounding variables to increase the internal validity of our study.
* We must consider other possible threats to internal validity as they pertain to our study.
* Once we identify IVs, DVs, and threats to validity, we design a study to control those threats.
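As promised under "Measuring dependent variables (DVs)" above, here is a minimal pilot-check sketch for ceiling and floor effects. It simply asks what proportion of pilot scores pile up near the top or bottom of the scale. The function name `flag_range_effects`, the 10% band, the 0.5 cutoff, and the example scores are illustrative assumptions, not anything the chapter prescribes.

```python
def flag_range_effects(scores, scale_min, scale_max, cutoff=0.10):
    """Flag likely ceiling/floor effects in pilot data.

    A score within `cutoff` (10%) of the scale's top or bottom counts as
    "at ceiling" or "at floor"; if most pilot scores land there, the task
    is probably too easy or too hard to reveal group differences.
    """
    span = scale_max - scale_min
    at_ceiling = sum(s >= scale_max - cutoff * span for s in scores) / len(scores)
    at_floor = sum(s <= scale_min + cutoff * span for s in scores) / len(scores)
    if at_ceiling > 0.5:
        return "possible ceiling effect: make the task harder"
    if at_floor > 0.5:
        return "possible floor effect: make the task easier"
    return "scores spread across the scale: difficulty looks moderate"


# Hypothetical pilot scores on a 0-20 recall test
print(flag_range_effects([19, 20, 18, 20, 19, 20, 17, 20], 0, 20))
```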
## Chapter 6. Control Problems in Experimental Research

### Chapter Objectives

* Distinguish between-subjects designs from within-subjects designs
* Understand how random assignment can solve the equivalent groups problem in between-subjects designs
* Understand when matched random assignment should be used when attempting to create equivalent groups
* Understand when counterbalancing is needed to control for order effects in within-subjects designs
* Distinguish between progressive and carry-over effects in within-subjects designs, and understand why counterbalancing normally works better with the former than with the latter
* Describe the various forms of counterbalancing for situations in which participants are tested once per condition and more than once per condition
* Describe the specific types of between- and within-subjects designs that occur in developmental psychology, and understand the problems associated with each
* Describe how experimenter bias can occur and how it can be controlled
* Describe how participant bias can occur and how it can be controlled

### Between-Subjects Designs

* Different sets of subjects in each level of an IV
* Comparison is between two different groups of subjects

### Necessary when...

* Subjects in each condition have to be naïve (Example - Barbara Helm study)
* Subject variable (e.g., gender) is the IV

### Main problem to solve - creating equivalent groups

### Creating Equivalent Groups

* Random assignment
  * Each subject has an equal chance of being assigned to any group in the study
  * Spreads potential confounds equally through all groups
  * Blocked random assignment - involves assigning a subject to each condition of the study before any condition is repeated
* Matching
  * Deliberate control over a potential confound
  * Use when
    * Small n per group might foil random assignment
    * Some matching variable correlates with DV
    * Measuring the matching variable is feasible (Table 6.1)
* (Random, blocked, and matched assignment are illustrated in the first code sketch at the end of this outline, after Research Example 8)

### Within-Subjects Designs

* Also called repeated-measures designs
* Same subjects in every level of an IV
* Comparison is within the same group of subjects
  * Used when comparisons within the same individual are essential (e.g., perception studies)
* Eliminates the possibility that differences between levels of the IV could be due to individual differences

### Main problem to solve - order effects

* Progressive effects
* Carry-over effects (harder to control)
  * Performance on or experience in Sequence A-B may affect performance (i.e., "carry over") on Sequence B-A

### Controlling Order Effects

* Counterbalancing
  * Altering the order of the experimental conditions (the schemes below are illustrated in the second code sketch after Research Example 8)

### Testing once per condition

* Complete counterbalancing (all possible orders = x!)
  * Test participants in every possible different order at least once
  * Works well with only a few conditions
* Partial counterbalancing
  * Random sample of all possible combinations is selected
  * Latin square - every condition of the study occurs equally often in every sequential position, and every condition precedes and follows every other condition exactly once

### Testing more than once per condition

* Reverse counterbalancing
  * Example - A-B-C-D-D-C-B-A-A-B-C-D-D-C-B-A
* Blocked randomization
  * Example - B-C-A-D-A-C-B-D-C-D-A-B-D-A-C-B
* Research Example 8: When referees see red...
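As referenced under "Creating Equivalent Groups" above, here is a minimal Python sketch of simple random assignment, blocked random assignment, and matched random assignment. The function names, participant IDs, pretest scores, and two-condition example are illustrative assumptions; the chapter describes the logic of these techniques, not any particular implementation.

```python
import random

def simple_random_assignment(subjects, conditions):
    """Each subject has an equal chance of ending up in any condition."""
    return {s: random.choice(conditions) for s in subjects}

def blocked_random_assignment(subjects, conditions):
    """Fill one block at a time: every condition receives a subject
    before any condition is repeated, keeping group sizes equal."""
    assignment, block = {}, []
    for s in subjects:
        if not block:                  # start a fresh, shuffled block
            block = conditions[:]
            random.shuffle(block)
        assignment[s] = block.pop()
    return assignment

def matched_random_assignment(pretest_scores, conditions):
    """Rank subjects on a matching variable (here, a pretest score that
    should correlate with the DV), slice them into matched sets the size
    of the number of conditions, then randomly assign within each set."""
    ranked = sorted(pretest_scores, key=pretest_scores.get)
    assignment, k = {}, len(conditions)
    for i in range(0, len(ranked), k):
        matched_set = ranked[i:i + k]
        shuffled = conditions[:]
        random.shuffle(shuffled)
        assignment.update(zip(matched_set, shuffled))
    return assignment

subjects = [f"P{i:02d}" for i in range(1, 13)]   # hypothetical participant IDs
conditions = ["experimental", "control"]
print(blocked_random_assignment(subjects, conditions))
print(matched_random_assignment({s: random.randint(0, 30) for s in subjects}, conditions))
```

Note that simple random assignment only equates groups in the long run; blocking or matching is what guarantees equal group sizes and controlled matching variables with small n, which is the chapter's rationale for those techniques.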
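A companion sketch for the counterbalancing techniques: complete counterbalancing (all N! orders), partial counterbalancing (a random sample of orders), a balanced Latin square, reverse counterbalancing, and block randomization. The condition labels, the sample size of 8, and the Williams-style Latin square recipe (which requires an even number of conditions) are illustrative choices; the notes state the Latin square's defining properties but not a construction method.

```python
import itertools
import random

conditions = ["A", "B", "C", "D"]   # hypothetical condition labels

# --- Testing once per condition ---

# Complete counterbalancing: all N! possible orders
all_orders = list(itertools.permutations(conditions))
print(f"{len(all_orders)} possible orders for {len(conditions)} conditions")  # 24

# Partial counterbalancing: a random sample of those orders, one per participant
orders_for_participants = random.sample(all_orders, k=8)

# Balanced Latin square: every condition occupies each sequential position
# equally often and immediately precedes every other condition exactly once.
def balanced_latin_square(conds):
    n = len(conds)
    pattern, k = [0], 1              # first-row pattern: 1, 2, n, 3, n-1, ...
    while len(pattern) < n:
        pattern.append(k)
        if len(pattern) < n:
            pattern.append(n - k)
        k += 1
    return [[conds[(p + row) % n] for p in pattern] for row in range(n)]

for row in balanced_latin_square(conditions):
    print("-".join(row))             # A-B-D-C, B-C-A-D, C-D-B-A, D-A-C-B

# --- Testing more than once per condition ---

# Reverse counterbalancing: A-B-C-D-D-C-B-A, repeated as needed
reverse_order = (conditions + conditions[::-1]) * 2

# Block randomization: each block contains every condition once,
# in a freshly shuffled order
def block_randomized_order(conds, n_blocks):
    order = []
    for _ in range(n_blocks):
        block = conds[:]
        random.shuffle(block)
        order.extend(block)
    return order

print("-".join(block_randomized_order(conditions, 4)))
```

With only two or three conditions, complete counterbalancing is feasible (2 or 6 orders); the Latin square and random-sampling approaches matter once N! outgrows the participant pool.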
## Methodological Control in Developmental Research

* Cross-sectional design
  * Between-subjects design
  * Potential for cohort effects - worse with large age differences
* Longitudinal design
  * Within-subjects design
  * Potential for attrition difficulties - but extremely low in Terman's famous longitudinal study of gifted children (Box 6.1)
* Cohort sequential design
  * Combines cross-sectional and longitudinal

## Controlling for the Effects of Bias

* Experimenter bias
  * Experimenter expectations can influence subject behavior
    * Clever Hans
    * Rosenthal studies
  * Controlling for experimenter bias
    * Automating the procedure
    * Using a double-blind procedure (Research Example 9: Giving older adults caffeine in the afternoon...)
* Participant bias
  * Hawthorne effect
    * Effect of knowing one is in a study
    * Misnamed, perhaps (Box 6.2) - importance of understanding history
  * "Good" subjects - participants tend to be cooperative, to please the researcher
  * Evaluation apprehension - participants tend to behave in ideal ways so as not to be evaluated negatively
  * Demand characteristics
    * Cues giving away the study's true purpose and hypothesis (Research Example 10 - eating behavior...)

### Controlling for participant bias

* Effective deception
* Use of manipulation checks
* Field research

## Ethical Responsibilities of Participants

* (Box 6.3)
* Be responsible
  * Show up for scheduled appointments, or inform the researcher of a cancellation
* Be cooperative
  * Behave professionally when participating in research
* Listen carefully
  * Ask questions if unsure of your rights or of what you are asked to do
* Respect the researcher
  * Do not discuss the study with others
* Be actively involved in debriefing
  * Help the researcher understand your experience

## Summary

* Between- and within-subjects designs are two types of experimental designs.
* Methodological controls for between-subjects designs include random assignment and matching.
* Methodological controls for within-subjects designs include counterbalancing techniques and block randomization.
* Developmental designs have unique control issues.
* We should be mindful of experimenter and participant biases.
* Methodological control is key to creating an internally valid study.