Chapters 13 & 14: Quasi-Experimental and Experimental Designs

Summary

This document covers quasi-experimental and experimental designs in research. It details participant variables, regression to the mean, program-evaluation research, and other critical concepts. The text discusses methods used to control extraneous variables and the importance of external validity, including generalization across participants and settings.

Full Transcript

Chapter 13

Quasi-experimental designs: used to make comparisons among groups of individuals who cannot be randomly assigned to groups.
Participant-variable design: a design in which the grouping variable is a pre-existing characteristic of the participants.
Participant variable: a variable that differs across participants.
Regression to the mean: extreme scores will naturally be less extreme when the same participants are measured at other times (a quick simulation appears after this chapter's terms).
Single-participant designs: tracking the behavior of individuals over time to draw conclusions about changes in a single person's behavior.
Program-evaluation research: research designed to study intervention programs to see whether the programs are effective.
Experimental control: the extent to which the experimenter can eliminate effects on the dependent variable other than the effect of the independent variable.
Internal validity: the extent to which changes in the dependent variable can be attributed to the independent variable.
Extraneous variables: variables other than the independent variable that affect the dependent variable (a source of random error).
Confounding variables: variables other than the independent variable on which participants in one experimental condition differ systematically, or on average, from those in other conditions (a source of systematic error).
Confounding: occurs when a variable other than the independent variable differs systematically across conditions.
Alternative explanations: the presence of a confounding variable does not always mean the effect of the independent variable is irrelevant, but it does create alternative explanations for the results.
Standardization of conditions: all participants at all levels of the independent variable are treated exactly the same.
Experimental script (or protocol): contains all of the information about what the experimenter says and does during the experiment.
Impact: when the manipulation creates the hoped-for changes in the conceptual variable.
Experimental realism: the extent to which the experimental manipulation involves participants in the research.
Manipulation checks: measures that determine whether the manipulation had the intended effect on the conceptual variable it was designed to change.
Pilot tests: trying out the manipulation on a few participants before the experiment begins.
Artifacts: aspects of the research methodology that may go unnoticed and may inadvertently produce confounds.
Cover story: a false or misleading statement about what is being studied; it helps keep participants from guessing the hypothesis.
Demand characteristics: aspects of the research that allow participants to guess the research hypothesis; they can be reduced with a cover story, the unrelated-experiments technique, or non-reactive measures.
Unrelated-experiments technique: presenting the manipulation and the measurement as parts of two separate, unrelated studies so that participants do not connect them.
Non-reactive measures: participants do not realize what is being measured or cannot control their responses.
Experimenter bias: if the experimenter knows the research hypothesis, they may treat participants in different conditions differently.
Naïve experimenters: experimenters who know nothing about the research hypothesis.

Methods to control extraneous variables:
- Select participants from a homogeneous population.
- Before-after research designs: the dependent variable is measured both before and after the manipulation. The baseline measure is the pre-manipulation measurement, and the dependent variable is the difference score (post-manipulation score minus baseline score); see the sketch after this chapter's terms.
- Multiple-group before-after designs: any differences among participants influence both the baseline measure and the dependent variable, so power is increased by controlling for variability among the research participants.
- Matched-group research designs: participants are measured on the variable of interest before the experiment begins and are assigned to conditions on the basis of their scores on that variable; see the sketch after this chapter's terms.
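A quick way to see regression to the mean is to simulate it: if each observed score is a stable "true" component plus random noise, participants selected because they scored extremely on one measurement will, on average, score closer to the group mean on a second measurement even though nothing was done to them. The sketch below is only an illustration of that idea; the sample size, the score distribution, and the top-5% cutoff are arbitrary assumptions, not values from the chapter.

```python
import numpy as np

rng = np.random.default_rng(0)

n = 10_000
true_ability = rng.normal(loc=100, scale=10, size=n)   # stable component of each score
test1 = true_ability + rng.normal(scale=10, size=n)    # observed score = ability + noise
test2 = true_ability + rng.normal(scale=10, size=n)    # retest with independent noise

# Select the "extreme" scorers: the top 5% on the first test.
cutoff = np.quantile(test1, 0.95)
extreme = test1 >= cutoff

print(f"Top group, test 1 mean: {test1[extreme].mean():.1f}")
print(f"Top group, test 2 mean: {test2[extreme].mean():.1f}")  # closer to the overall mean
print(f"Overall mean:           {test1.mean():.1f}")
```

Because the top group's retest mean drifts back toward the overall mean on its own, an apparent change in a group selected for extreme scores can be an artifact of selection rather than a real treatment effect.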
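The before-after and matched-group ideas can also be sketched in a few lines of code. Assuming a simple list of pretest and posttest scores (hypothetical numbers, not data from the chapter), the snippet below first computes the difference score (post minus pre) used as the dependent variable in a before-after design, and then forms matched groups by sorting participants on their pretest scores and randomly splitting each adjacent pair between two conditions.

```python
import random

# Hypothetical pre- and post-manipulation scores for eight participants.
pre  = [12, 15,  9, 20, 14, 11, 18, 16]
post = [14, 18, 10, 22, 13, 15, 21, 17]

# Before-after design: dependent variable = post score minus baseline (pre) score.
difference_scores = [po - pr for pr, po in zip(pre, post)]
print("Difference scores:", difference_scores)

# Matched-group assignment: pair participants with similar pretest scores,
# then randomly assign the members of each pair to different conditions.
participants = sorted(enumerate(pre), key=lambda p: p[1])   # (id, pretest score)

conditions = {"treatment": [], "control": []}
for i in range(0, len(participants), 2):
    pair = participants[i:i + 2]
    random.shuffle(pair)                                     # random assignment within the pair
    conditions["treatment"].append(pair[0][0])
    if len(pair) > 1:
        conditions["control"].append(pair[1][0])

print("Condition assignment by participant id:", conditions)
```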
Chapter 14

External validity: a second major set of potential threats to the validity of research (beyond internal validity); the extent to which the research allows conclusions to be drawn about what might occur outside of, or beyond, the existing research.
Generalization: the extent to which a relationship can be demonstrated across a wide variety of people and a wide variety of manipulated or measured variables.
Generalization across participants: the goal of experimental research is to explain underlying causal relationships among conceptual variables; unless the researcher has a reason to believe that generalization will not hold, it is appropriate to assume that a result found in one population will generalize to other populations.
Generalization across settings: the uniqueness of an experiment makes it possible that the findings are limited to the specific settings, experimenters, and manipulated or measured variables that were used.
Ecological validity: increased by repeating an experiment in different places, with different experimenters, and with different operationalizations of the variables; an increase in ecological validity means an increase in potential generalization.
Field experiments: experimental designs conducted in natural environments (for example, a library, factory, or school); they generally have higher ecological validity.
Replication: repeating previous research; four types, including:
1. Exact replication: repeating the original design as exactly as possible.
2. Conceptual replication: investigating the relationship between the same conceptual variables, but testing the hypothesis using different operational definitions of the independent variable and/or the measured dependent variable.

Things to keep in mind when integrating results:
- Every test of a hypothesis is limited in some sense.
- Some experiments are conducted in specific settings that may be unlikely to generalize.
- Others are undermined by potential alternative explanations.
- Every significant result may be invalid because it represents a Type I error.

Research programs: collections of experiments conducted in such a way that they systematically study a topic of interest through conceptual and constructive replications over a period of time.

Meta-analysis: a statistical technique that uses the results of existing studies to integrate them and draw conclusions about those studies (a minimal effect-size sketch follows below). A meta-analysis:
- Provides an objective method of reviewing findings.
- Specifies inclusion criteria indicating exactly which studies will or will not be included in the analysis.
- Systematically searches for all studies meeting the inclusion criteria.
- Uses the effect-size statistic to provide an objective measure of the strength of the observed relationships.
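To make the effect-size step of a meta-analysis concrete, the sketch below computes Cohen's d for each of several made-up studies from its group means and pooled standard deviation, then combines the studies with a simple sample-size-weighted average. This is an assumption-laden simplification for illustration only; published meta-analyses typically use inverse-variance weighting, confidence intervals, and corrections for bias.

```python
# Hypothetical study summaries: (treatment mean, control mean, pooled SD, n per group).
studies = [
    (5.4, 4.8, 1.2, 30),
    (6.1, 5.0, 1.5, 45),
    (4.9, 4.7, 1.1, 25),
]

effect_sizes = []
weights = []
for mean_t, mean_c, pooled_sd, n_per_group in studies:
    d = (mean_t - mean_c) / pooled_sd     # Cohen's d for this study
    effect_sizes.append(d)
    weights.append(2 * n_per_group)       # weight by total sample size (a simplification)

weighted_mean_d = sum(d * w for d, w in zip(effect_sizes, weights)) / sum(weights)

for i, d in enumerate(effect_sizes, start=1):
    print(f"Study {i}: d = {d:.2f}")
print(f"Weighted mean effect size: d = {weighted_mean_d:.2f}")
```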
