Full Transcript

RESEARCH STRATEGIES: Descriptive Strategy, Correlational Strategy, Experimental Strategy, Quasi-Experimental Strategy.

DESCRIPTIVE STRATEGY: Provides a DESCRIPTION of the variable(s) of interest (Who, What, Where, When).

CORRELATIONAL STRATEGY: Seeks to identify and describe a (LINEAR) relationship between two variables. Goes beyond descriptive research but is NOT explanatory research. The LINEAR relationship between variables, if any, does NOT in itself establish cause and effect (see later).

EXPERIMENTAL STRATEGY: Seeks to identify and describe a CAUSE-AND-EFFECT type of relationship between TWO (or more) variables. There must be at least ONE independent variable (cause) and at least ONE dependent variable (effect). Using EXPERIMENTAL DESIGN, the scientist MANIPULATES (systematically changes) the INDEPENDENT variable(s) and looks for hypothesized CHANGE in the DEPENDENT variable(s), while exerting CONTROL over EXTRANEOUS variable(s), thereby RULING OUT possible ALTERNATIVE CAUSES (explanations).

QUASI-EXPERIMENTAL STRATEGY: Seeks to identify and describe a CAUSE-AND-EFFECT relationship between two (or more) variables. Quasi-experimental research design, however, lacks some CONTROL over EXTRANEOUS VARIABLES, which prevents the scientist from making an unambiguous cause-and-effect interpretation.

VALIDITY (AGAIN): NOT MEASUREMENT VALIDITY. INTERNAL VALIDITY speaks to the INTEGRITY of the EXPERIMENTAL DESIGN. A threat to internal validity is any factor within the design that allows for other interpretation(s). EXTERNAL VALIDITY: a threat to external validity is any characteristic of the study (e.g., sampling) that limits the generality (generalizability) of the findings.

INTERNAL/EXTERNAL VALIDITY: NOT MEASUREMENT VALIDITY. Threats to internal/external validity can be minimized beforehand by EXPERIMENTAL DESIGN. There is a TRADE-OFF between internal validity and external validity.

VARIABLES. INDEPENDENT VARIABLES: The variable(s) that the scientist manipulates (i.e., systematically changes). DEPENDENT VARIABLES: The variable(s) that the scientist measures to determine whether changes are observed as a result of manipulating the independent variable(s). EXTRANEOUS VARIABLES: Variables other than the variables of interest in the research hypothesis (i.e., the independent and dependent variables). Think: extra variables. Extraneous variables may or may not be subject to control. CONFOUNDING VARIABLES: Extraneous variable(s) that change along with the independent and/or dependent variable(s), thus providing an alternative cause-and-effect explanation.

INTERNAL VALIDITY: THREATS. PARTICIPANT variables: possible introduction of ASSIGNMENT bias (NOT sampling bias). ENVIRONMENTAL variables: e.g., time of day, temperature, etc. MEASUREMENT variables: e.g., repeated measures (cf. pre-test/post-test) may incur practice/familiarity/fatigue effects, instrument changes, etc.

EXTERNAL VALIDITY: THREATS. GENERALIZING ACROSS PARTICIPANTS: if NOT random sampling and/or random assignment, then subject SAMPLING and/or ASSIGNMENT bias; university student (bias); volunteer (bias); participant characteristics (bias); cross-species generalizations.

INTERNAL/EXTERNAL VALIDITY: As noted already, a TRADE-OFF exists between internal validity and external validity. Laboratory research vs field research. LABORATORY research: "high" internal validity / "low" external validity. FIELD research: "low" internal validity / "high" external validity.
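To make the correlational point concrete, the following is a minimal Python sketch. The variables, numbers, and the temperature example are invented for illustration and are not from the slides: two measures can be strongly linearly related even though neither causes the other, because an extraneous (confounding) third variable drives both.

import numpy as np

rng = np.random.default_rng(0)
temperature = rng.uniform(15, 35, size=200)                  # extraneous/confounding variable
ice_cream_sales = 10 * temperature + rng.normal(0, 20, 200)  # driven by temperature
drownings = 0.5 * temperature + rng.normal(0, 2, 200)        # also driven by temperature

# Pearson r between the two "variables of interest" is high,
# yet neither variable causes the other.
r = np.corrcoef(ice_cream_sales, drownings)[0, 1]
print(f"Pearson r = {r:.2f}")

In this hypothetical, the correlational strategy would correctly report a strong LINEAR relationship, but only an experimental strategy (manipulating one variable while controlling the others) could speak to cause and effect.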
RESEARCH CONSIDERATIONS: STRATEGIES, DESIGNS AND PROCEDURES. STRATEGY: descriptive, correlational, etc. DESIGN: individual vs group, between vs within, number of variables, statistical analysis. PROCEDURE: the details of the experimental design, e.g., number of individuals/groups/treatments, measurement of variables, etc.

EXPERIMENTAL DESIGN: In general terms, a COMPARISON of individuals (groups) who have received different experimental "TREATMENTS". Experimental RESEARCH: laboratory, field. Experimental DESIGN: uses some type of CONTROL, then CHANGES a situation (INDEPENDENT VARIABLE/S) and observes any EFFECTS of those changes (DEPENDENT VARIABLE/S). Experiments should be designed to allow meaningful COMPARISONS; that is, independent variables OTHER than the one(s) of interest (i.e., CONFOUNDING variable/s) should NOT CHANGE (cf. apples and oranges).

EXPERIMENTAL DESIGN: RECAP (Direction/Association/Alternatives). The cause(s) must PRECEDE the effect(s). There must be an association (PATTERN OF CHANGE) between the INDEPENDENT and DEPENDENT variable(s). RULE OUT alternative explanations by manipulating the INDEPENDENT variable(s) and measuring changes (if any) in the DEPENDENT variable(s), while trying to CONTROL for EXTRANEOUS variables.

INDEPENDENT VARIABLE: The "LEVELS" of an independent variable refer to different administrations of the independent variable (also known as the experimental TREATMENT). The experiment makes a COMPARISON among TREATMENT conditions; that is, it compares CHANGES in the DEPENDENT variable(s) with CHANGES in (or levels of) the INDEPENDENT variable(s). The experimental design seeks to CONTROL for extraneous (including confounding) variables. Confounding variables may be KNOWN or UNKNOWN. Extraneous variables include both known and unknown confounding variables, hence the desire to control BOTH.

EXTRANEOUS (confounding) variables: PARTICIPANT variables: individual traits, e.g., sex, age, etc. ENVIRONMENT variables: time of day, temperature, etc. MEASUREMENT variables: varying observers/instruments, calibration checks, etc.

We try to CONTROL for extraneous variables by: (A) holding these variables CONSTANT, (B) MATCHING values across treatments, and/or (C) RANDOMIZATION (no systematic bias).

HOLDING VARIABLES CONSTANT: Standardize participants, environment, procedures, etc. Hold variables within certain limits (cf. constant). Cf. internal validity/external validity.

MATCHING VALUES ACROSS TREATMENTS: Identify values (for extraneous variables), then match/balance them across treatments.

RANDOMIZATION: Random SAMPLING and random ASSIGNMENT are NOT the same thing. Random SAMPLING: sample subjects at random from a population. Random ASSIGNMENT: assign subjects at random from a sample to different treatments (groups). The intent of RANDOM ASSIGNMENT is to be able to attribute (observed) CHANGES in the DEPENDENT variable to (systematic) CHANGES in the INDEPENDENT variable, by EXCLUDING alternatives on the reasoned basis of NO BIAS. No systematic bias means no systematic changes (expected) in extraneous variables across treatment conditions, which makes randomization a useful process for controlling UNKNOWN extraneous variables. A minimal sketch of the distinction follows.
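The Python sketch below separates the two steps; the population size, sample size, and group sizes are hypothetical choices made purely for illustration.

import random

random.seed(42)
population = list(range(1, 1001))        # 1,000 hypothetical individuals

# Random SAMPLING: draw 40 participants from the population
# (bears on EXTERNAL validity / generalizability of the findings).
sample = random.sample(population, 40)

# Random ASSIGNMENT: shuffle the sample and split it into two groups
# (bears on INTERNAL validity -- no systematic pre-existing differences expected).
random.shuffle(sample)
experimental_group = sample[:20]
control_group = sample[20:]

print(len(experimental_group), len(control_group))

Note that a study can use random assignment without random sampling (e.g., assigning a volunteer sample to groups), which is exactly the case where internal validity may be protected while external validity remains threatened.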
RANDOM ASSIGNMENT: One way to CONTROL for systematic PRE-EXISTING differences (known or unknown) between groups. Random assignment is PROBABILISTIC and hence UNBIASED (though this is not guaranteed in any single study). The intent of RANDOM ASSIGNMENT is to be able to attribute (observed) CHANGES in the DEPENDENT variable to (systematic) CHANGES in the INDEPENDENT variable, by EXCLUDING alternatives on the basis of NO BIAS.

CONTROL GROUPS: The EXPERIMENTAL group receives the experimental TREATMENT. The CONTROL group receives NO TREATMENT. The EFFECTS of the experimental TREATMENT are assessed by COMPARING the results of the EXPERIMENTAL group to those of the CONTROL group. The control group provides a BASELINE comparison against which treatment effects can be judged. In fact, the CONTROL group may receive NO TREATMENT, as indicated above, or a PLACEBO TREATMENT.

EXTERNAL VALIDITY: If a threat to external validity is a concern, the experimenter may look to increase the realism of the experimental set-up through SIMULATION or FIELD RESEARCH.

EXPERIMENTAL DESIGN: SOME EXAMPLES (For Better Or Worse).

VARIATION-1: Independent variable (i.e., treatment); experimental group; post-test (dependent variable). WEAKNESS: We do not know whether there was any CHANGE in the dependent variable.

VARIATION-2: Independent variable (i.e., treatment); experimental group; pre-test (dependent variable); post-test (dependent variable). WEAKNESS: We do not know that the treatment caused the CHANGE, if any, in the dependent variable. Some factor other than the independent variable may be responsible for change in the dependent variable.

VARIATION-3: Independent variable (i.e., treatment); experimental group; control group; post-test (dependent variable). WEAKNESS: Group differences might have existed before the treatment, so we do not know that the treatment caused any post-test group differences (if any are observed).

VARIATION-4: Independent variable (i.e., treatment); experimental group; control group; pre-test (dependent variable); post-test (dependent variable). WEAKNESS: None.

(CLASSICAL) EXPERIMENTAL DESIGN: Independent variable(s), i.e., treatment(s); experimental group(s); control group(s); random assignment; pre-test (dependent variable); post-test (dependent variable).
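The classical design can be sketched numerically. The Python fragment below is purely illustrative: all scores, group sizes, and the size of the treatment effect are simulated assumptions, not data from the slides. With random assignment plus pre-tests and post-tests for both groups, the treatment effect is estimated by comparing the CHANGE in the dependent variable in the experimental group against the baseline CHANGE in the control group.

import numpy as np

rng = np.random.default_rng(1)

# Pre-test scores (dependent variable) for two randomly assigned groups.
pre_experimental = rng.normal(50, 10, 30)
pre_control = rng.normal(50, 10, 30)

# Post-test scores: the experimental group receives the treatment
# (a hypothetical true effect of +5); the control group does not.
true_effect = 5
post_experimental = pre_experimental + true_effect + rng.normal(0, 3, 30)
post_control = pre_control + rng.normal(0, 3, 30)

# Estimate the treatment effect as the change in the experimental group
# minus the baseline change in the control group.
estimated_effect = (post_experimental - pre_experimental).mean() \
                 - (post_control - pre_control).mean()
print(f"Estimated treatment effect: {estimated_effect:.2f}")

Because the control group's pre/post change captures practice, fatigue, and other extraneous influences shared by both groups, subtracting it isolates the change attributable to the treatment, which is the comparison the classical design is built to support.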
