Document Details

Tags

experimental designs, research methods, psychology, study guides

Summary

This study guide covers experimental designs, including single-case (reversal/ABA and multiple baseline), quasi-experimental, and control series designs, along with threats to internal validity, inferential statistics (hypothesis testing, Type I and Type II errors, effect size, and common statistical tests), and internal versus external validity and generalizability. It appears to be suitable for an undergraduate psychology research methods course.

Full Transcript

Chapter 11

▪Experimental Designs
Understand the difference between various experimental designs (e.g., single-case, control series, and others).
- Single-case experimental design: an experimental design that allows cause-and-effect inferences based on data from one or a small number of research participants.
- Reversal design: a single-case design in which the treatment is introduced after a baseline period and then withdrawn during a second baseline period.
- One method of determining that the manipulation had an effect is to demonstrate the reversibility of the manipulation.
- Multiple baseline design: observing behavior before and after a manipulation under multiple circumstances, across individual participants, behaviors, or settings.
- Quasi-experimental designs: approximate the control features of true experiments in order to infer that a given treatment did have its intended effect.
- One-group posttest-only design: also called a "one-shot case study"; a quasi-experimental design that has no control group and no pretest comparison.
- One-group pretest-posttest design: obtains a comparison by measuring participants before and after the manipulation.
- Nonequivalent control group design: compares an experimental group with a separate control group, but the two groups are not equivalent (participants have not been randomly assigned to conditions).
- Nonequivalent control group pretest-posttest design: compares an experimental group with a nonequivalent control group and incorporates both a pretest and a posttest.
- Interrupted time series design: examines the dependent variable over an extended period of time, both before and after the independent variable is implemented.
- Control series design: an extension of the interrupted time series design in which there is a comparison or control group.

Focus on how a subject's behavior is tracked over time in a baseline control period in single-case designs.
- A change in the subject's behavior from the baseline period to the treatment period is evidence for the effectiveness of the manipulation.
- The basic issue in single-case experiments is how to determine that the manipulation of the independent variable had an effect.

▪Reversal and ABA Designs
Describe the purpose of reversal designs and how they demonstrate the reversibility of the effect of an independent variable. Study the structure and logic behind ABA designs and their variations.
- One method of determining that the manipulation had an effect is to demonstrate the reversibility of the manipulation.
- In an ABA design, behavior is observed during the baseline control (A) period, again during the treatment (B) period, and again during a second baseline (A) period after the experimental treatment has been removed.
- The design can be greatly improved by extending it to an ABAB design, in which the treatment is introduced a second time, or even to an ABABAB design, which allows the effect of the treatment to be tested a third time.

▪ABAB Design
Understand how ABAB designs improve on ABA designs by adding a second withdrawal period. Why might ending with the treatment phase be beneficial?
- Using an ABAB design provides the opportunity to observe a second reversal when the treatment is introduced again, and the sequence ends with the treatment rather than with the withdrawal of the treatment.
- Ending with the treatment, rather than with a treatment withdrawal, is beneficial because participants finish the study with the treatment in place.
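To make the reversal logic concrete, here is a minimal simulation sketch (not part of the original guide; the phase lengths, means, and variable names are illustrative assumptions) showing how behavior might change across the four phases of an ABAB design.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated daily counts of a target behavior in each phase of an ABAB design.
# Phase lengths and levels are invented for illustration only.
phases = {
    "A1 (baseline)":  rng.normal(10, 2, size=10),  # baseline level
    "B1 (treatment)": rng.normal(4, 2, size=10),   # behavior drops when treatment is introduced
    "A2 (baseline)":  rng.normal(10, 2, size=10),  # reversal: behavior returns toward baseline
    "B2 (treatment)": rng.normal(4, 2, size=10),   # second introduction of the treatment
}

# The reversal logic: the manipulation is convincing if behavior changes when the
# treatment is introduced AND reverts when the treatment is withdrawn.
for name, scores in phases.items():
    print(f"{name:15} mean = {scores.mean():.1f}")
```

In practice single-case data are usually graphed and inspected visually rather than averaged, but comparing phase means shows why a return to baseline in the second A phase strengthens the causal inference.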
▪Multiple Baseline Designs
Explore the concept of multiple baseline designs across situations, behaviors, and participants. Understand how these designs measure behavior in different settings or under different conditions.
- A change in behavior following the manipulation must be observed under multiple circumstances to rule out the possibility that other events were responsible.
- The multiple circumstances can be different individuals, different behaviors, or different settings.
- In a multiple baseline across subjects, the behavior of several subjects is measured over time; for each subject, though, the manipulation is introduced at a different point in time.
- In a multiple baseline across behaviors, several different behaviors of a single subject are measured over time.
- In a multiple baseline across situations, the same behavior is measured in different settings, such as at home and at work.
- The effectiveness of a treatment is demonstrated when a behavior changes only after the manipulation is introduced.

▪Threats to Internal Validity
Identify threats like regression toward the mean and how they affect one-group pretest-posttest designs. How can we address these threats in experimental design?
- History effect: any outside event that could be responsible for the results.
- Maturation effect: a naturally occurring change within the individual is responsible for the results.
- Testing effect: simply taking the pretest changes the participant's behavior.
- Instrument decay: a change in the measuring instrument (including observers) is responsible for the results.
- Regression toward the mean: the principle that extreme scores on a variable tend to be closer to the mean when a second measurement is made (see the simulation sketch after the Instrument Decay notes below).

▪History Effects
How can extraneous events (history effects) that occur between measurements confound results?
- History refers to any event that occurs between the first and second measurements but is not part of the manipulation; any such event is confounded with the manipulation. History effects can be caused by virtually any confounding event that occurs at the same time as the experimental manipulation.

▪Maturation Effects
Understand changes that occur systematically over time and their impact on the validity of studies.
- Any changes that occur systematically over time are called maturation effects.

▪Selection Differences
Understand how selection differences occur, especially when participants are chosen from natural groups, and their impact on experimental results.
- Selection differences usually occur when the participants who form the two groups in the experiment are chosen from existing natural groups.

▪Cohort Effects
Understand how differences between age groups in cross-sectional studies may reflect generational differences (cohort effects) rather than developmental changes.
- In a cross-sectional study, a difference among groups of different ages may reflect developmental age changes; however, the differences may instead result from cohort effects. Most important, the researcher must infer that the differences among age groups are due to the developmental variable of age. The developmental change is not observed directly among the same group of people, but instead is based on comparisons among different cohorts of individuals.

▪Sequential Designs
Understand the compromise between longitudinal and cross-sectional methods through sequential designs.
- A sequential design is a combination of the cross-sectional and longitudinal designs used to study developmental research questions.

▪Instrument Decay
Understand how measurement standards may change over time and how this can affect results.
- Sometimes, the basic characteristics of the measuring instrument change over time.
- Consider sources of instrument decay when human observers are used to measure behavior: over time, an observer may gain skill, become fatigued, or change the standards on which observations are based.
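The "regression toward the mean" threat listed above can be made concrete with a small simulation (added for illustration; the population size, mean, and error values are invented). Two noisy measurements of the same stable trait are generated, and people selected for extreme scores on the first test score closer to the mean on the second, even though nothing was done to them.

```python
import numpy as np

rng = np.random.default_rng(1)

# Each person's stable "true" level plus independent measurement error on two occasions.
# Population size, mean, and error size are invented for illustration only.
true_level = rng.normal(100, 10, size=10_000)
test1 = true_level + rng.normal(0, 10, size=10_000)
test2 = true_level + rng.normal(0, 10, size=10_000)

# Select only the people with extreme (top 5%) scores on the first test.
extreme = test1 >= np.percentile(test1, 95)

print(f"Test 1 mean of extreme scorers: {test1[extreme].mean():.1f}")
print(f"Test 2 mean of the same people: {test2[extreme].mean():.1f} (closer to 100)")
```

This is why a one-group pretest-posttest study that recruits participants because of extreme pretest scores can show apparent "improvement" that is nothing more than regression toward the mean.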
▪Testing Effects
Describe testing effects, including how pretests can influence participants' subsequent behavior.
- Testing becomes a problem if simply taking the pretest changes the participant's behavior.

▪One-Group Pretest-Posttest Designs
Understand the structure and limitations of one-group pretest-posttest designs, including the absence of control groups.
- This design obtains a comparison by measuring participants before and after the manipulation. The absence of a control group makes it susceptible to various threats to internal validity, because the treatment effect cannot be isolated from other potential influences.

Chapter 13

▪Control Groups
Understand the importance of control groups in experimental research to draw causal conclusions.
- A control group allows researchers to isolate the effect of the independent variable being tested.

▪Inferential Statistics
Understand the purpose of inferential statistics in making conclusions about data, and how they differ from descriptive statistics.
- Inferential statistics are used to determine whether the results match what would happen if the experiment were conducted repeatedly with multiple samples. They also allow researchers to make inferences about the true difference in the population on the basis of the sample data.
- Inferential statistics are used to make predictions or generalizations about a larger population based on a sample of data, whereas descriptive statistics simply summarize the characteristics of a data set without attempting to draw conclusions about a wider group.

▪Hypotheses in Testing
Differentiate between null and research hypotheses, focusing on what each implies about the independent variable's effect.
- The null hypothesis is that the population means are equal and that any observed difference is due to random error; the independent variable had no effect.
- The research hypothesis is that the population means are, in fact, not equal; the independent variable did have an effect.

▪Probability
Understand how probability is used to determine the likelihood of outcomes and its relevance in inferential statistics.
- Probability is the likelihood of the occurrence of some event or outcome.
- Specifically, inferential statistics give the probability that the difference between means reflects random error rather than a real difference, allowing researchers to make inferences about a population based on information extracted from a sample.

▪Type I and Type II Errors
Understand the difference between rejecting a true null hypothesis (Type I error) and failing to reject a false null hypothesis (Type II error). A small simulation sketch follows the Null Hypothesis Examples below.
- A Type I error is made when we reject the null hypothesis but the null hypothesis is actually true.
- A Type II error occurs when the null hypothesis is accepted although, in the population, the research hypothesis is true.

▪Null Hypothesis Examples
Study examples of null hypotheses in real-world research scenarios.
- H0 (null hypothesis): the population mean of the no-model group is equal to the population mean of the model group.
- Sports science scenario: studying whether a new training regimen improves athletes' endurance.
  ○ Null hypothesis (H0): The new training regimen has no effect on athletes' endurance. This assumes that any difference in endurance is not due to the new training regimen.
- Footwear scenario: investigating whether wearing a specific type of shoe improves running performance.
  ○ Null hypothesis (H0): Wearing the specific type of shoe has no effect on running performance. This suggests that the shoe choice does not lead to significant changes in running performance.
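As a brief illustration of these ideas (added here, not part of the original guide; the group sizes and number of simulated studies are arbitrary), the sketch below simulates many two-group experiments in which the null hypothesis is actually true and counts how often a t-test rejects it at the .05 significance level.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
alpha = 0.05          # significance level = the Type I error rate we are willing to accept
n_experiments = 2000  # number of simulated studies (arbitrary choice)
false_alarms = 0

for _ in range(n_experiments):
    # Both groups come from the SAME population, so the null hypothesis is true.
    control = rng.normal(50, 10, size=30)
    treatment = rng.normal(50, 10, size=30)
    t_stat, p_value = stats.ttest_ind(treatment, control)
    if p_value < alpha:  # rejecting a true null hypothesis is a Type I error
        false_alarms += 1

print(f"Observed Type I error rate: {false_alarms / n_experiments:.3f} (expected about {alpha})")
```

A Type II error could be explored the same way by giving the treatment group a genuinely higher population mean and counting how often the test fails to reach significance.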
▪Effect Size
Describe how effect size quantifies the magnitude of a result and its importance in interpreting statistical findings.
- Effect size is a statistical measure that quantifies the strength or magnitude of a relationship between variables, or of the difference between groups, in a study. It provides a practical interpretation of the result beyond statistical significance: it tells you how meaningful the observed effect is, regardless of sample size, which allows for better comparison across different studies.

▪Statistical Tests
Identify various statistical tests, their purposes, and when to use them (e.g., chi-square, t-tests, F tests, Pearson correlation). A short code sketch of several of these tests appears after the Sampling Distributions notes below.
- t-test: commonly used to examine whether two groups are significantly different from each other.
- F test: used to compare the variances of two or more groups to see if they are significantly different.
- Analysis of variance (ANOVA): a more general statistical procedure than the t-test. It is used when there are three or more levels of an independent variable, or when a factorial design with two or more independent variables (at least two levels of each factor) has been used.
- Chi-square: determines whether there is a significant difference between expected and observed frequencies in categorical data.
- Pearson correlation: a common correlation coefficient; it measures the strength and direction of a linear relationship between two continuous variables.

▪Systematic Variance
Understand systematic variance and its relationship to group differences.
- Systematic variance is the deviation of the group means from the grand mean, or the mean score of all individuals in all groups. Systematic variance is small when the difference between group means is small and increases as the group mean differences increase.

▪Sampling Distributions
Describe the role of sampling distributions in hypothesis testing and determining probabilities.
- A sampling distribution is a theoretical probability distribution that allows researchers to calculate the likelihood of observing a particular sample statistic if the null hypothesis is true.
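The sketch below (added for illustration, with made-up data) shows how the tests named above, along with a simple Cohen's d effect size, might be computed using numpy and scipy; the group means, sample sizes, and frequencies are invented.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)

# Two invented groups of 25 scores each (e.g., model vs. no-model condition).
group1 = rng.normal(55, 10, size=25)
group2 = rng.normal(50, 10, size=25)

# t-test: are the two group means significantly different?
t_stat, t_p = stats.ttest_ind(group1, group2)

# Cohen's d effect size: mean difference in pooled-standard-deviation units.
pooled_sd = np.sqrt((group1.var(ddof=1) + group2.var(ddof=1)) / 2)
cohens_d = (group1.mean() - group2.mean()) / pooled_sd

# Chi-square goodness of fit: observed vs. expected frequencies for categorical data.
chi2, chi_p = stats.chisquare(f_obs=[30, 20, 10], f_exp=[20, 20, 20])

# Pearson correlation: linear relationship between two continuous variables.
x = rng.normal(size=50)
y = 0.6 * x + rng.normal(scale=0.8, size=50)
r, r_p = stats.pearsonr(x, y)

print(f"t = {t_stat:.2f}, p = {t_p:.3f}, Cohen's d = {cohens_d:.2f}")
print(f"chi-square = {chi2:.2f}, p = {chi_p:.3f}")
print(f"Pearson r = {r:.2f}, p = {r_p:.3f}")
```

A one-way analysis of variance for three or more groups could be run in the same style with scipy.stats.f_oneway(group1, group2, group3).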
Chapter 14

▪Internal vs. External Validity
Differentiate between internal validity (cause-and-effect accuracy) and external validity (generalizability of findings).
- Internal validity: the accuracy of conclusions drawn about cause and effect.
- External validity: the extent to which findings may be generalized.

▪Generalizability
Describe the factors that influence generalizability, including the sample population and replication methods.
- Volunteers and internet users
- Sex, gender, race, and identity
- Location and culture
- Nonhuman animals
- Meta-analysis
- Literature reviews

▪Replication in Research
Understand the importance of replication in verifying findings and overcoming generalization problems.
- Replication of research is a way of overcoming any problems of generalization that occur in a single study.

▪Exact Replications
Understand exact replications and their role in confirming the reliability of study results.
- Exact replication: an attempt to precisely replicate the procedures of a study.

▪Volunteer Bias and Gender Generalization
Examine how volunteer characteristics can affect research findings and the limits of generalizing results based on gender-specific studies. Volunteers tend to be:
- More highly educated
- Of a higher socioeconomic status
- More in need of approval
- More social

▪Random Sampling
Understand how random sampling enhances external validity by ensuring a representative sample.
- Random sampling helps ensure that the participants selected for a study accurately reflect the broader population being studied, thereby allowing researchers to generalize their findings with greater confidence to the larger group. By randomly choosing participants, the likelihood of bias in the sample is minimized, leading to a more accurate representation of the population as a whole.

▪Conceptual Replications
Identify how conceptual replications extend findings by testing them under different conditions or with varied operational definitions.
- Conceptual replication: the use of different procedures to replicate a research finding.
- In a conceptual replication, the same independent variable is operationalized in a different way, and the dependent variable may be measured in a different way as well. Complete understanding of any variable involves studying the variable using a variety of operational definitions.

▪External Validity Threats
Explore common threats to external validity, such as sample bias and stereotypical assumptions, and ways to address them (a short sampling sketch follows this list).
1. Sample bias
   - How to fix it: random and stratified sampling
2. Volunteer bias
   - How to fix it: recruitment strategies
3. Stereotyping and overgeneralization
   - How to fix it: avoid stereotypical framing
   - Focus on individual variability
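To make the "random and stratified sampling" fix above concrete, here is a minimal sketch (added for illustration; the participant pool, strata, and sample sizes are invented) comparing simple random sampling with proportionate stratified sampling.

```python
import random

random.seed(4)

# Invented participant pool: 300 undergraduates, 150 graduate students, 50 community members.
population = ["undergrad"] * 300 + ["graduate"] * 150 + ["community"] * 50

# Simple random sampling: every member of the population has an equal chance of selection.
simple_sample = random.sample(population, k=50)

# Proportionate stratified sampling: sample each stratum in proportion to its size,
# so that smaller groups are not underrepresented by chance.
strata = {"undergrad": 300, "graduate": 150, "community": 50}
total = sum(strata.values())
stratified_sample = []
for group, size in strata.items():
    members = [p for p in population if p == group]
    k = round(50 * size / total)  # this stratum's proportional share of a 50-person sample
    stratified_sample.extend(random.sample(members, k=k))

print({g: simple_sample.count(g) for g in strata})      # composition varies by chance
print({g: stratified_sample.count(g) for g in strata})  # composition matches population proportions
```

Both approaches support external validity; stratified sampling additionally guarantees that each subgroup appears in the sample in the correct proportion.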
