
Week 4 Cooper Chapter 10.pdf


Full Transcript


Chapter 10: Planning and Evaluating Applied Behavior Analysis Research
Cooper, Heron, and Heward, Applied Behavior Analysis, Second Edition. Copyright © 2007 by Pearson Education, Inc. All rights reserved.

Purpose of Graphic Displays
◦ Primary function is communication
◦ Display relationships between the dependent variable and the independent variable
◦ Summarize the data collected
◦ Facilitate accurate analysis

Fundamental Properties of Behavior Change (see the sketch at the end of this section)
◦ Level (high, medium, low)
◦ Trend (increasing, decreasing, no trend)
◦ Variability (high or low)

Example Types of Graphs Utilized in ABA
◦ Line graphs
◦ Bar graphs
◦ Cumulative records
◦ Semilogarithmic charts (e.g., the Standard Celeration Chart)
◦ Scatterplots

Importance of the Individual Subject
◦ Enables applied behavior analysts to discover and refine effective interventions for socially significant behaviors
◦ Contrasted with the groups-comparison approach

Groups-Comparison Experiment
◦ Randomly selected pool of subjects from the relevant population
◦ Divided into experimental and control groups
◦ Pretest, application of the independent variable to the experimental group, and posttest

Group Data Are Not Representative of Individual Performance
◦ Some individuals within a group could stay the same or get worse, while improvement by others makes it appear as though there was overall average improvement
◦ To be most useful, a treatment must be understood at the individual level

Group Data Mask Variability
◦ Group data hide variability that occurs within and between subjects
◦ Statistical control should not be a substitute for experimental control
◦ To control the effects of any variable, it must either be held constant or manipulated as an independent variable

Absence of Intrasubject Replication
◦ The power of replicating effects with individual subjects is lost in group designs
◦ There are many applied situations in which the overall performance of a group is socially significant
◦ When group results do not represent individual performance, the data should be supplemented with individual results

Importance of Flexibility in Design
◦ An effective researcher must actively design each experiment so that it has its own unique design
◦ A good experimental design is any manipulation of the independent variable that produces data that convincingly address the research question

Experimental Designs
◦ Designs often entail a combination of analytic tactics
◦ Component analysis of elements
◦ There is an infinite number of possible designs with different combinations
◦ The most effective designs use ongoing evaluation of data from individual subjects to employ the baseline logic of prediction, verification, and replication
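As a concrete illustration of the three fundamental properties of behavior change listed above (level, trend, variability), here is a minimal Python sketch that summarizes each for a series of session data. The function name, the least-squares slope as a stand-in for trend, and the standard deviation as a measure of variability are illustrative assumptions, not part of the text.

```python
# Minimal sketch (not from the text): summarizing level, trend, and
# variability for a series of session-by-session response measures.
# The slope-for-trend and standard-deviation-for-variability choices
# are illustrative only; visual analysis remains the primary tool.
import numpy as np

def summarize_behavior_change(responses_per_session):
    data = np.asarray(responses_per_session, dtype=float)
    sessions = np.arange(1, len(data) + 1)

    level = data.mean()                       # overall level of responding
    trend = np.polyfit(sessions, data, 1)[0]  # slope: + increasing, - decreasing
    variability = data.std(ddof=1)            # spread around the level

    return {"level": level, "trend": trend, "variability": variability}

# Example with hypothetical baseline data from one subject
print(summarize_behavior_change([12, 14, 11, 15, 13, 16, 14]))
```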
Internal Validity
◦ Experiments that demonstrate clear functional relations have a high degree of internal validity
◦ Experimental control refers to controlling all relevant variables
◦ Steady state responding serves as evidence of control
◦ Confounding variables are threats to internal validity

Subject Confounds
◦ Maturation: changes in the subject over the course of the experiment
◦ Repeated measurement controls for and detects uncontrolled variables

Setting Confounds
◦ Studies in natural settings are more prone to confounding variables than studies in controlled laboratories
◦ If a change in setting occurs, the new conditions should be held constant until steady state responding is observed

Measurement Confounds
◦ Observer drift or bias
◦ Keeping observers naive to expected outcomes can reduce observer bias
◦ Baseline conditions must be maintained long enough for reactive effects to run their course and stable responding to be obtained
◦ Intermittent probes can be used, except when practice effects would be expected

Independent Variable Confounds
◦ Placebo control separates effects produced by the subject's expectations
◦ Double-blind control eliminates confounding by subject expectations, teacher and parent expectations, differential treatment by others, and observer bias

Treatment Integrity
◦ Similar to procedural fidelity
◦ The extent to which the independent variable is implemented or carried out as planned
◦ Low treatment integrity makes it very difficult to confidently interpret experimental results
◦ Treatment drift: when the application of the independent variable in later phases differs from its original application

Precise Operational Definition
◦ A high level of treatment integrity requires a complete, precise operational definition of treatment procedures
◦ Define in 4 dimensions: verbal, physical, spatial, and temporal
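To show how the four dimensions of a precise operational definition can be written down explicitly, here is a minimal sketch using a Python dataclass. The class, field names, and example content are hypothetical illustrations assembled from the examples in the following slides, not a structure given in the text.

```python
# Minimal sketch (not from the text): recording the four dimensions of an
# operational definition so that none of them is left implicit.
# All example content below is hypothetical.
from dataclasses import dataclass

@dataclass
class OperationalDefinition:
    behavior: str
    verbal: str    # what is said, how, and under what circumstances
    physical: str  # observable movements or actions involved
    spatial: str   # where the behavior occurs / position of the body
    temporal: str  # when it occurs, how long it lasts, how often

tantrum = OperationalDefinition(
    behavior="Tantrum",
    verbal='Yelling statements such as "I don\'t want to do this" louder than conversational speech',
    physical="Crying and lying on the floor",
    spatial="Anywhere in the classroom",
    temporal="Lasting more than 30 seconds, within 5 minutes of being told to transition",
)
print(tantrum)
```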
Verbal Dimension
◦ The content or nature of the verbal behavior
◦ What is said, how it is said, and under what circumstances
◦ Example: yelling statements like "I don't want to do this" at a volume louder than conversational speech

Physical Dimension
◦ The observable physical movements or actions involved in the behavior
◦ What physically happens; able to be clearly observed and measured
◦ Example: hitting another person's arm or back with a closed fist, or flapping hands above shoulder height with a rapid wrist movement

Spatial Dimension
◦ The location or area where the behavior occurs, or the position of the body during the behavior
◦ Ensures context and location are both included in the definition
◦ Example: running across the classroom from one corner to another without permission

Temporal Dimension
◦ The timing of the behavior, including when it occurs, how long it lasts, and its frequency or duration
◦ Example: tantrums defined as crying and lying on the floor for more than 30 seconds, occurring within 5 minutes of being told to transition to a new activity

Simplify, Standardize, and Automate
◦ Simple, precise treatments are more likely to be delivered consistently
◦ Simple, easy-to-implement techniques are more likely to be used and socially validated
◦ Experimenters should standardize as many aspects as possible and practical
◦ If possible without compromising the treatment, an automated device could be used to deliver the independent variable

Training and Practice
◦ Train or provide practice for the individuals who will conduct the experimental sessions
◦ Could provide a detailed script, verbal instructions, modeling, or performance feedback

Assessing Treatment Integrity
◦ Collect treatment integrity data to measure how closely the actual implementation of the conditions matches the written methods (a simple summary calculation is sketched after this section)
◦ Observation and calibration give the researcher the ongoing ability to use retraining and practice to ensure high treatment integrity
◦ Reduce, eliminate, or identify the influence of as many potentially confounding variables as possible

Social Validity
◦ Includes the social significance of the target behavior, the appropriateness of the procedures, and the social importance of the results
◦ Usually assessed by asking direct and indirect consumers
◦ Consumer satisfaction

Social Importance of Behavior Change Goals
To determine socially valid goals:
◦ Assess the performance of persons considered competent
◦ Experimentally manipulate different levels of performance to determine which produces optimal results

Social Importance of Interventions
◦ Rating scales and questionnaires for obtaining consumers' opinions on the acceptability of interventions
◦ Examples: Intervention Rating Profile, Treatment Acceptability Rating Form
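One common way to summarize treatment integrity data, as referenced in the Assessing Treatment Integrity slide above, is the percentage of protocol steps implemented as written during an observed session. The sketch below assumes a simple checklist representation; the step names and scoring scheme are illustrative, not from the text.

```python
# Minimal sketch (not from the text): treatment integrity expressed as the
# percentage of protocol steps implemented as written in one observed session.
# The checklist format and step names are hypothetical.
def treatment_integrity(observed_steps):
    """observed_steps: dict mapping protocol step -> True if implemented as written."""
    if not observed_steps:
        return 0.0
    implemented = sum(1 for done in observed_steps.values() if done)
    return 100.0 * implemented / len(observed_steps)

session_checklist = {
    "Deliver task direction verbatim from script": True,
    "Wait 5 s before prompting": True,
    "Deliver praise within 3 s of correct response": False,
    "Record response immediately": True,
}
print(f"Treatment integrity: {treatment_integrity(session_checklist):.0f}%")  # 75%
```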
Social Importance of Behavior Changes
Methods for assessing outcomes:
◦ Compare the subject's performance to a normative sample
◦ Use a standardized assessment instrument
◦ Ask consumers to rate the social validity of the performance
◦ Ask experts to evaluate the subject's performance
◦ Test the subject's new performance in the natural environment

Normative Sample
◦ Not limited to posttreatment comparisons
◦ Compare the subject's behavior to ongoing probes of the behavior of a normative sample to provide an ongoing measure of how much improvement has occurred and how much is still needed (see the sketch at the end of this section)

Consumers and Experts
◦ The most frequently used method for assessing social validity is to ask consumers
◦ Experts can be called upon to judge the social validity of some behavior changes

Standardized and Real-World Tests
◦ Example of a standardized test: the Self-Injury Trauma (SIT) Scale
◦ A real-world test in the natural environment provides a direct assessment of social validity
◦ It also exposes the subject to naturally occurring reinforcement, which may promote maintenance and generalization

Reference: Iwata BA, Pace GM, Kissel RC, Nau PA, Farber JM. The Self-Injury Trauma (SIT) Scale: A method for quantifying surface tissue damage caused by self-injurious behavior. Journal of Applied Behavior Analysis. 1990;23(1):99-110. doi:10.1901/jaba.1990.23-99.

External Validity
◦ The degree to which a functional relation found in an experiment will hold under different conditions
◦ A matter of degree, not all-or-nothing
◦ Findings with greater degrees of generality make a greater contribution to applied behavior analysis

External Validity and Groups-Design Research
◦ There is nothing in the results of a groups-design experiment that, by itself, establishes external validity:
  ▪ Limited sample representativeness: the sample may not represent broader populations
  ▪ Controlled experimental conditions: do not reflect the real world
  ▪ Manipulation of variables
  ▪ Artificiality of experimental interventions
  ▪ Homogeneous group design: group averages
  ▪ Time-bound findings: changes in social and cultural context
◦ Unable to provide data that lead to improved practice in education
◦ Groups designs are effective for large-scale evaluations

External Validity and Applied Behavior Analysis
◦ The generality of findings in ABA is assessed, established, and specified through replication of experiments
◦ Two major types of scientific replication: direct and systematic
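To illustrate the normative-sample comparison described above, the following minimal sketch checks whether a subject's probe data fall within the range observed for a normative sample at the same probe points. The data values and the use of a simple min-max band are illustrative assumptions, not procedures given in the text.

```python
# Minimal sketch (not from the text): comparing a subject's ongoing probe data
# against the range of a normative sample measured at the same probe points.
# All numbers below are hypothetical.
def within_normative_range(subject_probes, normative_probes):
    results = []
    for subject_value, peer_values in zip(subject_probes, normative_probes):
        low, high = min(peer_values), max(peer_values)
        results.append(low <= subject_value <= high)
    return results

subject = [4, 7, 11, 14]  # subject's probes across four points in time
normative = [[10, 14, 18], [11, 15, 19], [12, 16, 20], [13, 17, 21]]  # peers' probes
print(within_normative_range(subject, normative))  # [False, False, False, True]
```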
Direct Replication
◦ Duplicates exactly the conditions of an earlier experiment
◦ Intrasubject direct replication uses the same subject to establish the reliability of a functional relation
◦ Intersubject direct replication uses different but similar subjects to determine generality

Systematic Replication
◦ The researcher purposefully varies one or more aspects of an earlier experiment
◦ Can demonstrate both the reliability and the external validity of earlier findings
◦ Any aspect can be altered: subjects, setting, administration of the independent variable, or target behaviors

Evaluating Applied Behavior Analysis Research
Questions to ask in evaluating the quality of research in applied behavior analysis fall under 4 categories:
◦ Internal validity
◦ Social validity
◦ External validity
◦ Scientific and theoretical significance

Internal Validity
◦ The extent to which a study or experiment can demonstrate a causal relationship between an independent variable (intervention/treatment) and a dependent variable (the behavior being measured)
◦ Must decide whether a functional relation has been demonstrated
◦ Requires close examination of the measurement system, the experimental design, and the researcher's control of potential confounds

Threats to Internal Validity
◦ History: events outside of the intervention period that may affect the outcome (e.g., the child starts taking medication for aggression)
◦ Maturation: changes that occur within the subject over time, such as growing older or gaining experience
◦ Testing: the act of repeatedly measuring behavior can itself lead to improvements (e.g., test performance)
◦ Instrumentation: changes in the measurement instruments or observers over time can affect how behavior is recorded (e.g., different observers use different criteria)
◦ Regression to the mean: if behavior was particularly extreme during initial measurement, it may move toward typical levels without intervention
◦ Selection bias: differences in participant characteristics can influence the results if groups or individuals are not randomly selected

Strengthening Internal Validity
◦ Use single-subject designs to establish clear cause-and-effect relationships
◦ Ensure consistent measurement through strong operational definitions of target behaviors, careful data collection, and observer training
◦ Use controls such as reversal designs or alternating treatments designs to demonstrate that changes in behavior are due to the intervention
◦ Replicate the study or design with other individuals, in different settings, or across multiple behaviors

Evaluating Internal Validity
◦ Definition and measurement of the dependent variable
◦ Graphic display (an example display is sketched after this list)
◦ Meaningfulness of baseline conditions
◦ Experimental design
◦ Visual analysis and interpretation
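As a concrete example of the graphic display and visual analysis items above, the sketch below plots hypothetical baseline and intervention data for one subject with a phase-change line, using matplotlib. The data, labels, and styling are assumptions for illustration only.

```python
# Minimal sketch (not from the text): a single-subject line graph with a
# phase-change line separating baseline (A) from intervention (B), the kind
# of display used for visual analysis of level, trend, and variability.
# All data points are hypothetical.
import matplotlib.pyplot as plt

baseline = [9, 11, 10, 12, 10]        # sessions 1-5
intervention = [7, 6, 5, 4, 4, 3]     # sessions 6-11
sessions = range(1, len(baseline) + len(intervention) + 1)

plt.plot(sessions, baseline + intervention, marker="o", color="black")
plt.axvline(x=len(baseline) + 0.5, linestyle="--", color="gray")  # phase change
plt.ylim(0, 14)
plt.text(2, 13, "Baseline (A)")
plt.text(7, 13, "Intervention (B)")
plt.xlabel("Sessions")
plt.ylabel("Responses per session")
plt.title("Hypothetical A-B display for visual analysis")
plt.show()
```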
Visual Analysis and Interpretation
Factors that favor visual analysis over tests of statistical significance in ABA:
◦ The goal is socially significant behavior change, not statistically significant change
◦ Visual analysis is good for identifying variables that produce strong, large, and reliable effects
◦ Accepting statistical analysis as evidence of a functional relation may lead the researcher not to experiment further
◦ Tests of statistical significance may force data sets to conform to the requirements of the test, losing flexibility in design

Errors
◦ Type I error: the researcher concludes that the independent variable had an effect on the dependent variable when it did not
◦ Type II error: the researcher concludes that the independent variable did not have an effect on the dependent variable when it did
◦ Visual analysis leads to fewer Type I errors and more Type II errors
◦ Statistical analysis leads to more Type I errors and fewer Type II errors
(A small simulation of the Type I error rate appears at the end of this section.)

Social Validity
◦ The independent variable should be assessed in terms of its effects on the dependent variable, as well as its social acceptability, complexity, practicality, and cost
◦ Consider the maintenance and generalization of behavior change in evaluating a study

External Validity
◦ The extent to which the findings of an intervention or study can be generalized beyond the specific conditions of the experiment or intervention
◦ Can the results from a particular setting, individual, or behavior be applied to other individuals, environments, or behaviors?
◦ To effectively judge external validity, compare a study's results with those of other relevant published research

Key Aspects of External Validity
◦ Generalization to other individuals
◦ Generalization to other settings
◦ Generalization across time
◦ Generalization across behaviors

Threats to External Validity
◦ A small sample may not be applicable to others with different characteristics in different environments
◦ Highly controlled settings minimize distractions and extraneous variables, whereas real-world settings are less predictable and more complex
◦ Unique characteristics of participants may not generalize, particularly if the intervention is tailored to a small, homogeneous group
◦ A short study duration makes it difficult to determine whether the results will persist over time
◦ The specific behaviors targeted may not generalize to other related behaviors

Strengthening External Validity
◦ Include diverse participants
◦ Use real-world settings
◦ Test for generalization across time
◦ Train for generalization
◦ Replicate in different contexts
◦ Measure generalization to other behaviors (improvements in other, non-targeted behaviors)

Theoretical Significance and Conceptual Sense
◦ Evaluate a study in terms of its scientific merit
◦ Look at its contribution to the advancement of the field
◦ "Knowledgeable reproducibility"
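To make the Type I error definition above concrete, the following sketch simulates many baseline/intervention comparisons in which the intervention truly has no effect and counts how often a conventional statistical test nonetheless reports a difference. The choice of test (Welch's t-test via scipy), the alpha level, and the data parameters are illustrative assumptions, not part of the text.

```python
# Minimal sketch (not from the text): estimating a Type I error rate by
# simulating experiments where the independent variable has NO effect and
# counting how often a t-test at alpha = .05 still reports "significance".
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
false_positives = 0
n_simulations = 2000

for _ in range(n_simulations):
    baseline = rng.normal(loc=10, scale=2, size=8)    # no true change:
    treatment = rng.normal(loc=10, scale=2, size=8)   # same distribution
    _, p = stats.ttest_ind(baseline, treatment, equal_var=False)
    if p < 0.05:
        false_positives += 1  # Type I error: effect claimed where none exists

print(f"Estimated Type I error rate: {false_positives / n_simulations:.3f}")  # ~0.05
```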
Need for More Thorough Analyses
◦ Need for a more conceptual understanding of the principles that underlie successful demonstrations of behavior change
◦ Readers should consider the technological description, the interpretation of results, and the level of conceptual integrity in experimental reports
