Applied Behavior Analysis - Chapter 10 PDF
John O. Cooper, Timothy E. Heron, William L. Heward
Summary
This document covers Chapter 10 of "Applied Behavior Analysis", Third Edition, a textbook by John O. Cooper, Timothy E. Heron, and William L. Heward. The chapter addresses planning and evaluating applied behavior analysis research, including experimental design, internal and external validity, treatment integrity, and social validity.
Full Transcript
Applied Behavior Analysis, Third Edition
Chapter 10: Planning and Evaluating Applied Behavior Analysis Research
Copyright © 2020, 2007, 1990 Pearson Education, Inc. All Rights Reserved

Learning Objectives
Section 1. Foundations
D. Experimental Design
D-1 Distinguish between dependent and independent variables.
D-2 Distinguish between internal and external validity.
D-3 Identify the defining features of single-subject experimental designs (e.g., individuals serve as their own controls, repeated measures, prediction, verification, replication).
D-4 Describe the advantages of single-subject experimental designs compared to group designs.
D-5 Use single-subject experimental designs (e.g., reversal, multiple baseline, multielement, changing criterion).
D-6 Describe rationales for conducting comparative, component, and parametric analyses.
Section 2. Applications
H. Selecting and Implementing Interventions
H-6 Monitor client progress and treatment integrity.

Importance of the Individual Subject in Behavior Analysis Research
Behavior analysis research methods feature direct and repeated measures of the behavior of individual organisms.
The between-groups approach to experimental design has predominated in "behavioral research" in psychology, education, and other social sciences for decades.
– Researchers who compare measures of groups of subjects in this way do so for two primary reasons:
▪ The assumption that averaging the measures of many subjects' performance controls for intersubject variability
– Enables the belief that any changes in performance are brought about by the independent variable
▪ The belief that increasing the number of subjects increases the study's external validity
– A treatment variable found effective with the subjects in the experimental group will also be effective with other subjects in the population
Four Fundamental Concerns with Typical Between-Groups Designs
Group data:
– May not represent the performance of individual subjects
– Mask variability
– Do not represent real behavioral processes
– Lack intrasubject replication

Group Data May Not Represent the Performance of Individual Subjects
The average performance of a group of subjects reveals nothing about the performance of individual subjects.
Factors responsible for one subject's improvement and another's lack of improvement must be discovered.

Figure 10.1 Hypothetical data showing that the mean performance of a group of subjects may not represent the behavior of individual subjects.

Group Data Mask Variability
The mean performance of a group of subjects hides variability in the data.
– A researcher who relied on the group's mean performance as the primary indicator of behavior change would be ignorant of the variability that occurred within and between subjects.
When repeated measurement reveals significant levels of variability, an experimental search for the factors responsible for the variability is in order.

Group Data Do Not Represent Real Behavioral Processes
Skinner (1938) contended that researchers must demonstrate behavior–environment relations at the level of the individual organism or risk discovering synthetic phenomena that represent mathematical, not behavioral, processes.
"The use of separate groups destroys the continuity of cause and effect that characterizes an irreversible behavioral process" (Sidman, 1960, p. 53).
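The first two concerns above can be illustrated with a short sketch using hypothetical numbers (not data from the text): a group mean can remain nearly flat across sessions even while individual subjects change sharply in opposite directions.

```python
# Hypothetical illustration: the mean of a group can stay almost flat
# even when half the subjects improve steadily and half deteriorate,
# masking both individual trends and within-group variability.

def mean(values):
    """Arithmetic mean of a list of session scores."""
    return sum(values) / len(values)

# Session-by-session scores for four hypothetical subjects.
subjects = {
    "S1": [10, 14, 18, 22],   # steadily improving
    "S2": [10, 13, 17, 21],   # steadily improving
    "S3": [10, 7, 4, 1],      # steadily worsening
    "S4": [10, 6, 3, 0],      # steadily worsening
}

sessions = len(next(iter(subjects.values())))
group_means = [mean([scores[i] for scores in subjects.values()])
               for i in range(sessions)]

print(group_means)            # the group curve is nearly flat...
for name, scores in subjects.items():
    print(name, scores)       # ...while no individual subject is
```

A researcher reading only `group_means` ([10.0, 10.0, 10.5, 11.0]) would conclude "no change," missing that every subject changed substantially.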
Between-Groups Designs Lack Intrasubject Replication
The power of replicating effects within and across individual subjects is lost.
One of the great strengths of single-case experimental designs is the convincing demonstration of functional relations made possible by replication of treatment effects.
– "An effect that emerges only after individual data have been combined is probably artifactual and not representative of any real behavioral processes" (Johnston & Pennypacker, 1980, p. 257).
Improving the overall performance of a group is socially significant in many applied situations.
– When group results do not represent individual performances, researchers should supplement group data with individual results.

Importance of Flexibility in Experimental Design
An effective experimental design is any sequence of independent variable manipulations that produces data that are interesting and convincing to the researcher and the audience.
No ready-made experimental designs await selection, nor is there a set of rules that must be followed.
– "There are no rules of experimental design" (Sidman, 1960, p. 214).

Experimental Designs Combining Analytic Tactics
An experimental design that combines analytic tactics may allow a more convincing demonstration of experimental control than a design using a single analytic tactic.
– Combining multiple baseline, reversal, and/or multielement tactics can provide the basis for comparing the effects of two or more independent variables (see Figure 10.2).
– Alternating treatments tactics can be incorporated into experimental designs containing multiple baseline elements.
Figure 10.2 Experimental design employing multiple baselines across settings and reversal tactics counterbalanced across two subjects to analyze the effects of time-out (TO) and differential reinforcement of other behavior (DRO) treatment conditions.

Treatment Packages
A behavioral intervention consisting of multiple components is termed a treatment package.
Practitioners and researchers create, implement, and evaluate treatment packages for many reasons:
– They may believe two or more research-based interventions will be more effective than any of the component interventions in isolation.
– The package may be effective across a wider range of settings or participant characteristics.
– An intervention known to be mildly effective in isolation may, when implemented in conjunction with other research-based interventions, yield an additive, value-enhancing effect.
– A package may be assembled in an effort to change a behavior that has resisted numerous interventions.

Component Analyses
A component analysis is any experiment designed to identify the active elements of a treatment package, the relative contributions of different components in a treatment package, and/or the necessity and sufficiency of treatment components.
Two methods for conducting component analyses:
– Drop-out component analysis: the investigator presents the treatment package and then systematically removes components.
▪ If the treatment's effectiveness wanes when a component is removed, the researcher has identified a necessary component.
– Add-in component analysis assesses components individually or in combination before the complete treatment package is presented.
▪ Can identify sufficient components.
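The two component-analysis strategies above can be sketched as schedules of experimental conditions. This is a hypothetical illustration; the component names and helper functions are not from the text.

```python
# Hypothetical sketch of drop-out vs. add-in component analysis.
# The package components below are illustrative examples only.

from itertools import combinations

PACKAGE = ("token economy", "response cost", "praise")

def dropout_conditions(package):
    """Drop-out analysis: present the full package first, then remove
    one component at a time. A loss of effect when a component is
    absent identifies that component as necessary."""
    conditions = [tuple(package)]
    for component in package:
        conditions.append(tuple(c for c in package if c != component))
    return conditions

def addin_conditions(package):
    """Add-in analysis: assess components singly and in combination
    before presenting the complete package. An effect from a partial
    set identifies sufficient components."""
    conditions = []
    for size in range(1, len(package) + 1):
        conditions.extend(combinations(package, size))
    return conditions

print(dropout_conditions(PACKAGE))   # 4 conditions: full, then 3 removals
print(addin_conditions(PACKAGE))     # 7 conditions, ending with the full package
```

In practice each condition would be evaluated within a single-case design (e.g., an alternating treatments or reversal arrangement) rather than simply listed, but the schedules show why add-in analyses speak to sufficiency and drop-out analyses to necessity.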
Internal Validity
Experiments that demonstrate a clear functional relation have a high degree of internal validity.
An experimental design's strength is determined by the extent to which it
– demonstrates a reliable effect
– eliminates or reduces the likelihood that factors other than the independent variable produced the behavior change
Experimental control is often used to signify a researcher's ability to reliably produce a specified behavior change by manipulating an independent variable.
The level of experimental control obtained by a researcher refers to the extent to which she controls all relevant variables in a given experiment.

Confounding Variables
Uncontrolled factors known or suspected to have exerted influence on the dependent variable are called confounding variables.
Confounding variables can be viewed as related primarily to one of four elements of an experiment:
– Subject
– Setting
– Measurement of the dependent variable
– Independent variable

Subject Confounds
Maturation, which refers to changes that take place in a subject over the course of an experiment, is a potential confounding variable.
– Experimental designs that incorporate rapidly changing conditions or multiple introductions and withdrawals of the independent variable over time usually control for maturation effectively.
A subject's behavior may also be influenced by events that occur outside the experiment.
– Repeated measurement is both the control for and the means to detect the presence and effects of such variables.
Concern that the characteristics of one or more subjects may confound an experiment's results is generally not an issue in single-case experiments.
– Each participant in a single-case study serves as her own control.
– The extent to which a functional relation applies to other subjects is established by replicating the experiment with different subjects.

Setting Confounds
Most applied behavior analysis studies are conducted in natural settings where a host of variables are beyond the investigators' control.
Such studies are more prone to confounding by uncontrolled events than are studies conducted in laboratories, where extraneous variables can be eliminated or held constant.
"Bootleg" reinforcement: when, unbeknownst to the experimenter, subjects have access to the same items or events used as putative reinforcers in the study.

Measurement Confounds
Numerous sources of confounding may exist within a well-planned measurement system.
Data might be confounded by:
– Observer drift
– The influence of the experimenter's behavior on observers
– Observer bias
Unless a completely unobtrusive measurement system is devised (e.g., a covert system using one-way mirrors, or observations conducted at some distance from the subject), reactivity to the measurement procedure must always be considered.

Independent Variable Confounds
Most independent variables are multifaceted.
– There is more to a treatment condition than the specific variable of interest.
A placebo control separates effects that may be produced by a subject's perceived expectations of improvement.
When neither the subject(s) nor the observers know whether the independent variable is present or absent from session to session, this type of control procedure is called a double-blind control.
Treatment Integrity
Treatment integrity refers to the extent to which the independent variable is implemented as planned.
Procedural fidelity refers to the extent to which procedures in all conditions of an experiment, including baseline, are correctly implemented (Ledford & Gast, 2014).
Low treatment integrity invites a major source of confounding into an experiment, making it difficult, if not impossible, to interpret the results with confidence.

Threats to Treatment Integrity
Experimenter bias
– May influence the researcher to administer the independent variable in such a way that it enjoys an unfair advantage over baseline or comparative conditions.
Treatment drift
– The application of the independent variable differs from the way it was applied at the study's outset.

Methods for Ensuring Treatment Integrity (1 of 2)
Developing a complete and precise operational definition of the treatment procedures
– Requisite for meeting the technological dimension of applied behavior analysis
– Clear, concise, unambiguous, and objective
– Operationally defined in each of four dimensions: verbal, physical, spatial, and temporal
Simplifying and standardizing the independent variable
– Treatments that are simple, precise, brief, and require little effort are more likely to be delivered with consistency
– Simple, easy-to-use techniques have a higher probability of being accepted and used
– Standardize as many aspects of the treatment as cost and practicality allow
Methods for Ensuring Treatment Integrity (2 of 2)
Providing criterion-based training and practice for the people who will be responsible for implementing the independent variable
– Scripts detailing treatment procedures
– Cue cards or other devices that remind and prompt people through steps of an intervention
– Various prompting tactics
– Performance feedback
– Self-monitoring
Such training provides the necessary skills and knowledge to carry out the treatment.

Methods for Measuring Treatment Integrity
Procedural fidelity data reveal the extent to which the actual implementation of all experimental conditions over the course of a study matches their descriptions in the method section of a research report.
Observation and recording of the independent variable
– Provide the experimenter with data indicating whether calibration of the treatment agent is necessary
– Give the researcher an ongoing ability to use retraining and practice to ensure a high level of treatment integrity over the course of an experiment
Graphic displays of treatment integrity data may help researchers and consumers of a study judge the intervention's effectiveness.

Social Validity
"Clients must understand and admire the goals, outcomes, and methods of an intervention" (Risley, 2005, p. 284).
The social validity of an applied behavior analysis study should be assessed in three ways:
– The social significance of the behavior change goals
– The appropriateness of the intervention
– The social importance of the results
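The observation-and-recording approach to measuring treatment integrity is often reduced to a simple per-session measure: the percentage of planned treatment steps implemented correctly. A minimal sketch, assuming a hypothetical step checklist and an illustrative 80% retraining criterion (neither is from the text):

```python
# Minimal sketch of a per-session treatment-integrity score: the
# percentage of planned treatment steps an observer scored as
# correctly implemented. Step names and the 80% criterion are
# hypothetical examples, not procedures from the chapter.

def fidelity_percent(steps_observed):
    """Percentage of checklist steps scored as correctly implemented."""
    correct = sum(1 for done in steps_observed.values() if done)
    return 100.0 * correct / len(steps_observed)

session = {
    "delivered prompt within 5 s": True,
    "provided praise contingent on response": True,
    "withheld attention during problem behavior": False,
    "recorded trial outcome": True,
}

score = fidelity_percent(session)
print(f"{score:.0f}% of steps implemented as planned")  # 75%
if score < 80.0:  # hypothetical criterion for retraining
    print("fidelity below criterion: schedule retraining and practice")
```

Plotting such scores session by session is one way to produce the graphic displays of treatment-integrity data mentioned above.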
Validating the Social Importance of Behavior Change Goals
Validation begins with a clear description of those goals.
Experts can inform the selection of socially valid target behaviors.
Persons who use the prospective skill in natural environments can help researchers identify socially valid target behaviors.
Approaches for determining socially valid goals:
– Assess the performance of persons considered competent
– Experimentally manipulate different levels of performance to determine empirically which produces optimal results

Validating the Social Acceptability of Interventions
Several scales and questionnaires for obtaining consumers' opinions of the acceptability of behavioral interventions have been developed:
– Intervention Rating Profile (IRP-15)
– Behavior Intervention Rating Scale
– Treatment Acceptability Rating Form (TARF)
Some investigators present graphic displays of treatment acceptability data.
Acceptability can also be validated by letting practitioners or clients choose which of multiple treatments to implement or receive, and by evaluating the extent to which an intervention meets the standards of best practice and the ethical, legal, and professional standards of relevant learned and professional societies.

Validating the Social Importance of Behavior Change
Applied behavior analysts assess the social validity of outcomes with a variety of methods:
– Have consumers rate the social validity of participants' performance
– Ask experts to evaluate participants' performance
– Compare participants' performance to that of a normative sample
– Assess with standardized assessment instruments
– Test participants' performance in the natural environment
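The normative-sample comparison listed above can be sketched as a simple range check: did the participant's post-intervention performance move into the range observed among competent peers? All names and numbers here are hypothetical.

```python
# Hypothetical sketch of a normative-sample comparison for judging the
# social importance of a behavior change. Rates below are invented.

def within_normative_range(score, normative_sample):
    """True if the score falls between the sample's min and max."""
    return min(normative_sample) <= score <= max(normative_sample)

peer_rates = [12, 15, 9, 14, 11]   # responses per minute for 5 hypothetical peers
baseline_rate = 2                  # participant before intervention
post_treatment_rate = 13           # participant after intervention

print(within_normative_range(baseline_rate, peer_rates))        # False
print(within_normative_range(post_treatment_rate, peer_rates))  # True
```

Real normative comparisons typically use a band around the peers' performance (e.g., mean plus or minus some spread) rather than the raw min-max range; the range check is the simplest possible version of the idea.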
External Validity and Between-Groups Research Design
Group design researchers assume that including many subjects in an experiment increases the external validity of the results.
Demonstrating a functional relation with various subjects in different settings is exactly how applied behavior analysts document external validity.
A between-groups experiment does not demonstrate a functional relation between the behavior of any subject and some aspect of his or her environment.

External Validity and Applied Behavior Analysis
Behavior analysts assess, establish, and specify the external validity, or scientific generality, of single-case research findings by replicating experiments.
Replication means repeating a previous experiment.
– Two major types of replication:
▪ Direct
▪ Systematic

Direct Replication
In a direct replication, the researcher makes every effort to duplicate exactly the conditions of an earlier experiment.
If the same subject is used in a direct replication, the study is an intrasubject direct replication.
– The primary tactic for establishing the existence and reliability of a functional relation
An intersubject direct replication maintains every aspect of a previous experiment except that different, although similar, subjects are involved.
– The primary method for determining the extent to which research findings have generality across subjects
– Intersubject replication is the rule in applied behavior analysis
Systematic Replication
In a systematic replication, the researcher purposefully varies one or more aspects of a previous experiment:
– Subjects
– Settings
– Administration of the independent variable
– Target behaviors
Systematic replication refers to concerted and directed efforts to establish and specify the generality of a functional relation.
– Replications across subjects sometimes reveal different patterns of effects.
– Some replications attempt to reproduce the results reported by another researcher in a different situation or context.
– Researchers sometimes report multiple experiments,
▪ each experiment serving as a systematic replication.
– Systematic replication is evident when a research team pursues a consistent line of related studies.

Evaluating Applied Behavior Analysis Research—Internal Validity
Decide whether a functional relation has been demonstrated through examination of:
– The measurement system
– The experimental design
– The degree to which the researcher controlled potential confounds
– Visual analysis and interpretation of the data
▪ Two types of errors:
– A Type I error (also called a false positive) is made when the researcher concludes that the independent variable had an effect on the dependent variable, when in truth no such relation exists in nature.
– A Type II error (also called a false negative) is the opposite of a Type I error. In this case, the researcher concludes that an independent variable did not have an effect on the dependent variable, when in truth it did.
▪ Limitation:
– Poor interrater agreement on whether various data patterns demonstrate experimental control

Figure 10.12 Ideally, an experimental design and methods of data analysis help a researcher conclude correctly that a functional relation between the independent and dependent variables exists (or does not exist) when in fact such a relation does (or does not) exist in nature.
Concluding that the results of an experiment reveal a functional relation when no such relation exists in nature is a Type I error. Conversely, concluding that the independent variable did not have an effect on the dependent variable when such a relation did occur is a Type II error.

Evaluating Applied Behavior Analysis Research—Social Validity
The reader of a published study in applied behavior analysis should judge:
– The social significance of the target behavior
▪ Will an increase (or decrease) in the measured dimension of this behavior improve the person's life directly or indirectly?
– The appropriateness of the procedures
▪ Acceptability
▪ Practicality
▪ Cost
– The social importance of the outcomes
▪ Improvements in behavior are most beneficial when
– they are long-lasting,
– appear in other appropriate environments, and
– spill over to other related behaviors.

Evaluating Applied Behavior Analysis Research—External Validity
The generality of a behavior–environment relation can be established only through the active process of systematic replication.
The reader of an applied behavior analysis study should compare the study's results with those of other published studies sharing relevant features.
Examine previous studies in the literature and compare the results of those studies with the results of the current experiment.
Evaluating Applied Behavior Analysis Research—Theoretical Significance and Conceptual Sense
A published experiment should also be evaluated in terms of its scientific merit.
Baer, Wolf, and Risley (1987) emphasized the need to shift from demonstrations of behavior changes to a more complete analysis and conceptual understanding of the principles that underlie the successful demonstrations.
Component analyses and parametric analyses are necessary steps toward a more complete understanding of behavior.
Evaluation should include the authors' technological description of the experiment as well as their interpretation and discussion of the results.
Readers should consider the level of conceptual integrity displayed in an experimental report.

Copyright
This work is protected by United States copyright laws and is provided solely for the use of instructors in teaching their courses and assessing student learning. Dissemination or sale of any part of this work (including on the World Wide Web) will destroy the integrity of the work and is not permitted. The work and materials from it should never be made available to students except by instructors using the accompanying text in their classes. All recipients of this work are expected to abide by these restrictions and to honor the intended pedagogical purposes and the needs of other instructors who rely on these materials.