Stimulus Preference and Reinforcer Assessment Applications - Chapter 1 PDF

Summary

This chapter details stimulus preference and reinforcer assessment applications in behavior analysis. It examines how environmental and stimulus factors influence behavior change. Various methods for assessing reinforcing stimuli are discussed to support the identification of effective strategies for behavior-change applications.

Full Transcript


Chapter 1
Stimulus Preference and Reinforcer Assessment Applications
Martin T. Ivancic, Western Carolina Center

The practice of assessing reinforcers includes a large portion of the work of behavior analysis. That is, almost any procedure that in application increases a behavior or in removal decreases a behavior is either a demonstration of the effect of a reinforcer or a demonstration of some dimension of that reinforcer's effect. The applied history of reinforcer assessment has involved searching for conditions that produce optimal effects for changing an individual's behavior that are durable, as well as efficient to conduct. Often, such procedures have addressed the behavioral excesses or deficiencies of the population of people with developmental disabilities, but this is probably more an artifact of this group's extreme need for effective treatment than an indication of who can benefit from a reinforcer assessment procedure.

What is currently understood about behavior-change interventions has been discovered by analyzing the conditions under which the future frequency of behavior is changed (Skinner, 1953; 1966). That is, target responses have been observed to change in some measurable, predictable way only when specified conditions are present. Observing this effect repeatedly across time, with other responses, in different settings, or with other individuals supports the belief that these are the conditions responsible for the repeated change (Baer, Wolf, & Risley, 1968; 1987). There are three interlocking variables, broadly referred to as the "contingencies of reinforcement" or the "three-term contingency," that are considered in explaining how environmental conditions relate to behavior changes for a particular organism. These conditions include that which has occurred prior to, during, and following the occurrence of a target response (Skinner, 1966; 1969). The stimulus change that occurs following a response and is demonstrated to increase the frequency of that response is known as "reinforcement" (see Michael, 1993a for more detail). This stimulus change, the third component of the three-term contingency, has become a major focus of applied behavior-change procedures. The use of reinforcers to change behavior has probably become popular because of its general effectiveness and because of the relative simplicity of utilizing only one component of the complex behavior-change process. Hence, surveys of behavioral consequences known as reinforcer assessments have become frequently and successfully used as tools for determining optimal behavior-change programming by identifying particular stimuli to be used as reinforcers.

However, if this chapter were to focus exclusively on reviewing procedures that simply surveyed potentially effective behavioral consequences, there would be a danger of proliferating the mistaken idea that a given stimulus change following a response changes that behavior regardless of the situation (and behavior) that is used (for a discussion of the "trans-situational hypothesis," see Meehl, 1950; Timberlake & Farmer-Dougan, 1991). The fact remains that a behavioral consequence acts as a reinforcer to change behavior only as it functions within the interlocking variables of its contingencies, including the organism's current status or history of such change (i.e., conditions prior to the response) and the response itself.
Although behavior has often been changed without addressing these other conditions, the change cannot be understood to have occurred without the function of all three of these variables. In fact, there are often situations in the practice of changing behavior in which simply applying a specific consequence following a behavior does not result in the predicted behavior change. In these situations, it becomes important to appreciate how research has demonstrated the effects of other variables related to establishing a stimulus as a reinforcer. These variables will be reviewed under this topic of reinforcer assessment. To summarize, in order to avoid the tendency to believe different qualities and/or intensities of behavioral consequences have their own independent effects, this chapter will include research not only on popular survey reinforcer assessment procedures, but also on antecedent and response variables as they relate to behavioral deficits and excesses, in order to more completely understand the effects of reinforcers identified in standard reinforcer assessments.

Standard Reinforcer Assessments

Early Surveys

Surveys used to identify stimuli with potential reinforcing effects began with investigations of edibles and manipulatibles (Bijou & Sturges, 1959; Ferster & DeMeyer, 1962), but soon sensory stimuli, based on a wealth of basic research (Kish, 1966), were included in these investigations. One reason sensory stimulation was initially included in research for people with developmental disabilities was evidence that there was better responding to sensory stimulation by people with more severe developmental disabilities than by higher functioning individuals (Cantor & Hottel, 1955; Ohwaki, Brahlek, & Stayton, 1973; Stevenson & Knights, 1961). Also, sensory stimulation was shown to be more durable (Bailey & Meyerson, 1969) than commonly used consumables, which were candidates for rapid satiation (Rincover & Newsom, 1985). Some early examples of sensory reinforcer assessments addressed receptors related to audition (Remington, Foxen, & Hogg, 1977), touch (Rehagen & Thelen, 1972), vision (Rynders & Friedlander, 1972), vibration (Murphy & Doughty, 1977), and vestibular stimulation (Sellick & Over, 1980), as well as social stimulation (Harzem & Damon, 1976), food, and manipulatibles.

Pace, Ivancic, Edwards, Iwata, and Page (1985) took an important step in the systematic advancement of reinforcer assessment technology by offering an assessment that not only surveyed a reasonable variety of stimuli, but also used the stimuli identified as preferred as reinforcers to change socially relevant behavior (e.g., increased compliance). This two-part procedure of first identifying preferences and then demonstrating their behavior-change effects became the framework from which a technology of reinforcer assessment began to develop. The first part, the stimulus preference assessment, is a fairly efficient review of a number of potentially reinforcing stimuli to determine which stimuli are most (and least) preferred. The second part, the reinforcer evaluation, uses one or more of these stimuli in some design to demonstrate their reinforcing effects.

Stimulus Preference Assessments

Stimulus preference assessments are conducted by making presentations of available stimuli and observing for approach or preference responding. When stimuli are presented singly, avoidance or non-preference is sometimes measured (Pace et al., 1985).
Approach responding is generally defined as "a voluntary movement toward the stimulus, maintaining eye contact with the stimulus for at least three seconds, exhibiting a positive facial expression, or making a positive vocalization within five seconds of the stimulus presentation" (Green et al., 1988). Avoidance of a non-preferred stimulus is typically defined as "exhibiting a negative vocalization, pushing the stimulus away, or making a movement away from the stimulus within five seconds of the presentation" (Green et al., 1988).

Some preference assessments have utilized duration of contact between simultaneously available materials as the measurement of preference (Favell & Cannon, 1976). Others have used trial presentation because explicit presentation of a stimulus item not only allows multiple stimuli to be assessed in a relatively short period of time, but also offers an opportunity to insure the participant comes in contact with the available stimulus. All direct-observation preference assessments utilize a procedure to insure contact with the stimulus, either in a noncontingent application before each assessment trial (Green et al., 1988) or in a post-trial application in which a second presentation is made on trials when no response occurred to the first presentation (Fisher et al., 1992; Pace et al., 1985).

Presenting two stimuli at once (Fisher et al., 1992; Witryol & Fischer, 1960) has proven to differentiate between preferred stimuli much better than the single-stimulus presentation method (e.g., Pace et al., 1985). To be more descriptive, if money were a preferred stimulus, single presentations of a dime or a dollar would likely generate identical absolute rates of approach responding. However, the proportion of approach responding when a choice between the dime and the dollar is given (i.e., presenting them together) is likely to be more sensitive in demonstrating that preference for the dollar is greater than for the dime (for a more thorough discussion of choice responding, see Fisher & Mazur, 1997). In addition, the paired-choice procedure has been shown to be more sensitive in showing differences in preference for visually impaired people who appeared to be predisposed to manipulate all presented objects (Paclawskyj & Vollmer, 1995). The single-stimulus presentation preference assessment is the simplest procedure to provide but often yields an overestimate of preference (Fisher et al., 1992). However, the single-stimulus assessment may still be useful to identify preferences for individuals who show clear differential choice, for individuals who have difficulty responding to a choice occasion (Ivancic & Bailey, 1996), or when non-preferred stimuli are the focus of the investigation (Fisher et al., 1994). Single-stimulus presentation procedures have used from 10 trials per stimulus (Pace et al., 1985) to 30 trials per stimulus (Green et al., 1988; Green, Reid, Canipe, & Gardner, 1991) to assess preference.

Paired-stimulus presentation procedures (also referred to as choice, concurrent-operant, or forced-choice assessments) offer each stimulus with every other stimulus (Fisher et al., 1992). Stimulus presentations are typically counterbalanced for stimulus item and presentation position to control for sequence or position preferences.
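As a concrete illustration of the paired-stimulus arrangement just described, the following Python sketch generates a counterbalanced trial list and scores approach percentages. It is not from the chapter; the function names and the shuffled trial ordering are illustrative assumptions.

```python
from itertools import combinations
import random

def paired_stimulus_trials(stimuli):
    """Build the trial list for a paired-stimulus assessment: every
    stimulus is paired with every other stimulus once in each left/right
    position, counterbalancing item and position."""
    trials = []
    for a, b in combinations(stimuli, 2):
        trials.append((a, b))   # a presented on the left
        trials.append((b, a))   # positions reversed
    random.shuffle(trials)      # randomize trial order
    return trials

def preference_percentages(trials, selections):
    """selections[k] is the stimulus approached on trials[k], or None if
    neither was approached. Returns the percentage of presentations on
    which each stimulus was selected."""
    presented, chosen = {}, {}
    for (left, right), pick in zip(trials, selections):
        for s in (left, right):
            presented[s] = presented.get(s, 0) + 1
        if pick is not None:
            chosen[pick] = chosen.get(pick, 0) + 1
    return {s: 100.0 * chosen.get(s, 0) / presented[s] for s in presented}
```

With five stimuli this yields 20 trials (each of the 10 pairs in both positions), each stimulus appearing in eight of them; the resulting percentages can then be compared against a selection criterion such as the 80% guideline discussed later in the chapter.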
In an attempt to increase the efficiency of conducting the stimulus preference assessment, procedures offering multiple-stimulus presentations (more than two) have been used to assess preferences (DeLeon & Iwata, 1996; Windsor, Piche, & Locke, 1994). When DeLeon and Iwata removed previously chosen items from multiple-stimulus arrays, this multiple-stimulus-without-replacement (MSWO) format was found to provide rankings of stimulus preference similar to procedures in which stimuli were presented in pairs. In addition, the MSWO format, which presented the stimuli in a group and only five times each, was more efficient than the others. Commenting on this efficiency, Fisher and Mazur (1997) suggested that the most important clinical advancement of the MSWO procedure may not be its overall efficiency, but that a similar procedure could be used before or during training sessions by allowing a participant to select one or two potential reinforcers from an array of several (cf. Mason, McGee, Farmer-Dougan, & Risley, 1989), thereby taking advantage of any momentary state of motivation for available stimuli. However, even though presenting a large array of stimuli may be an efficient stimulus preference assessment for some, there is still a physical limit to the number of stimuli an individual can attend to at once, and this may be even more true for people who have the most severe disabilities.
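The MSWO removal-and-ranking logic described above can be summarized in a short sketch. This is an illustrative reconstruction, not code from the chapter: the `choose` callable stands in for the participant's selection on each presentation, and the five-pass default reflects the five-presentations-per-stimulus format reported by DeLeon and Iwata (1996).

```python
import statistics

def mswo_ranking(stimuli, choose, passes=5):
    """Rank stimuli via a multiple-stimulus-without-replacement assessment.
    choose(remaining) returns the item the participant approaches, or None
    if no selection occurs; chosen items are removed from the array, so
    earlier selection positions indicate stronger preference."""
    positions = {s: [] for s in stimuli}
    for _ in range(passes):
        remaining = list(stimuli)
        rank = 1
        while remaining:
            pick = choose(remaining)
            if pick is None:            # no approach ends this pass
                break
            positions[pick].append(rank)
            remaining.remove(pick)      # without replacement
            rank += 1
    # Lower mean selection position = higher preference; never-chosen
    # items sort to the bottom.
    return sorted(
        stimuli,
        key=lambda s: statistics.mean(positions[s]) if positions[s] else float("inf"),
    )
```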
An important part of stimulus preference assessments is offering a convincing representation of the stimuli with reinforcer potential for an individual. Pace et al. (1985) made an apparent attempt to survey the universe of stimuli available by offering 16 different stimuli that addressed seven sensory modalities (i.e., visual, auditory, olfactory, gustatory, tactile, thermal, vestibular) and social stimulation. Others have chosen stimuli most likely to be found available and most easily provided (Green et al., 1988). It is possible that further investigation into different qualities of stimulation or receptors may produce preferred stimulation currently not included in standard assessments. For example, sensory assessments (e.g., Reisman & Hanschu, 1992) may include activities which stimulate joints via deep muscle receptors (e.g., "pushes against [things]", "holds arms flexed"), which have been demonstrated as reinforcers (Favell, McGimsey, Jones, & Cannon, 1981; Foxx & Dufrense, 1984) but have not yet been included in standard stimulus preference assessments. Nevertheless, increasing the number of stimuli utilized in a stimulus preference assessment increases the effort to conduct it and likely decreases the probability of completing the assessment. In addition, no assessment can presume to include the potentially infinite number of stimuli available.

Efforts to simplify the process of stimulus preference assessment with a standard list of stimuli found that staff opinion was not as accurate in identifying preferences as more formal assessments were (Green et al., 1988). However, there has been agreement with more formal assessments on stimuli that staff report a participant likes the most (Green et al., 1991). Further, Fisher, Piazza, Bowman, and Amari (1996) found stimuli identified by caregivers in their Reinforcer Assessment for Individuals with Severe Disabilities (RAISD) were more potent reinforcers than those identified in standard reinforcer assessments. The RAISD is a structured interview that uses general sensory domains as guidelines to ask familiar staff members questions about an individual's preferences. Bowman, Fisher, et al. (1997) reported that the 15 to 20 minutes required to use the RAISD, combined with an MSWO assessment, could allow accurate identification of preferences for a person in less than an hour. The authors suggested that such an efficient technology should begin to make stimulus preference assessments, which are now largely conducted only in clinics and hospitals, available for everyday use in schools and residential facilities. Still, it appears that for a clinician who does not have the advantage of interviewing familiar caregivers, more standardized assessments may still be reasonably successful in identifying a reinforcer for an individual by presenting lists of commonly available, easy-to-provide stimuli (Green et al., 1991).

There may be a tendency for a clinician to use a person's failure to show preference in a reinforcer assessment as an indication that this individual is untrainable. Of course, this is not true. A person could never be proven untrainable merely by showing an inability to respond to a given procedure (for a related discussion see Baer, 1981) unless every procedure had been used and all stimuli had been assessed. That is, claiming one set of stimuli was not useful in training an individual is very different from saying no stimuli could be used to train that person.¹ Failure to identify a reinforcer for an individual represents exactly that: a situation in which a person was not trained with a given procedure.

Preference assessment limitations. Much of the work with preference assessments has been shaped by individuals with extreme needs who failed to show reinforcer effects (e.g., Green et al., 1991). Ivancic and Bailey (1996) provided reinforcer assessments to 10 individuals determined to have chronic training deficits. None of these participants showed reinforcer effects, despite the use of common switch-activated responses (leaf or panel switches) that required minimal effort (range, 57-341 g of force) and were "bridged" with automatic auditory and visual feedback to decrease any delay between the switch response and the manually delivered consequence. In addition, it was reported that the individuals who failed to show reinforcer effects in this study, as a group, showed minimal movement (Experiment 2; see also Giacino & Zasler, 1995). It may come as no surprise that individuals who show minimal overt responding may also have difficulty demonstrating the effect of reinforcement based on increasing overt behavior. The trainability question for these individuals could be pursued by selecting a different variety of stimuli, further minimizing the effort of the response, or changing some other aspect of the behavior-change process. However, a more rational solution to addressing the needs of these individuals, given their minimal responsiveness, may be to rely on responses currently available in their repertoires to increase contact with stimulation that can be identified as preferred.

Utilizing a technology which recently demonstrated forms of approach responding (e.g., smiling and laughing) as "happiness" indices (Green & Reid, 1996), others have increased happiness indices for people with chronic training problems, but not minimal responsiveness (Ivancic, Barrett, Simonow, & Kimberly, 1997).
It appears that stimulus preference assessments may be used to identify preferred stimulation that may or may not be able to function as reinforcers, but that may hold value for noncontingently enriching the environment of a person who has difficulty operating independently in the environment.

For those who do show preferences in single- and paired-stimulus assessments, an approach response on 80% of the opportunities has been used as a general guideline for inclusion in subsequent reinforcer evaluations (Fisher et al., 1992; Green et al., 1988; Pace et al., 1985). Nevertheless, others have shown that stimuli that are eventually shown to be effective as reinforcers may show a very low approach rate (i.e., may be displaced) in multiple-stimulus formats due to particularly strong preferences (e.g., food), unless arrangements are made to separate these overpowering stimuli from others that are being assessed (DeLeon, Iwata, & Roscoe, 1997).

Reinforcer Evaluations

A preference assessment makes no contribution to a reinforcer assessment unless the stimuli identified as preferred can be used to change behavior (i.e., used as reinforcers). Relative to the efficiency of most preference assessments, reinforcer evaluations are more effortful to conduct, per stimulus, given the need for a design (e.g., reversal, multielement) to analyze whether the stimulus actually changed behavior. In addition, the effects of the reinforcer in applied research are judged by a socially significant standard of behavior change (Baer et al., 1968; 1987). A separate requirement of social significance is not typically required for responses emitted in a preference assessment.

Pace et al. (1985) used preferred stimuli following simple requests (e.g., percentage of trials with compliance) in evaluating their preferred stimuli as reinforcers. Green et al. (1991) measured mean (reduced) prompt level during skill-acquisition training trials to evaluate their reinforcers. However, as the different variables of reinforcer assessment procedures began to be investigated, the response used during the reinforcer evaluation became more minimal and standard, hence less socially significant, in order to reveal the effect of the stimulus without being confounded by the effort of the response. For example, Fisher et al. (1992) evaluated the reinforcing effect of preferred stimuli by using easily trained in-square or in-chair behavior in a free-operant (unprompted) situation in which engaging in the targeted response resulted in receiving the associated stimulus under evaluation. Subsequent research has added a no-consequence area to this procedure to control for area preference (e.g., Fisher, Thompson, Piazza, Crosland, & Gotjen, 1997). Consistent choice between the two areas offering different stimulus opportunities shows the relative reinforcing value of each stimulus. Others have used free-operant responding of minimal effort with microswitches (Ivancic & Bailey, 1996) or head turning (Piazza, Fisher, Hanley, Hilker, & Derby, 1996) to evaluate preferred stimuli as reinforcers.

Using a RAISD interview and a paired-choice assessment to identify preferred stimuli, and in-chair/in-square responding in the reinforcer evaluation, Piazza, Fisher, Hagopian, Bowman, and Toole (1996) showed that relative rankings of high-, medium-, and low-preference stimuli predicted reinforcer efficacy.
That is, stimuli showing the highest preference consistently functioned as reinforcers (compared to medium- and low-preference stimuli), medium-preference stimuli sometimes functioned as reinforcers, and low-preference stimuli rarely did. Nevertheless, other work has shown that highly preferred stimuli may be ineffective in reducing high-rate problem behaviors, thus indicating that the identification of reinforcers for simple behaviors such as those mentioned above may not adequately predict reinforcer effectiveness for other behaviors (Piazza, Fisher, Hanley et al., 1996).

Alternative Procedures for Identifying Reinforcer Effects

Reinforcer surveys evaluate different qualities of stimulus consequences to determine reinforcement value. However, the "interlocked" nature of the contingencies of reinforcement indicates that the effect of a stimulus consequence on a given response is not only related to the other contingencies, but that the effects of one contingency can constrain or facilitate the effect of another. That is, a given behavioral consequence can be made more or less effective by the action of the other contingencies. Michael (1993b) called the antecedent conditions to a given response that are responsible for momentary changes in the effectiveness of that stimulus consequence an establishing operation², and considered the effect of the establishing operation so important that he identified it as the "fourth" term in the three-term contingency of operant relations. Applied research explicitly identified as addressing establishing operations will be reviewed in the rest of this chapter, as well as important work identifying the effectiveness of stimuli within choice methodologies. In addition, applied work will be mentioned characterizing response changes in relation to different reinforcement schedules and response rates.

Establishing Operations

One antecedent condition that obviously affects the momentary effectiveness of a reinforcer is the continuum of deprivation and satiation of a stimulus. In animal research, ongoing schedules of food and water consumption are commonly used techniques for maintaining the effectiveness of stimuli used as reinforcers. Naturally occurring events are rarely, if ever, disrupted in applied research. However, it has recently been demonstrated that reinforcers are more and less effective at different moments during a routine day. Vollmer and Iwata (1991) demonstrated the differential effectiveness of food, music, and attention during periods of satiation and deprivation. This study suggests that the judicious scheduling of training sessions, or of events just prior to training sessions, may be an important procedure (i.e., an establishing operation) for increasing the effectiveness of the stimuli used as reinforcers. Another study showed that unequal durations of consequences provided during functional analyses made attention, which was a functionally relevant stimulus for disruptive behavior, more effective (Fisher, Piazza, & Chiang, 1996). Smith, Iwata, Goh, and Shore (1995) showed how individuals exhibiting escape-maintained self-injury emitted more behavior when (a) new tasks were presented (novelty) compared to the same task (for two participants), (b) sessions lasted longer relative to shorter sessions (for five participants), and (c) more instructions were presented during sessions (for two participants).
The authors suggested that identifying the functional properties of behavior problems (e.g., escape) can lead to selection of an antecedent procedure, or a combination of procedures (e.g., decreasing novelty and shortening sessions), based on those functions, which can not only treat the problem but also contribute to our knowledge of behavioral processes.

Using Choice Procedures to Assess Basic Processes

Recently, a proliferation in the use of choice procedures as assessment tools has resulted in many extensions of the work identifying individual reinforcement sensitivities for several variables, including reinforcer quality, reinforcer delay, schedule of reinforcement, and response effort (Horner & Day, 1991; Mace & Roberts, 1993; Neef, Mace, & Shade, 1993). The choice situation, in which each response alternative is made distinctive (Nevin & Mace, 1994) and responding is allocated to the available reinforcers (Fisher & Mazur, 1997), is considered to be most representative of responding that occurs in the natural environment (McDowell, 1988). Fisher and Mazur, who provide an extensive review of the basic and "bridge" research (i.e., basic research findings demonstrated in applied settings) toward the clinical use of choice methods, suggested that the commonly used procedure of differential reinforcement be conceptualized as a choice between concurrently available reinforcement. In such a paradigm, a participant can choose either an inappropriate (often dangerous or harmful) response or a more appropriate alternative, and the clinician has the opportunity to arrange contingencies that favor the alternative response.

Choice-Making Procedures to Increase Life Quality

Choice procedures have not only been useful in the assessment of stimulus preference; the making of that choice is itself considered an important component of quality of life for disabled people (Felce & Perry, 1995). Such applications make important advances, not only in helping improve reinforcer effectiveness, but also in clarifying what is meant by words such as "quality of life." While an individual's choice between stimulus alternatives is presumed by some (Bannerman, Sheldon, Sherman, & Harchik, 1990) and assessed by others (e.g., Dyer, Dunlap, & Winterling, 1990) to have more value than when one's reinforcers are selected by another, other investigations do not always show an absolute value for choice (Kahng, Iwata, DeLeon, & Worsdell, 1997; Parsons, Reid, Reynolds, & Bumbarger, 1990). Lerman et al. (1997) found, for six people with severe disabilities, that reinforcer choice did not improve their task performance when reinforcer items in the no-choice situation (yoked to the rate provided in the choice condition) were assessed as highly preferred.

Fisher et al. (1997) investigated whether the effect of choice making occurred because choosing brought opportunities to obtain more highly valued consequences or because choice making itself had high value. To determine this, the authors taught three children with developmental disabilities to press three distinctive switches. The first two switches resulted in reinforcement and the third did not (extinction), controlling for automatically reinforced switch responding.
In Experiment 1, after teaching participants to respond on variable-interval (VI) schedules, the authors compared the responding allocated to the three switches when the first switch was followed by a choice between highly preferred stimuli and the second switch was followed by a no-choice presentation of these same highly preferred stimuli, yoked to the rate of reinforcement in the previous choice session. Participants always chose the switch associated with the choice of highly preferred stimulation. In Experiment 2, the consequences for switch activation were changed so that the first switch gave the participants a choice between two less preferred stimuli and the second switch provided various no-choice amounts of the more highly preferred stimuli. In this experiment participants responded more in the no-choice condition. That is, when stimuli available through the choice and no-choice options were equated (Experiment 1), participants preferred to select their reinforcers themselves, but when higher quality reinforcers were available through the no-choice option (Experiment 2), they preferred to have the therapist select their reinforcer for them.

Regardless of the absolute value of choice, choice may become an important component of valid treatment selection by determining an individual's preference for treatment alternatives. Hanley, Piazza, Fisher, Contrucci, and Maglieri (1997), using a concurrent-chain procedure with three switches, determined client preference (Phase 3) by rates of responding on the switches (the initial link of the chain). The first two switches each activated two minutes of different reductive treatments (the terminal links in the chain) and the third was a control (extinction). One reductive treatment was noncontingent attention (NCR), attention having previously been determined to maintain the participants' destructive behavior; the other was a communicative response requesting that functionally relevant stimulus (attention), which had been previously taught in functional communication training (FCT) sessions. Again, the noncontingent stimulus was yoked to the rate of communication in the previous FCT session. Hanley et al. found their participants activated the switch producing the FCT consequences more than the NCR switch, even though the NCR treatment was determined to be just as effective in reducing the destructive behavior (Phase 2). Interestingly, even though the FCT procedure might have been considered more effortful for the two participants, both eventually preferred the treatment in which they requested the stimulus (FCT) rather than receiving it noncontingently.

Peck et al. (1996) also used FCT within a choice paradigm to demonstrate client preference for treatments. In this study, toddlers reliably chose higher quality, longer duration stimulus consequences functionally related to their problem behavior, regardless of the treatment. Functional stimuli included attention for three participants and escape for two others. The choice situation was set up between the communicative response and a neutral response because the problem responses were life threatening (e.g., pulling out gastric tubes).

Reinforcer-Reinforcer Relations

A relatively new area of applied behavior-analytic study involves a topic referred to as "behavioral economics" (see Green & Kagel, 1987; 1990; 1996), in which the effect of a reinforcer is conceptualized as an event in which responding is "exchanged" for reinforcers (Tustin, 1994).
In typical behavioral-economic research, relations between two concurrently available reinforcers are inferred from changes in the rate of reinforcement under one response requirement (i.e., price) while the response requirement of the alternative reinforcer remains the same. One reinforcer is said to "substitute" for the other if the unchanged reinforcer's rate decreases when the new reinforcer is introduced. The newly introduced reinforcer is said to "complement" the other if the unchanged reinforcer's rate increases (or changes) with that of the newly introduced stimulus (Green & Freed, 1993; Iwata & Michael, 1994). Tustin (1994), using the button on a joystick as the response, measured choice between two concurrently available sensory stimuli under a changing schedule of response requirements, or pay rates (fixed-ratio 1, 2, 5, 10, and 20). Across the various schedule requirements, Tustin found one individual (Participant 2) for whom one stimulus was preferred over (i.e., substituted for) another at low requirements but for whom the preference reversed at higher requirements. DeLeon, Iwata, Goh, and Worsdell (1997) found similar preference switches at high and low schedule requirements with similar (but not dissimilar) concurrently available stimuli. This led the authors to suggest that, in order to identify effective reinforcers for some individuals, it may be necessary to conduct preference assessments with response requirements similar to those utilized in training protocols.

Tustin's (1994) third participant illustrated a preference for constant stimuli over complex stimuli at higher response requirements. Several past studies have indicated that people with developmental disabilities (e.g., autism) preferred varied stimulation to constant stimulation (Egel, 1980; 1981). However, Bowman, Piazza, Fisher, Hagopian, and Kogan (1997) found that when seven children with developmental disabilities were given the choice between a constant stimulus assessed to be the most highly preferred on a preference assessment and a varied set of three stimuli of "slightly lower" quality from that preference assessment, 3 of 7 participants responded more to the constant-stimulus alternative in the reinforcer evaluation.

The natural clinical extension of a choice between reinforcing alternatives would be to reduce a problematic behavior by identifying a reinforcing alternative that substitutes for the maintaining reinforcer. There are frequent examples in the literature showing competition with problem behaviors by introducing alternative stimulation (e.g., Davenport & Berkson, 1963; Favell, McGimsey, & Schell, 1982), particularly when that stimulation has been assessed as functioning to maintain the problem behavior (Ringdahl, Vollmer, Marcus, & Roane, 1997). This suggests one functional definition of "environmental enrichment" as the provision of a stimulus assessed as highly preferred; an important category of stimuli to review would be those stimuli that function to maintain high-rate (often inappropriate) behavior (see Berg & Wacker, 1991).

Using the conceptualization of reinforcer substitution, Shore, Iwata, DeLeon, Kahng, and Smith (1997) identified stimulus objects that decreased stereotypic self-injurious behavior (assessed to be controlled by automatic reinforcement) when continuously and concurrently available with the self-injury.
However, reductions in self-injury did not occur when those same stimuli were presented in a differential-reinforcement-of-zero-rates procedure. In addition, the authors showed that when the effort to obtain the object was changed, by tying the object to a string and requiring a movement forward to manipulate or mouth it, even slight changes caused the preference of all three participants to switch from the object back to self-injury.

A final topic regarding reinforcer-reinforcer relations is the phenomenon of conjugate reinforcement schedules (for a review see Rovee-Collier & Gekoski, 1979). Conjugate reinforcement schedules increase or decrease the quality/intensity of a consequence as a function of increased or decreased responding. This requires a consequence which can be gradated, such as vibration (Nunes, Murphy, & Doughty, 1980), movies or television (Greene & Hoates, 1969; Switzky & Haywood, 1973), or sound (Lovitt, 1968a; 1968b), and an easily repeated response, such as switch pulling (Nunes et al.), switch pressing (Lindsley, 1956), or motion (Switzky & Haywood). The conjugate schedule is ideal for showing not only preference quality, but the preferred intensity of that reinforcement as well.

Response-Response Relations

One commonly used reinforcer assessment increases a response by making a higher probability response contingent upon a lower probability response (Premack, 1959; 1965). Charlop, Kurtz, and Casey (1990) used higher probability inappropriate motor and vocal stereotypic behaviors as contingent responses for task performance, increasing these appropriate behaviors above levels obtained with food and other reinforcers, with no negative side effects (e.g., increases in aberrant behavior).

Timberlake and Farmer-Dougan (1991) suggested a much more complex arrangement of using one behavior to reinforce another behavior, known as "disequilibrium." Specifically, after determining the free-operant rates of any two behaviors (e.g., coloring and math work), the disequilibrium inequality describes how one of those behaviors can be used to increase the other (i.e., act as reinforcement) if the ratio between the instrumental response (the response being increased, labeled I) and the contingent response (the response made contingent upon the instrumental response, labeled C) is greater in the contingency than the ratio is in their free-operant baseline rates (the baseline rates of the instrumental and contingent responses, labeled i and c, respectively). That is, the instrumental response will increase if the ratio between the instrumental and contingent responses in the arranged contingency is relatively greater than the free-operant ratio (I/C > i/c). This procedure has also been referred to as "response deprivation" (e.g., Konarski, Johnson, Crowell, & Whitman, 1980). Similarly, there should be a decrease in the instrumental response if the ratio between the instrumental response and the contingent response is relatively less than the free-operant ratio (I/C < i/c). This procedure has also been referred to as "response satiation" (e.g., Realon & Konarski, 1993).
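The disequilibrium inequality lends itself to a direct computational statement. The sketch below is an illustration, not the authors' formulation; the function name and the worked numbers are assumptions, with I and C the amounts scheduled by the contingency and i and c the free-operant baseline rates.

```python
def disequilibrium_prediction(I, C, i, c):
    """Predict the direction of change in the instrumental response.
    I, C: instrumental and contingent amounts scheduled by the contingency;
    i, c: free-operant baseline rates of the same two responses."""
    if I / C > i / c:
        return "increase (response deprivation)"
    if I / C < i / c:
        return "decrease (response satiation)"
    return "no change (equilibrium)"

# Hypothetical baseline: 2 math problems and 10 minutes of coloring per
# session (i/c = 0.2). Requiring 5 problems per minute of coloring
# (I/C = 5) satisfies I/C > i/c, so math work is predicted to increase.
print(disequilibrium_prediction(I=5, C=1, i=2, c=10))
```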
In examining the advantages and disadvantages of the disequilibrium approach to reinforcement, Timberlake and Farmer-Dougan suggested that although this inequality leads to a more complex explanation of reinforcer effects, development of this approach could lead to a more accurate explanation of some current circumstances of reinforcement, increased predictability of reinforcer effects, and increased flexibility in the practice of behavior control in applied settings (for applied implications see Iwata & Michael, 1994).

Directions for Future Research

There is clearly room for more research in determining preference via hardware and software technology. For example, the development of mechanical devices allowing less effortful motor responses (e.g., Dewson & Whiteley, 1987) or overt physiological responses such as heart rate (e.g., Jones, Hux, Morton-Anderson, & Knepper, 1994) may reveal preferences of people with minimal response repertoires for whom no successful preference assessments have previously been possible. In addition, the use of computer programming to arrange reinforcement schedules (e.g., Fisher, Thompson et al., 1997), conjugate schedule requirements (e.g., Nunes et al., 1980), and response-schedule predictions (i.e., disequilibrium theory) should allow these technologies to be available to more individuals.

As work identifying behavior-change conditions to alleviate problems of individuals who show extreme deficits and excesses develops, it is reasonable to believe other uses of reinforcer assessment for people without disabilities may increase. There appears to be room for improvement in the motivational variables used in the technology of education (Geller, 1992), including the use of programmed instruction (Keller, 1968; Skinner, 1968). Very little work has been conducted showing how reinforcers are conditioned (e.g., Watson, Orser, & Sanders, 1968). In particular, the conditioned effect of social reinforcement (Fisher, Ninness, Piazza, & Owen-DeSchryver, 1996; Harzem & Damon, 1976) is a conspicuously under-researched variable of behavior change, given the pervasiveness of its use and the ease with which it can be delivered. Finally, the choice study described in this chapter in which participants selected their own reductive procedures (e.g., Hanley et al., 1997) stands to be a very
