Schedules of Reinforcement PDF


Summary

This document discusses different schedules of reinforcement in operant conditioning. It covers various types such as fixed ratio, variable ratio, fixed interval, and variable interval schedules. Examples from real-life situations are included.

Full Transcript

10/10/2023 Operant Conditioning, Chapter 7 – Schedules of Reinforcement

- Does each lever press by the rat result in food, or are several lever presses required?
- Did your mom give you a cookie each time you asked for one, or only some of the time?
- A continuous reinforcement schedule (CRF) is one in which each specified response is reinforced.
- An intermittent (or partial) reinforcement schedule (PRF) is one in which only some responses are reinforced.

EXAMPLES - Schedules of Reinforcement

- Each time you flick the light switch, the light comes on. The behavior of flicking the light switch is on a(n) _____________ schedule of reinforcement.
- When the weather is very cold, you are sometimes unable to start your car. The behavior of starting your car in very cold weather is on a(n) _____________ schedule of reinforcement.
- There are 4 types of intermittent (partial) schedules:
  - Fixed Ratio (FR)
  - Variable Ratio (VR)
  - Fixed Interval (FI)
  - Variable Interval (VI)

Fixed Ratio (FR)

- Reinforcement is contingent upon a fixed, predictable number of responses. Note that an FR 1 schedule is the same as a CRF schedule, in which each response is reinforced.
- On a fixed ratio 5 schedule (abbreviated FR 5), a rat has to press the lever 5 times to obtain food. On an FR 50 schedule, it has to press the lever 50 times.
- FR schedules generally produce a high rate of response along with a short pause following the attainment of each reinforcer. This short pause is known as a postreinforcement pause. E.g., a rat will take a short break following each reinforcer.
- An FR 200 schedule of reinforcement will result in a (longer/shorter) _____ pause than an FR 50 schedule.
- Schedules in which the reinforcer is easily obtained are said to be very dense or rich, while schedules in which the reinforcer is difficult to obtain are said to be very lean. E.g., an FR 5 schedule is considered a very dense schedule of reinforcement compared to an FR 100.
- An FR 12 schedule of reinforcement is (denser/leaner) _____________ than an FR 100 schedule.
- Higher ratio requirements produce longer postreinforcement pauses.

Variable Ratio (VR)

- Reinforcement is contingent upon a varying, unpredictable number of responses.
- On a variable ratio 5 (VR 5) schedule, a rat has to emit an average of 5 lever presses for each food pellet, with the number of lever presses on any particular trial varying between, say, 1 and 10 (e.g., 3, 7, 5).
- E.g., gambling, the lottery.
- Think of the development of an abusive relationship. At the start of a relationship, the individuals involved typically provide each other with an enormous amount of positive reinforcement (a very dense schedule). As the relationship progresses, such reinforcement naturally becomes more intermittent, with one person (the victimizer) providing reinforcement on an extremely intermittent basis and the other person (the victim) working incredibly hard to obtain that reinforcement.
- VR schedules generally produce a high and steady rate of response with little or no postreinforcement pause.

Fixed Interval (FI)

- Reinforcement is contingent upon the first response after a fixed, predictable period of time. E.g., receiving your salary after a 1-month period.
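The two ratio schedules are simple counting rules, and the contrast between them can be sketched as code. The following is a minimal illustrative simulation, not part of the original slides; the function names are invented, and drawing each VR requirement uniformly from 1 to 2n−1 is just one common convention for producing an average of n.

```python
import random

def fixed_ratio(n):
    """FR n: reinforce every n-th response (FR 1 is equivalent to CRF)."""
    count = 0
    def respond():
        nonlocal count
        count += 1
        if count == n:
            count = 0
            return True   # reinforcer (e.g., food pellet) delivered
        return False
    return respond

def variable_ratio(n):
    """VR n: reinforce after an unpredictable number of responses
    that averages n (drawn here uniformly from 1 to 2n - 1)."""
    count, required = 0, random.randint(1, 2 * n - 1)
    def respond():
        nonlocal count, required
        count += 1
        if count >= required:
            count, required = 0, random.randint(1, 2 * n - 1)
            return True
        return False
    return respond

# FR 5: exactly one pellet per 5 presses, so 50 presses earn 10 pellets.
fr5 = fixed_ratio(5)
print(sum(fr5() for _ in range(50)))   # 10

# VR 5: the pellet count for 50 presses varies from run to run,
# but over many presses it averages one pellet per 5 presses.
vr5 = variable_ratio(5)
print(sum(vr5() for _ in range(50)))
```

The unpredictability in `variable_ratio` is what the slides link to the steady, pause-free responding seen in gambling: the very next response might always be the reinforced one.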
- For a rat on a fixed interval 30-second (FI 30-sec) schedule, the first lever press after a 30-second interval has elapsed results in a food pellet. Following that, another 30 seconds must elapse before a lever press will again produce a food pellet.
- FI schedules often produce a "scalloped" (upwardly curved) pattern of responding, consisting of a postreinforcement pause followed by a gradually increasing rate of response as the interval draws to a close.

Variable Interval (VI)

- Reinforcement is contingent upon the first response after a varying, unpredictable period of time.
- For a rat on a variable interval 30-second (VI 30-sec) schedule, the first lever press after an average interval of 30 seconds will result in a food pellet, with the actual interval on any particular trial varying between, say, 1 and 60 seconds.
- VI schedules usually produce a moderate, steady rate of response with little or no postreinforcement pause.

Video: https://www.youtube.com/watch?v=GLx5yl0sxeM

- On ____ schedules, the reinforcer is largely time contingent, meaning that the rapidity with which responses are emitted has (little/considerable) ______ effect on how quickly the reinforcer is obtained.
- In general, ________ schedules produce postreinforcement pauses because obtaining one reinforcer means that the next reinforcer is necessarily quite (distant/close) _________.
- In general, (variable/fixed) ________ schedules produce little or no postreinforcement pausing because such schedules provide the possibility of relatively imm_____ reinforcement, even if one has just obtained a reinforcer.
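Interval schedules gate the reinforcer on elapsed time rather than on a response count: responding faster does not bring the reinforcer sooner. A minimal sketch of that logic (illustrative only, not from the slides; drawing each VI interval uniformly from 0 to 2t is again just one convention):

```python
import random

def fixed_interval(t):
    """FI t: only the first response at least t seconds after the last
    reinforcer is reinforced; earlier responses earn nothing."""
    last = 0.0
    def respond(now):
        nonlocal last
        if now - last >= t:
            last = now
            return True   # reinforcer delivered
        return False
    return respond

def variable_interval(t):
    """VI t: like FI, but the required interval varies unpredictably
    around a mean of t seconds (here uniform over 0..2t)."""
    last, wait = 0.0, random.uniform(0, 2 * t)
    def respond(now):
        nonlocal last, wait
        if now - last >= wait:
            last, wait = now, random.uniform(0, 2 * t)
            return True
        return False
    return respond

# On FI 30-sec, a rat pressing every 10 seconds still earns only
# one pellet per 30 seconds -- the extra presses are wasted effort.
fi30 = fixed_interval(30)
reinforced = [s for s in range(10, 190, 10) if fi30(s)]
print(reinforced)   # [30, 60, 90, 120, 150, 180]
```

Note how the simulation mirrors the review items above: on both interval schedules the reinforcer is time contingent, so response rate has little effect on how quickly it is obtained.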
Schedules of Reinforcement – Review

- A schedule in which 15 responses are required for each reinforcer is abbreviated _____________.
- A mother finds that she always has to make the same request three times before her child complies. The mother's behavior of making requests is on a(n) ___________ schedule of reinforcement.
- If I have just missed the bus when I get to the bus stop, I know that I have to wait 15 minutes for the next one to come along. Given that it is absolutely freezing out, I snuggle into my parka as best I can and wait out the interval. Every once in a while, though, I emerge from my cocoon to take a quick glance down the street to see if the bus is coming. My behavior of looking for the bus is on a(n) __________ (use the abbreviation) schedule of reinforcement.
- In the previous example, I will probably engage in (few/frequent) ______ glances at the start of the interval, followed by a gradually (increasing/decreasing) ____ rate of glancing as time passes.
- You find that by frequently switching stations on your radio, you are able to hear your favorite song an average of once every 20 minutes. Your behavior of switching stations is thus being reinforced on a _________ schedule.
- On a _________ schedule, a response cannot be reinforced until 20 seconds have elapsed since the last reinforcer. (A) VI 20-sec, (B) VT 20-sec, (C) FT 20-sec, (D) FI 20-sec, (E) none of the preceding.
- Ayşe accepts Ahmet's invitation for a date only when he has just been paid his monthly salary. Of the four simple schedules, the contingency governing Ahmet's behavior of asking Ayşe for a date seems most similar to a _____________ schedule of reinforcement.

Chapter 7 – Theories of Reinforcement

- Clark Hull (Drive Reduction Theory)
- Sheffield (Drive Induction Theory)
- D. Premack (Premack Principle)
- Timberlake & Allison (Response Deprivation Hypothesis)

Clark Hull (Drive Reduction Theory, 1943)

- Food is a reinforcer because the hunger drive is reduced when you obtain it.
- Food deprivation produces a "hunger drive," which then propels the animal to seek out food. When food is obtained, the hunger drive is reduced.
- When a stimulus is associated with a reduction in some type of physiological drive, we can call that stimulus "reinforcing," and the behavior that the organism performs before the drive reduction is strengthened.
- E.g., if a hungry rat in a maze turns left just before it finds food in the goal box, the act of turning left in the maze will be automatically strengthened.
- The problem with this theory is that some reinforcers do not seem to be associated with any type of drive reduction. E.g., a rat will press a lever to obtain access to a running wheel.
- So, as opposed to an internal drive state, incentive motivation could be the key: the reinforcement may stem from the reinforcing stimulus itself, not from some type of internal state. E.g., playing a video game for the fun of it, or attending a concert because you enjoy the music.
- Research has shown that hungry rats will perform more effectively in a T-maze when the reinforcer for a correct response (right turn versus left turn) consists of several small pellets as opposed to one large pellet (Capaldi, Miller, & Alptekin, 1989).
- Chickens will also run faster down a runway to obtain a popcorn kernel presented in four pieces than in one whole piece (Wolfe & Kaplon, 1941).
- The fact that several small bites of food is a more effective reinforcer than one large bite is consistent with the notion of (drive reduction/incentive motivation) __________.
- Going to a restaurant for a meal might be largely driven by hunger; however, the fact that you prefer a restaurant that serves hot, spicy food is an example of incentive motivation.

Sheffield (Drive Induction Theory)

- According to Sheffield, Hull explained only half of the story. It is not drive reduction but drive induction that makes a stimulus a reinforcer.
- E.g., the rabbit and the carrot: the animal learns to react to the experimenter and the carrot, yet it never eats the carrot. Where is the drive reduction here? Sheffield says you can support learning by allowing the induction of a drive, without allowing its reduction.
- Sexual behavior is similar. In a barber shop, the owner puts some Playboy magazines on the desk. While customers are waiting, they read them, and they return to the same barber "unconsciously" again and again. This is the induction of a drive for sexual contact.
- Male and female rats (no reduction, but drive induction); likewise the male-and-male version. Inducing the drive is itself reinforcing (SR)!
- In a factory, you have a standard payment for your workers. As a manager, you say that if they produce more, you will increase their salary. When you make this promise, you are not reducing anything; just the opposite, you are inducing something new.
- If you want a very strong reinforcer, you should combine Hull and Sheffield: first induce the drive and then give the opportunity to reduce it. Drive induction followed by drive reduction!
- To sell a product, first create a need state, and then offer the product that reduces the desire: fear of perspiration, and then the deodorant. Induce the drive and then reduce it. This is the general strategy of commercials.
- SR has two legs: induction of a drive and reduction of a drive.
Premack Principle (1965)

- Premack provides a more objective way to determine whether something can be used as a reinforcer.
- Reinforcers can often be viewed as behaviors rather than stimuli. For example, rather than saying that lever pressing was reinforced by food (a stimulus), we could say that lever pressing was reinforced by the act of eating food (a behavior).
- We should first determine the free-choice preference rate of the behaviors.
- A high-probability behavior can be used to reinforce a low-probability behavior. E.g., when a rat is hungry, eating food (the high-probability behavior [HPB]) can reinforce running in a wheel (the low-probability behavior [LPB]). On the other hand, when the rat is not hungry, eating is no longer the more probable behavior.
- The process of reinforcement can then be conceptualized as a sequence of two behaviors: (1) the behavior that is being reinforced, followed by (2) the behavior that is the reinforcer.
- Moreover, by comparing the frequency of various behaviors, we can determine whether one can be used as a reinforcer for the other: more probable behaviors will reinforce less probable behaviors.
- This principle is like Grandma's rule: first you work (a low-probability behavior), then you play (a high-probability behavior). First, eat your spinach, and then you can get your ice cream.
- If you drink 5 cups of coffee each day and only 1 glass of orange juice, then the opportunity to drink ________ can likely be used as a reinforcer for drinking ________.
- The Premack principle in applied settings: a person with autism who spends many hours each day rocking back and forth might be very unresponsive to consequences that are normally reinforcing for others, such as receiving praise. The Premack principle suggests that the opportunity to rock back and forth can be used as an effective reinforcer for another behavior, such as interacting with others.
- Thus, the Premack principle is a handy principle to keep in mind when confronted by a situation in which normal reinforcers seem to have little effect.

Problems with the Premack Principle

- One problem is that the probabilities of behaviors might fluctuate, so they can be difficult to measure; the principle does not fit well in the lab, and there may be error in determining the free-choice preference rate. Initial probabilities are not stable over time.
- Another problem arises when two behaviors have the same probability.
- Every reinforcer loses its reinforcement power over time. Every time you reinforce another response, you change the probability of that response, so there is variability in the reinforcement event. The erosion of a reinforcer after extensive use of the same reinforcer, when it begins to show a decline, is called the erosion effect.
- E.g., free-choice preference rates: eating spinach 10%, eating ice cream 90%.

Premack Principle – Review

- The Premack principle holds that reinforcers can often be viewed as _____ rather than stimuli. E.g., rather than saying that the rat's lever pressing was reinforced with food, we could say that it was reinforced with _______ food.
- The Premack principle states that a _____ ____ behavior can be used as a reinforcer for a _____ ____ behavior.
- If "chew bubble gum, then play video games" is a diagram of a reinforcement procedure based on the Premack principle, then chewing bubble gum must be a (lower/higher) _______ probability behavior than playing video games.

Response Deprivation Theory (Timberlake & Allison, 1974)

- A behavior can be used as a reinforcer if access to the behavior is restricted so that its frequency falls below its baseline rate (preferred level) of occurrence.
- We do not need to know the relative probabilities of the two behaviors beforehand; the frequency of one behavior relative to its baseline is the important aspect.
- Example: a man normally studies 60 minutes and exercises 30 minutes a day. Schedule: every 20 minutes of study earns 5 minutes of exercise. Exercise is the deprived (restricted) behavior, so the prediction is that study time should increase.
- Do homework, then read comic books (R followed by SR). According to Timberlake & Allison, it is not important whether the probability of reading comic books is higher or lower than the probability of doing homework; the important thing is whether comic book reading is now in danger of falling below its preferred level.
- Example: a rat typically runs for 1 hour a day whenever it has free access to a running wheel (the rat's preferred level of running). If the rat is then allowed free access to the wheel for only 15 minutes per day, it will be unable to reach this preferred level (deprivation), so the rat will now be willing to press a lever to obtain additional time on the wheel.
- For Premack, the frequency of one behavior relative to another is what matters; for Timberlake & Allison, the frequency of one behavior relative to its baseline is what matters.

Examples

- If a child normally watches 4 hours of television per night, we can make television watching a reinforcer if we restrict free access to the television to (more/less) _______ than 4 hours per night.
- Gina often goes for a walk through the woods, and even more often she does yardwork. According to the ______, walking through the woods could still be used as a reinforcer for yardwork given that one restricts the frequency of walking to _______ its ________ level.
- Kaily typically watches television for 4 hours per day and reads comic books for 1 hour per day. You then set up a contingency whereby Kaily must watch 4.5 hours of television each day in order to have access to her comic books. According to the Premack principle, this will likely be an (effective/ineffective) __________ contingency.
- Yasmin often goes for a walk through the woods, but she rarely does yardwork. According to the ______, walking through the woods could be used as a _____ for yardwork.
- Drinking a soda to quench your thirst is an example of _______ reduction; drinking a soda because you love its sweetness is an example of _______ motivation.
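The contrast between the two theories amounts to two different tests for whether a behavior can serve as a reinforcer. This sketch applies them to the hypothetical numbers from the examples above; the code and function names are illustrative, not part of the original material.

```python
def premack_allows(reinforcer_baseline, target_baseline):
    """Premack principle: a behavior can reinforce another only if its
    free-choice (baseline) rate is higher than the target behavior's."""
    return reinforcer_baseline > target_baseline

def deprivation_allows(reinforcer_baseline, scheduled_access):
    """Response deprivation (Timberlake & Allison): a behavior can serve
    as a reinforcer whenever scheduled access falls below its baseline,
    regardless of how probable it is relative to the target behavior."""
    return scheduled_access < reinforcer_baseline

# Kaily: TV baseline 4 h/day, comics baseline 1 h/day; comic reading is
# offered as the reinforcer for watching extra TV. Premack predicts the
# lower-probability behavior cannot reinforce the higher-probability one.
print(premack_allows(reinforcer_baseline=1, target_baseline=4))   # False

# Gina: yardwork is even more frequent than walking, yet restricting
# walking below its baseline (say, to a quarter of it) still makes
# walking usable as a reinforcer under response deprivation.
print(deprivation_allows(reinforcer_baseline=1.0, scheduled_access=0.25))   # True
```

The two functions make the slides' closing comparison concrete: Premack compares one behavior against another, while Timberlake & Allison compare one behavior against its own baseline.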
