Schedules of Reinforcement


Questions and Answers

A researcher observes that a rat presses a lever more consistently when the number of required presses for reinforcement varies, compared to when the number is fixed. Which schedule of reinforcement is most likely in effect when the lever presses are more consistent?

  • Fixed Interval
  • Fixed Ratio
  • Continuous Reinforcement
  • Variable Ratio (correct)

In a scenario where a dog receives a treat every time it sits on command, what type of reinforcement schedule is being used, and how does it typically affect the dog's learning and behavior?

  • Partial reinforcement, leading to slower initial learning but greater resistance to extinction.
  • Fixed ratio reinforcement, resulting in a high rate of response followed by a post-reinforcement pause.
  • Continuous reinforcement, facilitating quick learning but potentially rapid extinction if reinforcement stops. (correct)
  • Variable interval reinforcement, causing a steady but moderate rate of response.

An employee receives a bonus for every 10 sales they make. However, after several months, the employee begins to take a long break after receiving each bonus before starting to make more sales. This behavior is most indicative of which reinforcement schedule?

  • Fixed Interval
  • Variable Interval
  • Fixed Ratio (correct)
  • Variable Ratio

A pigeon pecks a key under a fixed interval schedule. The graph of its cumulative responses shows a gradually increasing slope as the end of the interval approaches. Which behavioral pattern does this cumulative response graph reflect?

Answer: Fixed-Interval Scallop

A researcher is comparing the effectiveness of different reinforcement schedules on the acquisition of a new behavior. Which schedule is most likely to produce the most rapid initial learning?

Answer: Continuous Reinforcement

A child is learning to play a video game. Initially, every correct move results in a reward, but as the child becomes more skilled, rewards are given intermittently. Which strategy is being employed, and why is it effective?

Answer: Moving from continuous reinforcement to partial reinforcement to increase resistance to extinction.

How does the variable ratio schedule differ from the variable interval schedule, and what are the implications of these differences for maintaining behavior?

Answer: Variable ratio schedules deliver reinforcement after a varying number of responses, while variable interval schedules give reinforcement after a changing amount of time; variable ratio typically sustains higher response rates because more responses lead to more reinforcement opportunities.

Consider a scenario where a supervisor wants to increase the number of completed reports by their employees. They decide to implement a reinforcement schedule. Which strategy would most likely result in the highest rate of report completion?

Answer: Rewarding employees after a variable number of reports are completed, with the number changing unpredictably (Variable Ratio).

An animal is trained to press a lever for food. Over time, the number of lever presses required for each food pellet is significantly increased. If the ratio is increased too rapidly, what behavior is most likely to occur?

Answer: Ratio strain

Why does a variable interval schedule typically produce a steadier rate of response compared to a fixed interval schedule?

Answer: Because the interval changes randomly, subjects cannot predict when the next reinforcement will be available and are therefore encouraged to respond consistently.

If an organism's behavior is being maintained by reinforcement, what does Skinner suggest about the most important factor influencing the pattern of that behavior?

Answer: The form of the contingency between the behavior and its consequences.

In what way does partial reinforcement differ from continuous reinforcement, and what implications does this difference have for the persistence of learned behaviors?

Answer: Partial reinforcement involves reinforcing some but not all responses, leading to slower initial learning but greater resistance to extinction, while continuous reinforcement involves reinforcing every response, resulting in faster initial learning but quicker extinction.

What is the key characteristic of interval schedules, and how do they differ from ratio schedules in terms of how reinforcement is provided?

Answer: Interval schedules require that a certain amount of time must pass before a response is reinforced, whereas ratio schedules require a certain number of responses to be made.

In ratio schedules of reinforcement, what establishes the 'ratio' and how does it influence behavior?

Answer: The 'ratio' is between the number of responses and the amount of reinforcement; higher ratios might lead to ratio strain.

A researcher observes that a pigeon demonstrates a 'break-and-run' pattern in its responding; it pauses after reinforcement and then responds at a high, steady rate until the next reinforcement. This pattern is MOST characteristic of which reinforcement schedule?

Answer: Fixed Ratio

An individual is more likely to engage in a behavior when the reinforcement is less predictable. Which type of reinforcement schedule is most conducive to this?

Answer: Variable Ratio Schedule

A person habitually checks their email at random times throughout the day. This behavior is reinforced by occasionally finding an important message. What reinforcement schedule best describes this situation, and why?

Answer: Variable Interval, because the important messages arrive at unpredictable times.

In what key aspect do variable schedules of reinforcement differ from fixed schedules, and what are the implications of this difference for the predictability of reinforcement?

Answer: Variable schedules provide consequences based on an average number of responses or amount of time, making reinforcement unpredictable, while fixed schedules provide reinforcers after a set amount of time or number of responses, making reinforcement predictable.

In both fixed ratio (FR) and fixed interval (FI) schedules, a pause often occurs after each reinforcer is delivered. What underlies this similarity in pausing, and what distinguishes the schedules?

Answer: Both result in temporary satisfaction of motivation, but FR requires responses and FI requires waiting.

According to Reynolds' 1975 study, what happens when pigeons on both variable ratio (VR) and variable interval (VI) schedules of reinforcement get the same frequency and distribution of reinforcers? How are the behaviors different and what do those differences imply?

Answer: The behaviours differ, suggesting that response patterns are shaped by the schedule itself and are not exclusively due to the rate of reinforcement.

Why might behaviors on ratio schedules be more sensitive to changes in reinforcement rate than those on interval schedules?

Answer: Performance is more directly tied to reinforcement on ratio schedules, whose feedback function is an increasing linear function, while interval schedules place a ceiling on the rate of reinforcement.

What is the primary implication of Herrnstein's Matching Law for understanding how reinforcers affect our behavior?

Answer: The rate of responding to a particular choice is not determined solely by the rate of reinforcement for that action itself.

According to the matching law, what broader implication is there when a reinforcer affects someone's behavior?

Answer: An operant response must compete with all other possible behaviors for an individual's time.

In the context of choice behaviour, what does the study of how these factors influence an individual's decision-making involve?

Answer: Learning theorists study how each factor affects behaviour in order to determine how individuals distribute their responses among the different alternatives.

Given that everyday choices are influenced by a number of factors, how do learning theorists commonly determine how individuals distribute behaviors?

Answer: By studying which factors might affect behaviours and the effects of those factors on individuals' choices.

What is a concurrent schedule, and why is it useful in the study of choice behavior?

Answer: It involves two or more reinforcement schedules operating simultaneously, enabling ongoing assessment of how the subject distributes its responding between them.

In a concurrent-chain schedule of reinforcement, what differentiates the 'choice link' from the 'terminal link', and how are these links connected?

Answer: The choice link involves the initial selection, which makes the other option unavailable, whereas the terminal link delivers the final reinforcement; completing the choice link leads into its terminal link.

When does self-control occur, and how do you measure decision making?

Answer: When choosing larger delayed rewards over small immediate rewards.

When considering the 'value discounting function', how is the value of a reinforcer related to both reward magnitude (M) and reward delay (D)?

Answer: Value is influenced directly by the magnitude of the reward and inversely by the amount of time one has to wait.

What is the role of the discounting rate parameter k in the context of reward value and delay?

Answer: It indicates how rapidly reward value declines as a function of delay.

In value discounting, which parameter affects decision making?

Answer: The subjective value a stimulus has and the individual's impulsivity.

In Madden et al.'s (1997) investigation into self-control in heroin addicts, what choices were given, and to which participants?

Answer: Heroin addicts were compared with a non-dependent group; both made hypothetical choices between $1000 in the future and smaller amounts of money available immediately.

How did the graph of the subjective value of the $1000 reward differ for participants in the substance-dependent group?

Answer: The subjective value declined sharply as the delay increased, reflecting a steeper discounting function.

Is impulsive choice solely related to addictive behaviours?

Answer: No; faster reward discounting is also associated with lower grades, unsafe sex, and many other human behaviours.

What are the best ways to train self-control?

Answer: Through training with delayed reinforcement, which increases the likelihood of choosing larger delayed rewards in the future.

When attempting to instill or modify new decision making skills, in what way must we set goals, and in what way must any reinforcers complement those goals?

Answer: Goals should be set so that reinforcers are provided for small but successful steps, and rewards should be claimed honestly only when goals are met.

When multiple responses are reinforced under VI schedules, what does the matching law predict?

Answer: Organisms will allocate time across the responses based on the relative rates of reinforcement for each.

In a concurrent-chain schedule, an animal first chooses between two options (the choice link), each leading to a different reinforcement schedule (the terminal link). If one option in the choice link consistently leads to a variable ratio (VR) schedule in the terminal link and the other leads to a fixed interval (FI) schedule, how would an animal's behavior likely evolve over time, assuming both schedules eventually offer equivalent rates of reinforcement?

Answer: The animal would increasingly favor the option leading to the VR schedule due to its resistance to extinction and the higher response rate it typically elicits, even if the overall reinforcement rate is the same.

An individual is presented with two choices: Option A delivers a small reward immediately, while Option B delivers a larger reward after a delay. According to value discounting models, which factor most significantly determines the point at which the individual switches from preferring the immediate, smaller reward to the delayed, larger reward?

Answer: The individual's discounting rate (k), reflecting how rapidly the subjective value of the reward decreases with delay.

According to Reynolds' 1975 study, what are the behavioral differences when pigeons get the same frequency and distribution of reinforcers on VR and VI schedules?

Answer: Underlying motivation differs between VR and VI schedules.

An educational video game is designed to promote consistent study habits. Initially, students receive a reward after completing each level, but the reward system gradually changes so that rewards are given after completing a variable number of levels. To optimize motivation and persistence, which adjustment to the variable reward system would be MOST effective, according to research on reinforcement schedules?

Answer: Ensuring the average number of levels required for a reward remains consistent, but varying the number of levels required for each reward.

A researcher aims to train a laboratory animal to differentiate between two stimuli: a high-pitched tone and a low-pitched tone. The animal must press one lever for the high tone and another lever for the low tone. Initially, both levers operate on a continuous reinforcement schedule, but the researcher plans to transition to a partial reinforcement schedule to improve resistance to extinction. Considering the unique response patterns associated with different partial reinforcement schedules, which schedule should be implemented to maintain a high and consistent response rate on both levers?

Answer: Transition to a variable ratio (VR) schedule on both levers to maintain a high and consistent response rate, because the reinforcement is less predictable.

Flashcards

Schedule of reinforcement

The rule that determines how and when a response will be reinforced.

Continuous reinforcement

Reinforcing every correct response. Most efficient way to condition a new response.

Partial reinforcement

Reinforcing some, but not all, correct responses. More effective at maintaining or increasing the rate of response.

Ratio schedule

A pattern of behavioral contingency where the delivery of reinforcement depends on the number of responses performed.


Fixed Ratio (FR) schedule

Reinforcement is given if the subject completes a pre-set number of responses.


Ratio strain

Pause during the ratio run, following a sudden, significant increase in ratio requirement.


Variable Ratio (VR) schedule

The number of responses required to get each reinforcer is not fixed; it varies around an average.


Fixed Interval Scallop

As the time to the end of the interval approaches, the rate of responding increases.


Interval schedules

Responses are reinforced only if they occur after a certain amount of time has passed.


Fixed Interval (FI) schedule

A response is only reinforced if a constant or fixed amount of time has elapsed since the previous delivery of a reinforcer.


Variable Interval (VI) schedule

A response is reinforced only if it occurs more than a variable amount of time after the delivery of an earlier reinforcer


Choice Behaviour

The voluntary act of selecting or separating from two or more things that which is preferred.


Concurrent schedule

Two schedules of reinforcement are in effect at the same time; the subject is free to switch from one response key to the other.


Self-control

Choosing a large delayed reward over an immediate small reward


Value Discounting Function

Value of a reinforcer is directly related to reward magnitude and inversely related to reward delay.


Pre-commitment

Making a decision to choose a larger delayed alternative in advance, in a manner that is difficult or impossible to change later on.


Study Notes

Schedules of Reinforcement

  • Delivery of reinforcement after a behavior influences how it is learned and maintained.
  • A schedule of reinforcement dictates when a response will be reinforced.
  • Most schedules can be categorized into a few types that share common relations between responses and reinforcers.
  • In Instrumental Conditioning (IC), a contingency is learned such as "If S, then R → O".
  • Skinner believed the contingency would control behavior patterns.
  • A schedule is defined as the pattern of behavioral contingency.
  • Skinner explored how different reinforcement schedules affect behavior.

Reinforcement Rates

  • Continuous reinforcement entails reinforcing every correct response.
  • Continuous reinforcement is most efficient for conditioning new responses.
  • Continuous reinforcement is rare in real life.
  • Partial reinforcement involves reinforcing some, but not all, responses.
  • Partial reinforcement is more effective for maintaining or increasing response rates.

Partial Reinforcement Schedules

  • Schedules yield distinct response rates, response patterns, and varying resistance to extinction.
  • Ratio and Interval are two main types of schedules.
  • Ratio schedules reinforce after a certain number of responses.
  • Interval schedules reinforce after a certain amount of time has passed.
  • Schedules can be further broken down into fixed and variable categories.

Ratio Schedules

  • Reinforcement is dependent on the number of responses performed.
  • A 'ratio' is established between 'work' and 'reinforcement'.
  • Ratio schedules range from Continuous Reinforcement (CRF), in which every response is reinforced, to Partial (intermittent) Reinforcement.

Fixed Ratio Schedules

  • Reinforcement is given after completing a pre-set number of responses.
  • CRF is also referred to as an FR1 schedule.
  • FR10 means reinforcement after 10 responses.
  • FR50 means reinforcement after 50 responses.
  • FR200 means reinforcement after 200 responses.
  • Every X responses produces one outcome (O).

Cumulative Recorder

  • Slope indicates the rate of responding.

Fixed Ratio (FR) Schedules

  • Ratio run refers to the period of steady responding that completes the ratio requirement.
  • Post-reinforcement pause (pre-ratio pause) occurs after reinforcement.
  • Ratio strain is a pause during the ratio run after a sudden increase in ratio requirement (e.g., FR 5 to FR 50).

Variable Ratio Schedules

  • The number of responses required for reinforcement is not fixed, but varies around an average.
  • The reinforcer is less predictable with variable ratio schedules.
  • There is less likelihood of regular pauses in responding.
  • Numerical value indicates the average number of responses required per reinforcer.
  • Every X responses produces one outcome but X changes with each reinforcer.
  • Variable Ratio schedules are identified by the average number of responses per outcome.
  • With a VR 6 schedule the response requirements might be 3, 7, 1, 1, 18, which averages to 6 (see the sketch below).
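
A minimal sketch of how a VR requirement can be generated in a simulation; the function name and the uniform spread around the mean are illustrative assumptions, since a VR schedule only requires that the number of responses vary around a set average:

```python
import random

def vr_requirement(mean, spread=5):
    """Draw the number of responses required for the next reinforcer
    on a VR schedule; requirements vary but average to `mean`."""
    # A uniform spread around the mean is an illustrative choice; any
    # distribution with the right long-run average would qualify.
    low = max(1, mean - spread)
    return random.randint(low, mean + spread)

# Simulate a VR 6 schedule: draw 1000 ratio requirements.
random.seed(1)
requirements = [vr_requirement(6) for _ in range(1000)]
print(sum(requirements) / len(requirements))  # close to 6 on average
```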

Ratio Schedule Behaviour

  • Fixed ratio: every X Rs produces one outcome.
  • Fixed ratio behaviour shows steady responding until reinforcement.
  • Fixed ratio schedules have post-reinforcement pauses.
  • The higher the ratio, the longer the pause after each reward on a fixed ratio schedule.
  • An example of this is piece work in factories.
  • Variable ratio means that every X responses produces one outcome, but X varies around the mean.
  • Variable Ratio behavior is a constant and high rate of responding.
  • Gambling, video games and sports are real-life examples of variable ratio schedules

Interval Schedules

  • Responses are reinforced only if they occur after a certain amount of time has passed.
  • A response is only reinforced if a constant or fixed amount of time has elapsed from the previous reinforcer delivery in Fixed Interval (FI) schedule.
  • The fixed interval scallop: as the time to the end of the interval approaches, the rate of responding increases.
  • Behavior before the interval expires has no consequence in fixed interval schedules.
  • In a Variable Interval (VI) schedule, a response is reinforced only if it occurs more than a variable amount of time after the delivery of an earlier reinforcer.
  • The reinforcer is less predictable, so the subject shows a steady rate of response.
  • After Y seconds, one response produces one outcome, but Y changes randomly after each outcome.
  • Checking email is a real-life example.
  • In fixed-interval schedule, after Y seconds, one response produces one outcome.
  • Fixed interval behaviour shows the FI scallop on a cumulative record.
  • At the beginning of the interval there is little to no responding.
  • There are increases to rapid responding before interval expiration.
  • Watching the clock for an appointment is a real-life example of fixed interval behaviour.
  • Variable interval behaviour is a steady but low rate of responding (see the sketch below).
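
A minimal sketch of the interval contingency, contrasting FI and VI; the helper name, interval values, and response pattern are illustrative assumptions:

```python
import random

def run_interval_schedule(intervals, response_times):
    """Count reinforcers earned, given a list of programmed intervals
    (seconds) and the times at which the subject responds."""
    intervals = list(intervals)          # don't mutate the caller's list
    reinforcers = 0
    last_delivery = 0.0
    pending = intervals.pop(0)
    for t in sorted(response_times):
        # Only the first response after the interval elapses is reinforced;
        # responses before the interval expires have no consequence.
        if t - last_delivery >= pending:
            reinforcers += 1
            last_delivery = t
            if not intervals:
                break
            pending = intervals.pop(0)   # FI: constant; VI: varies
    return reinforcers

random.seed(2)
fi = [60.0] * 10                                    # FI 60
vi = [random.uniform(20, 100) for _ in range(10)]   # VI 60 on average
responses = [float(t) for t in range(0, 1200, 5)]   # respond every 5 s
print(run_interval_schedule(fi, responses), run_interval_schedule(vi, responses))
```

Responding faster than once every 5 seconds would not earn more reinforcers here, which is one way to see why interval schedules sustain lower rates than ratio schedules.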

Four Basic Schedules of Reinforcement

  • Fixed-ratio.
  • Variable-ratio.
  • Fixed-interval.
  • Variable-interval.

Comparing VR, VI schedules

  • Fixed schedules deliver reinforcers at regular points; variable schedules deliver them at irregular points.
  • VR generally maintains the higher response rate of the two.
  • VI generally maintains the lower rate.
  • On a VR schedule, more responses earn more reinforcers.
  • On a VI schedule, more responses do not earn more reinforcers; you only need to check in.

Schedules of Reinforcement

  • The variable ratio schedule produces the highest rate of responding and is the most resistant to extinction.
  • One pigeon produced 73,000 pecks in 4.5 hours when extinction started after conditioning on a VR 900 schedule.
  • Nagging or whining for something from parents can be hard to extinguish if it is reinforced here and there on a variable ratio schedule.
  • Interval schedules, both fixed interval and variable interval, produce slower rates of responding.

Schedules of Reinforcement

  • Fixed ratio schedules produce a very high response rate, steady responding when the ratio is low, and a brief pause after each reinforcement; this makes them more resistant to extinction.
  • Constant response pattern with no pauses, highest response rate and most resistant to extinction are characteristics of variable ratio schedules.
  • Fixed intervals have the lowest response rates and a long pause after reinforcement followed by gradual acceleration.
  • Variable intervals produce moderate response rates and stable, uniform responding.

Ratio vs. Interval Schedules: Similarities

  • FR and FI schedules have a typical pause after each reinforcer.
  • There is an increased response rate before reinforcer delivery in FR and FI schedules.
  • Response rate is steady without predictable pauses in VR and VI schedules.

Ratio vs. Interval Schedules: Difference

  • Different response rates, even when reinforcement frequency is similar.
  • Reynolds compared key pecking rates on VR and VI schedules in 1975.
  • Reinforcement for the pigeon on the VI schedule was yoked to the responses of the pigeon on the VR schedule, so the two received nearly identical reinforcement rates.
  • The same frequency and distribution of reinforcers, but different rates of responding.

Reinforcement of Inter-Response Times (IRT)

  • On ratio schedules, faster responding is more likely to be reinforced.
  • Variable ratio schedules therefore reinforce shorter IRTs.
  • Interval schedules favor long IRTs, since a reinforcer is more likely to have become available the longer the subject waits between responses.

Feedback Function

  • The feedback function relates response rate to reinforcement rate; the relationship is stronger on VR schedules.
  • Higher responding will persist if it is reinforced more than low responding.
  • Ratio schedules yield higher reinforcement with more responding.
  • The ratio feedback function is an increasing linear function with no limit (a numeric sketch follows).
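
A minimal numeric sketch of the two feedback functions, assuming an FR 10 and a 60-second interval schedule purely for illustration:

```python
def ratio_feedback(responses_per_hour, ratio):
    """Ratio schedules: reinforcement rate grows linearly with
    response rate, with no ceiling."""
    return responses_per_hour / ratio

def interval_feedback(responses_per_hour, interval_s):
    """Interval schedules (rough approximation): reinforcement rate
    cannot exceed 3600 / interval, however fast the subject responds."""
    return min(responses_per_hour, 3600 / interval_s)

for rate in (60, 600, 6000):   # responses per hour
    print(rate, ratio_feedback(rate, 10), interval_feedback(rate, 60))
# Ratio output keeps climbing with response rate;
# interval output levels off at 60 reinforcers per hour.
```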

Choice Behaviour

  • 'The voluntary act of selecting or separating from two or more things that which is preferred.'
  • Everyday choices are influenced by reinforcer quality and quantity.
  • Everyday choices are influenced by the behaviour involved, such as the type of response and the schedule of reinforcement.
  • Choices are influenced by available alternatives.
  • Choices are influenced by the delay in reinforcement.
  • A goal of learning theorists is to study how factors influence choice behaviour.

Animal Models to Study Choice: Complex Schedules

  • A concurrent schedule entails two schedules of reinforcement in effect at the same time.
  • The subject is free to switch from one response key to the other at any time.
  • This permits continuous measurement of choice.

Measures of Choice Behaviour: Reinforcer Quality

  • Option A - Fixed Ratio 5, $1.50.
  • Option B - Fixed Ratio 5, chocolate.
  • Same reinforcement schedule with different reinforcer.

Measures of Choice Behaviour: Reinforcer Quantity

  • Option A - Fixed Ratio 5, $10.
  • Option B - Fixed Ratio 5, $100.
  • Same reinforcement schedule and reinforcer, but different reinforcer quantity.

Measures of Choice Behaviour: Reinforcement Schedule

  • Option A - Fixed Ratio 5, $100.
  • Option B - Fixed Ratio 50, $100.
  • Same reinforcer and behavioural response, but different schedule of reinforcement.

Measures of Choice Behaviour: Type of Response

  • Option A - Variable Interval 5, $100.
  • Option B - Variable Interval 5, $100.
  • Same reinforcement schedule and reinforcer, but different behavioural response.

More Choice Behaviour: Schedule of Reinforcement

  • Option A - Variable Interval 20, $10.
  • Option B - Variable Interval 60, $10.
  • Same reinforcer and behavioural response, but different schedule of reinforcement.

Herrnstein's Matching Law

  • Relative rate of responding matches the relative rate of reinforcement (see the equation below).
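
For two response alternatives A and B, the matching law is commonly written as:

```latex
\frac{B_A}{B_A + B_B} = \frac{R_A}{R_A + R_B}
```

where B_A and B_B are the rates of responding on the two alternatives and R_A and R_B are the rates of reinforcement earned from them.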

Applying Matching Law

  • The matching law can predict how Person X would spend a 480-minute day given Option A (VI 20 min, $10) and Option B (VI 60 min, $10).
  • Rate of reinforcement from A = 480/20 = 24 reinforcers.
  • Rate of reinforcement from B = 480/60 = 8 reinforcers, so 32 reinforcers are possible in total.
  • Ra = 24/32 = 0.75 and Rb = 8/32 = 0.25.
  • The predicted distribution of work (see the sketch below):
  • Ba = 0.75 × 480 = 360 minutes.
  • Bb = 0.25 × 480 = 120 minutes.
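
The same worked example as a runnable sketch; the function and variable names are illustrative, not from the lesson:

```python
def matching_allocation(day_minutes, vi_a_min, vi_b_min):
    """Predict how the matching law distributes time between two VI options."""
    rate_a = day_minutes / vi_a_min       # reinforcers available from A: 24
    rate_b = day_minutes / vi_b_min       # reinforcers available from B: 8
    share_a = rate_a / (rate_a + rate_b)  # relative reinforcement: 0.75
    return share_a * day_minutes, (1 - share_a) * day_minutes

print(matching_allocation(480, 20, 60))   # (360.0, 120.0) minutes
```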

Matching Law: Implication

  • An operant response must compete with all other possible behaviours for an individual’s time.
  • It is impossible to predict how a reinforcer will affect a behaviour without taking into account the context.
  • This means all other reinforcers that are simultaneously available for all other behaviours.

Complex Choice: Choice with Commitment

  • In the lab, choice with commitment is studied with a concurrent-chain schedule of reinforcement.
  • Choosing one option makes the other unavailable.
  • There are two stages (links): the choice link and the terminal link.

Complex Choice & Self-Control

  • Self-control involves choosing a large delayed reward over an immediate small reward.

Value Discounting Function

  • Value of a reinforcer (V) is directly related to reward magnitude (M) and inversely related to reward delay (D).
  • (k) is the discounting rate parameter that indicates how rapidly reward value declines as a function of delay.
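
This relationship is commonly expressed in the hyperbolic form (e.g., Mazur, 1987):

```latex
V = \frac{M}{1 + kD}
```

so value V rises with reward magnitude M, falls as delay D grows, and k sets how steeply the decline occurs.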

Self-Control Deficiencies

  • Madden et al. (1997) studied self-control in heroin addicts.
  • Heroin addicts were compared with non-dependent subjects.
  • The choice was $1000 in the future vs. a smaller amount of money now.

Impulsive Choice in Addicts

  • Addicts are more impulsive.
  • They have a steeper discounting function.
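
A minimal sketch of what a steeper discounting function means numerically, assuming the hyperbolic form above; the k values are illustrative, not taken from the study:

```python
def discounted_value(magnitude, delay_days, k):
    """Hyperbolic discounting: V = M / (1 + k * D)."""
    return magnitude / (1 + k * delay_days)

# A larger k models a more impulsive chooser (steeper discounting).
for k in (0.01, 0.25):
    curve = [round(discounted_value(1000, d, k), 1) for d in (0, 30, 365)]
    print(f"k={k}: value of $1000 at 0, 30, 365 days -> {curve}")
# With k=0.25 the delayed $1000 is worth about $118 after a month, so a
# smaller immediate amount can win the choice.
```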

Delay Discounting

  • A delay-discounting graph contrasts less impulsive (shallower) and more impulsive (steeper) discounting curves.

Impulsive Choice in Other Human Affairs

  • Impulsivity and delay discounting affect many other human behaviours.
  • Slower reward discounting is associated with higher grades.
  • Faster reward discounting is associated with unsafe sex.
  • Slower delay discounting is associated with age (less impulsivity with age).

Training Self-Control

  1. Training with delayed reinforcement increases likelihood of choosing larger delayed reward in the future.
  2. Pre-commitment is making the decision to choose a larger delayed alternative in advance, in a manner that is difficult or impossible to change later on.

Modify Your Own Behaviour

  1. Identify the target behaviour that you want to modify. It must be observable and measurable (e.g., amount of time spent studying).
  2. Gather and record baseline data. Keep a daily record, noting when the target behaviour occurs and the cues present.
  3. Plan your behaviour modification program. Formulate a plan and set goals to increase or decrease your target behaviour.
  4. Choose your reinforcers. Any activity you enjoy more can be used to reinforce any activity you enjoy less (e.g., watch TV only after a desired amount of studying).
  5. Set the reinforcement conditions and begin recording and reinforcing your progress. Do not set goals so high that earning a reward is nearly impossible. Remember shaping: reward small steps to reach the desired outcome. Be honest with yourself and claim rewards only when goals are met. Chart your progress as you work toward gaining more and more control over the target behaviour.

Summary

  • Schedules of reinforcement define whether the outcome follows every response, is available after some number of responses, or is available only after some time interval.
  • When multiple responses are reinforced under a VI schedule, the matching law predicts that organisms will allocate time among those responses based on the relative rates of reinforcement for each response.
