
Psychology: Learning




Learning

Types of learning

Simple learning
○ Simple learning is usually involuntary.
○ The biological systems involved in simple learning are often reflexes.
○ Simple learning does not usually last long.
○ The change in behaviour is usually of a very restricted form.
○ Simple learning tends to be specific to one biological system.

○ Habituation: the tendency of an organism to become familiar with a stimulus as a result of repeated exposure. People and animals notice novelty from birth: when something happens we pay attention to it and show an orienting response, moving toward new events. After repeated exposure we habituate — a decline in the tendency to respond to an event that has become familiar through repeated presentation; it can be short or long term. Both habituation and sensitisation are natural responses to repeated events. Whether repetition of a stimulus results in habituation or sensitisation depends on several factors, one of which is the intensity of the stimulus: mild = habituation.

○ Sensitisation: occurs when our response to an event increases rather than decreases with repeated exposure. Often we become sensitised to repeated loud noises and our reaction becomes more intense and prolonged. Intense = sensitisation.

General learning
○ Action —> consequence, discovered by Edward Thorndike.
○ Cats were shut in small cages with food outside; pressing a button on the floor to open the door was an action that led to a positive consequence. Some behaviours (actions) occur by chance. When such an action is followed by a positive consequence, that action becomes more likely in the future.
○ Thorndike's law: random actions that are followed by good things are more likely to occur (a rough sketch of this idea follows this list).
○ Added to by Skinner, who expanded on consequence learning with rats and pigeons in a box.
○ Shaping: the incremental process of getting an animal to engage in a particular action desired by the experimenter. It is usually done in a series of small steps, using food as a reward in a hungry animal.
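To make the law-of-effect idea concrete, here is a minimal sketch (my own illustration, not part of the notes) in which an animal chooses among actions at random, and an action that is followed by a reward is "stamped in" and becomes more likely on later trials. The action names, reward rule and learning rate are invented for the example.

```python
import random

# Toy illustration of Thorndike's law of effect (invented example, not from the notes).
# The animal starts by choosing actions at random; a rewarded action gains weight,
# so it becomes more likely in the future.
weights = {"press_button": 1.0, "scratch": 1.0, "turn_circle": 1.0}

def choose_action(weights):
    """Pick an action with probability proportional to its current weight."""
    actions = list(weights)
    return random.choices(actions, weights=[weights[a] for a in actions])[0]

for trial in range(200):
    action = choose_action(weights)
    reward = 1 if action == "press_button" else 0   # only the button opens the cage door
    weights[action] += 0.5 * reward                  # rewarded actions become more likely

print(weights)   # "press_button" ends up with by far the largest weight
```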
○ Free will: if you could locate the cause of all human behaviour in the environment, then it would make no sense to say that we have a choice in what we do.
○ This is radical behaviourism —> it denies that we have free will.
○ Consequence learning has several names: Skinnerian conditioning, instrumental conditioning, operant conditioning.

○ Event —> consequence, discovered by Ivan Pavlov (the salivating dogs experiment). This learning also has several names: associative learning, classical conditioning and Pavlovian conditioning. Its essence is that an event predicts a consequence.

Reinforcers and punishers
- Presenting a pleasant consequence: positive reinforcement (e.g. eating food when hungry).
- Presenting an unpleasant consequence: positive punishment (e.g. a flogging causes pain).
- Avoiding/removing a pleasant consequence: negative punishment (e.g. removing access to food when hungry).
- Avoiding/removing an unpleasant consequence: negative reinforcement (e.g. avoiding a flogging by escaping).

○ Pleasant and unpleasant consequences are basically things that are biologically salient.
○ The difference between action —> consequence and event —> consequence is that one starts with an action followed by a consequence, while the other starts with an event followed by a consequence.
○ Consequences are biologically salient to an animal or person, which means they are usually pleasant or unpleasant.

Specialised learning
○ Some forms of specialised learning are restricted to only certain species, in contrast to general and simple learning, which seem to occur in the majority of animals.
○ The ability to generate an idea (hypothesis), to then test it and see if it works, and then refine it some more, is a skill that is perhaps restricted to a limited number of species.
○ Insight is also species-restricted.
○ Examples: imitation; think, test, revise (cognition); insight; language learning; imprinting.

Classical Conditioning
Classical conditioning is learning a new association between two previously unrelated stimuli. We learn that a stimulus predicts the occurrence of a certain event and we respond accordingly. In classical conditioning, all responses are reflexes or autonomic responses — responses we cannot voluntarily emit.

Neutral Stimulus (NS): the stimulus that, before conditioning, does not naturally bring about the response of interest. E.g. in Pavlov's experiments the NS was a sound, such as from a metronome, bell or tuning fork, or a tactile stimulus.
Unconditioned Stimulus (US): a stimulus (an event) that elicits/triggers an unconditioned (involuntary) response, without previous conditioning.
Unconditioned Response (UR): an unlearned response to an unconditioned stimulus, occurring without prior conditioning. For example, salivation to food, jumping when hearing a loud noise, moving away from something painful. In Pavlov's experiments, salivation to the food was the UR.
Conditioned Response (CR): a learned reaction to a CS, occurring because of previous repeated pairings with a US. E.g. in Pavlov's experiments, the CR was salivation.
Conditioned Stimulus (CS): a previously neutral stimulus that, through repeated pairings with a US, now causes a CR. E.g. in Pavlov's experiments, the CS was a sound such as from a bell or tuning fork, or a tactile stimulus.

Basic Principles of Classical Conditioning
Acquisition of a Conditioned Response: the CS doesn't just "substitute" for the US, and the CR is not always the same as the UR; rats "freeze" instead of jumping when a shock is about to occur.
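As a rough illustration of acquisition (my own sketch, not from the notes), the toy model below treats the CS-US association as a strength value that grows toward a ceiling with each pairing; once it passes a threshold, the CS alone elicits the CR. The learning rate, ceiling and threshold are invented numbers.

```python
# Toy model of acquisition of a CR (invented numbers, not the lecture's model).
strength = 0.0         # associative strength between CS (e.g. bell) and US (e.g. food)
CEILING = 1.0          # asymptote: later pairings add less and less
LEARNING_RATE = 0.3
CR_THRESHOLD = 0.5     # strength needed before the CS alone triggers salivation

for pairing in range(1, 11):
    strength += LEARNING_RATE * (CEILING - strength)   # each CS-US pairing closes part of the gap
    print(f"pairing {pairing:2d}: strength = {strength:.2f}, "
          f"CS alone elicits CR: {strength >= CR_THRESHOLD}")
```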
The cognitive view of classical conditioning: the CS predicts the US (we learn the association) and so we react by preparing for that event.

Extinction and Spontaneous Recovery
What would happen if Pavlov later presented the bell without the food?
- Extinction: the conditioned response (CR) weakens when the conditioned stimulus (CS) is repeatedly presented without the unconditioned stimulus (US). Extinction is not an unlearning of the conditioned response; it is a learned inhibition of responding.
- Spontaneous Recovery: after a period of time in which neither the CS nor the US is presented, a previously extinguished conditioned response re-emerges.
[Figures: changes over time in the strength of a conditioned response — Extinction & Spontaneous Recovery (left); Spontaneous Recovery of Extinguished Responding (right).]

Principles of Classical Conditioning
Reacquisition: extinction is not unlearning.

Acquisition of Phobias by Classical Conditioning
Watson, J. B., & Rayner, R. (1920). Conditioned emotional reactions. Journal of Experimental Psychology, 3(1), 1-14.
The Conditioning of Little Albert: an 11-month-old boy named Albert was conditioned to fear a white laboratory rat. Each time he reached for the rat (CS), Watson made a loud clanging noise (US) right behind Albert.
- Stimulus Generalisation: a tendency to respond to stimuli that are similar, but not identical, to a conditioned stimulus (Albert was afraid of rabbits, a seal-fur coat, a Santa Claus mask and cotton wool, when the initial CS was a rat).
- Transfer of Training: being able to apply knowledge gained in one situation to a similar one.
- Stimulus Discrimination: the learned ability to respond differently to similar stimuli; to distinguish between a conditioned stimulus and other stimuli that do not signal an unconditioned stimulus.

People may develop allergic reactions through classical conditioning: pairing a neutral stimulus (CS = sight of flowers) with an allergic reaction (US = pollen, which produces a UR = allergic response). The person begins releasing histamines (CR = allergic response) at the sight of the flowers, not just the pollen. This could even be an artificial flower (stimulus generalisation).

Higher Order Conditioning
Two factors determine the extent of higher order conditioning:
1. The similarity between the higher-order stimulus and the original conditioned stimulus.
2. The frequency and consistency with which the two conditioned stimuli are paired.

Classical conditioning is about predicting future events.
- The CS prepares the animal/person for an imminent event; the CS sets up the expectation for that event and elicits the CR.
- For a CS to elicit a CR, it is not just a matter of close proximity of the two events in time; rather, the CS must be a predictor of the imminent arrival of the US.
- Predictions indicate that the organism is able to recognise the likelihood of the US (after a CS), i.e. there is a cognitive element to classical conditioning.

How does the CR form?
- Contiguity theory: when two stimuli are presented together in time, associations are formed between the two. Temporally contiguous events tend to be associated together. This theory suggests that in order to form a CR, one merely needs to put the two stimuli together in time.
- Contingency theory: a CR develops when the CS is able to predict the occurrence of the US. This theory relies heavily on predictability and expectation.
After repeated CS-US pairings, the animal can begin to predict when the US is coming based on the CS being present. When the CS is presented, the animal forms an expectancy of the US; this expectancy is what fuels the CR.
- Reliability of the CS-US pairing: how often is the CS followed by the US? What is the probability that the US will occur, given that the CS has just occurred?
- Uniqueness of the CS-US pairing: how often does the US happen without the CS? What is the probability of the US occurring, given that no CS has occurred?

Contingency Theory of Classical Conditioning
In the 1960s an alternative theory was proposed by Robert A. Rescorla: the Contingency Theory. Rescorla agreed with Pavlov that for learning to take place the CS had to be a useful predictor of the US, but he disagreed on what made the CS a useful predictor. It was more complicated than the number of CS-US pairings; he maintained that it was the contingency between the CS and the US. Contingency is the relationship between two events, one being "contingent" on, or a consequence of, the other. That is, the occurrence of a future event is possible given that one event has occurred, but cannot always be predicted with certainty. Rescorla challenged the simple mechanistic views of learning: he conceptualised classical conditioning as involving the acquisition of information about the relationships among events in the environment. Essentially, two different association patterns produce two different outcomes.

Contingency (the CS must be able to predict the US)
What happens when the CS is a less-than-perfect predictor? What if 50% of the time the US follows the CS and 50% of the time it doesn't? If the US follows the CS only some of the time, and is just as likely to occur without the CS, then no CR develops; in other words, the dog doesn't salivate. A CS can also predict the absence of the US: if the CS predicts no US, the dogs will not salivate. So contingency is about probability (see the sketch below).
- Excitatory conditioning: the likelihood of something occurring given that something else did.
- Inhibitory conditioning: the likelihood of something NOT occurring given that something else did.

The Effect of Contingency on Classical Conditioning
For both groups there is only a 40% chance that the bell will be followed by shock. However, for Group B, shock is less likely when no bell is sounded, and for this group the bell becomes a fearful stimulus. If instead, for Group A, shock were less likely when the bell is sounded, then for this group the bell could become a safety signal indicating a lower likelihood of shock.

Everyday examples of reinforcement schedules:
- Fixed interval schedule: getting paid for work — you turn up each day, maybe work hard, maybe not, but you still get paid at the end of the day.
- Variable interval schedule: pressing the button at a crossing, hoping that the more you press it, the quicker the reward/response will come.
- Fixed ratio schedule: getting paid after doing a certain amount of work — what used to be called piecework.
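The sketch below (my own illustration, reusing the invented numbers from the bell-and-shock example) expresses contingency as the difference between P(US | CS) and P(US | no CS): a positive difference gives excitatory conditioning, a negative difference gives inhibitory (safety-signal) conditioning, and a value near zero gives no conditioning even when the CS and US are often paired.

```python
# Contingency as a difference in conditional probabilities (illustrative sketch;
# the probabilities are the invented figures from the bell-and-shock example).

def contingency(p_us_given_cs, p_us_given_no_cs):
    """Positive -> excitatory conditioning (the CS predicts the US);
    negative -> inhibitory conditioning (the CS predicts the absence of the US);
    near zero -> no conditioning, even if the CS and US are often paired."""
    return p_us_given_cs - p_us_given_no_cs

print(round(contingency(0.4, 0.4), 2))   # Group A: shock equally likely with or without the bell -> 0.0, no CR
print(round(contingency(0.4, 0.1), 2))   # Group B: shock less likely without the bell -> 0.3, bell becomes fearful
print(round(contingency(0.1, 0.4), 2))   # CS predicts the absence of the US -> -0.3, a safety signal
```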
Acquisition of the CR: Sequence of CS-US Presentation
The timing of the CS and the US is particularly important in how the CR is formed. Learning theorists talk about four types of sequence:
1. Delayed conditioning — the CS comes on first and overlaps with the onset of the US.
2. Trace conditioning — there is a gap between the CS and the US.
3. Simultaneous conditioning — the CS and US come on at the same time and go off at the same time.
4. Backward conditioning — the US comes before the CS.
The CS should function as a signal that the US is about to occur (informing about timing). Such a signal is most effective when it:
- Comes before the US, not after it ("backward") or at the same time ("simultaneous").
- Is followed closely in time by the US; with a long delay, learning is less likely.
- Provides new information about the US; other stimuli may create "blocking".

Can the animal learn about the CS in these four situations — does the CS predict the US?
- Delayed conditioning: yes, the CS predicts the US. Large amounts of conditioning are observed; this is the best type of learning.
- Trace conditioning: yes, as long as the gap isn't too large; memory traces predict the US. Some conditioning is observed.
- Simultaneous conditioning: the CS does not predict the US very well — why try to predict the US based on the CS when the US is there itself? Very little conditioning.
- Backward conditioning: no predictability; the CS cannot predict anything about the US. Almost no conditioning at all. Inhibitory conditioning can occur, where the "learner" recognises that the CS means the US is over and won't be coming again — though it can't predict when.

Other factors that influence the CR
- Strength of the US: the larger the US value, the greater the conditioning. Some USs are extremely effective and produce rapid conditioning, e.g. conditioned taste aversion.
- Number of CS-US pairings: the more often the two stimuli are paired, the greater the conditioning — up to a point. At some point a response ceiling is reached (the asymptote). For Pavlov, the key variable in associative learning was the number of times the CS was paired with the US, because the CS then became a more reliable signal that the US was going to occur.

Opponent-Process Theory of Emotion
When you experience one emotion, the other is temporarily inhibited. With repeated stimulation, the initial emotion becomes weaker and the opposing emotion intensifies. Emotional after-reaction: an emotional stimulus creates an initial response that is followed by adaptation, then an opposite response. With repeated exposure to the stimulus, the pattern changes:
- The primary affective response habituates (a-process).
- The after-reaction strengthens (b-process).

Common characteristics of emotional reactions
1. Emotional reactions are biphasic: a primary reaction is followed by an opposite after-reaction.
2. The primary reaction becomes weaker with repeated stimulations.
3. The after-reaction is strengthened.
The Opponent-Process Theory is a homeostatic theory: it assumes that the neurophysiological mechanisms involved in emotional behaviour serve to maintain emotional stability.

Solomon & Corbit (1974) examined the fear and relief of skydivers before and after their jumps. Beginners experience extreme fear as they jump, which is replaced by great relief when they land. With repeated jumps, the fear decreases and the post-jump pleasure increases. This process may explain a variety of thrill-seeking behaviours.
- Stage A (fear) decreases with more jumps.
- Stage B (relief/thrill) increases with more jumps.
Solomon and Corbit (1978) give a number of examples of hedonic-affective phenomena exhibiting opponent processes at work, including skydiving.
Solomon (1980): an event that elicits a strong emotional response produces an opposite response when that event is withdrawn. Example: when a dog is presented with electric shock, its heart rate increases to a peak, decreases slightly, then stabilises near its normal level; when the shock is turned off, its heart rate plunges below the normal level.

Opponent-Process Theory: emotional events elicit two competing responses.
1. The a-process is directly elicited by the stimulus.
2. The b-process is a compensatory response, elicited to counteract the a-process and maintain homeostasis.

Solomon & Corbit's (1978) Standard Pattern of Affective Dynamics
- A-process (initial reaction; plotted on the positive side of the graph): regardless of whether you find the experience pleasant or not, the onset of the stimulus causes a sudden emotional reaction which quickly reaches its peak. It lasts as long as the stimulus is present, then ends quickly.
- B-process (after-reaction): the offset of the stimulus causes an emotional after-reaction that is in some sense the opposite of the initial reaction. It is more sluggish in its onset and decay than the initial reaction.
The a-process disturbs homeostasis; the b-process is the compensatory response that returns the system to baseline or set point.
- The CR is opposite to the UR (a heterogeneous CR).
- Environmental cues become CSs; the CR is the b-process, which starts earlier, in anticipation.

Characteristics of a-processes and b-processes
1. The a-process is directly related to the presentation of the emotional stimulus: when the stimulus is removed, the a-process ceases immediately.
2. The b-process is slow to increase and slow to decrease; it begins after the a-process. Once the stimulus is removed, the b-process declines slowly (whereas the a-process ceases immediately).
3. With repeated presentation of the emotional event, the b-process increases in strength and duration. With repeated exposure to the emotional stimulus, the effects of the a-process become less extreme (e.g. less heart-rate increase), because the b-process becomes more extreme (e.g. greater and longer heart-rate deceleration).

Opponent-process theory can account for a number of emotional phenomena in humans, for example:

Drug addiction
- Physiological and psychological reactions to the drug relate directly to the a- and b-processes.
- Drug effect = net effect of the a-process minus the b-process.
- Repeated experience with drugs results in less of a 'high' (a-process), but withdrawal symptoms are stronger and last longer (b-process).
- Withdrawal is so unpleasant that the person redoses with the drug.
- Addiction: the person takes the drug to avoid the symptoms of withdrawal.

Thrill-seeking behaviours
- Stage A: euphoric "rush".
- Stage B: decrease in euphoria, coming down from a high.
- Stage A': after repeated exposures, A' becomes normal — there is no longer a rush; the drug is needed just for normalcy.
- Stage B': more physiologically disturbing and longer-lasting; "abstinence agony".

Addiction
- The B' state lasts a long time.
- The B' state is intensely aversive.
- Eliciting state A or A' is effective in causing immediate removal of state B or B'.
- The user learns to take the drug, which elicits states A and A', in order to get rid of state B or B'.

[Figures: the manifest temporal dynamics generated by the opponent-process system during the first few stimulations (five features of the affective response labelled) and after many repeated stimulations (major features of the modified pattern labelled); changes in the standard pattern of affective dynamics — initial drug effect vs. after addiction.]

Withdrawal symptoms seem to be an opponent process elicited by a CS associated with the drug (S*). The CR is the b-process, and it offsets the body's response to the drug itself. The opponent process appears to be learned, not innate. It is problematic for the learning explanation that the b-process increases with massed exposure.
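As a toy illustration of these dynamics (my own sketch, not Solomon and Corbit's model), the code below treats the felt emotion as the a-process minus the b-process. With "after many exposures" parameters, the b-process is stronger and slower to decay, so the net "high" shrinks while the after-reaction deepens and lasts longer. All values are invented.

```python
# Toy simulation of opponent-process dynamics (invented values, not the lecture's model).
def net_affect(a_peak, b_peak, b_decay_steps, stimulus_steps=10):
    """Felt response over time for one presentation of an emotional stimulus."""
    felt = []
    for t in range(stimulus_steps + b_decay_steps):
        a = a_peak if t < stimulus_steps else 0.0             # a-process stops with the stimulus
        b = b_peak * min(1.0, (t + 1) / 5)                     # b-process is sluggish to rise...
        if t >= stimulus_steps:                                 # ...and slow to decay afterwards
            b *= max(0.0, 1 - (t - stimulus_steps) / b_decay_steps)
        felt.append(a - b)
    return felt

first_exposure = net_affect(a_peak=10, b_peak=3, b_decay_steps=5)    # strong high, mild after-reaction
after_many = net_affect(a_peak=10, b_peak=8, b_decay_steps=20)       # weaker high, long withdrawal

print(max(first_exposure), min(first_exposure))   # high peak, shallow negative after-phase
print(max(after_many), min(after_many))           # lower peak, deeper and longer negative phase
```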
Compensatory-Response Model
Siegel, Hinson, Krank & McCully (1982): rats were injected with heroin every second day for 30 days, and with a dextrose (sugar) solution on alternate days. Injections were administered either in the home room or in a different room: half received heroin in the home room and dextrose in the other room; the other half received the opposite arrangement. The heroin dose was increased over the course of the injections. A third group of rats (controls) received dextrose only, in both rooms.
- Test: a double dose of heroin was given to all animals. Half of the experimental group were tested in the room where they normally received heroin, and half in the other room; the control group also got the double dose.
- DV = mortality.
Context cues in the room where the drug was normally received offset its effects. When the large heroin dose was administered in a new context, there was no compensatory response, and mortality was higher.
In opponent-process terms: the a-process is the direct effect of the drug, while the b-process is conditioned to the contextual cues (the room).

Context and Drug Tolerance
40% of US soldiers tried heroin in Vietnam; 20% were addicted and went to rehab on return to the USA. After rehab, only 5% relapsed (95% were rehabilitated). Compare this with 'home-grown' addicts, whose relapse rate was around 90%. Soldiers spent all day in a particular environment: they were inundated with the stress of war and built friendships with fellow soldiers who were heroin users. The end result was that soldiers were surrounded by an environment with multiple stimuli driving them toward heroin use. Once each soldier returned to the United States, however, they found themselves in a completely different environment, devoid of the stimuli that had triggered their heroin use in the first place. Without the stress, the fellow heroin users and the environmental triggers, many soldiers found it easier to quit.

Drug tolerance
- Repeated use of the drug in a specific context —> the b-process becomes stronger —> reduced net effect of the drug —> an increased quantity of the drug is needed for the same effect.
- Repeated experience with drugs results in less of a 'high' (a-process).
Drug withdrawal
- With repeated exposure to the drug in a specific context, the b-process increases in strength and duration.
- The a-process ceases immediately, but the b-process declines slowly.
- The negative effects of the b-process become extreme —> withdrawal.

Punishment
Positive Punishment: the presentation of an aversive stimulus after a behaviour reduces the likelihood of the behaviour occurring in the future.
Negative Punishment: the removal of a pleasant stimulus after a behaviour reduces the likelihood of the behaviour occurring in the future (e.g. speeding —> losing your licence).
Positive Reinforcement: the presentation of a pleasant stimulus after a behaviour makes the behaviour more likely to occur in the future.
Negative Reinforcement: the removal of an aversive stimulus after a behaviour makes the behaviour more likely to occur in the future.

Discriminative Stimuli
In classical conditioning, stimuli elicit autonomic responses (i.e. involuntary reflexes). In operant conditioning, discriminative stimuli inform us as to when we can emit a voluntary response.
Discriminative stimulus: a stimulus in whose presence a response will be followed by reward or punishment. It can be a particular situation or thing in the environment. The behaviour may be produced in response to a similar stimulus (stimulus generalisation), unless that stimulus does not produce the same reward (stimulus discrimination).

Acquiring Complex Behaviours: Shaping
Complex behaviours, such as bar-pressing, are unlikely to occur spontaneously, so they are hard to reinforce.
Solution = Shaping: a procedure in which reinforcement is delivered for successive approximations of the desired response.
- Training a dog to fetch the paper.
- Teaching a child to tie shoelaces.
In 1951 Skinner published a paper in Scientific American in which he claimed it was easy to train animals. The journalist Joseph Roddy, writing for Look in 1952, called his bluff and arranged to meet Skinner with a dog, asking Skinner to teach the dog a trick of Roddy's choosing. In the span of 20 minutes, Skinner was able to use reinforcement of successive approximations to shape the dog Agnes's behaviour. The result was a pretty good trick: Agnes would wander in, stand on her hind legs, and jump on command.

Variables that Affect Operant Conditioning
Reinforcer Magnitude
- The larger the reward, the faster the acquisition of learning.
- The quality of the reinforcer influences behaviour.
- N.B. The reward has to be of a certain value in order for the instrumental response to be performed (after acquisition).
- Crespi: the larger the reward, the faster rats run down an alley.
- The likelihood and intensity of a response depend on the size of the reward.
- Reward size also affects human learning: children learn faster when given small prizes instead of tokens, and adults show higher achievement when paid more money.
- Rats prefer one cube of food broken into pieces to a single whole cube, as it appears to be a greater amount.
Delay of Reward
- The greater the delay, the weaker the learning.
Frequency of Reinforcement
- A new response must always be reinforced; continuous reinforcement ensures the desired response occurs each time. However, problems may arise: habituation to the reinforcer (the reinforcement loses its 'reinforcing qualities') or satiation (the organism becomes sated with the reinforcer).

Reinforcement Contingencies: Timing
Intermittent Reinforcement: periodic administration of the reinforcement.
Partial (intermittent) reinforcement maintains behaviours with fewer reinforcement trials following initial learning. Reinforcing a response only part of the time results in slower acquisition and greater resistance to extinction.

Schedules of Reinforcement
Ratio schedules: reinforcement depends on the number of responses made.
- Fixed Ratio (FR): reinforces a response only after a specified number of responses. The faster you respond, the more rewards you get; different ratios can be used to achieve a high rate of responses.
- Variable Ratio (VR): reinforces a response after an unpredictable number of responses. It uses average ratios and is very hard to extinguish because of its unpredictability.
Interval schedules: based on the amount of time between reinforcements; the first response following the minimum time is reinforced.
- Fixed Interval (FI): reinforces a response only after a specified time has elapsed. Responses occur more frequently as the anticipated time for reward draws near.
- Variable Interval (VI): reinforces a response at unpredictable time intervals. Produces slow, steady responding.
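As a compact way to compare the four schedules (a hypothetical sketch; the function names and parameter values are mine, not the lecture's), each function below decides whether the current response earns a reinforcer:

```python
import random

# Illustrative comparison of reinforcement schedules (invented names and values).

def fixed_ratio(response_count, n=10):
    # Every n-th response is reinforced: high response rates, with a brief pause after each reward.
    return response_count % n == 0

def variable_ratio(mean_n=10):
    # Reinforced after an unpredictable number of responses (every mean_n on average):
    # produces high, steady rates and is very hard to extinguish.
    return random.random() < 1.0 / mean_n

def fixed_interval(seconds_since_reward, interval=60.0):
    # Only the first response after the fixed interval is reinforced: responding speeds
    # up as the expected reward time approaches.
    return seconds_since_reward >= interval

def variable_interval(seconds_since_reward, scheduled_wait):
    # Like FI, but the required wait is unpredictable (e.g. drawn in advance around a
    # mean with random.expovariate): produces slow, steady responding.
    return seconds_since_reward >= scheduled_wait
```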
Motivated Behaviours: Not All Behaviours Are About Food
Primary reinforcers: reinforcers such as food, water and sex that have an innate basis because of their biological value to the organism.
Secondary reinforcers: stimuli, such as money or grades, that acquire their reinforcing power through a learned association with a primary reinforcer; also called conditioned reinforcers. The basic procedure for establishing a secondary reinforcer is classical conditioning.
- Skinner used the flash of a strobe light as a conditioned reinforcer to train Agnes: flash of light —> cube of beef; jump up the wall —> flash the light.

The Premack Principle
Essentially, using a desired or high-frequency behaviour to reinforce a less desirable or lower-frequency behaviour: a more-preferred activity can be used to reinforce a less-preferred activity.

Issues with Punishment
Punishment does not usually result in long-term behavioural change; often the effects are temporary.
- Punishment does not necessarily promote better behaviour. For example, punishing a child for fighting with a sibling does not teach the child to cooperate with that sibling.
- Punishments typically lead to escape behaviour.
- Learners may learn to fear the administrator rather than the association between their behaviour and the punishment.
- Punishment may not undo existing rewards for the behaviour unless it is delivered every time.
- Punitive aggression may lead to modelling of aggression.

Learned Helplessness
Learned helplessness arises when there is no perceived relationship between the individual's behaviour and the punishment. If the punishment is very aversive, it can lead to PTSD. Overmier and Seligman (1967) found that dogs exposed to inescapable and unavoidable electric shocks in one situation later failed to learn to escape shock in a different situation where escape was possible. Shortly thereafter, Seligman and Maier (1967) demonstrated that this effect was caused by the uncontrollability of the original shocks.

Applications of Operant Conditioning: Behavioural Therapy
There is a wide variety of everyday behaviour problems, including obesity, smoking, alcoholism, social anxiety, depression, delinquency and aggression, which behavioural therapy can attempt to remedy through:
- Token economies.
- Remedial education.
- Therapy for autism.
- Training dogs.
- Biofeedback.
In biofeedback training:
1. Internal bodily processes (like blood pressure or muscle tension) are electrically recorded.
2. The information is amplified and reported back to the patient through headphones, signal lights and other means.
3. This information helps the person learn to control bodily processes not normally under voluntary control.
Biofeedback is most useful for promoting relaxation, which can help relieve a number of conditions related to stress.

Observational Learning
Observational learning (also called vicarious conditioning or behavioural contagion) is learning by watching others ("models"); it is how we acquire new information by being exposed to one another in a common environment. Observational learning is learning that occurs as a result of observing the experience of others. We copy when asocial learning is costly (dangerous or uncertain situations) and when we can't afford to learn from our own mistakes as in operant conditioning. We copy successful individuals when our established behaviour is unproductive. Besides true imitation, social learning results from one or more of a number of other social phenomena.
Social Facilitation: one individual's behaviour prompts similar behaviour in another — an increase in the frequency or intensity of a behaviour (already in the animal's repertoire) caused by the presence of others (of the same species) performing the same behaviour at that time.
Local or Stimulus Enhancement: the behaviour of one individual directs the attention of others to an object. After observing another individual engage in an activity, the observer does not necessarily attend to the actions of the "model".
True Imitation: imitation of a novel behaviour pattern in order to achieve a specific goal of particular interest, where the behaviour is either very unusual or quite improbable to have occurred by other means (spontaneously). Essentially, true imitation occurs when an animal imitates a behaviour that it has never performed before. It can be defined as duplicating a novel behaviour (or sequence of behaviours) in order to achieve a specific goal, without showing any understanding of the behaviour.

Observational Learning Processes
In order to learn by observation, four processes are involved:
1. Attention
2. Retention
3. Reproduction
4. Motivation (from reinforcement)
Albert Bandura proposes that we learn through imitation or modelling: "observational (or vicarious) learning". This explains the speed of learning in young children, as there is no need for trial and error.

Social Learning Theory
A child can learn without immediately performing the behaviour. This is achieved through the formation of a symbolic representation; they have to see someone do it (a model) — "vicarious reinforcement".
Key features of the MODEL:
- Appropriateness: aggressive male models are more likely to be imitated than aggressive female ones, due to cultural factors in the Western world.
- Similarity: children are more likely to imitate someone they perceive as similar to themselves — those of the same sex, age or ethnic group, etc.

Bandura, Ross & Ross (1961)
Aim: to test whether children who witnessed an aggressive display by an adult would imitate this aggression when given the opportunity.
Method: a laboratory experiment in controlled conditions. There were three conditions, with 24 children in each:
- Non-aggressive condition.
- Aggressive condition.
- Control condition.
They found that:
- Exposure to aggressive models leads to imitation of the aggression observed.
- Exposure to non-aggressive models generally has an inhibiting effect on aggressive behaviour.
- Same-sex imitation is greater than opposite-sex imitation for some behaviours (especially for boys).
- Boys imitate aggression more than girls and are generally more aggressive, except for verbal aggression.
Conclusions: aggression is a learned behaviour, not an in-built instinct. Learning can take place in the absence of any reinforcement, purely via observation and modelling. Modelling is a powerful and fast way of learning.

Bandura's further research
- Bandura, Ross & Ross (1963): children watched films with either an aggressive or a non-aggressive model. The filmed model produced even more aggression than the live model.
- When the model was rewarded or punished for aggression, children imitated the rewarded aggressive model the most.
- Bandura's research was the 'first generation' of scientific research on the effects of media violence on children.
