Chapter 10: The Learning Perspective

Summary: This document contains learning objectives and a detailed explanation of learning theories, including classical and instrumental conditioning. It also covers topics such as observational learning, media violence, and behavior modification.
Learning Objectives

10.1 Relate the process of classical conditioning to the concepts of discrimination, generalization, and emotional conditioning

10.2 Examine the processes of instrumental conditioning, schedules of reinforcement, and partial reinforcement

10.3 Analyze cognitive and social learning theories as a new generation of learning theories

10.4 Summarize the process of acquiring a behavior potential through observational learning

10.5 Examine three processes that influence the impact of media violence on real-life aggression

10.6 Evaluate behavioral assessment

10.7 Assess how behavior problems can be treated through conditioning procedures of behavior therapy or behavior modification

10.8 Recall two strengths and problems of the learning perspective on personality

Linda has a fondness for pastels. When asked why, she looks sort of blank and says she doesn't know, except she's felt that way at least since her eighth birthday, when she had the most wonderful surprise party, decorated all in pale pink, green, and violet.

I was watching my two-year-old the other day in the kitchen, when he popped open the childproof latch on one of the cabinet doors, just like that, and reached in for a pan. I never taught him how to do that. I wonder how he figured it out. Maybe he was watching me.

Why do people have the preferences they have? How do people acquire new ways to act? A common answer is that these aspects of behavior arise through learning. From this perspective, personality consists of all the tendencies you've learned over all the experiences of your life. If personality is the residue of learning, it's important to know how learning works. Disagreement remains about whether learning is one process that has several manifestations or whether several distinct processes are involved (e.g., Locurto, Terrace, & Gibbon, 1980; Rescorla, 1987; Staats, 1996).
For ease in presentation, we'll adopt the view that there are distinct types of learning that have their own rules. The first part of this chapter focuses on basic forms of learning called conditioning. Much of the work on these processes uses animals other than humans. Nonetheless, many people think these processes underlie the qualities we know as personality. As the study of learning progressed, learning began to appear more complex than it seemed at first. There grew a need for more elaborate theories, reflecting the fact that human knowledge can accumulate in great leaps, rather than just small increments. The elaborated theories also proposed a larger role for cognition in learning. The later part of this chapter discusses these types of learning.

10.1: Classical Conditioning

10.1 Relate the process of classical conditioning to the concepts of discrimination, generalization, and emotional conditioning

An early discovery about learning was that reactions could be acquired by associating one stimulus with another. This type of learning is called classical conditioning. It's sometimes also called Pavlovian conditioning, after the Russian scientist Ivan Pavlov, whose work opened the door to understanding it (e.g., Pavlov, 1927, 1955).

10.1.1: Basic Elements

Classical conditioning seems to require two things. First, the organism must already respond to some class of stimuli reflexively. That is, the response must occur automatically whenever the stimulus occurs. A reflex is an existing connection between a stimulus and a response, such that the first causes the second. For example, when you put something sour in your mouth (perhaps a tart candy), you start to salivate. When you touch a hot oven, you pull your hand away. These reactions happen reflexively for most people. Some reactions are innate; others were learned in the past. But in each case, a stimulus leads reliably to a particular response.
The second condition for classical conditioning is that the stimulus in the reflex must become associated in time and place with another stimulus. The second stimulus is usually (though not always) neutral at first. That is, by itself it causes no particular response beyond being noticed. In principle, there are no special requirements for this stimulus. It can be pretty much anything---a color, a sound, an object, a person.

People often describe classical conditioning in stages (see Figure 10.1). The first stage is the situation before conditioning. At this point, only the reflex exists---a stimulus causing a response. The stimulus is termed the unconditioned or unconditional stimulus (US), and the response it creates is called the unconditioned or unconditional response (UR). The word unconditional here means no special condition is required for the response to occur. It's automatic when the stimulus occurs (see Figure 10.1, A).

Figure 10.1 The various stages of a typical classical conditioning procedure (time runs left to right in each panel): (A) There is a pre-existing reflexive connection between a stimulus (US) and a response (UR). (B) A neutral stimulus (CS) is then paired repeatedly in time and space with the US. (C) The result is the development of a new response, termed a conditioned response (CR). (D) Once conditioning has occurred, presenting the CS by itself will now lead to the CR.

The diagram shows that stage A of the procedure has an unconditional stimulus (US) producing an unconditional response (UR). Stage B shows a conditional stimulus (CS) paired with the unconditional stimulus; the CS starts slightly before the US. Stage C shows that, over time, the conditional stimulus starts producing a conditioned response in the presence of the unconditional stimulus. Stage D shows that the conditional stimulus is able to produce the conditioned response even in the absence of the unconditional stimulus.

The second stage is conditioning.
In this stage, the neutral stimulus occurs along with, or slightly before, the US (see Figure 10.1, B). The neutral stimulus is now termed a conditioned or conditional stimulus (CS). Here are two ways to keep track of what that means. First, this is the stimulus that's becoming conditioned. Second, a response occurs in its presence only under a specific condition: that the US is there, as well. When the US comes, the UR follows automatically, reflexively (and remember that it does so whenever the US is presented, whether something else is there or not). When the US and the CS are paired frequently, something gradually starts to change (see Figure 10.1, C). The CS starts to acquire the ability to produce a response of its own. This response is termed the conditioned response (CR). The CR is often very similar to the UR. Indeed, in some cases, they look identical (see Table 10.1, rows A and D), except that the CR is less intense. In other cases, the two can be distinguished. Even so, there is a key similarity: If the UR has an unpleasant quality, so will the CR (see Table 10.1, row B). If the UR has a pleasant quality, so will the CR (see Table 10.1, rows C and D).

Table 10.1 Illustrations of the elements of classical conditioning in two common research procedures (A and B), in one common childhood experience (C), and in one common adult experience (D). (Note that the elements are arranged here in terms of stimulus and the associated response, not in time sequence.)

How does any of this apply to you? Suppose you've started squandering your evenings at a restaurant that specializes in Italian food and Sicilian folk music. One night while you're there, you meet a person (US) who induces in you an astonishingly high degree of sexual arousal (UR).
As you bask in candlelight, surrounded by crimson wallpaper and the soft strains of a Sicilian love song (CSs), you may be acquiring a conditioned sexual response (CR) to these previously neutral features of the setting. Candlelight may become a turn-on, and the song you're hearing may gain a special place in your heart.

If you know that a US has occurred repeatedly along with a neutral stimulus, how do you know whether conditioning has taken place? To find out, present the CS by itself---without the US (see Figure 10.1, D). If the CS (alone) gets a reaction, conditioning has occurred. If there's no reaction, there's been no conditioning. The more frequently the CS is paired with the US, the more likely conditioning will occur. If a US is very strong, however---causing a very intense UR---conditioning may occur with only one pairing. For example, cancer patients undergoing chemotherapy often experience extreme nausea from the medication and can develop very strong CRs to surrounding stimuli after only one exposure.

Once conditioning has taken place, the CS--CR combination acts just like any other reflex. That is, once it's there, this combination can act as a reflex for another instance of conditioning. Returning to our example, once Sicilian music has been conditioned to induce sexual arousal, Sicilian music can be used to condition that arousal to other things, such as a particular photograph in the place where you listen to Sicilian songs. This process is termed higher-order conditioning.

10.1.2: Discrimination, Generalization, and Extinction in Classical Conditioning

Classical conditioning provides a way for new responses to become attached to CSs (though see Box 10.1 for questions about this). Yet the CS almost never occurs later in precisely the same form as during conditioning. On the other hand, you will run across many stimuli later that are somewhat similar to the CS. What happens then?

Box 10.1 What's Going On in Classical Conditioning?
Classical conditioning has been part of psychology courses for decades. In most accounts, it's presented as a process that was well mapped out early in the development of learning theory and to which little new has been added since then. Classical conditioning is usually portrayed as a low-level process in which a response gets spread from one stimulus to another because they occur close in time.

But Robert Rescorla (1988) has argued that's not the way it is. He says that organisms use their experiences of relations between parts of the world to represent reality (see also Mowrer & Klein, 2001). In his view, association in time and place isn't what makes conditioning occur. Rather, it's the information one stimulus gives about the other. To Rescorla, learning is a process by which the organism's representation of the world is brought into line with the actual state of the world. Organisms learn only when they're "surprised" by something that happens to them. As a result, two stimuli experienced together sometimes don't become associated.

Consider two animals. One has had a series of trials in which a light (as a CS) was paired with a shock (as a US). The other hasn't had this experience. Both animals then get a series of trials in which both the light and a tone (as two CSs) are paired with the shock. The second animal acquires a CR to the tone, but the first one doesn't. Apparently, the first one's earlier experience with the light has made the tone redundant. Because the light already signals that the US is coming, there's no need to condition to the tone, and it doesn't happen.

In the same way, cancer patients undergoing chemotherapy can be induced to form conditioned aversions to specific unusual foods by giving those foods before chemotherapy (Bernstein, 1985). Doing this can make that specific food a "scapegoat" and prevent conditioning of aversions to other foods, which otherwise is very common.
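The two-animal blocking pattern described above has a well-known formal counterpart in the Rescorla-Wagner model, which is not presented in this chapter. The sketch below is illustrative only: the learning rate (alpha), the US asymptote (lambda_), and the trial counts are assumptions chosen for clarity, not values from any study.

```python
# Minimal Rescorla-Wagner sketch of the two-animal blocking example.
# alpha (learning rate) and lambda_ (US asymptote) are illustrative assumptions.

def rw_trial(strengths, present, lambda_=1.0, alpha=0.3):
    """One conditioning trial: update the strengths of all CSs present."""
    prediction = sum(strengths[cs] for cs in present)
    error = lambda_ - prediction  # the "surprise" Rescorla emphasizes
    for cs in present:
        strengths[cs] += alpha * error

# Animal 1: light alone paired with shock, then light + tone together.
a1 = {"light": 0.0, "tone": 0.0}
for _ in range(30):
    rw_trial(a1, ["light"])
for _ in range(30):
    rw_trial(a1, ["light", "tone"])

# Animal 2: only the compound (light + tone) trials.
a2 = {"light": 0.0, "tone": 0.0}
for _ in range(30):
    rw_trial(a2, ["light", "tone"])

# For Animal 1, the light already predicts the shock, so almost no error
# remains to drive conditioning to the tone; for Animal 2, both CSs share
# the available associative strength.
```

Learning here is driven entirely by the prediction error, which captures Rescorla's point: when a stimulus adds no new information about the US, conditioning to it does not occur, even though it is paired with the US on every trial.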
Rescorla (1988) has also challenged other aspects of the traditional view. He argues against the idea that classical conditioning is a slow process requiring many pairings. He says learning commonly occurs in five to six trials. He says that classical conditioning "is not a stupid process by which the organism willy-nilly forms associations between any two stimuli that happen to co-occur. Rather, the organism is better seen as an information seeker using logical and perceptual relations among events, along with its own preconceptions, to form a sophisticated representation of the world" (p. 154).

The position taken by Rescorla (and others) is clearly different from that expressed here in the body of the chapter: that classical conditioning reflects learning of an association between stimuli. The views these researchers have expressed also foreshadow a broad issue that's prominent in a later part of this chapter: the role of cognition in learning.

Suppose your experiences in the Sicilian restaurant have led you to associate candlelight, crimson wallpaper, and Italian food (as CSs) with sexual arousal (as a CR). What would happen if you walked into a room with muted lamplight, burgundy-painted walls, and Spanish food? These aren't quite the stimuli that got linked to sexual arousal, but they're similar. Here a process called generalization occurs. Generalization is responding in a similar way to similar-but-not-identical stimuli. In this setting, you'd probably start to feel the glow of arousal, although probably not as much as in the original room. Your reaction would fall off even more if the new room differed even more from the first room.

One purpose of a business lunch is to associate your company and its products (as CSs) with positive feelings produced by a good meal in a nice restaurant.

Why would it fall off more? The answer lies in a concept called discrimination. Discrimination means responding differently to different stimuli.
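The idea that a generalized response weakens as stimuli grow less similar to the original CS can be sketched numerically. Everything below is an illustrative assumption: the exponential fall-off, the decay rate, and the "distance" values are invented for this example, not taken from the chapter.

```python
import math

# Illustrative generalization gradient: response strength to a new stimulus
# falls off with its dissimilarity ("distance") from the original CS.
# The exponential form, decay rate, and distances are assumptions.

def generalized_response(distance, cr_strength=1.0, decay=1.5):
    """Strength of the CR evoked by a stimulus at a given distance from the CS."""
    return cr_strength * math.exp(-decay * distance)

sicilian_room = generalized_response(0.0)    # the original CS: full CR
similar_room = generalized_response(0.5)     # muted lamps, burgundy walls
different_room = generalized_response(3.0)   # a very dissimilar setting

assert sicilian_room > similar_room > different_room
```

With a steep enough fall-off, the response to a very dissimilar setting is effectively zero, which is where generalization shades into discrimination.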
If you walked into a room with fluorescent lights and blue walls, the mellow glow associated with the Sicilian restaurant would surely not emerge. You would discriminate between the two sets of stimuli. Discrimination and generalization are complementary: Generalization gives way to discrimination as the stimuli become more different from the initial CS.

Do conditioned responses go away? Discussions of conditioning don't use words such as forgetting. CRs do seem to weaken, however, a process called extinction. This occurs when a CS appears repeatedly without the US (Pavlov, 1927). At first, the CS leads reliably to the CR (see Figure 10.2). But gradually, over repeated presentations, the CR grows weaker. The CR doesn't actually disappear, however. Even when a response stops in a session, there's a "spontaneous recovery" the next day (Wagner, Siegel, Thomas, & Ellison, 1964). In fact, it is now believed that classical conditioning leaves a permanent record in the nervous system and that its effects can be muted but not erased (Bouton, 1994, 2000). It is also now often said that extinction is really a process of creating new conditioning of "no-response" responses to the CS (Miller & Laborda, 2011).

Figure 10.2 Extinction and spontaneous recovery in classical conditioning. When a CS appears over and over without the US, the CR becomes progressively weaker and eventually disappears (or nearly does). If the CS is repeated again after the passage of time, the CR returns at a lower level than it was initially but at a higher level than it was when the CS was last presented. Over repeated occasions, the spontaneous recovery also diminishes.

The diagram shows the conditioned response produced by the conditional stimulus, without the unconditional stimulus, over three days, multiple times a day. On day 1, the conditional stimulus is presented 8 times; each time, the intensity of the conditioned response is less than the previous one.
On day 2, the conditional stimulus is presented 5 times. The intensity of the conditioned response on the first presentation of day 2 is much higher than on the last presentation of the previous day. As on day 1, the intensity declines with each presentation. On day 3, the conditional stimulus is presented only 3 times. The intensity on the first presentation of day 3 is slightly higher than on the last presentation of the previous day, and again declines with each presentation.

10.1.3: Emotional Conditioning

As you may have realized already, a lot of the classical conditioning in humans involves responses with emotional qualities. That is, many of the stimuli that most clearly cause reflexive reactions are those that elicit positive feelings (hope, delight, excitement) or bad feelings (fear, anger, pain). The term emotional conditioning is sometimes used to refer to classical conditioning in which the CRs are emotional reactions.

An interesting aspect of emotional conditioning is emotional reactions to properties such as colors. Andrew Elliot and his colleagues (e.g., Elliot & Maier, 2007) argue that the color red evokes negative emotions in academic contexts, because it's been related to poor grades. (Teachers tend to use red pens to mark errors in students' work.) Their studies found that exposing test takers to red (compared to other colors) caused performance to drop (Elliot, Maier, Moller, Friedman, & Meinhardt, 2007; Lichtenfeld, Maier, Elliot, & Pekrun, 2009). This may have occurred because the color red induced avoidance motivation (Elliot, Maier, Binser, Friedman, & Pekrun, 2009), but emotional conditioning was also involved (Moller, Elliot, & Maier, 2009).
Other research shows that the color red can influence the amount of romantic behavior men show toward women (Kayser, Elliot, & Feltman, 2010), as well as the amount of physical effort that a person puts into an activity (Elliot & Aarts, 2011).

Conditioning of emotional responses is important to the learning view on personality. It's argued that people's likes and dislikes---all the preferences that help define personality---develop through this process (De Houwer, Thomas, & Baeyens, 2001; Walther, Weil, & Düsing, 2011). Linking a neutral stimulus to a pleasant event creates a "like." Linking a stimulus to an upsetting event creates a "dislike." In fact, just hearing someone describe a good or bad trait in someone else can link that trait in your mind to the person who's doing the describing (Skowronski, Carlston, Mae, & Crawford, 1998). And remember that the principle of generalization applies to all kinds of stimuli, including other people (Verosky & Todorov, 2010).

We all experience different bits of the world and thus have different patterns of emotional arousal. Different people also experience the same event from the perspective of their unique "histories." As noted in Chapter 6, even children from the same family experience the family differently (Daniels & Plomin, 1985). As a result, people can wind up with remarkably different patterns of likes and dislikes (see Box 10.2). Thus, emotional conditioning can play a major role in creating the uniqueness of personality (Staats & Burns, 1982).

Box 10.2 Classical Conditioning and Attitudes

Where do attitudes come from? The answer from this chapter is that you develop attitudes through classical conditioning. A neutral stimulus (CS) starts to produce an emotional reaction (CR) after it's paired with another stimulus (US) that already creates an emotional reaction (UR). This approach says that people acquire emotional responses to attitude objects (classes of things, people, ideas, or events) that way.
If the attitude object is paired with an emotion-arousing stimulus, it comes to evoke the emotion itself. This response, then, is the basis for an attitude. A good deal of evidence fits this depiction. More than 75 years ago, Razran (1940) presented political slogans to people and had them rate how much they approved of each. Later, he presented the slogans again under one of three conditions: while the people were eating a free lunch, while they were inhaling noxious odors, or while they were sitting in a neutral setting. Then the people rated their approval of the slogans again. Slogans paired with a free lunch were now rated more positively. Slogans paired with unpleasant odors were now rated more negatively. Many other studies have found similar results (De Houwer et al., 2001).

Attitudes toward people can form the same way. Walther (2002) found that pairing photos of neutral persons with liked or disliked persons led to positive and negative attitudes, respectively, toward the neutral persons. There's also the potential for higher-order conditioning here. Negative attitudes formed by associating a neutral person with a disliked person can produce further conditioning from that person to another neutral person (Walther, 2002). And think about the fact that words such as good and bad are tied in most people's experiences with positive and negative events (Staats & Staats, 1957, 1958) and thus probably cause emotional responses themselves. People use such words all the time around others, creating many opportunities for higher-order conditioning.

A large number of studies show that classical conditioning can be involved in attitude formation. Events that arouse emotions are common in day-to-day life, providing opportunities for conditioning. For example, the "business lunch" is remarkably similar to Razran's experimental manipulation.
It seems reasonable that classical conditioning may underlie many of people's preferences for persons, events, things, places, and ideas. Given that preferences are important aspects of personality, conditioning also seems an important contributor to personality.

10.2: Instrumental Conditioning

10.2 Examine the processes of instrumental conditioning, schedules of reinforcement, and partial reinforcement

A second form of learning is called instrumental conditioning. (This term is often used interchangeably with operant conditioning, despite slight differences in meaning.) Instrumental conditioning differs in several ways from classical conditioning. For one, classical conditioning is passive. When a reflex occurs, conditioning doesn't require you to do anything---just to be there and be aware of other stimuli. In contrast, instrumental conditioning is active (Skinner, 1938). The events that define it begin with a behavior (even if the behavior is the act of remaining still).

10.2.1: The Law of Effect

Instrumental conditioning is a simple process, although its ramifications are widespread. It goes like this: If a behavior is followed by a better (more satisfying) state of affairs, the behavior is more likely to be done again later in a similar situation (see Figure 10.3, A). If a behavior is followed by a worse (less satisfying) state of affairs, the behavior is less likely to be done again later (see Figure 10.3, B).

Figure 10.3 Instrumental conditioning: (A) Behavior that is followed by a more satisfying state of affairs is more likely to be done again. (B) Behavior that is followed by a less satisfying state of affairs is less likely to be done again. (C) This principle accounts for the fact that (over time and experiences) some behaviors emerge from the many possible behaviors as habitual responses that occur in specific situations.
Part A of the chart shows that when a behavior is followed by a better state of affairs, the probability of doing the behavior again increases. Part B shows that when a behavior is followed by a worse state of affairs, the probability of doing the behavior again decreases. Part C shows behaviors A to F followed by better states of affairs, with behavior D emerging as the most probable.

This simple description---linking an action, an outcome, and a change in the likelihood of future action---is the law of effect deduced by E. L. Thorndike more than a century ago (Thorndike, 1898, 1905). It is simple but profound. It accounts for regularities in behavior. Any situation allows many potential acts (see Figure 10.3, C). Some acts come to occur with great regularity; others happen once and disappear, never to return; still others turn up occasionally---but only occasionally. Why? Because some have been followed by satisfying outcomes whereas others haven't. As outcomes are experienced after various behaviors, a habit hierarchy evolves (Miller & Dollard, 1941). The order of responses in the hierarchy derives from prior conditioning. Some responses are very likely (high on the hierarchy), because they've often been followed by more satisfying states of affairs. Others are less likely (lower on the hierarchy). The form of the hierarchy shifts over time, as outcome patterns shift.

10.2.2: Reinforcement and Punishment

Today, the term reinforcer replaces the phrase satisfying state of affairs. The term conveys that a reinforcer strengthens the tendency to do the act that preceded it. Reinforcers can reduce biological needs (food or water) or satisfy social desires (smiles and acceptance). Some get their reinforcing quality indirectly (money). Different kinds of reinforcers have different names. A primary reinforcer diminishes a biological need.
A secondary reinforcer has acquired reinforcing properties by association with a primary reinforcer (through classical conditioning) or by virtue of the fact that it can be used to get primary reinforcers (Wolfe, 1936; Zimmerman, 1957).

The term punisher refers to unpleasant outcomes. Punishers reduce the tendency to do the behavior that came before them, although there's been controversy about how effective they are (Rachman & Teasdale, 1969; Solomon, 1964; Thorndike, 1933). Punishment can also be primary or secondary. That is, some events are intrinsically aversive (e.g., pain). Others are aversive because of their associations with primary punishers.

Another distinction is also important but a little confusing. Reinforcement always implies making the preceding behavior more likely to occur again. But this can happen in two ways. The more obvious way is by receiving something good (food, gifts, money). Getting these things is termed positive reinforcement. "Positive" implies adding something good. When positive reinforcement occurs, the behavior that preceded it becomes more likely.

There's also a second kind of reinforcement, called negative reinforcement. Negative reinforcement occurs when something unpleasant is removed. For instance, when your roommate stops playing his annoying CD of "Polka Favorites" over and over, that might act as a negative reinforcer for you. Removing something unpleasant moves the state of affairs in a positive direction---from unpleasant to neutral. It thus is reinforcing and will cause the behavior that preceded it to become more likely to occur.

Punishment also comes in two forms. Most people think of punishment as adding pain, moving the state of affairs from neutral to negative. But sometimes punishment involves removing something good, changing from a positive to a neutral state of affairs (thus less satisfying).
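The four cases just described (adding or removing something pleasant or aversive) can be summarized compactly. The function name and string labels below are this sketch's own; the four cases and their effects on behavior are as the chapter describes them.

```python
# Illustrative summary of the chapter's reinforcement/punishment distinctions.
# Reinforcement makes the preceding behavior MORE likely; punishment makes
# it LESS likely. "Positive"/"negative" refer to adding/removing a stimulus.

def outcome_effect(operation, stimulus):
    """operation: 'add' or 'remove'; stimulus: 'pleasant' or 'aversive'."""
    if operation == "add":
        if stimulus == "pleasant":
            return "positive reinforcement: behavior more likely"
        return "punishment (adding something unpleasant): behavior less likely"
    if stimulus == "aversive":
        return "negative reinforcement: behavior more likely"
    return "punishment (removing something good): behavior less likely"

# Examples from the chapter:
assert "more likely" in outcome_effect("add", "pleasant")     # food, gifts, money
assert "more likely" in outcome_effect("remove", "aversive")  # annoying music stops
assert "less likely" in outcome_effect("add", "aversive")     # pain
assert "less likely" in outcome_effect("remove", "pleasant")  # losing a privilege
```

The asymmetry worth noticing is that "negative" never means "bad" here: negative reinforcement is still reinforcement, because removing something unpleasant moves the state of affairs in a more satisfying direction.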
This principle---punishing by withdrawing something good---underlies a tactic that's widely used to discourage unwanted behavior in children. It's called a time out, short for "time out from positive reinforcement" (Drabman & Spitalnik, 1973; Risley, 1968). A time out takes the child from whatever activity is going on to a place where there's nothing fun to do. Many find this technique appealing, because it seems more humane than punishments such as spanking. In principle, however, a time out creates a "less satisfying state of affairs" for the child and thus should have the same effect on behavior as any other punishment.

Time out is an effective way of discouraging unwanted behavior in children.

10.2.3: Discrimination, Generalization, and Extinction in Instrumental Conditioning

Several ideas introduced in the discussion of classical conditioning also apply to instrumental conditioning, with slight differences in connotation. For example, discrimination still means responding differently in the presence of different stimuli. In this case, however, the difference in response results from variations in prior reinforcement or punishment. Imagine that when a stimulus is present, a particular action is always followed by a reinforcer. When the stimulus is absent, the same action is never followed by a reinforcer. Gradually, the presence or absence of the stimulus comes to influence whether the behavior takes place. It becomes a discriminative stimulus: a stimulus that turns the behavior on and off. You use the stimulus to discriminate among situations and thus among responses. Behavior that's cued by discriminative stimuli is said to be "under stimulus control."

Earlier we said that a habit hierarchy (an ordering of the likelihood of doing various behaviors) can shift because of the ongoing flow of reinforcing (and nonreinforcing) events.
It shifts constantly for another reason, as well: Every change in situation means a change in cues (discriminative stimuli). The cues suggest what behaviors are reinforced in that situation. Thus, a change in cues rearranges the list of behavior probabilities. Changing cues can disrupt even very strong habits (Wood, Tam, & Witt, 2005).

The principle of generalization is also important here. As you enter new settings and see objects and people you've never seen before, you respond easily and automatically, because there are similarities between the new settings and previous discriminative stimuli. You generalize behaviors from the one to the other, and your actions flow smoothly forward. For example, you may never have seen a particular style of spoon before, but you won't hesitate to use it to eat the soup. You may never have driven a particular make of car before, but if that's what the rental agency gives you, you'll probably be able to handle it.

The principle of generalization gives conditioning theorists a way to talk about trait-like qualities. A person will behave consistently across time and circumstances if discriminative stimuli stay fairly similar across the times and circumstances. Because key stimulus qualities often do stay the same across settings (even if other qualities differ greatly), the person's action tendency also stays the same across the settings. The result is that, to an outside observer, the person appears to have a set of traits. In this view, however, behavioral consistency depends on similarities of environments (an idea that's not too different from the discussion of consistency late in Chapter 4).

Extinction in instrumental conditioning occurs when a behavior that once led to a reinforcer no longer does so. As the behavior is done over and over---with no reinforcer---its probability falls.
Eventually it's barely there at all (though just as in classical conditioning there's a tendency for spontaneous recovery, causing some to believe that it hasn't gone away; Bouton, 1994; Lansdale & Baguley, 2008; Rescorla, 1997, 1998). Thus, in extinction, behavioral tendencies fade.

10.2.4: Schedules of Reinforcement

In reading about instrumental conditioning, people often assume that reinforcement occurs every time the behavior occurs. But common sense and your own experience should tell you life's not like that. Sometimes reinforcements are frequent, but sometimes not. Variations in frequency and pattern are called schedules of reinforcement. One simple variation is between continuous and partial (or intermittent) reinforcement. In continuous reinforcement, the behavior is followed by a reinforcer every single time. In partial reinforcement, the behavior is followed by a reinforcer only some of the time.

Many personal superstitions are learned through a schedule of random partial reinforcement.

Continuous and partial reinforcement differ in two ways in their effects on behavior. The first is that new behaviors are acquired faster when reinforcement is continuous than when it's not. Eventually, even infrequent reinforcement results in high rates of the behavior, but it may take a while. The other effect is less intuitive, but more important. It's often called the partial reinforcement effect. It shows up when reinforcement stops (see Figure 10.4). Take away the reinforcer, and a behavior acquired by continuous reinforcement will go away quickly. A behavior built in by partial (less frequent) reinforcement remains longer---it's more resistant to extinction (Amsel, 1967; Humphreys, 1939).

Figure 10.4 Effect of partial reinforcement and continuous reinforcement on persistence. People first played on a slot machine that paid off 25%, 50%, 75%, or 100% of the time. Then they were allowed to continue playing for as long as they liked, but they never again won.
As can be seen, partial reinforcement leads to greater resistance to extinction. Those initially rewarded less than 100% of the time persist longer when all reward is removed. The lower the percentage of partial reinforcement, the greater the persistence. ![](media/image8.png) The vertical axis of the graph is labeled "Logarithm of Number of Lever Pulls Made before Quitting" and ranges from 1.7 to 2.0 in increments of 0.1, with an axis break below 1.7. The horizontal axis is labeled "Percentage Reinforced during Acquisition" and ranges from 0 to 100 in increments of 25. The graph shows the logarithm of the number of lever pulls made before quitting for the 25%, 50%, 75%, and 100% reinforcement conditions as 1.88, 1.82, below 1.7, and below 1.7, respectively. The values used in the description are approximate. 10.2.5: Reinforcement of Qualities of Behavior One final point about learning through instrumental conditioning: It's most intuitive to think that the reinforcer makes a particular act more likely in the future. However, there's evidence that what becomes more likely isn't always an act but rather some quality of action (Eisenberger & Selbst, 1994). For example, reinforcing effort in one setting can increase effortfulness in other settings (Mueller & Dweck, 1998). Reinforcing accuracy on one task increases accuracy on other tasks. Reinforcing speed on one task increases speed elsewhere. Reinforcing creativity yields more creativity (Eisenberger & Rhoades, 2001). Reinforcing focused thought results in more focused thinking elsewhere (Eisenberger, Armeli, & Pretz, 1998). Reinforcing variability produces greater variability in behavior (Neuringer, 2004). Indeed, reinforcement can influence the process of selective attention (Libera & Chelazzi, 2006, 2009). Thus, reinforcement can change not just particular behaviors but abstract qualities of behavior. This idea broadens considerably the ways in which reinforcement principles may act on human beings.
It suggests that reinforcers act at many levels of abstraction. In fact, many aspects of behavior at many different levels may be reinforced simultaneously when a person experiences a more satisfying state of affairs. This possibility creates a far more complex picture of change through conditioning than one might initially imagine. 10.3: Social and Cognitive Variations 10.3 Analyze cognitive and social learning theories as a new generation of learning theories The basic principles of conditioning are powerful tools for analyzing behavior. They account for large parts of human experience. They explain how attitudes and preferences can derive from emotional reactions, and they explain how behavior tendencies strengthen and fade as a result of good and bad outcomes. Powerful as these ideas are, however, many came to believe that they were insufficient to account for the learning exhibited by humans. Conditioning theories seem to ignore aspects of behavior that are obvious outside the lab. For example, people often learn by watching others, without experiencing outcomes. Moreover, people often decide whether to do something by thinking about what would happen if they did it. Existing theories didn't seem wrong, exactly, but they seemed incomplete. From these dissatisfactions (and the work they prompted) came what might be seen as another generation of learning theories. They emphasize mental events more than the earlier ones do. For this reason, they're often called cognitive learning theories. They also emphasize social aspects of learning. For this reason, they're often called social learning theories. One aspect of this second generation of theories was a set of elaborations on conditioning principles. 10.3.1: Social Reinforcement As learning theory evolved, some researchers began to think more carefully about human learning. This led, in part, to a different view of reinforcement.
Many came to believe that reinforcement in humans (beyond infancy, at least) has little or nothing to do with reduction of physical needs. Rather, people are most affected by social reinforcers: acceptance, smiles, hugs, praise, approval, interest, and attention from others (Bandura, 1978; Kanfer & Marston, 1963; Krach, Paulus, Bodden, & Kircher, 2010; Rotter, 1954, 1982). The idea that the important reinforcers for people are social is one of several ways in which these learning theories are social (Brokaw & McLemore, 1983; A. H. Buss, 1983; Turner, Foa, & Foa, 1971). A description of social reinforcement should also mention self-reinforcement. This term has two meanings. The first is the idea that people may give themselves reinforcers after doing something they've set out to do (Bandura, 1976; Goldiamond, 1976; Heiby, 1982). For example, you might give yourself a pizza for studying six straight hours, or you may get yourself a new set of headphones after a semester of good grades. The second meaning derives from the concept of social reinforcement. It's the idea that you react to your own behavior with approval or disapproval, much as someone else reacts to your behavior. In responding to your actions with approval, you reinforce yourself. In responding with disapproval, you punish yourself. This sort of internal self-reinforcement and self-punishment plays a role in social--cognitive learning theories of behavior and behavior change (Bandura, 1977a, 1986; Kanfer, 1977; Kanfer & Hagerman, 1981; Mischel, 1973, 1979). 10.3.2: Vicarious Emotional Arousal Another elaboration on conditioning comes from the fact that people can experience events vicariously---through someone else. Vicarious processes represent a second sense in which human learning is social. That is, vicarious processes involve two people: one to experience something directly, another to experience it indirectly.
One type of vicarious experience is vicarious emotional arousal, or empathy. This occurs when you observe someone feeling an intense emotion and experience the same feeling yourself (usually less intensely). Empathy isn't the same as sympathy, which is a feeling of concern for someone else's suffering (Gruen & Mendelsohn, 1986; Wispé, 1986). When you empathize, you feel the same feeling, good or bad, as the other person. Everyone has this experience, but people differ in how intensely they empathize (Eisenberg et al., 1994; Levenson & Ruef, 1992; Marangoni, Garcia, Ickes, & Teng, 1995). ![](media/image10.png) Empathy causes us to experience others' emotions. For example, others' grief elicits sadness from us, and their happiness elicits our joy. As you look at this picture, you are probably beginning to feel the same emotions that the people in the picture are experiencing. Examples of empathy are easy to point to. When something wonderful happens to a friend, putting her in ecstasy, you feel happy, as well. Being around someone who's frightened makes most people feel jumpy. Laughter is often contagious, even when you don't know what the other person is laughing at. There's even evidence that being around someone who's embarrassed can make you feel embarrassed too (Miller, 1987). The concept of empathy helps explain why particular emotional states tend to be shared in friendship networks, even electronic networks (Kramer, Guillory, & Hancock, 2014). Experiencing vicarious emotional arousal doesn't constitute learning, but it creates an opportunity for learning. Recall emotional conditioning, from earlier in the chapter. Feeling an emotion in the presence of a neutral stimulus can cause that stimulus to become capable of evoking a similar emotion (Olsson, Nearing, & Phelps, 2007). The emotion that starts this process can be caused by something you experience directly, but it can also arise vicariously.
Thus, vicarious emotional arousal creates a possibility for classical conditioning. Such an event is called vicarious classical conditioning. 10.3.3: Vicarious Reinforcement Another vicarious process may be even more important. This one, called vicarious reinforcement, is very simple: If you observe someone do something that's followed by reinforcement, you become more likely to do the same thing yourself (Kanfer & Marston, 1963; Liebert & Fernandez, 1970). If you see a person punished after doing something, you're less likely to do it. The reinforcer or punishment went to the other person, not to you. But your own behavior will be affected as though you'd received it yourself. How do vicarious reinforcement and punishment influence people? Presumably, seeing someone reinforced after a behavior leads you to infer that you'd get the same reinforcer if you acted the same way (Bandura, 1971). If someone else is punished, you conclude the same thing would happen to you if you acted that way (Bandura, 1973; Walters & Parke, 1964). (See also Box 10.3.) Box 10.3 Modeling and Delay of Gratification Social--cognitive learning theories emphasize that people's acts are determined by cognitions about potential outcomes of their behavior (Kirsch, 1985). This emphasis returns us to the concept of self-control, the idea that people sometimes restrain their own actions. People often face the choice of getting a desired outcome immediately or getting a better outcome later on. The latter choice---delay of gratification---isn't all that easy to make. Imagine that after saving for four months, you have enough money to go to an oceanside resort for two weeks. You know that if you saved for another 10 months, you could take the trip to Europe you've always wanted. One event is closer in time. The other is better, but getting it requires more self-control. Ten more months with no vacation is a long time. Many variables influence people's ability to delay. 
Especially relevant to this chapter is the role played by modeling (Mischel, 1974, 2014). Consider a study by Bandura and Mischel (1965) of fourth- and fifth-graders who (according to a pretest) preferred either immediate or delayed reward. Children of each preference were put into one of three experimental conditions. In one, the child saw an adult model make a series of choices between desirable items that had to be delayed and less desirable items that could be had immediately. The model consistently chose opposite to the child's preference. Children in the second condition read about the same model's choices. In the third condition, there was no modeling. All the children had a series of delay-of-gratification choices just afterward and again a month later. Seeing a model choose an immediate reward made delay-preferring children more likely to choose an immediate reward. Seeing a model choose a delayed reward made immediate-preferring children more likely to delay. These effects still held a month later. How do models exert this influence on self-control? One possibility is through vicarious reinforcement. In the Bandura and Mischel (1965) study, the model vocalized reasons for preferring one choice over the other. The model's statements implied that he felt reinforced by his choices (see also Bandura, Grusec, & Menlove, 1967; Mischel & Liebert, 1966; Parke, 1969). Thus, people obtain information from seeing how others react to experiences and use that information to guide their own actions. 10.3.4: What Really Is Reinforcement? Note that vicarious reinforcement (just described) seems to develop an expectancy---a mental model of links between actions and reinforcers. Such a mental model of a link from action to expected outcome is called an outcome expectancy (Bandura, 1977a). 
The idea that people hold expectancies and that expectancies influence action wasn't new when it was absorbed into social learning theory (e.g., Brunswik, 1951; Lewin, 1951b; Postman, 1951; Tolman, 1932). But an emphasis on expectancies became a cornerstone of this view of personality (Rotter, 1954; see also Bandura, 1977a, 1986; Kanfer, 1977; Mischel, 1973). In fact, this concept became important enough to raise questions about what direct reinforcement does. We said earlier in the chapter that reinforcers strengthen the tendencies to do the behaviors that preceded them. Yet Albert Bandura (1976, 1977a), a prominent social learning theorist, explicitly rejected this sense of the reinforcement concept, while continuing to use the term (see also Bolles, 1972; Brewer, 1974; Rotter, 1954). If reinforcers don't strengthen action tendencies, what do they do? Bandura said they do two things: First, by providing information about outcomes, reinforcers lead to expectancies about what actions are effective in what settings. In addition, reinforcers provide the potential for future motivational states through anticipation of their recurrence in the future. Many people would agree that these functions are important. But they clearly represent a very different view of what reinforcement is, compared to the view discussed earlier in the chapter. 10.3.5: Efficacy Expectancies Another variation on the theme of expectancies derives partly from clinical experience. Bandura (1977b) argued that people with problems generally know exactly what actions are needed to reach the outcomes they want. Just knowing what to do, however, isn't enough. You also have to be confident of being able to do the behavior. This confidence in having the ability to carry out a desired action is what Bandura termed efficacy expectancy, or self-efficacy.
To Bandura, when therapy works, it's because the therapy restores the person's sense of efficacy about being able to carry out actions that were troublesome before. Research on this idea began by focusing on changes in therapy, but the work quickly expanded to examine a wide range of other topics (Bandura, 1986, 1997, 2006). Here are some examples: Wood and Bandura (1989) found that self-efficacy influenced how well business students performed in a management task. Bauer and Bonanno (2001) found that efficacy perceptions predicted less grief among persons adapting to bereavement. Efficacy expectancies predict whether drug users stay clean during the year after treatment (Ilgen, McKellar, & Tiet, 2005). There's even evidence that acquiring a sense of efficacy can have a positive influence on immune function (Wiedenfeld et al., 1990). Beyond these direct associations, perceptions of efficacy may underlie the positive effects found for other variables. For example, efficacy perceptions may be a pathway by which social support gives people a sense of well-being (Major et al., 1990). There's also evidence that self-esteem and optimism operate through perceptions of efficacy (Major, Richards, Cooper, Cozzarelli, & Zubek, 1998). 10.3.6: Role of Awareness A final elaboration on conditioning principles comes from considering the role of awareness in conditioning. It's long been assumed that conditioning happens whether you're paying attention or not. There's reason to believe, though, that this assumption is wrong. Several old studies found that people show little or no classical conditioning from repeated pairings of stimuli unless they realize the stimuli are correlated (Chatterjee & Eriksen, 1962; Dawson & Furedy, 1976; Grings, 1973). Newer studies have found that people are conditioned only if they are aware of the US (Dawson, Rissling, Schell, & Wilcox, 2007) or at least its valence (Stahl, Unkelbach, & Corneille, 2009).
There's also evidence that people change their behavior after reinforcers only when they're aware of what's being reinforced (Dulany, 1968; Spielberger & DeNike, 1966). Moreover, sometimes just expecting an aversive event (as a US) can produce what look like conditioned responses to other stimuli (Bridger & Mandel, 1964; Spacapan & Cohen, 1983). After classical conditioning of a fear response, a statement that the painful US will no longer occur sometimes eliminates fear of the CS (Bandura, 1969; Grings, 1973). All of these findings suggest that conditioning is about learning rule-based regularities (recall Box 10.1). There is also a viewpoint that may take something of a middle ground on this issue. In this view, experiences are processed in two different ways in different parts of the nervous system. The result is learning that creates records that take two different forms (Daw, Niv, & Dayan, 2005). One mode of learning acquires what might be thought of as an "actuarial" record of experiences, a totaling of all the associations across all instances of experience. The other mode, in contrast, tries to develop a predictive model. Instead of just piling things up, it tries to generate expectancies. It might learn that a whole plan of action was useful (or useless) rather than just learning about pieces of the action. Presumably, the second mode is more advanced than the first one. Consistent with that, toddlers seem to operate only according to the first mode of learning (Thompson-Schill, Ramscar, & Chrysikou, 2009), whereas adults use both (Otto, Gershman, Markman, & Daw, 2013). It seems likely that awareness matters more in the second way of learning than it does in the first.
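This two-mode idea parallels what computational researchers call model-free versus model-based learning (the distinction drawn by Daw, Niv, & Dayan, 2005). As a rough illustration only, the two modes can be contrasted in a short Python sketch: one learner simply totals outcomes across all instances of experience, while the other builds a small predictive model of which outcome each action tends to produce and derives an expectancy from it. The class names and values here are hypothetical simplifications invented for the example, not a cognitive model from the research literature.

```python
# Toy contrast of the two learning modes described in the text.
# Mode 1 ("actuarial"): totals outcomes across all instances of experience.
# Mode 2 ("predictive"): learns which outcome each action leads to, then
# derives an expectancy by evaluating those predicted outcomes.
# All names and numbers are hypothetical; this is a sketch, not a theory.

from collections import defaultdict

class ActuarialLearner:
    """Keeps a running total of outcomes per action."""
    def __init__(self):
        self.total = defaultdict(float)
        self.count = defaultdict(int)

    def learn(self, action, reward):
        self.total[action] += reward
        self.count[action] += 1

    def value(self, action):
        n = self.count[action]
        return self.total[action] / n if n else 0.0

class PredictiveLearner:
    """Learns action -> outcome contingencies, then computes an
    expectancy as the probability-weighted value of predicted outcomes."""
    def __init__(self, outcome_values):
        self.outcome_values = outcome_values        # how good each outcome is
        self.outcome_counts = defaultdict(lambda: defaultdict(int))

    def learn(self, action, outcome):
        self.outcome_counts[action][outcome] += 1

    def value(self, action):
        counts = self.outcome_counts[action]
        n = sum(counts.values())
        if n == 0:
            return 0.0
        return sum(self.outcome_values[o] * c / n for o, c in counts.items())
```

One thing the sketch makes concrete: if an outcome is later devalued (its entry in `outcome_values` drops), the predictive learner revises its expectancy immediately, whereas the actuarial learner can change only through many new experiences. This fits the text's point that the second mode generates expectancies rather than just piling up associations.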
10.4: Observational Learning 10.4 Summarize the process of acquiring a behavior potential through observational learning Although many aspects of the social--cognitive learning approach can be viewed as elaborations on classical and instrumental conditioning, there is one part of this approach that leaves those concepts behind. This part is called observational learning. Two people are involved in this process, providing yet another basis for the term social learning theory. Observational learning takes place when one person performs an action, and another person observes it and thereby acquires the ability to repeat it (Bandura, 1986; Flanders, 1968). For such an event to represent observational learning unambiguously, the behavior should be one the observer doesn't already know. At a minimum, the behavior should be one the observer had not previously done in the context in which it's now occurring. Observational learning allows people to pack huge amounts of information into their minds quickly. This makes it very important. Observational learning occurs as early as the first year of life (Jones, 2007; Meltzoff, 1985). Interestingly, some other animals can do it too, including marmosets (Gunhold, Whiten, & Bugnyar, 2014). What's most remarkable about it is how simple it is. It seems to require little more than the observer's noticing and understanding what's going on. 10.4.1: Attention and Retention This last statement requires several qualifications, which help to give a better sense of what observational learning is (see Table 10.2). Observational learning requires the observer to pay attention to the model (the person being observed). If the person doesn't pay attention to the right aspect of the model's behavior, the behavior won't be encoded well enough to be remembered. Table 10.2 Four categories of variables (and specific examples of each) that influence observational learning and performance Source: Based on Bandura, 1977a, 1986. 
This principle has several implications. For one, it means that observational learning will work better with some models than others. Models that draw attention for some reason---for example, from their power or attractiveness---will be most effective. The role of attention also means that some acts are more likely to be encoded than others. Acts that are especially salient will have more impact than acts that aren't (cf. McArthur, 1981; Taylor & Fiske, 1978). Other variables that matter here are the observer's capabilities and concentration. For instance, an observer who's distracted by music while viewing a model may entirely miss what the model is doing. A second important set of processes in observational learning concerns retention of what's observed. In some way or other, what's been observed has to be represented in memory (which makes this a cognitive as well as a social sort of learning). Two strategies of coding predominate. One is imaginal coding, creating images or mental pictures of what you're observing. The other is verbal coding, creating a description to yourself of what you're observing. Either can produce a memory that can later be used to repeat the behavior (Bandura & Jeffery, 1973; Bandura, Jeffery, & Bachicha, 1974; Gerst, 1971). Of course, the better you are at using the strategy, the more you are able to learn (Lawrence, Callow, & Roberts, 2013). 10.4.2: Production Once an action is in memory, one more thing is needed for it to occur. Specifically, you have to translate what you observed into a form you can produce. How well you can do this depends partly on whether you already know some of the components of the act. It's easier to reproduce a behavior if you have skills that underlie it or know bits of action involved in it. That's why it's often so easy for experienced athletes to pick up a new sport. They often already know movements similar to those the new sport requires.
The importance of having components available also applies to the encoding process (see Johnson & Kieras, 1983). For example, if you already know names (or have good images) for components of the modeled activity, you'll have less to put freshly into memory. If you have to remember every little thing, it will be harder to keep things straight. Think of the difference in complexity between the label "Sauté one onion" (or "Remove the brake pad assembly") and the set of physical acts the label refers to. Now think about how much easier it is to remember the label than the sequence of acts. Using the label as mental shorthand simplifies the task for memory. But you can do this only if you know what the label refers to. 10.4.3: Acquisition versus Performance Observational learning permits fast learning of complicated behaviors. Given what we've just discussed, it also seems to be a case of "the more you already know, the easier it is to learn." There's an important distinction to be made, however, between acquisition of a behavioral potential and performance of the behavior. People don't always repeat the actions they see. People learn a great many things that they never do. To know whether observational learning will result in behavior, we need to know something else. We need to know what outcome the person expects the behavior to lead to (Bandura, 1977a, 1986). An illustration of this comes from an early study by Bandura (1965). Children saw a five-minute film in which an adult model performed a series of distinctive aggressive acts toward an inflated doll. The model accompanied each act with a verbalization. For example, while pounding the doll on the head with a mallet, the model said, "Sockeroo---stay down." At this point, three experimental conditions were created, using three versions of the film.
In one condition, another adult entered the picture, praised the model, and gave the model a candy treat. In a second condition (the no-consequence group), this final scene was omitted. In a third condition, this scene was replaced by one in which the second adult came in and punished the model verbally and with a spanking. After seeing one of these three films, the child in the study was taken to an observation room that contained a wide range of toys. Among the toys was an inflated doll identical to the one in the film. The child was left alone for 10 minutes. Hidden assistants noted whether the child performed any of the previously modeled aggressive acts. The number of acts the child did was the measure of spontaneous performance. Ten minutes later, the experimenter returned. At this point, the child was offered incentives (juice and stickers) to show the experimenter as many of the previously viewed acts as he or she could remember. The number of behaviors shown was the measure of acquisition. The results of this study are very instructive. The top line in Figure 10.5 shows how many acts children reproduced correctly in the three conditions, when given an incentive to do so (the measure of acquisition). It's obvious that there isn't a trace of difference in acquisition. Reinforcement or punishment for the model had no impact here. Spontaneous performance, though, shows a different picture. The outcome for the model influenced what the observers did spontaneously. As in many studies (Thelen & Rennie, 1972), the effect of punishment was greater than the effect of reward, although other evidence shows that both can be effective (e.g., Kanfer & Marston, 1963; Liebert & Fernandez, 1970; Rosekrans, 1967). Figure 10.5 Acquisition and Performance. Participants observed a model display a series of aggressive acts that led to reward, no consequences, or punishment. Participants then had an opportunity to imitate the model spontaneously (performance). 
Finally, they were asked to demonstrate what they could remember of the model's behavior (acquisition). The study showed that reinforcement of the model played no role in acquisition but did influence spontaneous performance. ![](media/image12.png) The vertical axis of the graph is labeled "Imitative Responses Produced" and ranges from 0 to 4 in increments of 1. The horizontal axis lists three categories: "Model rewarded," "No consequences," and "Model punished." For acquisition, the graph shows almost the same number of imitative responses produced in all three categories. For spontaneous performance, however, the value in the model-punished condition declines considerably. In conclusion, vicarious reinforcement influences whether people spontaneously do behaviors they've acquired by observation. This effect is the same as any instance of vicarious reinforcement. It thus reflects vicarious instrumental learning. In contrast, reinforcement to the model has no influence on acquisition of the behavioral potential. Thus, observational learning and vicarious instrumental learning are distinct processes. 10.5: Modeling of Aggression and the Issue of Media Violence 10.5 Examine three processes that influence the impact of media violence on real-life aggression The processes described in this chapter provide a set of tools for understanding behavior. To indicate how broadly they can be used, this section describes one area in which the processes play a key role. The processes tend to get tangled up with one another. Nevertheless, they can be distinguished conceptually, and we'll do so as we go along. There's a great deal of concern in the United States about the impact of media violence on real-life aggression. Social--cognitive learning theories have been applied to this issue for some time. Observational learning occurs with symbolic models as well as live models. Indeed, the influence of symbolic models is pervasive.
Symbolic models are on TV and in movies, magazines, books, video games, and so on. The actions they portray---and the patterns of reinforcement around the actions---can have a big impact on both acquisition and performance of observers. All the ways models influence observers seem to be implicated here, to one degree or another (Anderson et al., 2003, 2010). At least three processes occur. First, people who observe innovative aggressive techniques acquire the techniques as behavior potentials by observational learning. Wherever observational learning can occur, it does occur (Geen, 1998; Heller & Polsky, 1975; recall Figure 10.5). This principle looms large, as producers strive to make movies new and different every year. A common source of novelty in movies is new methods for inflicting pain. A second process is that observing violence that's condoned or even rewarded helps create the sense that aggression is an appropriate way to deal with disagreements. Vicarious reinforcement thus increases the likelihood that viewers will use such tactics themselves. (By implication, this is also why some people worry about sex on TV and in movies.) When the suggestion is made that violence is reinforced in the media, a common reply is that the "bad guys" in TV and movie stories get punished. Two things to note, however. First, the punishment usually comes late in the story, after a lot of short-term reinforcement. As a result, aggression is linked more closely to reinforcement than to punishment. Second, the actions of the heroes usually are also aggressive, and they are highly reinforced. From that, there's a clear message that aggression is a good way to deal with problems. Does viewing so-called acceptable aggression make people more likely to use aggression in their own lives when they're annoyed? Yes. 
Whether the model is live (e.g., Baron & Kepner, 1970) or symbolic (e.g., Bandura, 1965; Liebert & Baron, 1972), exposure to aggressive models increases the aggression of observers. The final point here is more diffuse: Repeated exposure to violence desensitizes observers to human suffering. The shock and upset that most people would associate with acts of extreme violence are extinguished by repeated exposure to it. In 1991, the police chief of Washington, DC, said, "When I talk to young people involved with violence, there's no remorse,... no sense that this is morally wrong." Exposure to violence in video games creates a similar desensitizing effect (Bartholow, Bushman, & Sestir, 2006; Bartholow, Sestir, & Davis, 2005; Carnagey, Anderson, & Bushman, 2007). The long-term consequences of this desensitizing process are profoundly worrisome. As people's emotional reactions to violence diminish, being victimized (and victimizing others) is coming to be seen as an ordinary part of life. It's hard to study the impact of this process in its full scope, but the effects are pervasive enough that they represent a real threat to society. Indeed, there's a growing awareness across the nation that bullying in schools is on the rise. Does this mean that all video games are bad? No. In fact, prosocial video games seem to increase prosocial behavior (Gentile et al., 2009). What matters is the content people are exposed to during the game. This is what would be predicted by the learning approach to personality. 10.6: Assessment from the Learning Perspective 10.6 Evaluate behavioral assessment As described throughout the chapter, conditioning theories and social--cognitive theories tend to focus on different aspects of the learning process. It should not come as a surprise, then, to learn that their approaches to assessment are also slightly different from each other. These approaches are described in the next two sections.
10.6.1: Conditioning-Based Approaches From the view of conditioning theories, personality is largely the accumulation of a person's conditioned tendencies (Ciminero, Calhoun, & Adams, 1977; Hersen & Bellack, 1976; Staats, 1996). By adulthood, you've acquired a wide range of emotional responses to various stimuli, which you experience as attitudes and preferences. Many assessment techniques from the conditioning approach measure the affective quality of people's experience. Two techniques are common. One focuses on assessment of emotional responses through physiological assessment. Physiological assessment (which also relates to biological process views of personality, Chapter 7) follows from the fact that emotional responses are partly physiological. When you experience an intense emotion, changes take place in your body: changes in muscle tension, heart rate, blood pressure, brain waves, sweat gland activity, and more. Some think the measurement of such responses is useful in assessing problems such as posttraumatic stress disorder (Keane et al., 1998; Orr et al., 1998). A second technique that can be used to assess emotional responses is called behavioral assessment (Barlow, 1981; Haynes & O'Brien, 2000; Staats, 1996). It entails observing overt behavior in specific situations. It can be used widely. Emotions such as fear can be assessed by behaviors---trembling, paleness, avoidance, and so on. This technique can also be applied to assess what kinds of activities people undertake, for how long, and in what patterns. Behavioral assessment varies widely in how it's actually done. Sometimes, the observer simply counts acts of specific types, checks possibilities from a prearranged list, or watches how far into a sequence of action a person goes before stopping (Lang & Lazovik, 1963; O'Leary & Becker, 1967; Paul, 1966). In other cases, the procedure is more elaborate---for instance, using automated devices to record how long a person engages in various behaviors. 
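The simplest versions of behavioral assessment described above (counting acts of specific types, checking items on a prearranged list, or noting how far into a sequence of action a person goes before stopping) can be made concrete with a short sketch. This is a hypothetical illustration: the behavior codes and the avoidance-test steps below are invented for the example, not taken from any published coding scheme.

```python
# Minimal sketch of checklist-style behavioral assessment: an observer
# records coded events, and we tally how often each checklist behavior
# occurred. A second helper scores how far into a fixed sequence of
# actions a person goes before stopping. All codes are hypothetical.

from collections import Counter

# Prearranged checklist of behavior codes the observer may record.
CHECKLIST = {"avoids", "trembles", "approaches", "touches"}

def tally_observations(events):
    """events: list of (seconds_into_session, code) tuples recorded by
    the observer. Returns counts for codes that are on the checklist;
    anything not on the checklist is ignored."""
    counts = Counter()
    for _, code in events:
        if code in CHECKLIST:
            counts[code] += 1
    return counts

def steps_before_stopping(sequence, performed):
    """Scores how far into a fixed sequence of actions the person goes
    before stopping (as in a behavioral avoidance test)."""
    for i, step in enumerate(sequence):
        if step not in performed:
            return i
    return len(sequence)
```

For a fear assessment, for instance, the sequence might run from entering the room to touching the feared object; the person's score is simply the number of steps completed before stopping.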
10.6.2: Social--Cognitive Approaches

In considering the social--cognitive approach to assessment, two characteristics stand out. First, the social--cognitive approach tends to use self-report devices rather than behavioral observation. Given that the cognitive learning view emphasizes the role of thoughts, it's only natural that useful sources of information would be people's reports of their tendencies to act in various ways and to have various kinds of thoughts and feelings.

The second characteristic concerns what variables are measured. Assessment from this view tends to focus on experiential variables. Instead of charting overt actions, assessments frequently ask people how they feel or what kinds of thoughts go through their minds in certain situations. Particularly important are expectancies: expectancies of coping and expectancies of personal efficacy. This should be no surprise, because expectancies are regarded as central determinants of behavior in this view.

Like the rest of the learning perspective, assessment in the social--cognitive learning view tends to emphasize responses to specific categories of situations. This reflects the fact that behavior varies greatly from one situation to another. The social--cognitive learning view differs from the conditioning view, however, in its emphasis on personal views of situations rather than objective definitions of situations. According to this approach, people's representations of situations determine how they act, and this must be taken into account in assessment.

10.7: Problems in Behavior, and Behavior Change, from the Learning Perspective

10.7 Assess how behavior problems can be treated through conditioning procedures of behavior therapy or behavior modification

If personality can derive from learning, so can problems. People sometimes learn things that interfere with their lives, and they sometimes fail to learn things that would make their lives easier.
These phenomena suggest a basis for several kinds of problems, along with ways of treating them. As a group, the techniques are termed behavior modification or behavior therapy. These terms reflect the fact that the emphasis is on changing the person's actual behavior.

10.7.1: Classical Conditioning of Emotional Responses

One class of problems involves emotional reactions that interfere with effective functioning. People sometimes experience intense anxiety when exposed to a specific stimulus. This is called a phobia. Although a phobic reaction can become tied to virtually any stimulus, some are more common than others. Common focal points for phobias are animals such as dogs, snakes, and spiders; closed-in spaces such as elevators; heights and exposed places, such as high balcony railings; and germs and the possibility of infection.

The conditioning view is that phobic reactions are classically conditioned. This view also leads to ideas about how to treat them. One technique is systematic desensitization. People are first taught to relax thoroughly. That relaxation response is then used to counteract or replace fear in the presence of the phobic stimulus, a process termed counterconditioning. Once the person has learned to relax, he or she can work with a therapist to create an anxiety hierarchy: a list of situations involving the feared stimulus, ranked by how much anxiety each creates (see Table 10.3).

Table 10.3 An anxiety hierarchy, such as the example below, might be used in systematic desensitization for one type of fear of heights. Each scene is carefully visualized while the person relaxes completely, working from the least-threatening scene (at the bottom) to those that produce greater anxiety (toward the top)

In the desensitization process, you relax fully. Then you visualize a scene from the least-threatening end of the hierarchy. The anxiety aroused by this image is allowed to dissipate. Then, while you continue to relax, you imagine the scene again.
You do this repeatedly, until the scene provokes no anxiety at all (i.e., until your fearful reaction to the stimulus has been extinguished). Then you move to the next level of the anxiety hierarchy. Gradually, you're able to imagine increasingly threatening scenes without anxiety. Eventually, the imagined scenes are replaced by the actual feared stimulus. As the anxiety is countered by relaxation, you're able to interact more and more effectively with the stimulus that previously produced intense fear. Systematic desensitization has proven very effective in reducing fear reactions, particularly for fears that focus on a specific stimulus (e.g., Brady, 1972; Davison & Wilson, 1973).

More recently, desensitization has been taken in a different direction, called exposure. Many therapists now use treatments in which the person is exposed to a more intense dose of the feared stimulus and endures it while anxiety rises and then gradually falls. Exposure to the feared stimulus is maintained well after the physical signs of anxiety have subsided. Extinction seems to occur more quickly when anxiety is allowed to rise and then fall off in this way. Such exposure treatments for phobias can sometimes be completed in as little as one session (Öst, Ferebee, & Furmark, 1997). This sort of treatment has also proven superior for severe posttraumatic stress disorder (Foa & Meadows, 1997; Powers, Halpern, Ferenschak, Gillihan, & Foa, 2010).

10.7.2: Conditioning and Context

The purpose of procedures based on extinction and counterconditioning is to replace an undesired response with a neutral one. Often, however, the response disappears in the treatment setting but returns when the person is back in his or her everyday environment. How can that be made less likely?

Context plays an important role in this effect. The context of the original conditioning often differs from the context of the therapy. In effect, each context is a set of discriminative stimuli.
In the therapy room, people acquire a neutral response (via extinction) to the target stimulus. But when they return to the setting where the response was learned, the old response may reappear (Bouton, 1994, 2000). Why? Because the stimuli of the original setting weren't there during the extinction. As a result, they still serve as cues for the old behavior.

There are a couple of ways to get the new response to carry over to the person's life outside the therapy room. First, the person can acquire the new response in a setting that resembles the one where the old response was acquired. This helps the new response generalize to the original setting. Alternatively, the person can avoid the original setting altogether. That's why many approaches to preventing relapse emphasize staying away from settings that resemble those where the original response was acquired and maintained.

As a concrete example, consider work on smoking relapse. Withdrawal from nicotine isn't the sole problem in quitting (Perkins, 1999). Relapse rates are as high as 60% even when smokers get nicotine in other ways (Kenford, Fiore, Jorenby, & Smith, 1994). Many who quit smoking return to it well after the end of nicotine withdrawal (Brandon, Tiffany, Obremski, & Baker, 1990). Why? Smoking has been linked by conditioning to particular contexts (after meals, after sex, being at a bar, and so on). The context itself remains a discriminative stimulus for smoking long after the craving for nicotine is gone (Carter & Tiffany, 1999; Conklin, 2006). Indeed, contexts can create cravings even when no specific smoking cues, such as cigarettes or a lighter, are present (Conklin, Robin, Perkins, Salkeld, & McClernon, 2008).

Programs to quit smoking now emphasize efforts to extinguish responses to the cues linked to smoking. The contextual cues are presented alone, with no smoking. The hope is that the nonsmoking response will condition to those cues, and the person will thereby become more resistant to relapse.
Such programs have had only limited success (Conklin & Tiffany, 2002), perhaps because they've used "normative" smoking cues rather than personalized ones. Because everyone has a unique smoking history, individualizing the cues may promote better success (Conklin, Perkins, Robin, McClernon, & Salkeld, in press; Conklin & Tiffany, 2001).

10.7.3: Instrumental Conditioning and Maladaptive Behaviors

Another set of problems in behavior relates to the principles of instrumental conditioning. The reasoning here is that undesirable behavioral tendencies are built in by reinforcement. Furthermore, they can be acquired in ways that make them resistant to extinction. Imagine that a certain class of behavior, for instance, throwing tantrums when you don't get your way, was reinforced at one period of your life because your parents gave in to the tantrums. The reinforcement strengthened your tendency to repeat the behavior. If it was reinforced often enough, and with the right pattern of partial reinforcement, the behavior became frequent and persistent.

10.8: Problems and Prospects for the Learning Perspective

10.8 Recall two strengths and problems of the learning perspective on personality

The learning perspective on personality has been particularly influential among two groups: researchers involved in the experimental analysis of behavior in the laboratory and clinicians trained when behavior therapies were at the height of their popularity. The learning view is attractive to these groups for two different reasons, which in turn represent two strengths of this view.

First, the learning viewpoint emerged, as had no other perspective before it, from the crucible of research. The ideas that form this approach to behavior were intended to be given close scrutiny, to be either upheld or disconfirmed through investigation. Many of the ideas have been tested thoroughly, and the evidence that supports them is substantial.
Having a viewpoint on the nature of personality that can be verified by careful observation is very satisfying to researchers.

A second reason for the impact of learning ideas is the effectiveness of behavioral and cognitive--behavioral therapy techniques. Research has shown that several kinds of problems can be treated with fairly simple procedures. With this realization, clinicians began to look closely at the principles behind the procedures. The learning perspective has an aura of credibility among some psychologists because of its good fit with these effective techniques of behavior change.

Although many find this viewpoint congenial, it also has its critics. One criticism concerns researchers' tendency to simplify the situations they study. Simplification ensures experimental control, and control helps clarify cause and effect. Yet sometimes the simplification results in situations that offer very few options for behavior. There can be a nagging suspicion that the behavior occurred because there were so many pressures in its direction and so little chance to do anything else. But what happens to behavior when the person leaves the laboratory? This concern is far less applicable to the social--cognitive learning approach, whose adherents have examined behavior in very diverse settings and contexts.

Another problem with the learning view is that it isn't really so much a theory of personality as a view of the determinants of behavior. The processes of learning presumably operate continuously, in a piecemeal and haphazard fashion. The human experience, on the other hand, seems highly complex and orderly. How do haphazard learning processes yield such an orderly product? To put it another way, conditioning theories tell us a lot about how a specific behavior becomes more or less probable, but they don't tell us much about the person who's doing the behaving. The processes are very mechanistic.
There seems little place for the subjective sense of personhood, little focus on the continuity and coherence that characterize the sense of self. In sum, to many, this analysis of personality fails to convey the subjective experience of what it means to have a personality. Again, this criticism is less applicable to the social--cognitive learning theories. Concepts such as the sense of personal efficacy have a great deal to do with the sense of personhood, even if the focus is on only a limited part of the person at any given time.

Another problem for the learning perspective concerns the relationship between conditioning ideas and social--cognitive ideas about learning. The two approaches are split by a core disagreement. We minimized this issue while presenting the theoretical principles, but it deserves mention. The issue is this: Conditioning theories tend to focus on observable events. They explain behavioral tendencies by patterns of prior experiences and present cues. Nothing else is needed; cognitions are irrelevant. The social--cognitive learning approach is quite different. Expectations cause behavior. Actions follow from thinking.

Treating cognitions as causes of behavior may mean rejecting fundamental tenets of the conditioning approach. In the more cognitive view, classical and instrumental conditioning aren't necessarily incremental processes occurring outside awareness; they depend on expectancies and mental models. Reinforcement is seen as providing information about future incentives, rather than acting directly to strengthen behavioral tendencies.

How are we to think about this situation? Are the newer theories extrapolations from the previous theories, or are they quite different? Can they be merged, or are they competitors for the same theoretical niche?
Some would say the newer version of the learning perspective should replace the conditioning version: that the conditioning view was wrong, that human learning simply doesn't occur that way. Others have abandoned any effort at integration and simply stepped away from the issue altogether. For example, years ago, Bandura dropped the word learning from the phrase he used to characterize his theory and began calling it social--cognitive theory (Bandura, 1986). This raises the question of whether his ideas about efficacy expectancies should be seen as belonging to the learning perspective at all.

Bandura's change of label reflects a more general trend among people who started out within the social learning framework. Over the past 40 years, many of them have been influenced by the ideas of cognitive psychology. Many who once called their orientation to personality a social learning view would hesitate to use that term today; some would now give it a different label. There has been a gradual fraying of the edges of the social learning approach, such that it has tended to blend with the cognitive and self-regulation theories discussed in later chapters. This blurring and blending between bodies of thought raise a final question for the learning approach: Will it retain its identity as an active area of work in the years to come, or will it disperse, its themes absorbed by other viewpoints?

Summary: The Learning Perspective

Conditioning approaches emphasize two types of learning. In classical conditioning, a neutral stimulus (CS) is presented along with another stimulus (US) that already elicits a reflexive response (UR). After repeated pairings, the CS itself comes to elicit a response (CR) that's similar to the UR. The CR appears to be an anticipatory response that prepares for the US.
This basic phenomenon is elaborated by discrimination (different stimuli leading to different responses) and extended by generalization (different stimuli leading to similar responses). CRs fade if the CS is presented repeatedly without the US, a process termed extinction. Classical conditioning is important to personality primarily when the responses being conditioned are emotional reactions (emotional conditioning). Classical conditioning thus provides a basis for understanding people's unique preferences and aversions, and it provides a way of analyzing certain psychological problems, such as phobias.

In instrumental conditioning, a behavior is followed by an outcome that's either positively valued or aversive. If the outcome is positively valued, the tendency to perform the behavior is strengthened; thus, the outcome is called a reinforcer. If the outcome is aversive (a punisher), the tendency to perform the behavior is reduced. Discrimination in instrumental conditioning means responding in different ways to different situational cues; generalization is responding in a similar way to different cues; extinction is the reduction of a behavioral tendency through nonreinforcement of the behavior.

Reinforcers can occur in many patterns, termed schedules. An important effect of variations in reinforcement schedules is that behavior learned through intermittent (partial) reinforcement is more persistent (under later conditions of nonreinforcement) than behavior learned through continuous reinforcement.

Another generation of learning theories has evolved. They are called cognitive because they emphasize the role of thought processes in behavior and social because they emphasize the idea that people often learn from one another. Several aspects of these theories represent elaborations on conditioning principles, including an emphasis on social reinforcement over other sorts of reinforcement in shaping behavior.
Because humans have the capability for empathy (vicariously aroused emotions), we can experience classical conditioning vicariously. We can also experience reinforcement and punishment vicariously, causing shifts in action tendencies on the basis of someone else's outcomes. This view also holds that humans often learn expectancies and then apply them to new situations. The idea that expectancies about outcomes play an important part in determining behavior is a central part of social--cognitive learning models. Another important idea is that perceptions of personal efficacy determine whether a person will persist in stressful circumstances.

Another part of this approach to personality stands apart from conditioning principles: the process of acquiring behavior potentials through observational learning. This process requires that an observer attend to a model (who is displaying a behavior), retain some memory of what was done (usually a visual or verbal memory), and have the component skills needed to reproduce what was modeled. This process of acquisition isn't directly influenced by reinforcement contingencies. Spontaneous performance of the acquired behavior, on the other hand, is very much influenced by perceptions of reinforcement contingencies.

Assessment, from a conditioning point of view, emphasizes observation of various aspects of behavior as they occur in specific situations. Assessment can focus on people's physiological responses, their overt behaviors, or their reports of emotional reactions to different kinds of stimuli. Assessment from a social--cognitive learning point of view relies more on self-reports.

The conditioning approach assumes that problems in behavior result from the same kinds of processes as normal behavior.
Classical conditioning can produce intense and irrational fears, called phobias; instrumental conditioning can produce behavioral tendencies that persist even when they are no longer adaptive. These various problems can be treated by means of conditioning procedures, collectively termed behavior therapy or behavior modification. Systematic desensitization replaces fear reactions with relaxation. Exposure treatments keep people focused on distressing situations until long after the burst of anxiety calms down.

Problems in behavior can also develop through vicarious learning, or when people haven't had the opportunity to learn needed behaviors from models. Therapy based on the social--cognitive learning approach often involves modeling, whether as an attempt to remedy skill deficits through observational learning or as an attempt to show the utility of coping skills through vicarious reinforcement.
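The partial-reinforcement effect summarized earlier in the chapter (behavior learned under intermittent reinforcement resists extinction longer than behavior learned under continuous reinforcement) can be illustrated with a toy simulation. The rule used here (keep responding until the run of nonreinforced trials exceeds any run experienced during training) is one deliberately simplified "discrimination" account, not a model endorsed by any theory in this chapter, and the trial sequences are invented for illustration:

```python
def trials_to_quit(training_outcomes, max_extinction_trials=50):
    """How many never-reinforced (extinction) trials the behavior survives.

    The learner is assumed to keep responding until the run of
    nonreinforced trials exceeds the longest such run it experienced
    during training (a simple discrimination account).
    """
    longest_drought = drought = 0
    for reinforced in training_outcomes:
        drought = 0 if reinforced else drought + 1
        longest_drought = max(longest_drought, drought)
    # In extinction every trial is nonreinforced, so responding stops
    # on the first trial whose drought exceeds anything seen in training.
    return min(longest_drought + 1, max_extinction_trials)

continuous = [True] * 20                       # reinforced on every trial
partial = [True, False, False, True, False,    # reinforced intermittently
           True, False, False, False, True] * 2

print("continuous schedule, trials to quit:", trials_to_quit(continuous))
print("partial schedule, trials to quit:", trials_to_quit(partial))
```

In this sketch, the continuously reinforced behavior stops after a single nonreinforced trial, because even one trial without reinforcement is unlike anything in its training history, while the intermittently reinforced behavior persists for several more.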