Psychology Chapter 15-17 PDF
Summary
These psychology chapters cover selective and divided attention as well as different types of learning. The text introduces the cocktail party effect; describes habituation, dishabituation, sensitization, and desensitization; and explains the fundamental principles of classical and operant conditioning, including reinforcement schedules and the cognitive and biological factors that shape associative learning.
Full Transcript
**Selective and Divided Attention**

15.1.01 Selective and Divided Attention

Only a small fraction of the sensory information from the environment is consciously processed. **Attention** refers to the cognitive processes that filter out some sensory inputs so that others can be focused on. Attention can be classified as selective or divided.

**Selective attention** refers to focusing on one stimulus in the environment while ignoring others. The **cocktail party effect** is a selective attention process that occurs when an unconsciously processed stimulus captures a person's attention, bringing it into conscious awareness (Figure 15.1). For example, when in a crowded cafeteria, an individual must tune out competing noise to focus on a conversation. But if they hear their name mentioned in the background (an unconsciously processed stimulus), their conscious attention quickly shifts to that conversation.

**Figure 15.1** Example of the cocktail party effect.

Conversely, **divided attention** (sometimes referred to as multitasking) describes when an individual attends to more than one stimulus or task simultaneously. However, individuals generally cannot attend to multiple stimuli at the same time, so "divided attention" actually refers to rapidly switching one's attention among different stimuli or tasks. Some tasks are easier to perform simultaneously because they can be executed via automatic processing. For example, it is easier to perform two tasks at the same time if the tasks are:

- *Dissimilar*: Driving, which requires visual attention, is easier to do while engaging in a hands-free call (auditory attention) than while texting, because both texting and driving require visual attention.
- *Less difficult*: When preparing a simple, familiar dish, it is easy for a parent to simultaneously interact with their family. However, if preparing a complicated, unfamiliar dish, the parent may ask their children to play in another room.
- *Well-practiced*: A dance instructor will find it easy to simultaneously demonstrate a dance move and describe it to the class, whereas a novice dancer will find it difficult to perform the move and describe it at the same time.

In general, research shows that subjects of all ages perform poorly on divided attention tasks.

**Habituation and Dishabituation, Sensitization and Desensitization**

16.1.01 Habituation and Dishabituation, Sensitization and Desensitization

One of the simplest forms of learning, **non-associative learning**, occurs when an organism changes its pattern of behavioral responding to repeated presentations of a stimulus over time. Repeated exposure to a stimulus can result in a *decreasing* behavioral response over time, known as **habituation** (Figure 16.1). For example, a student might initially notice flickering overhead lights (ie, stimulus) at school but notice them less over time (ie, diminished response).

**Figure 16.1** Habituation example.

Alternatively, **dishabituation** occurs when there is a renewed (ie, increased) response to a *previously habituated* stimulus. For example, a student returns from spring break to find that the flickering overhead lights, which they had learned to ignore before the break (ie, previously habituated stimulus), are noticeable once again (ie, renewed response). Whereas habituation refers to a diminished behavioral response after repeated exposure to a stimulus, **sensitization** occurs when repeated exposure to a stimulus produces an *increase* in the behavioral response over time.
For example, after putting on an itchy sweater, an individual might scratch at it occasionally. However, over the course of the day, the individual may increasingly scratch at the sweater. Conversely, **desensitization** occurs when the behavioral response to a *previously sensitized* stimulus decreases over time. For example, over time, an individual scratches less (ie, diminished response) at an itchy sweater that was previously unbearable (ie, previously sensitized stimulus). These four forms of non-associative learning are habituation, dishabituation, sensitization, and desensitization.

**Classical Conditioning**

17.1.01 Components in Classical Conditioning

In contrast to the simpler non-associative learning, **associative learning** occurs when an organism learns the connection (ie, association) between two stimuli (as in classical conditioning) or the association between a behavior and an outcome (as in operant conditioning, see Lesson 17.2). **Classical conditioning** is a type of associative learning that occurs when an organism associates a stimulus that did not previously elicit a meaningful response with a stimulus that naturally elicits a response.

In the early 20th century, Ivan Pavlov, a physiologist, accidentally "discovered" this form of learning during his experiments on gustatory reflexes in dogs. In his experiments, Pavlov noticed that the dogs, after becoming accustomed to the lab routine, started to salivate to stimuli other than their food (meat), such as a tone or bell. In other words, the dogs learned an association between a stimulus that initially produced no meaningful response, such as a bell, and a stimulus that is innately arousing, such as meat. Pavlov found that the naturally occurring response to the meat, salivation, was then produced by the bell. See Figure 17.1 for an overview of the classical conditioning process.

Several terms are key in understanding classical conditioning. The stimulus that initially did not produce a meaningful response (eg, bell) is known as a **neutral stimulus** (NS). **Unconditioned stimuli** (UCS) (eg, meat) are physiologically arousing, which means they elicit an innate (unlearned) reaction called an **unconditioned response** (UCR) (eg, salivating). After being paired with an unconditioned stimulus, a neutral stimulus becomes a **conditioned stimulus** (CS) when it alone elicits the **conditioned response** (CR), a learned reaction (eg, salivating). The conditioned response is typically similar to, but not always exactly the same as, the unconditioned response, as is discussed in regard to conditioned taste aversions (Concept 17.1.03).

**Figure 17.1** Classical conditioning overview.

17.1.02 Processes in Classical Conditioning

**Acquisition, Extinction, and Spontaneous Recovery**

The first phase of the classical conditioning process is known as acquisition. In **acquisition**, an association is formed between the unconditioned stimulus (eg, meat) and the neutral stimulus (eg, bell). In many cases of classical conditioning, repeated pairings are needed for the organism to associate the neutral stimulus with the unconditioned stimulus. Across these repetitions, the association between the unconditioned stimulus and the neutral stimulus becomes stronger.
During this phase, the previously neutral stimulus will eventually take on the properties of the unconditioned stimulus and elicit the now-conditioned response (eg, salivation). The neutral stimulus becomes known as the conditioned stimulus at this point. A graph of the strength of the conditioned response across all phases of the classical conditioning process is shown in Figure 17.2.

**Figure 17.2** Graph showing the strength of the conditioned response.

The next phase of classical conditioning involves presentations of the conditioned stimulus (eg, bell) alone, in the absence of the unconditioned stimulus (eg, meat). At first, the conditioned stimulus will still elicit a strong conditioned response (eg, salivation). However, over repeated presentations, the conditioned response will decrease and eventually cease altogether, a process known as **extinction**. The graph in Figure 17.2 depicts a "pause" or rest period following extinction in which the conditioned stimulus is not presented. If the conditioned stimulus is reinstated after this rest period, **spontaneous recovery** can occur, in which the organism once again responds to the conditioned stimulus with the conditioned response.

**Discrimination and Generalization**

In classical conditioning, **discrimination** (also called stimulus discrimination) occurs when an organism responds to certain conditioned stimuli but ignores similar stimuli. For example, a dog that has been conditioned to salivate in response to a bell demonstrates discrimination by salivating only to the sound of that specific bell and not to other, similar sounds, such as a cell phone alert. Similarly, **generalization** (also called stimulus generalization) occurs when a stimulus similar to the original stimulus evokes the same conditioned response. For example, a dog that has learned to salivate in response to a specific bell tone would demonstrate generalization by salivating in response to a similar-sounding tone, such as a cell phone alert. See Concept 17.2.03 for discrimination and generalization in operant conditioning.

17.1.03 Special Types of Classical Conditioning

**Conditioned Taste Aversions**

A **conditioned taste aversion** (also called a learned taste aversion) is a specific and powerful type of classical conditioning that occurs after an organism becomes ill following the consumption of a food or beverage (see Figure 17.3). For example, if an individual eats a donut (neutral stimulus) and then gets a stomach virus (unconditioned stimulus) and becomes sick (unconditioned response), the individual will avoid (conditioned response) donuts (conditioned stimulus) in the future. A learned taste aversion can cause an individual to feel sick when even just thinking about a particular food, and the aversion may generalize to related types of foods (eg, all types of cake). Conditioned taste aversions almost always link illness with foods (or smells), which is thought to be an evolutionary adaptation. Conditioned taste aversions occur because of **biological preparedness**, the tendency to readily learn associations that promote survival.

**Figure 17.3** Conditioned taste aversion example overview.
Conditioned taste aversions possess several characteristics that make them a unique form of classical conditioning. Unlike typical classical conditioning, which usually requires two stimuli to be paired together repeatedly before the organism learns to associate them, a conditioned taste aversion develops after just one pairing. In other words, an organism needs to become ill *only once* to associate the food or beverage consumed with the illness.

Furthermore, conditioned taste aversions differ from typical cases of classical conditioning in the time frame needed between the presentation of the neutral stimulus and the unconditioned stimulus for the organism to form an association. Whereas typical classical conditioning requires the two stimuli to be presented within a very short time frame for the organism to learn to associate them (with the presentation of the neutral stimulus ideally occurring just slightly prior to the unconditioned stimulus), taste aversions can be learned despite *hours* passing between the consumption of a food and the subsequent illness. Whereas typical classical conditioning rapidly extinguishes when the two stimuli are no longer paired, conditioned taste aversions have long durations. In other words, after becoming ill, the organism may never consume the associated food again. Lastly, whatever was consumed prior to illness can become associated with the illness (even if it did not cause the illness) and is avoided afterward. For example, if an individual experiences nausea and vomiting in the afternoon because they contracted the flu, they may develop a conditioned taste aversion to any of the foods or beverages they consumed earlier that day, even though the food and beverages did not cause the illness.

**Classically Conditioned Phobias**

John Watson, often considered the founder of behaviorism, took inspiration from the work of Pavlov to study classically conditioned emotions in humans. In an experiment fraught with ethical concerns, Watson and colleagues classically conditioned an infant known as "Little Albert" to fear white rats. See Figure 17.4 for an overview of this study. In the Little Albert experiment, a white rat (NS) was paired with a loud noise (UCS) that caused fear (UCR) in Little Albert. After the loud noise was paired with the white rat, the white rat alone (CS) provoked fear (CR). Furthermore, Little Albert's fear of the white rat generalized to other fuzzy, white stimuli such as white rabbits and even white beards.

**Figure 17.4** John Watson's Little Albert experiment.

The results of the Little Albert experiment demonstrated that fear can be classically conditioned. This type of fear response is considered analogous to the psychological disorder known as specific phobia. Specific phobia, covered in Concept 29.1.01, is an anxiety disorder characterized by excessive, irrational fear of a specific situation or animal/object. Some specific phobias are hypothesized to result from the classical conditioning of fear through pairing a negative experience (eg, nearly drowning) with a specific object (eg, pool) or situation (eg, swimming).
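The acquisition, extinction, and spontaneous recovery pattern graphed in Figure 17.2 can also be expressed numerically. The Python sketch below is not part of the original chapter; it uses a simplified Rescorla-Wagner-style update as a stand-in model, and the learning rate, trial counts, and the partial restoration of strength after the rest period are all illustrative assumptions rather than the chapter's own figures.

```python
# Illustrative sketch (not from the chapter): a toy associative-strength model
# loosely based on a Rescorla-Wagner-style update. All parameter values and
# function names are arbitrary choices for demonstration only.

ALPHA = 0.3   # learning rate (how quickly associative strength changes)

def update(strength, us_present):
    """Nudge associative strength toward 1.0 when the CS is paired with the
    US (acquisition) and toward 0.0 when the CS appears alone (extinction)."""
    target = 1.0 if us_present else 0.0
    return strength + ALPHA * (target - strength)

def run_demo():
    strength = 0.0
    history = []

    # Acquisition: bell (CS) repeatedly paired with meat (US).
    for _ in range(10):
        strength = update(strength, us_present=True)
        history.append(round(strength, 2))

    # Extinction: bell presented alone; responding fades.
    for _ in range(10):
        strength = update(strength, us_present=False)
        history.append(round(strength, 2))

    # Rest period, then a crude stand-in for spontaneous recovery:
    # part of the earlier strength returns when the CS is presented again.
    strength = 0.4 * max(history[:10])
    history.append(round(strength, 2))

    print(history)

if __name__ == "__main__":
    run_demo()
```

Running the sketch prints a rising series of values during acquisition, a falling series during extinction, and a partial rebound afterward, mirroring the general shape of the curve described for Figure 17.2.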
These same classical conditioning principles can be used in behavioral therapy to decrease a conditioned fear response (see Lesson 31.1).

**Operant Conditioning**

**Operant conditioning** occurs when an organism associates a behavior with a consequence. In operant conditioning, the likelihood of an organism repeating the behavior is influenced by the outcome of that behavior (ie, reward or punishment). For example, when a rat receives a food pellet (ie, reward) after pushing a lever, the rat is more likely to push the lever again.

Behaviors *increase* due to reinforcement:

- **Positive reinforcement** occurs when a desirable stimulus is applied, leading to an increased likelihood of behavior. For example, an individual compliments her boyfriend and smiles (ie, applies a desirable stimulus) after he cooks her dinner (ie, a behavior), which encourages him to cook more often.
- **Negative reinforcement** occurs when an undesirable stimulus is withdrawn, leading to an increased likelihood of behavior. For example, an individual's car stops making an annoying beeping sound (ie, removes an undesirable stimulus) after she buckles her seatbelt (ie, a behavior), which increases her buckling behavior.

Behaviors *decrease* due to punishment:

- **Positive punishment** occurs when an undesirable stimulus is applied, resulting in a decreased likelihood of behavior. For example, an individual yells at her puppy (ie, applies an undesirable consequence) for jumping on guests (ie, a behavior), which decreases her puppy's jumping.
- **Negative punishment** occurs when a desirable stimulus is withdrawn, resulting in a decreased likelihood of behavior. For example, a parent takes away a child's video games (ie, removes a desirable stimulus) in response to the child acting out (ie, a behavior), which leads to the child acting out less in the future.

See Figure 17.5 for a summary of positive and negative reinforcement and punishment.

**Figure 17.5** Reinforcement and punishment in operant conditioning.
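The two distinctions summarized in Figure 17.5 (stimulus applied vs. removed, desirable vs. undesirable) can be captured in a few lines of code. The sketch below is not from the chapter; the function name and boolean parameters are illustrative choices used only to restate the table's logic.

```python
# Illustrative sketch (not from the chapter): classifying an operant
# consequence by whether a stimulus is applied or removed and whether
# that stimulus is desirable or undesirable to the organism.

def classify_consequence(stimulus_applied: bool, stimulus_desirable: bool) -> str:
    """Return the operant-conditioning label for a consequence.

    Reinforcement makes the behavior more likely; punishment makes it less likely.
    "Positive" means a stimulus is applied; "negative" means one is removed.
    """
    if stimulus_applied:
        return "positive reinforcement" if stimulus_desirable else "positive punishment"
    return "negative reinforcement" if not stimulus_desirable else "negative punishment"

# The chapter's four examples:
# Complimenting a boyfriend after he cooks -> desirable stimulus applied
print(classify_consequence(stimulus_applied=True, stimulus_desirable=True))    # positive reinforcement
# Buckling the seatbelt silences the beeping -> undesirable stimulus removed
print(classify_consequence(stimulus_applied=False, stimulus_desirable=False))  # negative reinforcement
# Yelling at a puppy for jumping -> undesirable stimulus applied
print(classify_consequence(stimulus_applied=True, stimulus_desirable=False))   # positive punishment
# Taking away video games -> desirable stimulus removed
print(classify_consequence(stimulus_applied=False, stimulus_desirable=True))   # negative punishment
```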
17.2.02 Schedules of Reinforcement

**Schedules of reinforcement** (also called reinforcement schedules) are used in operant conditioning to train and/or maintain learned behaviors. Schedules of reinforcement reward (ie, reinforce) an organism based either on the frequency of responses (ie, *ratio*) or on time (ie, *interval*). The schedules are either unchanging (ie, *fixed*), and therefore predictable, or based on an average (ie, *variable*), and therefore unpredictable. Examples of reinforcement schedules include:

- **Fixed-ratio** schedules, which provide rewards after a predictable number of responses (eg, receiving a free sandwich after every 10 purchases). **Continuous schedules**, which reward every response (eg, petting a dog every time it puts its chin on its owner's lap), are a type of fixed-ratio schedule. Continuous schedules are often used when initially training a behavior.
- **Variable-ratio** schedules, which provide rewards after an unpredictable number of responses (eg, only occasionally winning money by playing slot machines), are the type of reinforcement schedule most resistant to extinction (see Concept 17.2.03 for information on extinction).
- **Fixed-interval** schedules, which provide rewards after a predictable amount of time regardless of how many behaviors have occurred (eg, being paid a biweekly salary).
- **Variable-interval** schedules, which provide rewards after an unpredictable amount of time regardless of how many behaviors have occurred (eg, checking the front door frequently even though this has no impact on when a package will be delivered).

Aside from the continuous schedule, which rewards every response, the other reinforcement schedules listed are partial reinforcement schedules, meaning they do not reward every response. See Figure 17.6 for a summary of these schedules.

**Figure 17.6** Types of partial reinforcement schedules.

Each reinforcement schedule produces characteristic behavioral response patterns (see Figure 17.7). The ratio schedules, which provide reinforcement after a consistent (fixed) or inconsistent (variable) number of behavioral responses, both produce rapid response rates. The interval schedules, which provide reinforcement after a consistent (fixed) or inconsistent (variable) amount of time, both produce slower response rates.

**Figure 17.7** Reinforcement schedule responses.

Furthermore, although the term *reinforcement schedule* and the examples above use *reinforcement*, these schedules can also be applied via punishment. For example, if a child is grounded at each instance of being rude to his parents, this is punishment on a *fixed-ratio* schedule.
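The four partial schedules differ only in whether reward depends on a count of responses or on elapsed time, and in whether that requirement is fixed or drawn around an average. The following sketch is not part of the original chapter; it is a minimal Python illustration, and the parameter values (a ratio of 10, an average interval of 60 seconds) and function names are arbitrary assumptions.

```python
# Illustrative sketch (not from the chapter): deciding when a response is
# rewarded under the four partial reinforcement schedules described in
# Concept 17.2.02. Parameters and names are arbitrary examples.
import random

def fixed_ratio(n_responses, ratio=10):
    """Reward every `ratio`-th response (eg, a free sandwich after every 10 purchases)."""
    return n_responses % ratio == 0

def variable_ratio(mean_ratio=10):
    """Reward after an unpredictable number of responses averaging `mean_ratio`
    (eg, a slot machine); here each response pays off with probability 1/mean_ratio."""
    return random.random() < 1.0 / mean_ratio

def fixed_interval(seconds_since_last_reward, interval=60):
    """Reward the response only if a predictable amount of time has elapsed."""
    return seconds_since_last_reward >= interval

def variable_interval(seconds_since_last_reward, mean_interval=60):
    """Reward the response after an unpredictable elapsed time; the required wait
    is drawn at random with an average of `mean_interval` seconds."""
    required_wait = random.expovariate(1.0 / mean_interval)
    return seconds_since_last_reward >= required_wait

# Example: the 20th response, made 90 seconds after the last reward.
print(fixed_ratio(20))        # True  (20 is a multiple of 10)
print(variable_ratio())       # True or False, unpredictably
print(fixed_interval(90))     # True  (more than 60 s have passed)
print(variable_interval(90))  # often, but not always, True
```

Note that this only decides *when* a reward is delivered; the characteristic response-rate patterns in Figure 17.7 describe how organisms tend to behave under each schedule.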
17.2.03 Processes in Operant Conditioning

Several additional terms are critical to understanding operant conditioning processes. One way to train a new behavior is through shaping (see Figure 17.8). **Shaping** is a technique used in operant conditioning in which successive approximations (ie, behaviors that progressively resemble the desired behavior) are reinforced (ie, rewarded). For example, when teaching her dog to go into a kennel (desired behavior), an individual reinforces the dog first for sitting near the kennel, second for touching the kennel, third for putting a paw into the kennel, and so on until the dog goes completely into the kennel.

**Figure 17.8** Shaping example.

In contrast, **extinction** occurs in operant conditioning when a behavior decreases or stops because it is no longer reinforced (see Concept 17.1.02 for extinction in classical conditioning). Consider the example of an individual who often bakes cookies for her friend. At first, the friend always remembers to thank the baker for the cookies and praises her. However, over time, the friend forgets to praise the baker, which eventually causes the baker to stop baking the cookies, resulting in extinction.

Concept 17.1.02 on classical conditioning discusses stimulus discrimination (also called discrimination) and stimulus generalization (also called generalization) in the context of classically conditioned stimuli. In operant conditioning, **generalization** occurs when a stimulus similar to the original evokes the same response. For example, a young child has been reinforced for saying "flowers" when she sees colorful flowers. When the child sees colorful leaves (ie, new stimulus) that are similar to the original stimulus (ie, colorful flowers), she points and says "flowers" (ie, gives the same response). Conversely, **discrimination** occurs in operant conditioning when an organism responds to certain stimuli but ignores similar stimuli. For example, a dog lies down (ie, responds) *only* when she hears her owner say the word "down" but ignores similar stimuli, such as her owner saying the word "done."

Lastly, reinforcers can be classified as primary or secondary (Figure 17.9). A **primary reinforcer** is a stimulus that is innately rewarding to an organism, such as food; an organism does not need to learn that a primary reinforcer is rewarding. In contrast, a **secondary reinforcer** (also known as a conditioned reinforcer or a conditional reinforcer) (eg, money) is a stimulus that has become rewarding through its association with a primary reinforcer. Secondary reinforcers can sometimes be used to acquire a primary reinforcer (eg, money can buy food).

17.2.04 Escape and Avoidance Learning

Concept 17.2.01 on operant conditioning discusses negative reinforcement, which is the withdrawal of an unpleasant stimulus following a behavior, resulting in an increase in the likelihood that the behavior will occur again. Negative reinforcement can lead to escape learning and/or avoidance learning (see Figure 17.10).

**Figure 17.10** Escape learning and avoidance learning.

**Escape learning** occurs when an organism learns how to terminate an ongoing unpleasant stimulus (eg, a dog jumps over a partition to flee from or stop a continuous electric shock). Escape learning becomes **avoidance learning** when an organism learns to prevent contact with the unpleasant stimulus altogether (eg, a dog jumps over the partition before the electric shock occurs).

**The Cognitive Underpinnings of Associative Learning**

17.3.01 The Cognitive Underpinnings of Associative Learning

Cognition plays an important role in associative learning. In classical conditioning, an **expectancy** refers to an organism's awareness that the unconditioned stimulus (eg, shock) will likely follow the neutral stimulus (eg, tone). For example, if a tone comes before a shock on several occasions, a rat will begin to expect the upcoming shock upon hearing the tone and consequently display distress (Figure 17.11). Research has demonstrated that animals can learn the degree to which the tone predicts the shock (ie, its predictive value); a tone that precedes a shock *every* time will elicit a stronger response than a tone that precedes a shock only *some* of the time.

**Figure 17.11** Example of expectancy in classical conditioning.

Studies have also shown that there is a cognitive component to learning in an operant conditioning task. In one experiment, rats passively learned the layout of a maze while exploring in the absence of reinforcement. The rats later demonstrated their learning by quickly completing the maze for a food reward (Figure 17.12). This study suggests that, even in the absence of a reward, it is possible for organisms to develop **cognitive maps** (ie, mental images of physical space) that can be used when needed, such as when a food reward is suddenly presented.
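One way to make the notion of a cognitive map concrete is to treat it as a stored graph of locations that can be searched once a goal appears. The short Python sketch below is not part of the chapter and is only an analogy; the maze layout, the location names, and the use of a breadth-first search are illustrative assumptions, not a claim about how rats actually represent space.

```python
# Illustrative sketch (not from the chapter): a "cognitive map" treated as a
# graph of maze locations learned during unrewarded exploration, then queried
# for a route once a food reward appears. The maze layout is made up.
from collections import deque

# Adjacency list built up during exploration (which location connects to which).
cognitive_map = {
    "start": ["hall"],
    "hall": ["start", "left_arm", "right_arm"],
    "left_arm": ["hall", "dead_end"],
    "dead_end": ["left_arm"],
    "right_arm": ["hall", "goal_box"],
    "goal_box": ["right_arm"],
}

def find_route(graph, start, goal):
    """Breadth-first search over the stored map to recover a shortest route."""
    frontier = deque([[start]])
    visited = {start}
    while frontier:
        path = frontier.popleft()
        if path[-1] == goal:
            return path
        for neighbor in graph[path[-1]]:
            if neighbor not in visited:
                visited.add(neighbor)
                frontier.append(path + [neighbor])
    return None

# Once food is placed in the goal box, the stored map yields a direct route.
print(find_route(cognitive_map, "start", "goal_box"))
# ['start', 'hall', 'right_arm', 'goal_box']
```

The point of the analogy is that the map is built during unrewarded exploration and only used later to reach the reward, paralleling the rat experiment described above.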
**The Biological Underpinnings of Associative Learning**

17.4.01 The Biological Underpinnings of Associative Learning

An organism's biology can hinder or facilitate associative learning. Classical conditioning (Lesson 17.1) is aided by **biological preparedness**, the tendency of people or animals to readily learn associations that promote survival (eg, taste aversions) (Figure 17.13). For example, animals more readily develop specific phobias to potentially harmful situations (eg, confined spaces) or animals (eg, snakes).

**Figure 17.13** Example of biological preparedness in monkeys.

In contrast, operant conditioning (Lesson 17.2) can be limited by biological factors; one example is instinctive (or instinctual) drift. An instinct is an innate, fixed pattern of behavior that is more complex than a reflex, which is a simple response to a stimulus (eg, jerking one's hand away from a hot stove). Instincts are not based on prior experience or learning. For example, newly hatched sea turtles instinctively know to move toward the ocean and swim. **Instinctive drift** describes when an animal's innate behaviors overshadow a learned behavior. Animals trained using operant conditioning (whereby a desired behavior is reinforced) will often revert to innate behaviors even when reinforcement is provided (Figure 17.14). For example, researchers successfully used food rewards to train pigs to pick up wooden coins and deposit them into a piggy bank. Over time, the pigs began dropping the coins before reaching the piggy bank and pushing them along the ground with their snouts, an innate behavior known as rooting.

**Figure 17.14** Example of instinctive drift in pigs.