
Anderson Jr. Cognitive Psychology: Attention and Performance PDF


Document Details


Anderson, Jr.

Tags

Cognitive Psychology, Attention, Information Processing, Psychology

Summary

This document discusses attention and performance in cognitive psychology, examining how humans select what to attend to in a complex world. It details serial bottlenecks in information processing and how they limit our ability to carry on parallel activities.

Full Transcript


3 Attention and Performance

Chapter 2 described how the human visual system and other perceptual systems simultaneously process information from all over their sensory fields. However, we have limits on how much we can do in parallel. In many situations, we can attend to only one spoken message or one visual object at a time. This chapter explores how higher level cognition determines what to attend to. We will consider the following questions:

In a busy world filled with sounds, how do we select what to listen to?
How do we find meaningful information within a complex visual scene?
What role does attention play in putting visual patterns together as recognizable objects?
How do we coordinate parallel activities like driving a car and holding a conversation?

◆◆ Serial Bottlenecks

Psychologists have proposed that there are serial bottlenecks in human information processing, points at which it is no longer possible to continue processing everything in parallel. For example, it is generally accepted that there are limits to parallelism in the motor systems. Although most of us can perform separate actions simultaneously when the actions involve different motor systems (such as walking and chewing gum), we have difficulty in getting one motor system to do two things at once. Thus, even though we have two hands, we have only one system for moving our hands, so it is hard to get our two hands to move in different ways at the same time. Think of the familiar problem of trying to pat your head while rubbing your stomach. It is hard to prevent one of the movements from dominating—if you are like me, you tend to wind up rubbing or patting both parts of the body.¹ The many human motor systems—one for moving feet, one for moving hands, one for moving eyes, and so on—can and do work independently and simultaneously, but it is difficult to get any one of these systems to do two things at the same time.

One question that has occupied psychologists is how early the bottlenecks occur: before we perceive the stimulus, after we perceive the stimulus but before we think about it, or only just before motor action is required? Common sense suggests that some things cannot be done at the same time. For instance, it is basically impossible to add two digits and multiply them simultaneously. Still, there remains the question of just where the bottlenecks in information processing lie. Theories are referred to as early-selection theories or late-selection theories, depending on where they propose that bottlenecks take place. Wherever there is a bottleneck, our cognitive processes must select which pieces of information to attend to and which to ignore. The study of attention is concerned with where these bottlenecks occur and how information is selected at them.

¹ Drummers (including my son) are particularly good at doing this—I definitely am not a drummer. This suggests that the real problem might be motor timing.

A major distinction in the study of attention is between goal-directed factors (sometimes called endogenous control) and stimulus-driven factors (sometimes called exogenous control).
To illustrate the distinction, Corbetta and Shulman (2002) ask us to imagine ourselves at Madrid's El Prado Museum, looking at the right panel of Bosch's painting The Garden of Earthly Delights (see Color Plate 3.1). Initially, our eyes will probably be drawn to large, salient objects like the instrument in the center of the picture. This would be an instance of stimulus-driven attention—it is not that we wanted to attend to this; the instrument just grabbed our attention. However, our guide may start to comment on a "small animal playing a musical instrument." Now we have a goal and will direct our attention over the picture to find the object being described. Continuing their story, Corbetta and Shulman ask us to imagine that we hear an alarm system starting to ring in the next room. Now a stimulus-driven factor has intervened, and our attention will be drawn away from the picture and switch to the adjacent room. Corbetta and Shulman argue that somewhat different brain systems control goal-directed attention versus stimulus-driven attention. For instance, neural imaging evidence suggests that the goal-directed attentional system is more left lateralized, whereas the stimulus-driven system is more right lateralized.

The brain regions that select information to process can be distinguished (to an approximation) from those that process the information selected. Figure 3.1 highlights the parietal cortex, which influences information processing in regions such as the visual cortex and auditory cortex. It also highlights prefrontal regions that influence processing in the motor area and more posterior regions. These prefrontal regions include the dorsolateral prefrontal cortex and, well below the surface, the anterior cingulate cortex. As this chapter proceeds, it will elaborate on the research concerning the various brain regions in Figure 3.1.

FIGURE 3.1 A representation of some of the brain areas involved in attention and some of the perceptual and motor regions they control: the parietal cortex (attends to locations and objects), dorsolateral prefrontal cortex (directs central cognition), motor cortex (controls hands), auditory cortex (processes auditory information), extrastriate cortex (processes visual information), and the anterior cingulate (a midline structure that monitors conflict). The parietal regions are particularly important in directing the perceptual resources. The prefrontal regions (dorsolateral prefrontal cortex, anterior cingulate) are particularly important in executive control.

Attentional systems select information to process at serial bottlenecks, where it is no longer possible to do things in parallel.

◆◆ Auditory Attention

Some of the early research on attention was concerned with auditory attention. Much of this research centered on the dichotic listening task. In a typical dichotic listening experiment, illustrated in Figure 3.2, participants wear a set of headphones. They hear two messages at the same time, one in each ear, and are asked to "shadow" one of the two messages (i.e., repeat back the words from that message only). Most participants are able to attend to one message and tune out the other.

FIGURE 3.2 A typical dichotic listening task. Different messages (e.g., "...and then John turned rapidly toward..." in one ear and "...ran house ox cat..." in the other) are presented to the left and right ears, and the participant attempts to "shadow" the message entering one ear. (Research from Lindsay & Norman, 1977.)

Psychologists (e.g., Cherry, 1953; Moray, 1959) have discovered that very little information about the unattended message is processed in a dichotic listening task.
All that participants can report about the unattended message is whether it was a human voice or a noise, whether the human voice was male or female, and whether the sex of the speaker changed during the test. They cannot tell what language was spoken or remember any of the words, even if the same word was repeated over and over again. An analogy is often made between performing this task and being at a party, where a guest tunes in to one message (a conversation) and filters out others. This is an example of goal-directed processing—the listener selects the message to be processed. However, to return to the distinction between goal-directed and stimulus-driven processing, important stimulus information can disrupt our goals. We have probably all experienced the situation in which we are listening intently to one person and hear our name mentioned by someone else. It is very hard in this situation to keep our attention on what the original speaker is saying.

The Filter Theory

Broadbent (1958) proposed an early-selection theory called the filter theory to account for these results. His basic assumption was that sensory information comes through the system until some bottleneck is reached. At that point, a person chooses which message to process on the basis of some physical characteristic and is said to filter out the other information. In a dichotic listening task, the theory proposed that the message to each ear was registered but that at some point the participant selected one ear to listen with. At a busy party, we pick which speaker to follow on the basis of physical characteristics, such as the pitch of the speaker's voice.

A crucial feature of Broadbent's original filter model is its proposal that we select a message to process on the basis of physical characteristics such as ear or pitch. This hypothesis made a certain amount of neurophysiological sense. Messages entering each ear arrive on different nerves. Nerves also vary in which frequencies they carry from each ear. Thus, we might imagine that the brain, in some way, selects certain nerves to "pay attention to."

People can certainly choose to attend to a message on the basis of its physical characteristics, but they can also select messages to process on the basis of their semantic content. In one study, Gray and Wedderburn (1960), who at the time were undergraduate students at Oxford University, demonstrated that participants can use meaningfulness to follow a message that jumps back and forth between the ears. Figure 3.3 illustrates the participants' task in their experiment. In one ear they might be hearing the words dogs six fleas, while at the same time hearing the words eight scratch two in the other ear. Instructed to shadow the meaningful message, participants would report dogs scratch fleas. Thus, participants can shadow a message on the basis of meaning rather than on the basis of what each ear physically hears.

FIGURE 3.3 An illustration of the shadowing task in the Gray and Wedderburn (1960) experiment. The participant follows the meaningful message (dogs scratch fleas) as it moves from ear to ear. (Adapted from Klatzky, 1975.)

Treisman (1960) looked at a situation in which participants were instructed to shadow a particular ear (Figure 3.4). The message in the ear to be shadowed was meaningful up to a certain point; then it turned into a random sequence of words. Simultaneously, the meaningful message switched to the other ear—the one to which the participant had not been attending.
Some participants switched ears, against instructions, and continued to follow the meaningful message. Others continued to follow the shadowed ear. Thus, it seems that sometimes people use a physical characteristic (e.g., a particular ear) to select which message to follow, and sometimes they choose semantic content.

FIGURE 3.4 An illustration of the Treisman (1960) experiment. The meaningful message (e.g., "I SAW THE GIRL JUMPING IN THE STREET") moves out of the to-be-shadowed ear, and the participant sometimes continues to shadow it against instructions. (Adapted from Klatzky, 1975.)

Broadbent's filter model proposes that we use physical features, such as ear or pitch, to select one message to process, but it has been shown that people can also use the meaning of the message as the basis for selection.

The Attenuation Theory and the Late-Selection Theory

To account for these kinds of results, Treisman (1964) proposed a modification of the Broadbent model that has come to be known as the attenuation theory. This model hypothesized that certain messages would be attenuated (weakened) but not filtered out entirely on the basis of their physical properties. Thus, in a dichotic listening task, participants would minimize the signal from the unattended ear but not eliminate it. Semantic selection criteria could apply to all messages, whether they were attenuated or not. If a message were attenuated, it would be harder to apply these selection criteria, but it would still be possible. Treisman (personal communication, 1978) emphasized that in her experiment in Figure 3.4, most participants actually continued to shadow the prescribed ear. It was easier to follow the message that was not being attenuated than to apply semantic criteria to switch attention to the attenuated message.

An alternative explanation was offered by J. A. Deutsch and D. Deutsch (1963) in their late-selection theory, which proposed that all the information is processed completely, without attenuation. Their hypothesis was that the capacity limitation is in the response system, not the perceptual system. They claimed that people can perceive multiple messages but that they can say only one message at a time. Thus, people need some basis for selecting which message to shadow. If they use meaning as the criterion (either according to or in contradiction to instructions), they will switch ears to follow the message. If they use the ear of origin in deciding what to attend to, they will shadow the chosen ear. The difference between this late-selection theory and the early-selection attenuation theory is illustrated in Figure 3.5. Both models assume that there is some filter or bottleneck in processing.

FIGURE 3.5 Treisman and Geffen's illustration of the attentional limitations produced by (a) Treisman's (1964) attenuation theory, in which a perceptual filter precedes the analysis of verbal content, and (b) Deutsch and Deutsch's (1963) late-selection theory, in which a response filter follows the analysis of verbal content. In both, the input messages feed forward to the selection and organization of responses. (Data from Treisman & Geffen, 1967.)
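To make the contrast among these three accounts concrete, here is a minimal sketch in Python (my illustration, not anything from the chapter) that treats each theory as a gain applied to the unattended channel before semantic analysis: Broadbent's filter blocks the channel (gain 0), Treisman's attenuation weakens it (a fractional gain), and Deutsch and Deutsch's late selection passes it intact (gain 1). The gain of 0.2 and the semantic threshold are made-up parameters.

```python
# Toy contrast of three selection theories in dichotic listening.
# Each theory is modeled as a gain applied to the unattended channel
# before semantic analysis. All parameter values are illustrative only.

THEORY_GAINS = {
    "filter (Broadbent, 1958)": 0.0,       # unattended channel blocked
    "attenuation (Treisman, 1964)": 0.2,   # unattended channel weakened
    "late selection (Deutsch & Deutsch, 1963)": 1.0,  # nothing lost
}

SEMANTIC_THRESHOLD = 0.5  # signal strength needed to identify a word's meaning

def word_identified(signal_strength: float, gain: float) -> bool:
    """Can semantic analysis recover a word at this channel gain?"""
    return signal_strength * gain >= SEMANTIC_THRESHOLD

signal = 1.0  # a clearly audible word in the unattended ear
for theory, gain in THEORY_GAINS.items():
    outcome = "identified" if word_identified(signal, gain) else "missed"
    print(f"{theory}: unattended word {outcome}")
```

On these invented numbers, late selection predicts that unattended words are routinely identified, the filter predicts they never are, and attenuation predicts identification only for unusually strong or important signals (such as one's own name), which is roughly the pattern the shadowing studies show.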
Treisman's theory (Figure 3.5a) assumes that the filter selects which message to attend to, whereas Deutsch and Deutsch's theory (Figure 3.5b) assumes that the filter occurs after the perceptual stimulus has been analyzed for verbal content. Treisman and Geffen (1967) tested the difference between these two theories using a dichotic listening task in which participants had to shadow one message while also processing both messages for a target word. If they heard the target word, they were to signal by tapping. According to the Deutsch and Deutsch late-selection theory, messages from both ears would get through, and participants should have been able to detect the critical word equally well in either ear. In contrast, the attenuation theory predicted much less detection in the unshadowed ear because the message would be attenuated. In the experiment, participants detected 87% of the target words in the shadowed ear and only 8% in the unshadowed ear. Other evidence consistent with the attenuation theory was reported by Treisman and Riley (1969) and by Johnston and Heinz (1978).

There is neural evidence for a version of the attenuation theory that asserts that there is both enhancement of the signal coming from the attended ear and attenuation of the signal coming from the unattended ear. The primary auditory area of the cortex (see Figure 3.1) shows an enhanced response to auditory signals coming from the ear the listener is attending to and a decreased response to signals coming from the other ear. Through ERP recording, Woldorff et al. (1993) showed that these responses occur between 20 and 50 ms after stimulus onset. The enhanced responses occur much sooner in auditory processing than the point at which the meaning of the message can be identified. Other studies also provide evidence for enhancement of the message in the auditory cortex on the basis of features other than location. For instance, Zatorre, Mondor, and Evans (1999) found in a PET study that when people attend to a message on the basis of pitch, the auditory cortex shows enhancement (registered as increased activation). This study also found increased activation in the parietal areas that direct attention.

Although auditory attention can enhance processing in the primary auditory cortex, there is no evidence of reliable effects of attention on earlier stages of auditory processing, such as in the auditory nerve or the brain stem (Picton & Hillyard, 1974). The various results we have reviewed suggest that the primary auditory cortex is the earliest area to be influenced by attention. It should be stressed that the effects at the auditory cortex are a matter of attenuation and enhancement. Messages are not completely filtered out, and so it is still possible to select them at later points of processing.

Attention can enhance or reduce the magnitude of response to an auditory signal in the primary auditory cortex.

◆◆ Visual Attention

The bottleneck in visual information processing is even more apparent than the one in auditory information processing. As we saw in Chapter 2, the retina varies in acuity, with the greatest acuity in a very small area called the fovea. Although the human eye registers a large part of the visual field, the fovea registers only a small fraction of that field.
Thus, in choosing where to focus our vision, we also choose to devote our most powerful visual processing resources to a particular part of the visual field, and we limit the resources allocated to processing other parts of the field. Usually, we are attending to the part of the visual field on which we are focusing. For instance, as we read, we move our eyes so that we are fixating the words we are attending to.

The focus of visual attention is not always identical with the part of the visual field being processed by the fovea, however. People can be instructed to fixate on one part of the visual field (making that part the focus of the fovea) while attending to another, nonfoveal region of the visual field.² In one experiment, Posner, Nissen, and Ogden (1978) had participants focus on a constant point and then presented them with a stimulus 7° to the left or the right of the fixation point. In some trials, participants were told on which side the stimulus was likely to occur; in other trials, there was no such warning. The warning was correct 80% of the time, but 20% of the time the stimulus appeared on the unexpected side. The researchers monitored eye movements and included only those trials in which the eyes had stayed on the fixation point. Figure 3.6 shows the time required to judge the stimulus if it appeared in the expected location (80% of the time), if the participant had been given a neutral cue (the stimulus equally likely on either side), and if it appeared in the unexpected location (20% of the time). Participants were faster when the stimulus appeared in the expected location and slower when it appeared in the unexpected location. Thus, they were able to shift their attention away from where their eyes were fixated.

FIGURE 3.6 The results of an experiment to determine how people react to a stimulus that occurs 7° to the left or right of the fixation point. The graph shows participants' reaction times (ms) to expected, unexpected, and neutral (no expectation) signals. (Data from Posner et al., 1978.)

Posner, Snyder, and Davidson (1980) found that people can attend to regions of the visual field as far as 24° from the fovea. Although visual attention can be moved without accompanying eye movements, people usually do move their eyes, so that the fovea processes the portion of the visual field to which they are attending. Posner (1988) pointed out that successful control of eye movements requires us to attend to places outside the fovea. That is, we must attend to and identify an interesting nonfoveal region so that we can guide our eyes to fixate on that region to achieve the greatest acuity in processing it. Thus, a shift of attention often precedes the corresponding eye movement. To process a complex visual scene, we must move our attention around in the visual field to track the visual information.

² This is what quarterbacks are supposed to do when they pass the football, so that they don't "give away" the position of the intended receiver.
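Cueing data like those in Figure 3.6 are conventionally summarized as a benefit (neutral RT minus expected RT) and a cost (unexpected RT minus neutral RT). The snippet below just does that arithmetic; since the text does not report exact values, the reaction times here are invented, chosen to fall in the general range plotted in Figure 3.6.

```python
# Benefit/cost analysis for a Posner spatial-cueing experiment.
# Mean RTs (ms) are invented for illustration, in the range of Figure 3.6.
mean_rt = {"expected": 240, "neutral": 265, "unexpected": 305}

benefit = mean_rt["neutral"] - mean_rt["expected"]     # speedup from a valid cue
cost = mean_rt["unexpected"] - mean_rt["neutral"]      # slowdown from an invalid cue
validity_effect = mean_rt["unexpected"] - mean_rt["expected"]

print(f"benefit = {benefit} ms, cost = {cost} ms, "
      f"total validity effect = {validity_effect} ms")
```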
Moving attention around a scene in this way is like shadowing a conversation, and Neisser and Becklen (1975) performed the visual analog of the auditory shadowing task. They had participants observe two videotapes superimposed over each other. One was of two people playing a hand-slapping game; the other was of some people playing a basketball game. Figure 3.7 shows how the situation appeared to the participants. They were instructed to pay attention to one of the two films and to watch for odd events, such as the two players in the hand-slapping game pausing and shaking hands. Participants were able to monitor one film successfully and reported filtering out the other. When asked to monitor both films for odd events, the participants experienced great difficulty and missed many of the critical events.

FIGURE 3.7 Frames from the two films used by Neisser and Becklen in their visual analog of the auditory shadowing task: (a) the "hand-game" film; (b) the basketball film; and (c) the two films superimposed. (Neisser, U., & Becklen, R. (1975). Selective looking: Attending to visually specified events. Cognitive Psychology, 7, 480–494. Copyright © 1975 Elsevier. Reprinted by permission.)

As Neisser and Becklen (1975) noted, this situation involved an interesting combination of the use of physical cues and the use of content cues. Participants moved their eyes and focused their attention in such a way that the critical aspects of the monitored event fell on their fovea and the center of their attentive spotlight. The only way they could know where to move their eyes to focus on a critical event was by making reference to the content of the event. Thus, the content of the event facilitated their processing of the film, which in turn facilitated extracting the content.

Figure 3.8 shows an example of the overlapping stimuli used in an experiment by O'Craven, Downing, and Kanwisher (1999) to study the neural consequences of attending to one object or the other. Participants in their experiment saw a series of pictures that consisted of faces superimposed on houses. They were instructed to look for either repetition of the same face in the series or repetition of the same house. Recall from Chapter 2 that there is a region of the temporal cortex, the fusiform face area, which becomes active when people are observing faces. There is another area within the temporal cortex, the parahippocampal place area, that becomes more active when people are observing places. What is special about these pictures is that they consisted of both faces and places. Which region would become active—the fusiform face area or the parahippocampal place area? As the reader might suspect, the answer depended on what the participant was attending to. When participants were looking for repetition of faces, the fusiform face area became more active; when they were looking for repetition of places, the parahippocampal place area became more active. Attention determined which region of the temporal cortex was engaged in the processing of the stimulus.

FIGURE 3.8 An example of a picture used in the study of O'Craven et al. (1999). When the face is attended, there is activation in the fusiform face area, and when the house is attended, there is activation in the parahippocampal place area. (Downing, Liu, & Kanwisher, 2001. Reprinted with permission from Elsevier.)

People can focus their attention on parts of the visual field and move their focus of attention to process what they are interested in.

The Neural Basis of Visual Attention

It appears that the neural mechanisms underlying visual attention are very similar to those underlying auditory attention. Just as auditory attention directed to one ear enhances the cortical signal from that ear, visual attention directed to a spatial location appears to enhance the cortical signal from that location.
If a person attends to a particular spatial location, a distinct neural response (detected using ERP records) occurs in the visual cortex within 70 to 90 ms after the onset of a stimulus. On the other hand, when a person is attending to a particular object (attending to a chair rather than a table, say) rather than to a particular location in space, we do not see a response for more than 200 ms. Thus, it appears to take more effort to direct visual attention on the basis of content than on the basis of physical features, just as is the case with auditory attention.

Mangun, Hillyard, and Luck (1993) had participants fixate on the center of a computer screen, then judge the lengths of bars presented in positions different from the fixation location (upper left, lower left, upper right, and lower right). Figure 3.9 shows the distribution of scalp activity detected by ERP when a participant was attending to one of these four different regions of the visual array (while fixating on the center of the screen). Consistent with the topographic organization of the visual cortex, there was greatest activity over the side of the scalp opposite the side of the visual field where the object appeared. Recall from Chapters 1 and 2 (see Figure 2.5) that the visual cortex (at the back of the head) is topographically organized, with each visual field (left or right) represented in the opposite hemisphere. Thus, it appears that there is enhanced neural processing in the portion of the visual cortex corresponding to the location of visual attention.

FIGURE 3.9 Results from an experiment by Mangun, Hillyard, and Luck. The distribution of scalp activity (the P1 attention effect, measured as current density) was recorded by ERP when a participant was attending to one of the four different regions of the visual array depicted in the bottom row while fixating on the center of the screen. The greatest activity was recorded over the side of the scalp opposite the side of the visual field where the object appeared, confirming that there is enhanced neural processing in portions of the visual cortex corresponding to the location of visual attention. (Mangun, G. R., Hillyard, S. A., & Luck, S. J. (1993). Electrocortical substrates of visual selective attention. In D. Meyer & S. Kornblum (Eds.), Attention and performance (Vol. 14, Figure 10.4 from pp. 219–243). © 1993 Massachusetts Institute of Technology, by permission of The MIT Press.)

FIGURE 3.10 The experimental procedure in Roelfsema et al. (1998): (a) The monkey fixates the start point (the star) for 300 ms. (b) Two curves are presented for 600 ms, one of which links the start point to a target point (a blue circle). (c) The monkey saccades to the target point. The experimenter records from a neuron whose receptive field is along the curve to the target point.

A study by Roelfsema, Lamme, and Spekreijse (1998) illustrates the impact of visual attention on information processing in the primary visual area of the macaque monkey. In this experiment, the researchers trained monkeys to perform the rather complex task illustrated in Figure 3.10. A trial would begin with a monkey fixating on a particular stimulus in the visual field, the star in part (a) of the figure. Then, as shown in Figure 3.10b, two curves would appear that ended in blue dots. Only one of these curves was connected to the fixation point.
The monkey had to keep looking at the fixation point for 600 ms and then perform a saccade (an eye movement) to the end of the curve that connected to the fixation point (part c). While a monkey performed this task, Roelfsema et al. recorded from cells in the monkey's primary visual cortex (where cells with receptive fields like those in Figure 2.8 are found). The square in Figure 3.10 indicates the receptive field of one of these cells. Such a cell shows an increased response when a line falls on that part of the visual field, and so it responds when the curve that crosses its receptive field appears. The cell's response also increased during the 600-ms waiting period, but only if its receptive field was on the curve that connected to the fixation point. During the waiting period, the monkey was shifting its attention along this curve to find its end point and thus determine the destination of the saccade. This shift of attention across the receptive field caused the cell to respond more strongly.

When people attend to a particular spatial location, there is greater neural processing in portions of the visual cortex corresponding to that location.

Visual Search

People are able to select stimuli to attend to, in either the visual or the auditory domain, on the basis of physical properties and, in particular, on the basis of location. Although selection based on simple features can occur early and quickly in the visual system, not everything people look for can be defined in terms of simple features. How do people find more complex objects, such as the face of a friend in a crowd? In such cases, it seems that they must search through the faces in the crowd, one by one, looking for a face that has the desired properties. Much of the research on visual attention has focused on how people perform such searches. Rather than study how people find faces in a crowd, however, researchers have tended to use simpler material. Figure 3.11, for instance, shows a portion of the display that Neisser (1964) used in one of the early studies. Try to find the first K in the set of letters displayed.

FIGURE 3.11 A representation of lines 7–31 of the letter array used in Neisser's search experiment. (Data from Neisser, 1964.)

TWLN
XJBU
UDXI
HSFP
XSCQ
SDJU
PODC
ZVBP
PEVZ
SLRA
JCEN
ZLRD
XBOD
PHMU
ZHFK
PNJW
CQXT
GHNR
IXYD
QSVB
GUCH
OWBN
BVQN
FOAS
ITZN

Presumably, you tried to find the K by going through the letters row by row, looking for the target. Figure 3.12 graphs the average time it took participants in Neisser's experiment to find the letter as a function of which row it appeared in. The slope of the best-fitting function in the graph is about 0.6, which implies that participants took about 0.6 s to scan each line. When people engage in such searches, they appear to be allocating their attention intensely to the search process. For instance, brain-imaging experiments have found strong activation in the parietal cortex during such searches (see Kanwisher & Wojciulik, 2000, for a review).

FIGURE 3.12 The time required (s) to find a target letter in the array shown in Figure 3.11 as a function of the line number in which it appears. (Data from Neisser, 1964.)

Although a search can be intense and difficult, it is not always that way. Sometimes we can find what we are looking for without much effort. If we know that our friend is wearing a bright red jacket, it can be relatively easy to find him or her in the crowd, provided that no one else is wearing a bright red jacket. Our friend will just pop out of the crowd. Indeed, if there were just one red jacket in a sea of white jackets, it would probably pop out even if we were not looking for it—an instance of stimulus-driven attention.
It seems that if there is some distinctive feature in an array, we can find it without a search. Treisman studied this sort of pop-out. For instance, Treisman and Gelade (1980) instructed participants to try to detect a T in an array of 30 I's and Y's (Figure 3.13a). They reasoned that participants could do this simply by looking for the crossbar feature of the T that distinguishes it from all the I's and Y's. Participants took an average of about 400 ms to perform this task. Treisman and Gelade also asked participants to detect a T in an array of I's and Z's (Figure 3.13b). In this task, they could not use just the vertical bar or just the horizontal bar of the T; they had to look for the conjunction of these features and perform the feature combination required in pattern recognition. It took participants more than 800 ms, on average, to find the letter in this case. Thus, a task requiring them to recognize the conjunction of features took about 400 ms longer than one in which perception of a single feature was sufficient. Moreover, when Treisman and Gelade varied the number of letters in the array, they found that participants were much more affected by the number of objects in the task that required recognition of the conjunction of features (see Figure 3.14).

FIGURE 3.13 Stimuli used by Treisman and Gelade to determine how people identify objects in the visual field. They found that it is easier to pick out a target letter (T) from a group of distracters if (a) the target letter has a feature that makes it easily distinguishable from the distracter letters (I's and Y's) than if (b) the same target letter is in an array of distracters (I's and Z's) that offer no obvious distinctive features. (Data from Treisman & Gelade, 1980.)

FIGURE 3.14 Results from the Treisman and Gelade experiment. The graph plots the average reaction times (ms) required to detect a target letter as a function of the number of distracters (array sizes of 1, 5, 15, and 30 items) and whether the distracters contain separately all the features of the target (T among I's and Z's vs. T among I's and Y's). (Data from Treisman & Gelade, 1980.)

It is necessary to search through a visual array for an object only when a unique visual feature does not distinguish that object.
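Feature-integration theory, introduced below, predicts the two patterns in Figure 3.14: reaction time stays roughly flat across array size for feature search but rises linearly for conjunction search, where attention must visit items one at a time. Here is a minimal sketch of that prediction; the 400-ms base time echoes the feature-search result above, but the 50-ms-per-item scan rate is an assumed illustrative constant, not a value from the chapter.

```python
# Predicted search times under feature-integration theory.
# Constants are illustrative, not fitted to the actual data.

BASE_TIME = 400.0      # ms: encoding + response, as in the feature-search condition
TIME_PER_ITEM = 50.0   # ms: assumed cost of shifting attention to one more item

def predicted_rt(array_size: int, conjunction: bool) -> float:
    """Feature targets pop out (flat RT); conjunction targets require a
    serial, self-terminating scan (on average half the items are checked)."""
    if not conjunction:
        return BASE_TIME
    expected_items_checked = (array_size + 1) / 2
    return BASE_TIME + TIME_PER_ITEM * expected_items_checked

for n in (1, 5, 15, 30):  # the array sizes Treisman and Gelade used
    print(f"n={n:2d}  feature: {predicted_rt(n, False):6.0f} ms"
          f"  conjunction: {predicted_rt(n, True):6.0f} ms")
```

The qualitative shape, a flat line and a rising line that diverge with array size, is the signature plotted in Figure 3.14.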
The Binding Problem

As discussed in Chapter 2, there are different types of neurons in the visual system that respond to different features, such as colors, lines at various orientations, and objects in motion. A single object in our visual field will involve a number of features; for instance, a red vertical line combines the vertical feature and the red feature. The fact that different features of the same object are represented by different neurons gives rise to a logical question: How are these features put back together to produce perception of the object? This would not be much of a problem if there were just a single object in the visual field. We could assume that all the features belonged to that object. But what if there were multiple objects in the field? For instance, suppose there were just two objects: a red vertical bar and a green horizontal bar. These two objects might result in the firing of neurons for red, neurons for green, neurons for vertical lines, and neurons for horizontal lines. If these firings were all that occurred, though, how would the visual system know it saw a red vertical bar and a green horizontal bar rather than a red horizontal bar and a green vertical bar? The question of how the brain puts together various features in the visual field is referred to as the binding problem.

Treisman (e.g., Treisman & Gelade, 1980) developed her feature-integration theory as an answer to the binding problem. She proposed that people must focus their attention on a stimulus before they can synthesize its features into a pattern. For instance, in the example just given, the visual system can first direct its attention to the location of the red vertical bar and synthesize that object, then direct its attention to the green horizontal bar and synthesize that object. According to Treisman, people must search through an array when they need to synthesize features to recognize an object (for instance, when trying to identify a K, which consists of a vertical line and two diagonal lines). In contrast, when an object in an array has a single unique feature, such as a red jacket or a line at a particular orientation, we can attend to it without search.

The binding problem is not just a hypothetical dilemma—it is something that humans actually experience. One source of evidence comes from studies of illusory conjunctions, in which people report combinations of features that did not occur. For instance, Treisman and Schmidt (1982) looked at what happens to feature combinations when the stimuli are out of the focus of attention. Participants were asked to report the identity of two black digits flashed in one part of the visual field, so this was where their attention was focused. In an unattended part of the visual field, letters in various colors were presented, such as a pink T, a yellow S, and a blue N. After they reported the numbers, participants were asked to report any letters they had seen and the colors of these letters. They reported seeing illusory conjunctions of features (e.g., a pink S) almost as often as they reported seeing correct combinations. Thus, it appears that we are able to combine features into an accurate perception only when our attention is focused on an object. Otherwise, we perceive the features but may well combine them into a perception of objects that were never there. Although rather special circumstances are required to produce illusory conjunctions in an ordinary person, there are certain patients with damage to the parietal cortex who are particularly prone to such illusions. For instance, one patient studied by Friedman-Hill, Robertson, and Treisman (1995) confused which letters were presented in which colors even when shown the letters for as long as 10 s.
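Feature-integration theory reads almost directly as an algorithm: separate feature maps register what is where, and an attentional spotlight at one location conjoins whatever features are registered there. The toy sketch below is my illustration (the display and its feature values are invented); without the location-based lookup, all that remains is an unordered pool of features, which is exactly the illusory-conjunction situation.

```python
# Toy version of feature integration: separate feature maps, bound by location.
# The display and feature values are invented for illustration.

color_map = {(1, 1): "red", (4, 2): "green"}
orientation_map = {(1, 1): "vertical", (4, 2): "horizontal"}

def bind_at(location):
    """Attending to a location conjoins the features registered there."""
    return (color_map.get(location), orientation_map.get(location))

# With attention, features bind correctly, one object at a time:
print(bind_at((1, 1)))   # ('red', 'vertical')
print(bind_at((4, 2)))   # ('green', 'horizontal')

# Without attention there is only an unordered pool of features, so
# ('red', 'horizontal') is as plausible a conjunction as ('red', 'vertical').
unbound = (set(color_map.values()), set(orientation_map.values()))
print(unbound)           # ({'red', 'green'}, {'vertical', 'horizontal'})
```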
A number of studies have been conducted on the neural mechanisms involved in binding together the features of a single object. Luck, Chelazzi, Hillyard, and Desimone (1997) trained macaque monkeys to fixate on a certain part of the visual field and recorded from neurons in a visual region called V4. The neurons in this region have large receptive fields (several degrees of visual angle). Therefore, multiple objects in a display may be within the receptive field of a single neuron. They found neurons that were specific to particular types of objects, such as a cell that responded to a blue vertical bar. What happens when a blue vertical bar and a green horizontal bar are both presented within the receptive field of this cell? If the monkey attended to the blue vertical bar, the rate of response of the cell was the same as when there was only a blue vertical bar. On the other hand, if the monkey attended to the green horizontal bar, the rate of firing of this same cell was greatly depressed. Thus, the same stimulus (blue vertical bar plus green horizontal bar) can evoke different responses depending on which object is attended to. It is speculated that this phenomenon occurs because attention suppresses responses to all features in the receptive field except those at the attended location. Similar results have been obtained in fMRI experiments with humans. Kastner, De Weerd, Desimone, and Ungerleider (1998) measured the fMRI signal in visual areas that responded to stimuli presented in one region of the visual field. They found that when attention was directed away from that region, the fMRI response to stimuli in that region decreased; but when attention was focused on that region, the fMRI response was maintained. These experiments indicate enhanced neural processing of attended objects and locations.

A striking demonstration of the effects of sustained attention was reported by Simons and Chabris (1999). They asked participants to watch a video in which a team dressed in black tossed a basketball back and forth and a team dressed in white did the same (Figure 3.15). Participants were instructed to count either the number of times the team in black tossed the ball or the number of times the team in white did so. Presumably, in one condition participants were looking for events involving the team in black and in the other for events involving the team in white. Because the players were intermixed, the task was difficult and required sustained attention. In the middle of the game, a person in a black gorilla suit walked through the room. Participants searching the video for events involving team members dressed in white were so fixed on their search that they completely missed an event involving a black object. When participants were tracking the team in white, they noticed the black gorilla only 8% of the time; when they were tracking the team in black, they noticed it 67% of the time. People passively watching the video never miss the black gorilla. (You should be able to find a version of this video by searching with the keywords "gorilla" and "Simons.")

FIGURE 3.15 A single frame from the movie used by Simons and Chabris to demonstrate the effects of sustained attention. When participants were intent on tracking the ball passed among the players dressed in white T-shirts, they tended not to notice the black gorilla walking through the room. (Adapted from Simons & Chabris, 1999.)

For feature information to be synthesized into a pattern, the information must be in the focus of attention.

Neglect of the Visual Field

We have discussed the evidence that visual attention to a spatial location results in enhanced activation in the appropriate portion of the primary visual cortex.
The neural structures that control the direction of attention, however, appear to be located elsewhere, particularly in the parietal cortex (Behrmann, Geng, & Shomstein, 2004). Damage to the parietal lobe (see Figure 3.1) has been shown to result in deficits in visual attention. For instance, Posner, Walker, Friederich, and Rafal (1984) showed that patients with parietal lobe injuries have difficulty in disengaging attention from one side of the visual field.

Damage to the right parietal region produces distinctive patterns of deficit, as can be seen in a study of one such patient by Posner, Cohen, and Rafal (1982). Like the participants in the Posner, Nissen, and Ogden (1978) experiment discussed earlier, the patient was cued to expect a stimulus to the left or right of the fixation point (i.e., in the left or right visual field). As in that experiment, 80% of the time the stimulus appeared in the expected field, but 20% of the time it appeared in the unexpected field. Figure 3.16 shows the time required to detect the stimulus as a function of which visual field it was presented in and which field had been cued. When the stimulus was presented in the right field, the patient showed only a little disadvantage if inappropriately cued. If the stimulus appeared in the left field, however, the patient showed a large deficit if inappropriately cued. Because the right parietal lobe processes the left visual field, damage to the right lobe impairs its ability to draw attention back to the left visual field once attention is focused on the right visual field. This sort of one-sided attentional deficit can be temporarily created in normal individuals by applying TMS to the parietal cortex (Pascual-Leone et al., 1994—see Chapter 1 for a discussion of TMS).

FIGURE 3.16 The attention deficit shown by a patient with right parietal lobe damage when switching attention to the left visual field. The graph plots detection latency (ms) by field of presentation (left or right), separately for trials cued for the left versus the right field. (Data from Posner, Cohen, & Rafal, 1982.)

A more extreme version of this attentional disorder is called unilateral visual neglect. Patients with damage to the right hemisphere completely ignore the left side of the visual field, and patients with damage to the left hemisphere ignore the right side of the visual field. Figure 3.17 shows the performance of a patient with damage to the right hemisphere, which caused her to neglect the left visual field (Albert, 1973). She had been instructed to put slashes through all the circles. As can be seen, she ignored the circles in the left part of her visual field. Such patients will often behave peculiarly. For instance, one patient failed to shave half of his face (Sacks, 1985).

FIGURE 3.17 The performance of a patient with damage to the right hemisphere who had been asked to put slashes through all the circles. Because of the damage to the right hemisphere, she ignored the circles in the left part of her visual field. (From Ellis & Young, 1988. Reprinted by permission of the publisher. © 1988 by Erlbaum.)

These effects can also show up in nonvisual tasks. For instance, a study of patients with neglect of the left visual field showed a systematic bias in making judgments about the midpoint of sequences of numbers and letters (Zorzi, Priftis, Meneghello, Marenzi, & Umiltà, 2006). When asked to judge what number is midway between 1 and 5, they showed a bias to respond 4.
They showed a similar tendency with letter sequences: asked to judge what letter is midway between P and T, they showed a tendency to respond S. In both cases, this can be interpreted as a tendency to ignore the items to the left of the midpoint of the sequence.

It seems that the right parietal lobe is involved in allocating spatial attention in many modalities, not just the visual (Zatorre et al., 1999). For instance, when one attends to the location of auditory or visual stimuli, there is increased activation in the right parietal region. It also appears that the right parietal lobe is more responsible for the spatial allocation of attention than is the left parietal lobe, and that this is why right parietal damage tends to produce such dramatic effects. Left parietal damage tends to produce a subtler pattern of deficits. Robertson and Rafal (2000) argue that the right parietal region is responsible for attention to such global features as spatial location, whereas the left parietal region is responsible for directing attention to local aspects of objects. Figure 3.18 is a striking illustration of the different types of deficits associated with left and right parietal damage. Patients were asked to draw the objects in Figure 3.18a. Patients with right parietal damage (Figure 3.18b) were able to reproduce the specific components of the picture but were not able to reproduce their spatial configuration. In contrast, patients with left parietal damage (Figure 3.18c) were able to reproduce the overall configuration but not the detail. Similarly, brain-imaging studies have found more activation of the right parietal region when a person is responding to global patterns and more activation of the left parietal region when a person is attending to local patterns (Fink et al., 1996; Martinez et al., 1997).

FIGURE 3.18 (a) The pictures presented to patients with parietal damage. (b) Examples of drawings made by patients with right-hemisphere damage, who could reproduce the specific components of the picture but not their spatial configuration. (c) Examples of drawings made by patients with left-hemisphere damage, who could reproduce the overall configuration but not the detail. (After Robertson & Lamb, 1991.)

Parietal regions are responsible for the allocation of attention, with the right hemisphere more concerned with global features and the left hemisphere with local features.

Object-Based Attention

So far we have talked about space-based attention, in which people allocate their attention to a region of space. There is also evidence for object-based attention, in which people focus their attention on particular objects rather than on regions of space. An experiment by Behrmann, Zemel, and Mozer (1998) demonstrates that people sometimes find it easier to attend to an object than to a location. Figure 3.19 illustrates some of the stimuli used in the experiment, in which participants were asked to judge whether the numbers of bumps on the two ends of objects were the same. The left column shows instances in which the numbers of bumps were the same, the right column instances in which the numbers were not the same.

FIGURE 3.19 Stimuli used in an experiment by Behrmann, Zemel, and Mozer to demonstrate that it is sometimes easier to attend to an object than to a location. The left and right columns indicate same and different judgments, respectively; the rows from top to bottom indicate the single-object, two-object, and occluded conditions, respectively. (Behrmann, M., Zemel, R. S., & Mozer, M. C. (1998). Object-based attention and occlusion: Evidence from normal participants and computational model. Journal of Experimental Psychology: Human Perception and Performance, 24, 1011–1036. Copyright © 1998 American Psychological Association. Reprinted by permission.)
Participants made these judgments faster when the bumps were on the same object (top and bottom rows in Figure 3.19) than when they were on different objects (middle row). This result occurred despite the fact that when the bumps were on different objects, they were located closer together, which should have facilitated the judgment if attention were space based. Behrmann et al. argue that participants shifted attention to one object at a time rather than one location at a time. Therefore, judgments were faster when the bumps were all on the same object because participants did not need to shift their attention between objects. Using a variant of the paradigm in Figure 3.19, Chen and Cave (2008) presented the stimulus either for 1 s or for just 0.12 s. The within-object advantage disappeared when the stimulus was present for only the brief period. This indicates that it takes time for object-based attention to develop.

Other evidence for object-centered attention involves a phenomenon called inhibition of return. Research indicates that if we have looked at a particular region of space, we find it a little harder to return our attention to that region. If we move our eyes to location A and then to location B, we are slower to return our eyes to location A than to move them to some new location C. This is also true when we move our attention without moving our eyes (Posner, Rafal, Choate, & Vaughan, 1985). This phenomenon confers an advantage in some situations: If we are searching for something and have already looked at a location, we would prefer our visual system to find other locations to look at rather than return to an already searched location.

Tipper, Driver, and Weaver (1991) performed a demonstration of the inhibition of return that also provided evidence for object-based attention. In their experiments, participants viewed three squares in a frame, similar to what is shown in each part of Figure 3.20. In one condition, the squares did not move (unlike the moving condition illustrated in Figure 3.20, which we will discuss in the next paragraph). The participants' attention was drawn to one of the outer squares when the experimenters made it flicker, and then, 200 ms later, attention was drawn back to the center square when that square flickered. A probe stimulus was then presented in one of the two outer positions, and participants were instructed to press a key indicating that they had seen the probe. On average, they took 420 ms to see the probe when it occurred at the outer square that had not flickered and 460 ms when it occurred at the outer square that had flickered. This 40-ms advantage is an example of a spatially defined inhibition of return: People are slower to move their attention to a location where it has already been.
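Inhibition of return has a natural computational reading: during search, suppress each location once it has been inspected so that the next shift of attention favors somewhere new. The sketch below applies that idea to a toy salience map; it is my illustration rather than a model from the chapter, and the salience values and suppression amount are arbitrary.

```python
# Winner-take-all search over a salience map with inhibition of return (IOR).
# Salience values and the IOR penalty are arbitrary illustrative numbers.

salience = [0.3, 0.9, 0.5, 0.7, 0.2]  # made-up salience of five locations
IOR_PENALTY = 1.0                     # suppression applied to inspected locations

def search_order(salience_map, penalty):
    """Repeatedly attend to the most salient location, then suppress it,
    so attention prefers new locations over already-searched ones."""
    remaining = list(salience_map)
    order = []
    for _ in range(len(remaining)):
        winner = max(range(len(remaining)), key=lambda i: remaining[i])
        order.append(winner)
        remaining[winner] -= penalty  # inhibition of return
    return order

print(search_order(salience, IOR_PENALTY))  # [1, 3, 2, 0, 4]: salience order, no revisits
```

With a smaller penalty the suppression is graded, so a highly salient location can eventually win again, which matches the observation that IOR is a slowing of return rather than an absolute block.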
FIGURE 3.20 Examples of frames used in an experiment by Tipper, Driver, and Weaver to determine whether inhibition of return would attach to a particular object or to its location. Arrows represent motion. (a) Display onset, with no motion for 500 ms. After two moving frames, the three filled squares were horizontally aligned (b), whereupon the cue appeared (one of the boxes flickered). Clockwise motion then continued, with cueing in the center for the initial three frames (c–e). The outer squares continued to rotate clockwise (d) until they were horizontally aligned (e), at which point a probe was presented, as before. (© 1991 from Tipper, S. P., Driver, J., & Weaver, B. (1991). Short report: Object-centered inhibition of return of visual attention. Quarterly Journal of Experimental Psychology, 43(Section A), 289–298. Reproduced by permission of Taylor & Francis LLC, http://www.tandfonline.com.)

Figure 3.20 illustrates the other condition of their experiment, in which the objects were rotated around the screen after the flicker. By the end of the motion, the object that had flickered on one side was now on the other side—the two outer objects had traded positions. The question of interest was whether participants would be slower to detect a target on the right (where the flickering had been—which would indicate location-based inhibition) or on the left (where the flickered object had ended up—which would indicate object-based inhibition). The results showed that they were about 20 ms slower to detect an object in the location that had not flickered but that contained the object that had flickered. Thus, their visual systems displayed an inhibition of return to the same object, not the same location. It seems that the visual system can direct attention either to locations in space or to objects.

Experiments like those just described indicate that the visual system can track objects. On the other hand, many experiments indicate that people can direct their attention to regions of space where there are no objects (see Figure 3.6 for the results of such an experiment). It is interesting that the left parietal regions seem to be more involved in object-based attention and the right parietal regions in location-based attention. Patients with left parietal damage appear to have deficits in focusing attention on objects (Egly, Driver, & Rafal, 1994), unlike the location-based deficits that I have described in patients with right parietal damage. Also, there is greater activation in the left parietal regions when people attend to objects than when they attend to locations (Arrington, Carr, Mayer, & Rao, 2000; Shomstein & Behrmann, 2006). This association of the left parietal region with object-based attention is consistent with the earlier research we reviewed (see Figure 3.18) showing that the right parietal region is responsible for attention to global features and the left for attention to local features.

Visual attention can be directed either toward objects independent of their location or toward locations independent of what objects are present.

◆◆ Central Attention: Selecting Lines of Thought to Pursue

So far, this chapter has considered how people allocate their attention to process stimuli in the visual and auditory modalities.
What about cognition after the stimuli are attended to and encoded? How do we select which lines of thought to pursue? Suppose we are driving down a highway and encode the fact that a dog is sitting in the middle of the road. We might want to figure out why the dog is sitting there, we might want to consider whether there is something we should do to help the dog, and we certainly want to decide how best to steer the car to avoid an accident. Can we do all these things at once? If not, how do we select the most important problem of deciding how to steer and save the rest for later? It appears that people allocate central attention to competing lines of thought in much the same way they allocate perceptual attention to competing objects.

In many (but not all) circumstances, people are able to pursue only one line of thought at a time. This section will describe two laboratory experiments: one in which it appears that people have no ability to overlap two tasks and another in which they appear to have almost total ability to do so. Then we will address how people can develop the ability to overlap tasks and how they select among tasks when they cannot or do not want to overlap them.

The first experiment, which Mike Byrne and I did (Byrne & Anderson, 2001), illustrates the claim made at the beginning of the chapter about it being impossible to multiply and add two numbers at the same time. Participants in this experiment saw a string of three digits, such as "3 4 7." Then they were asked to do one or both of two tasks:

Task 1: Judge whether the first two digits add up to the third, and press a key with the right index finger if they do and another key with the left index finger if they do not.

Task 2: Report verbally the product of the first and third numbers. In this case, the answer is 21, because 3 × 7 = 21.

FIGURE 3.21 The results of an experiment by Byrne and Anderson to see whether people can overlap two tasks. The bars show the response times (ms) required to solve two problems—one of addition verification and one of multiplication—when done by themselves (single task) and when done together (dual task); a horizontal line marks the mean time to complete both tasks. The results indicate that the participants were not able to overlap the addition and multiplication computations. (Data from Byrne & Anderson, 2001.)

Figure 3.21 compares the time required to do each task in the single-task condition versus the time required for each task in the dual-task condition. Participants took almost twice as long to do either task when they had to perform the other as well. In the dual task, they sometimes gave the answer for the multiplication task first (59% of the time) and sometimes the answer for the addition task first (41% of the time). The bars in Figure 3.21 for the dual task reflect the time to answer each problem whether it was answered first or second. The horizontal black line near the top of Figure 3.21 represents the time participants took to give both answers. This time (1.99 s) is greater than the sum of the time for the verification task by itself (0.88 s) and the time for the multiplication task by itself (1.05 s). The extra time probably reflects the cost of shifting between tasks (for reviews, see Monsell, 2003; Kiesel et al., 2010). In any case, it appears that the participants were not able to overlap the addition and multiplication computations at all.
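The arithmetic behind that conclusion is worth making explicit; the few lines below use the times reported above to show that the dual-task total exceeds even a strictly serial execution of the two tasks.

```python
# Dual-task cost arithmetic from the Byrne and Anderson (2001) times above.
verify_alone = 0.88     # s, addition-verification task by itself
multiply_alone = 1.05   # s, multiplication task by itself
dual_total = 1.99       # s, time to give both answers in the dual task

serial_sum = verify_alone + multiply_alone
switch_cost = dual_total - serial_sum

print(f"sum of single-task times: {serial_sum:.2f} s")   # 1.93 s
print(f"dual-task total:          {dual_total:.2f} s")   # 1.99 s
print(f"extra time (plausibly task switching): {switch_cost:.2f} s")  # 0.06 s
# If any part of the two computations had overlapped, dual_total would be
# *less* than serial_sum; instead it is 0.06 s greater.
```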
The second experiment, reported by Schumacher et al. (2001), illustrates what is referred to as perfect time-sharing. The tasks were much simpler than the tasks in the Byrne and Anderson (2001) experiment. Participants simultaneously saw a single letter on a screen and heard a tone and, as in the first experiment, had to perform two tasks, either individually or at the same time:

Task 1: Press a left, middle, or right key according to whether the letter occurred on the left, in the middle, or on the right.

Task 2: Say "one," "two," or "three" according to whether the tone was low, middle, or high in frequency.

FIGURE 3.22 The results of an experiment by Schumacher et al. illustrating near perfect time-sharing. The bars show the times (response time in ms, scale 0–500) required to perform two simple tasks—a location discrimination task and a tone discrimination task—when done by themselves (single task) and when done together (dual task). The times were nearly unaffected by the requirement to do the two tasks at once, indicating that the participants achieved almost perfect time-sharing. (Data from Schumacher et al., 2001.)

Figure 3.22 compares the times required to do each task in the single-task condition and the dual-task condition. As can be seen, these times are nearly unaffected by the requirement to do the two tasks at once. There are many differences between this task and the Byrne and Anderson task, but the most apparent is the complexity of the tasks. Participants were able to do the individual tasks in the second experiment in a few hundred milliseconds, whereas the individual tasks in the first experiment took around a second. Significantly more thought was required in the first experiment, and it is hard for people to engage in both streams of thought simultaneously. Also, participants in the second experiment achieved perfect time-sharing only after five sessions of practice, whereas participants in the first experiment had only one session of practice.

Figure 3.23 presents an analysis of what occurred in the Schumacher et al. (2001) experiment. It shows what was happening at various points in time in five streams of processing: (1) perceiving the visual location of a letter, (2) generating manual actions, (3) central cognition, (4) perceiving auditory stimuli, and (5) generating speech. Task 1 involved visually encoding the location of the letter, using central cognition to select which key to press, and then performing the actual finger movement. Task 2 involved detecting and encoding the tone, using central cognition to select which word to say ("one," "two," or "three"), and then saying it. The lengths of the boxes in Figure 3.23 represent estimates of the duration of each component based on human performance studies. Each of these streams can go on in parallel with the others.

FIGURE 3.23 An analysis of the timing of events (0–400 ms) in five streams of processing during execution of the dual task in the Schumacher et al. (2001) experiment: (1) vision (encode letter location), (2) manual action (program key press), (3) central cognition (select the action for each task), (4) speech (generate speech), and (5) audition (detect and encode tone).

For instance, during the time the tone is being detected and encoded, the location of the letter is being encoded (which happens much faster), a key is being selected by central cognition, and the motor system is starting to program the action. Although all these streams can go on in parallel, within each stream only one thing can happen at a time. This could create a bottleneck in the central cognition stream, because central cognition must direct all activities (e.g., in this case, it must serve both task 1 and task 2). In this experiment, however, the length of time devoted to central cognition was so brief that the two tasks did not contend for the resource. The five days of practice in this experiment played a critical role in reducing the amount of time devoted to central cognition.
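To make this scheduling logic concrete, here is a minimal sketch in Python. It is not a model from the chapter: the stage durations are illustrative placeholders rather than the published estimates behind Figure 3.23, and the stream and task names are my own. It does, however, capture the two rules just described: stages in different streams run in parallel, while any two stages that need the same stream (including central cognition) must run one after the other.

```python
# A minimal sketch of central-bottleneck scheduling. Durations (ms) are
# illustrative placeholders, not the published estimates behind Figure 3.23.

tasks = {
    "task1_letter": [("vision", 50), ("central", 60), ("manual", 120)],
    "task2_tone":   [("audition", 100), ("central", 60), ("speech", 150)],
}

def schedule(task_order):
    stream_free_at = {}  # when each stream next becomes available
    finish = {}
    for name in task_order:
        t = 0  # earliest start time for this task's next stage
        for stream, duration in tasks[name]:
            start = max(t, stream_free_at.get(stream, 0))
            t = start + duration
            stream_free_at[stream] = t  # stream is busy until the stage ends
        finish[name] = t
    return finish

print(schedule(["task1_letter", "task2_tone"]))
# {'task1_letter': 230, 'task2_tone': 320}
# Alone, task2 would take 100 + 60 + 150 = 310 ms; the shared central stage
# delays it by only 10 ms, so the dual-task cost is negligible.
```

With central stages this brief, serializing them delays the second task by only a few milliseconds, which is why near perfect time-sharing is possible; lengthen the central stages and the dual-task cost grows accordingly.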
Although the discussion here has focused on bottlenecks in central cognition, there can be bottlenecks in any of the processing streams. Earlier, we reviewed evidence that people cannot attend to two locations at once; they must shift their attention across locations in the visual array serially. Similarly, they can process only one speech stream at a time, move their hands in one way at a time, or say one thing at a time. Even though all these peripheral processes can have bottlenecks, it is generally thought that bottlenecks in central cognition can have the most significant effects, and they are the reason we seldom find ourselves thinking about two things at once. The bottleneck in central cognition is referred to as the central bottleneck. People can process multiple perceptual modalities at once or execute actions in multiple motor systems at once, but they cannot process multiple things in a single system, including central cognition.

Implications: Why is cell phone use and driving a dangerous combination?

Bottlenecks in information processing can have important practical implications. A study by the Harvard Center for Risk Analysis (Cohen & Graham, 2003) estimates that cell phone distraction results in 2,600 deaths, 330,000 injuries, and 1.5 million instances of property damage in the United States each year. Strayer and Drews (2007) review the evidence that people are more likely to miss traffic lights and other critical information while talking on a cell phone. Moreover, these problems are no better with hands-free phones. By contrast, listening to a radio or to books on tape does not interfere with driving. Strayer and Drews suggest that the demands of participating in a conversation place more requirements on central cognition: when someone says something on the cell phone, they expect an answer, and they are unaware of the current driving conditions. Strayer and Drews note that participating in a conversation with a passenger in the car is not as distracting, because the passenger will adjust the conversation to driving demands and even point out potential dangers to the driver.

Automaticity: Expertise Through Practice

The near perfect time-sharing in Figure 3.22 emerged only after five days of practice. The general effect of practice is to reduce the central cognitive component of information processing. When one has practiced the central cognitive component of a task so much that the task requires little or no thought, we say that doing the task is automatic. Automaticity is a matter of degree. A nice example is driving.
For experienced drivers in unchallenging conditions, driving has become so automatic that they can carry on a conversation while driving with little difficulty. Experienced drivers are much more successful than novices at doing secondary tasks like changing the radio (Wikman, Nieminen, & Summala, 1998). Experienced drivers also often have the experience of traveling long stretches of highway with no memory of what they did.

There have been a number of dramatic demonstrations in the psychological literature of how practice can enable parallel processing. For instance, Underwood (1974) reports a study of the psychologist Neville Moray, who had spent many years studying shadowing. During that time, Moray had practiced shadowing a great deal, and unlike most participants in experiments, he was very good at reporting what was contained in the unattended channel. Through a great deal of practice, the process of shadowing had become partially automatic for Moray, and he had capacity left over to attend to the unshadowed channel.

Spelke, Hirst, and Neisser (1976) provided an interesting demonstration of how a highly practiced skill ceases to interfere with other ongoing behaviors. (This was a follow-up of a demonstration pioneered by the writer Gertrude Stein when she was a student of William James.)
