Principles of Cognitive Neuroscience: Vision
Summary
This document discusses principles of cognitive neuroscience related to vision, particularly face recognition and prosopagnosia. It explains the processes involved in object recognition, using examples like L.H. It details the roles of different brain regions in this process. Additionally, it describes the pre-neural elements of the eye, receptor cells (rods and cones), and sensory adaptation.
CHAPTER 3
INTRODUCTORY BOX: PROSOPAGNOSIA

Recognizing objects, especially those that convey biologically important information, is critical for well-being and ultimately survival. A good example is the recognition of faces. A deficiency in recognizing faces is termed prosopagnosia (prosopo in Greek refers to "face" or "person," and agnosia means "inability to know"). Following damage to the inferior temporal cortex, typically on the right, patients are often unable to identify familiar individuals by their facial characteristics, and in some cases they cannot recognize a face at all. Nonetheless, such individuals are perfectly aware that some sort of visual stimulus is present and can describe particular aspects or elements of it without difficulty.

An example is the case of L.H., a patient described by neuropsychologist N. L. Etcoff and colleagues (the use of initials to identify neurological patients in published reports is standard practice). L.H. was a 40-year-old minister and social worker who sustained a severe head injury as the result of an automobile accident when he was 18. After recovery, L.H. could not recognize familiar faces, report that they were familiar, or answer questions about faces from memory. He could, however, identify other common objects, discriminate subtle shape differences of other objects, and recognize the sex, age, and even the "likability" of faces. Moreover, he could identify particular people by nonfacial cues such as voice, body shape, and gait. The only other category of visual stimuli that L.H. had trouble recognizing was animals and their expressions, though these impairments were not as severe as those for human faces. Noninvasive brain imaging (see Chapter 2) showed that L.H.'s prosopagnosia was the result of damage to the right ventral temporal lobe.

More recently, brain imaging and direct electrophysiological recording studies in normal subjects have confirmed that the inferior temporal cortex, particularly the fusiform gyrus, mediates face recognition (Figures A and B), and that nearby regions are responsible for categorically different recognition functions (object recognition will be discussed later in this chapter). Prosopagnosia can also occur in some otherwise normal individuals for reasons that are not understood. An especially engaging account of a person who has difficulty with object recognition that extends far beyond faces is Oliver Sacks's essay "The Man Who Mistook His Wife for a Hat."

Figures A and B: Functional MRI activation during a face recognition task. (A) A face stimulus was presented to a normal subject at the time indicated by the arrow; the graph plots the change in MR signal (%) over time (s) in the region shown in (B), which is located in the right inferior temporal lobe. (Courtesy of Greg McCarthy.)

References
KANWISHER, N. (2006) What's in a face? Science 311: 617–618.
SACKS, O. (1985) The Man Who Mistook His Wife for a Hat, and Other Clinical Tales. New York: Summit.
TSAO, D. Y. AND M. S. LIVINGSTONE (2008) Mechanisms of face perception. Annu. Rev. Neurosci. 31: 411–437.

The Initiation of Vision

The events that lead from stimuli to perception begin with the pre-neural elements of the eye that collect and filter light energy in the environment. These elements are the cornea, lens, and ocular media that focus and filter light before it reaches the retina (Figure 3.1).

Figure 3.1 Pre-neural processing by the eye: These structures usefully modify stimulus energy before it reaches sensory receptor neurons. The optical structures of the eye—the cornea, pupil, and lens—filter and focus the light that eventually reaches the photoreceptor cells (rods and cones) in the retina. As a result of projective geometry, images on the retina are upside down and reversed left for right.

The next step in sensory processing—and the first step that entails the nervous system—is the transformation of light energy into neural signals by specialized receptor cells whose pigment molecules absorb photons of specific wavelengths. This transformation is mediated by two types of receptor cells in the retina—rods and cones—that ultimately link to overlapping but different processing systems in the retina and the rest of the brain (Figure 3.2). The processing initiated by rods is concerned primarily with perception at very low levels of light; cones respond only to greater light intensities and are responsible for detail and color percepts. Thus, the rod system is essential at night, whereas processing by the cone system is dominant in daylight.

An important principle evident in vision and all other sensory systems is sensory adaptation, the continual resetting of sensitivity according to ambient conditions. The primary purpose of adaptation is to ensure that, despite the signaling limitations of nerve cells, sensory processing occurs with maximum efficiency over the full range of environmental conditions that are relevant (Figure 3.3A). The need for resetting arises from the discrepancy between this environmental range and the much more limited firing rate of visual and other neurons. The firing rate, which conveys information about stimulus intensity (the more action potentials [see Appendix] per unit time, the more intense the stimulus), has a maximum of only a few hundred action potentials per second, which is inadequate to generate finely graded percepts in response to light intensities that range over 10 orders of magnitude or more.
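The mismatch described here, a firing-rate ceiling of a few hundred spikes per second set against light intensities spanning 10 or more log units, can be made concrete with a toy model. The sigmoidal response, the 300 spikes/s ceiling, the gain, and the specific luminance values below are illustrative assumptions, not measurements from this chapter:

```python
import math

MAX_RATE = 300.0  # assumed ceiling, spikes/s ("a few hundred at most")

def firing_rate(log_luminance, adaptation_level, gain=2.0):
    """Sigmoidal rate response centered on the current adaptation level.

    log_luminance and adaptation_level are in log10 cd/m^2. The neuron's
    narrow dynamic range slides along the ~10-log-unit environmental
    range as adaptation_level is reset to the ambient light level.
    """
    x = gain * (log_luminance - adaptation_level)
    return MAX_RATE / (1.0 + math.exp(-x))

# With adaptation, the same +0.5 log-unit increment above background is
# reported with the same firing rate in starlight (-4) and sunlight (+6):
dim = firing_rate(-3.5, adaptation_level=-4.0)
bright = firing_rate(6.5, adaptation_level=6.0)

# Without adaptation (sensitivity fixed at indoor levels, ~0), the same
# two stimuli pin the neuron at its floor and ceiling, losing all detail:
fixed_dim = firing_rate(-3.5, adaptation_level=0.0)
fixed_bright = firing_rate(6.5, adaptation_level=0.0)
```

In this sketch the adapted responses to the two increments are identical, while the unadapted neuron saturates, which is the point the text makes with Figure 3.3B.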
Thus the sensitivity of the system is continually adapted to match different ongoing levels of light intensity in the environment (Figure 3.3B).

Figure 3.2 Transduction by sensory receptor cells in the retina: Absorption of photons by pigments in a photoreceptor cell (here, a rod cell) changes the photoreceptor's membrane potential and initiates a neural signal that will eventually be conveyed to the rest of the visual system. The ion movements involved in setting up the signal are indicated (see Appendix): blue dots, Na+ ions; red dots, Ca2+ ions; yellow dots, K+ ions.

Figure 3.3 The need for visual adaptation: (A) The light sensitivity of the human visual system, based on the rod and cone photoreceptor systems, spans more than 10 orders of magnitude of luminance (log candelas/m2), from scotopic conditions (rods active) through mesopic (rods and cones active) to photopic (cones active). (B) To faithfully convey information about intensity over this broad range using the relatively small range of action potentials per second that neurons can generate, the sensitivity of the system must continually adapt to ambient levels of light, so that firing rates of the relevant neurons report light intensity over a new portion of the full range of sensitivity. In the experiment shown, the firing rate of retinal ganglion cells (the neurons that carry information from the eye to the processing centers in the brain) was examined for six different levels of light intensity (luminance). The full range of neural signaling is apparent at each level as a result of adaptation. (After Sakmann and Creutzfeldt 1969.)

Another property of the visual and other sensory systems is their degree of precision, or acuity. Acuity is different from sensitivity to stimulus intensity: rather than setting an appropriate level of amplification (gain), acuity measures the fineness of discrimination, as in distinguishing two nearby points (the purpose of the standard eye chart used by an optometrist). Visual acuity depends on the different distribution of receptor cells (and receptor types) across the retina. Although we seem to see the visual world quite clearly, visual acuity in humans actually falls off rapidly as a function of eccentricity (the distance, in degrees of visual angle, away from the line of sight; a degree is 1/360th of a circle drawn around the head and corresponds to approximately the width of the thumb held at arm's length). Consequently, vision beyond the central few degrees of the visual field is extremely poor.

Figure 3.4 The acuity of sensory systems is determined initially by the distribution of retinal receptors: The density of rods (purple) and cones (green), plotted as a function of eccentricity (in degrees, temporal to nasal) from the cone-rich fovea; the blind spot corresponds to the optic disk, where there are no receptors. The poor resolution of vision a few degrees off the line of sight is a result of the paucity of closely packed cones at eccentricities greater than a few degrees.

Lessened acuity outside the central retina means that we must frequently move our eyes—and therefore the direction of gaze—to different positions in visual space. Such eye movements are called saccades, and they occur 3 to 4 times a second; this easily measured visual behavior is widely used in studies of attention (see Chapters 6 and 7).
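The degree measure used here follows from simple trigonometry: an object of size s viewed at distance d subtends an angle of 2 * atan(s / 2d). A minimal sketch (the thumb width and arm length below are rough assumed values, not figures from this chapter):

```python
import math

def visual_angle_deg(size_m, distance_m):
    """Visual angle subtended at the eye, in degrees: 2 * atan(s / 2d)."""
    return math.degrees(2.0 * math.atan(size_m / (2.0 * distance_m)))

# A thumb ~1.5 cm wide held at arm's length (~75 cm) subtends roughly one
# degree, matching the rule of thumb in the text; the high-acuity central
# field is therefore only a few thumb-widths wide.
thumb_deg = visual_angle_deg(0.015, 0.75)
```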
The reason for this difference in acuity according to where an image falls on the retina is the distribution of photoreceptors (Figure 3.4). Cones, which are responsible for detailed vision in daylight, greatly predominate in the central region of the retina, being most dense in a specialized region called the fovea. The prevalence of cones falls off sharply in all directions as a function of distance from the fovea, and as a result, high-acuity vision is limited to the fovea and its immediate surround. Conversely, rods are sparse in the fovea and absent altogether in the middle of it. In consequence, sensitivity to a dim stimulus is greater off the line of sight—because of the paucity of rods in the fovea and their preponderance a few degrees away—even though acuity is lower at this eccentricity.

Subcortical Visual Processing

Subcortical processing consists of neural interactions in the stations of the central nervous system before the activity elicited by a stimulus reaches the cerebral cortex. The first stage of central processing takes place in the five layers of the retina. This information converges onto retinal ganglion cells whose axons leave the retina via the optic nerve, the first component of the primary visual pathway, which is the major route from the eye to the visual cortex in the brain's occipital lobe (Figure 3.5). The pathway conveys the information in light stimuli that we end up perceiving as visual scenes. Retinal circuitry modulates the information that is eventually sent forward by retinal ganglion cells. The theme of modulation of the information that goes forward in a sensory system is a general one in sensory processing, the presumed purpose being to filter or otherwise improve the information received by the next processing stage.

Figure 3.5 The primary visual pathway: The route (solid red and blue lines) that carries information centrally from the retina to regions of the brain that are especially pertinent to what we see comprises the optic nerves, the optic tracts, the dorsal lateral geniculate nuclei in the thalamus, the optic radiations, and the primary (striate) and secondary (extrastriate) visual cortices in the occipital lobes. The partial crossing of the optic nerve axons at the optic chiasm means that information in the right visual field is conveyed to the left occipital lobe, while information from the left visual field goes to the right occipital lobe. Other pathways to targets in the brainstem (dashed red and blue lines) determine the pupil's diameter as a function of retinal light levels (the Edinger-Westphal nucleus and the pupillary light reflex), help organize and effect eye movements (the superior colliculus, which orients the movements of the head and eyes), and influence circadian rhythms (the hypothalamus).

The major target of the retinal ganglion cells is the dorsal lateral geniculate nucleus in the thalamus (Figure 3.6A). Unlike other thalamic nuclei, the lateral geniculate is layered, consisting of two magnocellular layers (so named because the neurons that populate them are relatively large; magno means "large") and four parvocellular layers that appear less dense, with smaller neurons (parvo means "small"). The "parvo" and "magno" layers, as they are generally referred to, are innervated by distinct classes of retinal ganglion cells (Figure 3.6B).

Figure 3.6 The dorsal lateral geniculate nucleus of the thalamus: (A) Cross section of the human thalamus showing the six layers of this distinctive nucleus. Each layer receives input from only one eye or the other (indicated by R or L, for right or left) and is further categorized by the size of the neurons it contains. The two layers shown in blue contain larger neurons and are therefore called the magnocellular layers; the four layers shown in green are the parvocellular layers because of their smaller constituent cells. (B) Tracings of representative magnocellular and parvocellular retinal ganglion cells, as seen in flat mounts of the retina after tissue staining; these neurons innervate, respectively, the magno and parvo cells in the thalamus. (A after Andrews et al. 1997; B after Watanabe and Rodieck 1989.)

The neuronal associations between specific classes of retinal ganglion cells and lateral geniculate neurons reflect different functions that have different perceptual consequences. The smaller P ganglion cells in the retina, and the related parvocellular neurons in the lateral geniculate nucleus, are concerned primarily with the spatial detail underlying the perception of form, as well as with perceptions of brightness and color. The larger M retinal ganglion cells, and the magnocellular neurons they innervate in the thalamus, process information about changes in stimuli that lead to motion perception. Neurons in both the magno and parvo layers are extensively innervated by axons descending from the cortex and other brain regions, as well as by those arising from retinal ganglion cells. Although the function of this descending information is not known, the geniculate nucleus is clearly more than a relay station.

Cortical Visual Processing

The target of the lateral geniculate neurons is the primary visual cortex, also known as the striate cortex or V1 (the word striate, meaning "striped," refers to the appearance of cortical layer 4 in this region).
The neurons in cortical layer 4 receive the axons from neurons in the thalamus, while neurons in layers 1 and 5 of the primary visual cortex project to extrastriate visual cortical areas in the occipital, parietal, and temporal lobes, as well as back to the thalamus (Figure 3.7).

Figure 3.7 The primary visual cortex and higher-order cortical association areas in the human brain: Localization of multiple visual areas in the brain using fMRI. (A,B) Lateral and medial views show the location of the primary visual cortex (V1) and additional visual areas V2, V3, VP (ventral posterior), V4, MT (middle temporal), and MST (middle superior temporal). (C) An unfolded and computationally flattened view of the occipital lobe shows the relationships among these areas more clearly. (After Sereno et al. 1995.)

The extrastriate cortex is generally considered a component of the cortical association areas (a general term that refers to all cortical regions that are not primarily sensory or motor). The cortical association areas occupy the vast majority of the cortical surface, and together with their subcortical components, they are critical determinants of all of the perceptual qualities and cognitive functions discussed in all subsequent chapters. Because these regions integrate the qualities of a given modality (e.g., color, brightness, form, and motion), as well as information from other sensory modalities and from brain regions carrying out other functions (e.g., attention and memory), the processing carried out by the association cortices is often referred to as "higher-order." The extrastriate visual cortical areas adjacent to V1 tend to process one or more of the qualities that define visual perception.
Thus, in humans and non-human primates the area called V4 is especially important for processing information pertinent to color vision, and areas MT (for middle temporal) and MST (for middle superior temporal) are especially important for the generation of motion percepts.

Figure 3.8 The dorsal and ventral visual information streams: These pathways have been well documented in the human brain with fMRI and other methods. They are often referred to as the "where" pathway (the dorsal stream: analysis of motion and spatial relations) and the "what" pathway (the ventral stream: analysis of form and color). The ventral stream conveys information to regions of the inferior temporal lobe whose activity in fMRI studies indicates a role in the recognition of objects.

An important generalization about the organization of higher-order visual cortices is the anatomical flow of different information streams. Cognitive neuroscientists Leslie Ungerleider and Mortimer Mishkin were the first to show that extrastriate cortical areas in humans and non-human primates are organized into two largely separate pathways that feed information into the temporal and parietal lobes, respectively (Figure 3.8). One of these broad paths, called the ventral stream (the "what" pathway), leads from the visual cortex to the inferior part of the temporal lobe. Information carried in this pathway appears to be responsible for high-resolution form vision and object recognition, a finding that conforms to other evidence about functions of the temporal lobe. The dorsal stream (the "where" pathway) leads from the striate cortex and other visual areas into the parietal lobe. This pathway appears to be responsible for spatial aspects of vision, such as the analysis of motion and of positional relationships between objects.
Melvyn Goodale and others who have worked on this issue refer to these two streams as the ventral "vision for perception" and dorsal "vision for action" pathways, to indicate the idea that the temporal lobe is concerned mainly with perception, whereas the parietal lobe is concerned more with attention and with doing something about whatever is perceived. Thus, as described later in the chapter, electrophysiological recordings show that neurons in the temporal lobes tend to exhibit properties that are important for object recognition, such as selectivity for shape, color, and texture. Conversely, neurons in the dorsal stream show selectivity for direction and speed of movement. In keeping with this finding, lesions of the parietal cortex severely impair the ability to distinguish the position of objects—or to attend to them, as described in Chapter 7—while having little effect on the ability to perform object recognition tasks. Lesions of the inferior temporal cortex, on the other hand, produce profound impairments in the ability to perform recognition tasks but do not impair an individual's ability to carry out spatial tasks. The segregation of visual information into ventral and dorsal streams should not be interpreted too rigidly, however; recent evidence indicates that there is a good deal of cross talk between these broadly defined sensory pathways.

As sensory information moves through the higher-order processing areas, information specific to a given sensation must be integrated with information being processed by the other sensory systems to improve the efficacy of behavior.

Figure 3.9 Examples of cross-modal influences in perception: (A) In ventriloquism, what we see affects what we hear: because we see the dummy's mouth moving while the ventriloquist's lips are still, we perceive the sound as coming from the dummy's mouth. (B) What we hear also affects what we see. The apparent trajectory of moving balls is profoundly affected by what we hear, or do not hear. In the absence of a sound, the two balls typically appear to proceed past each other, as indicated; when a sound occurs coincident with the point at which they would collide, the balls are more likely to be seen as bouncing off each other.

The result of sensory integration is evident in many aspects of perception. For example, what we see quite literally affects what we hear (Figure 3.9A), and what we hear affects what we see (Figure 3.9B). An engaging demonstration of such perceptual consequences is the McGurk effect, which illustrates how readily speech sounds can be affected by visual stimuli arising from simultaneous lip movements. Multisensory integration can also have odd consequences, such as the phenomenon of synesthesia, described in Box 3A.

Other Key Characteristics of the Visual Cortex

Topography

An important feature of the primary visual pathway is that the organization of the receptors in the retina is reflected in the corresponding regions of both the thalamus and the visual cortex—a relationship described as topographical. Topography is particularly apparent in (but not limited to) the primary sensory cortices, and it has been most thoroughly documented in the visual and somatic sensory systems, where the location of stimuli on the retinal surface or the body surface, respectively, is important for spatial localization. The experimental approach in topographical mapping is to stimulate a specific location on the retina or body surface while recording centrally. In this way one can assess how the location of peripheral stimulation is reflected in the location of corresponding central nervous system activity. Thus, when electrophysiological recordings are made from neurons in the lateral geniculate nucleus, activity at adjacent thalamic sites is elicited by stimulating adjacent retinal sites.
Moreover, when a recording electrode is passed from one geniculate layer to another, the position on the retina (or in visual space) determined by recording in one layer is in register with the position determined from the neurons in the subjacent layer. The same phenomenon is apparent in electrophysiological mapping of the primary visual cortex. Clearly, the topography of the retina—and therefore the topography of the retinal image—is reestablished in both the thalamus and the cortex (Figure 3.10). Topographical maps can also be discerned in some adjacent secondary visual cortices, although these tend to be less clear as the distance away from the primary sensory cortex increases.

BOX 3A SYNESTHESIA

A remarkable sensory anomaly is evident in individuals who conflate experiences in one sensory domain with those in another—a phenomenon called synesthesia. Synesthesia was named and described by Francis Galton in the nineteenth century, and the phenomenon received a good deal of attention among those in Galton's scientific circle in England. The term means literally "mixing of the senses," and its best understood expression is in individuals who see specific numerals, letters, or similar shapes printed in black and white as being differently colored; this condition is known specifically as color-graphemic synesthesia. Other, less common synesthesias include the experience of colors in response to musical notes, and specific tastes elicited by certain words and/or numbers. The list of famous synesthetes includes painter David Hockney, novelist Vladimir Nabokov, composer and musician Duke Ellington, and physicist Richard Feynman.

The experience of synesthetes is not in any sense metaphorical. Nor do they consider it "abnormal"; it is simply the way they experience the world. People who experience color-graphemic synesthesia (the form that has been most thoroughly studied) perceive numbers as being differently colored; the reality of their ability has been demonstrated in a variety of psychophysical studies. On the basis of the synesthetic colors they see, they can segregate targets from backgrounds (Figures A–C), they can group targets in apparent motion displays, and they show the Stroop effect (the slowed reaction time that everyone exhibits when the printed ink and the spelling of a color word are at odds, as in the word yellow printed in a different color; see Chapter 13).

The cause of synesthesia is not known, but the phenomenon is clearly of considerable interest to researchers trying to sort out how information inputs from different sensory modalities are integrated. A number of cognitive neuroscientists have used fMRI and other modern methods to study synesthesia, but, so far, without leading to any definite conclusions. The influence of synesthetic color perception on the various psychophysical tasks shows that the phenomenon occurs at the level of the cerebral cortex. Numerous neurobiological theories have been put forward, the most plausible of which entail some form of aberrant wiring during early development. A good deal of novel synaptic connectivity is required as a person becomes literate, numerate, or musically trained, and it may be during this period of plasticity that "miswiring" occurs.

Figures A–C: Improved performance on a visual search task by a color-grapheme synesthete, "subject W.O." (A) The physical stimulus presented to W.O. and to a nonsynesthete control subject; the task was to find the numeral 2 among the multiple numeral 5's. (B) The same stimulus with synesthetic colors assigned to the two numerals tested, which presumably shows how W.O. perceives the physical stimulus. (C) W.O.'s reaction time in the task, plotted against display size, was faster than that of the control subject. W.O.'s performance is presumably better because the differently colored 2 "pops out" for him, whereas it doesn't for the control subject. (After Blake et al. 2005.)

References
BARON-COHEN, S. AND J. E. HARRISON (1997) Synesthesia: Classic and Contemporary Readings. Malden, MA: Blackwell Scientific.
BLAKE, R., T. J. PALMIERI, R. MAROIS AND C.-Y. KIM (2005) On the perceptual reality of synesthetic color. In Synesthesia, L. C. Robertson and N. Sagiv (eds.). New York: Oxford University Press, pp. 47–73.
BRIDGEMAN, B., D. WINTER AND P. TSENG (2010) Dynamic phenomenology of grapheme-color synesthesia. Perception 39: 671–676.
RAMACHANDRAN, V. S. AND E. M. HUBBARD (2001) Psychophysical investigations into the neural basis of synaesthesia. Proc. R. Soc. Lond. B 268: 979–983.
RAMACHANDRAN, V. S. AND E. M. HUBBARD (2005) Neurocognitive mechanisms in synesthesia. Neuron 48: 509–520.

Figure 3.10 Topographical representation and magnification of peripheral receptor surfaces: The regions of the retina are color coded to show their corresponding representation in the primary visual cortex. The area of central vision corresponding to the fovea is represented by much more cortical space than are the eccentric retinal regions. This disproportion is referred to as cortical magnification.

The reason for the topographical layout of the primary visual system is not clear. A simple intuition is that to perceive an integrated visual scene requires a cortical layout that corresponds to the image. But there is no principled reason for this assumption.
It seems more likely that cortical topography has mainly to do with minimizing the neuronal wiring that is needed for efficient processing, and thus more to do with metabolic economy than with image representation.

Cortical magnification

Another feature of the primary visual cortex is that each unit area of the retinal surface is disproportionately represented at the level of the cortex (see Figure 3.10). Thus, a square degree of visual space in the fovea (which is concerned with visual detail and is greatly enriched in cone cells; see Figure 3.4) is represented by much more cortical area (and therefore more visual processing circuitry) than the same unit area in the peripheral retina. Referred to as cortical magnification, this disproportion makes good neurobiological sense: the visual detail that we perceive in response to stimulation of the fovea presumably requires more neuronal machinery in the cortex than do the less acutely resolved portions of the visual scene generated by the stimulation of eccentric retinal regions. The idea that more complex neural processing requires more cortical (or subcortical) space is another general principle in the organization of sensory systems.

Cortical modularity

Still another organizational feature of the primary visual cortex and some secondary visual cortical areas is their arrangement in iterated groups of neurons with similar functional properties (Figure 3.11). Each of these iterated units consists of hundreds or thousands of nerve cells, and together they are referred to as cortical modules or cortical columns. The result is that sometimes striking patterns of these columns are superimposed on the topographical maps already described.

Figure 3.11 Iterated, modular patterns in mammalian sensory cortices: All of these examples illustrate the modular organization commonplace in many sensory cortices. The patterns were revealed with one of several different histological techniques in sections in the plane of the cortical surface; the size of the units in each different pattern is on the order of several hundred micrometers across. (A) Stripes called ocular dominance columns in layer 4 of the primary visual (striate) cortex of a rhesus monkey. The cells in each column share a preference for stimuli presented to one eye or the other. (B) Repeating units, called blobs, in layers 2 and 3 of the striate cortex of a squirrel monkey. (C) Repeating stripes in layers 2 and 3 in the extrastriate cortex of a squirrel monkey. (D) Patterns of cortical activity in response to differently oriented stimuli, observed by optical imaging. In this technique a video camera records light absorption by the primary visual cortex as an experimental animal views stimuli on a video monitor; the images of the cortex obtained in this way are digitized and stored to subsequently construct and compare the patterns associated with the different stimuli. The example shown here is the orientation preference pattern in the primary visual cortex of a tree shrew; each color represents the orientation of the stimulus that was most effective in activating the cortical neurons at that site. (A–C from Purves et al. 1992; D courtesy of Len White and David Fitzpatrick.)

Despite their highly regular structure, the function of cortical columns remains unclear. Although cortical columns such as the ocular dominance stripes in Figure 3.11A are readily apparent in the brains of some species, they have not been found in other, sometimes closely related, animals with similar cognitive and behavioral abilities. Moreover, many regions of the mammalian cortex are not organized in this modular fashion. Finally, no clear rationale for such columns has been discerned, despite considerable effort and speculation.
Like topography, cortical columns may arise simply from the efficiency of wiring and the way that neuronal connections form during development.

Visual receptive fields

The receptive field of a visual neuron is defined as the region of the retina that, when stimulated, elicits a response in the neuron being examined. As described in Chapter 2, the data obtained from single cells are called single-unit recordings and are collected by an extracellular microelectrode placed near the cell of interest to monitor its generation of action potentials (Figure 3.12A). At the level of the retinal output and thalamus, visual neurons respond to spots of light. Thus, the receptive fields of retinal ganglion cells or lateral geniculate neurons are excited or inhibited by light going on or off in the center of the retinal area they respond to (typically the area surrounding the receptive-field center has an opposite effect, presumably to enhance information arising from contrast boundaries in visual stimuli). At the level of the cortex, however, the responses become more complex. In the example shown in Figure 3.12B, the recorded neuron responds to a moving bar when the bar is oriented at some angles but not at others. Through testing of the neuron's responsiveness to a range of differently oriented stimuli, an orientation tuning curve can be defined that indicates the sort of stimulus to which the cell is maximally responsive (Figure 3.12C).

Figure 3.12 Neuronal receptive fields (A) An experimental setup for studying visual receptive fields. (B) The receptive field of a typical neuron in the primary visual cortex. As different stimuli are presented in different locations, the neuron being recorded from fires in a variable way that defines both the location of the neuron's receptive field and the properties of the stimulus. The neuron does not fire at all if the stimulus is elsewhere on the scene, and even when it is in the appropriate location the stimulus activates the neuron only when it is presented in certain orientations. Although these factors are not shown here, direction of motion, which eye is stimulated, and other stimulus properties are also important. (C) This orientation tuning curve, which corresponds to the neuron illustrated in (B), shows that the highest rate of action potential discharge occurs for vertical edges—the neuron's "preferred" orientation.

The receptive fields of cortical neurons serving foveal vision in the primary visual cortex generally measure less than a degree of visual angle, as do the receptive fields of the corresponding retinal ganglion cells and lateral geniculate neurons. Even for cells serving peripheral vision, the receptive fields in the primary visual cortex measure only a few degrees. In higher-order extrastriate cortical areas, however, receptive fields often cover a substantial fraction of the entire visual field (which extends about 180 degrees horizontally and 130 degrees vertically). The location of retinal activity—and the corresponding topographical relationships in the primary visual cortex—cannot be conveyed by neurons that respond to stimuli anywhere in such a large region of space, at least not in any simple way. In short, the topography that is apparent in the primary cortices is less apparent in the higher-order regions, where visual percepts are supposedly generated. The loss of retinal topography in higher-order visual cortical areas (which presumably entails a correspondingly diminished ability to identify the location and qualities of visual stimuli) presents a problem for any rationalization of vision in terms of images and percepts based on image representation in these higher-order extrastriate cortical areas.

Thus, up to the level of the primary visual cortex, the organization of the visual system is hierarchical in the sense that lower-order stations lead anatomically and functionally to higher-order ones, albeit with much modulation and feedback at each stage. At each of the initial stations in the primary visual pathway—the retina, the dorsal lateral geniculate nucleus, and the neurons in the primary visual cortex—the receptive-field characteristics of the relevant neurons can be understood reasonably well in terms of the "lower-order" cells that provide their input. Beyond these initial levels, however, rationalizing the organization of the visual system in terms of lower-order neurons shaping the response properties of higher-order neurons becomes increasingly difficult. The "higher" the order of the nerve cells in the system, the less they depend on visual input, and the more they are influenced by information that is not strictly visual (see Figure 3.9 for examples).

Visual Perception

With this summary of visual system structure and function in mind, the following sections consider the end product of visual processing—that is, what we actually see. The primary visual qualities that describe visual perception are lightness, brightness, color, form, depth, and motion.
These qualities are the foundation that allows us to make the associations needed to recognize objects and conditions in the world.

Lightness and brightness

A good place to begin a consideration of visual percepts is with lightness and brightness, the terms used to describe our visual experience of light and dark elicited by different light intensities. Lightness refers to the appearance of a surface, such as a piece of paper; brightness refers to the appearance of a light source, such as the sun or a lightbulb. Vision is impossible without these perceptual qualities, whereas some other qualities, such as color, are expendable (some animals have good vision generally, but little or no color vision). Like all percepts, lightness and brightness are not subject to direct measurement and can be evaluated only indirectly, by asking observers to report the appearance of one object or surface relative to that of another (Box 3B).

The physical correlate of brightness is luminance, a measure of light intensity made by a photometer and expressed in units such as candelas per square meter. As will be apparent, however, the relationship between luminance and lightness/brightness is deeply puzzling. A logical assumption would be that luminance and lightness/brightness are directly proportional, since increasing the luminance of a stimulus increases the number of photons captured by photoreceptors. Another intuition is that two objects in a scene that return the same amount of light to the eye should appear equally light or bright. It has long been known, however, that perceptions of lightness/brightness fail to meet these expectations. For example, a patch on a background of relatively low luminance appears lighter or brighter than the same patch on a background of higher luminance—a phenomenon called simultaneous lightness/brightness contrast (Figure 3.13A). This effect becomes even more dramatic when the stimulus includes more detailed information (Figure 3.13B).
Figure 3.13 Simultaneous lightness/brightness contrast (A) In this standard presentation of the effect, the two circular patches have exactly the same luminance (see the key), but the one in the dark surround looks somewhat lighter/brighter. (B) Simultaneous lightness/brightness contrast effects can be much greater when the scene contains more information; as the key shows, the patches whose lightness/brightness appears very different again have the same luminance. Thus, the amount of light returned to the eye does not determine the lightness/brightness seen. (B from Purves and Lotto 2003.)

BOX 3B MEASURING PERCEPTION

Examples of psychophysical assessments. (A) The human luminosity function, determined by assessing the sensitivity of normal subjects to light as a function of stimulus wavelength. This determination can be made by measuring either threshold responses or just-noticeable differences at suprathreshold levels. The results show that humans are far more sensitive to stimuli in the middle of the light spectrum (i.e., between approximately 480 and 630 nanometers). (B) Magnitude scaling, showing that the relationship between a subject's perception of brightness and the intensity of a light stimulus is a power function (the exponent in this case is approximately 0.5).

The physical properties of stimuli can be measured with arbitrary precision. Measuring percepts, however, is quite another matter. The perceptual consequences of stimuli are subjective, and as such they can't be measured in any direct way. They can, however, be reported in terms of thresholds, least discernible differences, or other paradigms in which subjects state whether a percept is brighter or darker, larger or smaller, slower or faster than some standard of comparison. Such evaluations of perceptual responses are broadly referred to as psychophysics. The effort to make the analysis of percepts scientifically meaningful dates from 1860, when the German physicist and philosopher Gustav Fechner decided to pursue the connection between what he referred to as the "physical and psychological worlds" (thus the rather unfortunate word psychophysics).

In practice, there are only a limited number of ways to assess perception in relation to physical stimuli, although there are many permutations of the basic techniques. A conceptually straightforward but technically difficult measurement is to ascertain the least energetic stimulus that elicits a perceptual response in a particular sensory modality, such as the weakest retinal stimulation perceived as something seen by dark-adapted subjects. By varying the amount of energy delivered, a psychophysical function can be obtained that defines the stimulus threshold value (Figure A). Since at threshold levels of stimulation subjects have difficulty saying whether they saw something or not, such tests are usually carried out using a forced-choice paradigm, in which the observer must respond on each trial. Typically, a series of trials is presented in which stimuli of different energetic levels are randomly interspersed with trials that do not present a stimulus. Because 50 percent correct responses (i.e., saying "Yes, I saw something" or "No, I saw nothing" when a stimulus was or was not present, respectively) would be the average result obtained if the subject merely guessed on each trial, a 75 percent correct-response rate is conventionally taken to be the criterion for establishing the threshold level of stimulus energy.

A technically easier and more generally applicable way of getting at the sensitivity of a sensory modality is to measure—at any level of stimulus intensity—how much physical change is needed to generate a perceptual change. The resulting functions, called difference threshold functions, have many practical implications. The Weber-Fechner law is a good example. The law states that the ability to notice a difference (called a test of just-noticeable or equally noticeable differences) is determined by a fixed proportion of the stimulus intensity, not an absolute difference. This proportion is referred to as the Weber fraction; if the Weber fraction is 1/10, for example, and a 1-gram increment to a 10-gram weight can just be detected, then 10 grams will be the minimum detectable increment to a 100-gram weight.

What is now known about the physiology of sensory systems indicates that the proportional relationship between just-noticeable differences and stimulus magnitude expressed by the Weber-Fechner law makes good sense. Recall that because neurons can generate only a limited number of action potentials per second, sensory systems must continually adjust their overall range of operation to provide subjects with information about the energy levels of, say, light, where the energy levels pertinent to humans span many orders of magnitude (see Figure 3.3). The Weber fraction thus provides an approximate measure of the gain of a sensory system under specified conditions.

Another psychophysical approach, called magnitude scaling, entails ordering percepts along an ordinal scale that covers the full range of a perceptual quality (brightness, for instance; Figure B). The most extensive studies of this sort were carried out by psychologist Stanley Stevens, who worked on this issue from about 1950 until 1975. To take an example, Stevens asked whether a light stimulus that is made progressively more intense elicits perceptions of brightness that linearly track the physical intensity. In making such determinations, Stevens simply asked subjects to rate the relative intensities of a series of test stimuli on a number scale along which 0 represented the least intense stimulus and 100 the most intense (similar in principle to the common practice of rating pain on a scale of 1 to 10). In this manner, he determined that brightness scales as a power function with an exponent of approximately 0.5 under the standard conditions he used (Figure B). The power functions exhibited in such magnitude-scaling experiments are sometimes referred to as reflecting Stevens' law. Rationalizing Stevens' results presents another challenge to theories seeking to explain how visual and other sensory systems generate the percepts they do.

Finally, another staple of psychophysics entails measurements of reaction time. A logical assumption is that the more complex the neural processing entailed in performing a given task, the longer it will take to perform the task (Figure C). This simple paradigm is the basis of many studies in later chapters.

(C) A reaction-time task. Reaching a judgment about whether the object on the left is the same as the object on the right is a function of "task difficulty"; people make this judgment more quickly for objects that are closer to the same orientation in space (top pair) than for objects that are differently oriented (bottom pair). Reaction time is used in many different paradigms, as will be apparent in later chapters.

References
LEZAK, M. (2004) Neuropsychological Assessment. Oxford: Oxford University Press.
STEVENS, S. S. (1975) Psychophysics. New York: Wiley.
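The two relationships described in Box 3B lend themselves to a brief numerical sketch. The functions and constants below are illustrative assumptions chosen to match the examples in the box (a Weber fraction of 1/10 and a Stevens exponent of 0.5), not values fitted to any dataset.

```python
# Illustrative sketch of the Weber-Fechner and Stevens relationships
# described in Box 3B; constants are assumptions for demonstration only.

def weber_jnd(intensity, weber_fraction=0.1):
    """Just-noticeable difference: a fixed proportion of stimulus intensity."""
    return weber_fraction * intensity

def stevens_magnitude(intensity, exponent=0.5, scale=1.0):
    """Stevens' law: perceived magnitude grows as a power of intensity."""
    return scale * intensity ** exponent

# Weber fraction of 1/10: the just-detectable increment scales with the base,
# from about 1 gram on a 10-gram weight to about 10 grams on a 100-gram weight.
for base in (10.0, 100.0, 1000.0):
    print(base, weber_jnd(base))

# With an exponent near 0.5, a fourfold increase in luminance roughly
# doubles the reported brightness.
print(stevens_magnitude(100.0) / stevens_magnitude(25.0))
```

Note that the just-noticeable difference is proportional to the base intensity, whereas Stevens scaling compresses large intensity ranges into a much smaller perceptual range, consistent with the limited firing rates of sensory neurons discussed above.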
Many investigators have supposed that the patch on the dark background looks brighter than the patch on the light background because of a difference in the retinal output. The percepts elicited by other stimulus patterns, however, undermine the idea that simultaneous lightness/brightness contrast effects are an incidental consequence of dark versus light surrounds. In the pattern in Figure 3.14, for example, the target patches on the left are surrounded by a greater area of higher luminance (lighter territory) than lower luminance, and yet appear brighter than the targets on the right, which are surrounded by a greater area of lower luminance (darker territory) than higher. Although the average luminance values of the surrounds in Figure 3.14 are effectively opposite those in the standard simultaneous lightness/brightness stimulus shown in Figure 3.13A, the brightness differences elicited are about the same in both direction and magnitude as in the standard presentation.

Figure 3.14 White's illusion This stimulus pattern elicits perceptual effects that cannot be explained in terms of the same local contrast effects illustrated in Figure 3.13A. The pattern and its perceptual consequences are called White's illusion after the psychologist who first described this stimulus more than 30 years ago. (After White 1979.)

Figure 3.15 Conflation of illumination, reflectance, and transmittance in a light stimulus An observer must parse these three factors—which the stimulus inevitably conflates—in order to respond appropriately to the pattern of luminance values in any visual stimulus.

If the output of retinal neurons cannot account for the relative lightness/brightness values seen in response to such stimuli, what then is the explanation? A fact fundamental to answering this question is that the sources of luminance values are not specified in retinal images.
The reason is not hard to understand. Retinal luminance is determined by three basic aspects of the physical world: the illumination of objects, the reflectance of object surfaces, and the transmittance of the space between the objects and the observer. As indicated in Figure 3.15, these factors are conflated in the retinal image; thus, many different combinations of illumination, reflectance, and transmittance can give rise to the same value of luminance. There is no logical or direct way in which the visual system can determine how these three factors are combined to generate a particular retinal luminance value. Since appropriate behavior requires responses that accord with the physical sources of a stimulus, the inverse optics problem presents a fundamental challenge in the evolution of animal vision (see Box 3C). For example, if we were unable to distinguish the same stimulus luminance arising from a high-reflectance surface in shadow and a low-reflectance surface in light, we would be unable to respond properly.

Evidently, our visual system has evolved to solve this problem by interpreting lightness/brightness according to past experience with the success or failure of our behavior in response to different combinations of illumination, reflectance, and transmittance in the world. In this framework, lightness/brightness perceptions would presumably correspond to the relative frequency with which different possible combinations have proved to be the source of the same or similar stimuli in the enormous number of visual scenes witnessed during the course of evolution, as well as by individual observers during their lifetimes. This general idea harks back to the nineteenth-century vision scientist and polymath Hermann von Helmholtz, who suggested that empirical information is needed to augment what he took to be the veridical information supplied by sensory mechanisms.
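The conflation of illumination, reflectance, and transmittance can be sketched numerically. In the minimal model below, luminance is simply the product of the three factors; the particular values are arbitrary assumptions, chosen only so that two very different physical situations yield an identical stimulus.

```python
# Sketch of the ambiguity described above: retinal luminance conflates
# illumination, reflectance, and transmittance, so different physical
# scenes can produce the identical stimulus. Values are arbitrary.

def luminance(illumination, reflectance, transmittance):
    # Light reaching the eye, modeled as the product of the three factors.
    return illumination * reflectance * transmittance

# A high-reflectance (white) surface in shadow, viewed through clear air...
shadowed_white = luminance(illumination=200.0, reflectance=0.9, transmittance=1.0)

# ...and a low-reflectance (black) surface in bright light, viewed through haze.
lit_black = luminance(illumination=1000.0, reflectance=0.36, transmittance=0.5)

# The two products are equal, so the retinal value alone cannot reveal
# which combination of factors produced it.
print(shadowed_white, lit_black)
```

Because the eye measures only the product, recovering the individual factors from it is underdetermined, which is the inverse optics problem in miniature.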
The more radical idea proposed by cognitive neuroscientists today is that vision and the neural connections that underlie it depend entirely on trial-and-error experience. In this conception, the lightness/brightness values seen by an observer accord with the behavioral significance of stimuli, rather than with the physical intensities of light falling on the retina. Many peculiarities of lightness/brightness, including the percepts elicited by the stimuli in Figures 3.13 and 3.14, can be explained in this way (see Box 3C).

Color

Lightness and brightness are perceptions elicited by the overall amount of light in a visual stimulus. Color is the perceptual category generated by the distribution of that amount of light across the visible spectrum—that is, the relative amounts of light energy at short, middle, and long wavelengths (Figure 3.16A). The experience of color comprises three perceptual qualities: (1) hue, the perception of the relative redness, blueness, greenness, or yellowness of a stimulus; (2) saturation, the degree to which the percept approaches a neutral gray (e.g., a highly unsaturated red will appear gray, although with an appreciable reddish tinge); and (3) color brightness, the perceptual category described in the previous section, but applied to a stimulus that elicits a discernible hue. Taken together, these three qualities describe a perceptual color space (Figure 3.16B).

The ability to see colors evolved in humans and many other mammals because perceiving spectral differences allows an observer to distinguish object surfaces more effectively than by basing those distinctions on luminance alone. In humans, seeing color is based on the different absorption properties of three different cone types with different photopigments (called cone opsins). Each cone type thus responds best to a different portion of the visible light spectrum (roughly speaking, to long, middle, and short wavelengths, respectively; see Figure 3.16A).
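The encoding by three cone types can be sketched numerically: the cones reduce an entire spectral distribution to just three response values, so physically different spectra can elicit identical responses. In the sketch below, the Gaussian sensitivity curves, their peak wavelengths, and the flat test spectrum are illustrative assumptions, not measured opsin data.

```python
# Sketch of spectral ambiguity under a three-cone encoding. The cone
# sensitivities here are idealized Gaussians (an assumption), not real
# opsin absorption curves.
import numpy as np

wavelengths = np.linspace(400, 700, 31)  # nm, coarse sampling

def cone_sensitivity(peak_nm, width_nm=40.0):
    return np.exp(-0.5 * ((wavelengths - peak_nm) / width_nm) ** 2)

# Idealized short-, medium-, and long-wavelength cone types (3 x 31 matrix).
S = np.vstack([cone_sensitivity(p) for p in (430.0, 530.0, 560.0)])

spectrum = np.ones_like(wavelengths)  # a flat "white" test spectrum

# Any perturbation in the null space of S changes the physical spectrum
# without changing any cone response.
_, _, Vt = np.linalg.svd(S)
perturbation = Vt[3]              # a spectral direction S maps to zero
metamer = spectrum + 5.0 * perturbation

responses = S @ spectrum
metamer_responses = S @ metamer

print(np.allclose(responses, metamer_responses))  # prints True
```

The two spectra differ everywhere, yet the three response values are identical: with only three measurement channels, many distinct spectra collapse onto the same encoded triplet, one reason retinal output alone cannot fully determine the colors seen.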
The fact that human color vision is based on the different sensitivity of three cone types means that we humans are trichromats; color vision in most other mammals that have significant color vision is based on only two cone types, however, and they are thus referred to as dichromats.

A common disorder of human color perception arises from a genetic defect in one or more of the three cone types. The most common form of "color blindness" is deficiency of a single cone type and affects about 5 percent of males in the United States. The defective gene is located on the X chromosome, which explains the overwhelming preponderance of this problem in males. Although such individuals cannot distinguish between red and green hues (or, less commonly, between blue and yellow), most people with this problem have little practical difficulty in daily life, confirming that color perception is expendable, whereas lightness/brightness perception is not.

While successfully accounting for many aspects of color perception in the laboratory, explanations of color vision based on retinal output from the three human cone types have long been recognized to be inadequate, in much the same way that retinal output determined by luminance does not adequately explain the lightness or brightness that people see. The comparisons made by the three cone types provide only a partial account of the colors we end up seeing and therefore of how color sensations are generated.

Figure 3.16 Color percepts (A) The absorption properties of the three cone types in the human retina. The solid curves indicate the differential sensitivity of the cone types to short-, medium-, and long-wavelength light. (The dashed curve shows rod absorption properties.) (B) Perceptual color space for humans. At any particular level of light intensity (which evokes a sensation of color brightness), movements around the perimeter of the relevant plane, or "color circle," correspond to changes in hue (i.e., changes in the apparent contribution of red, green, blue, or yellow to the percept). Movements along the radial axis correspond to changes in saturation (i.e., changes in the approximation of the color to the perception of a neutral gray). Each of the four primary color categories (red, green, blue, and yellow) is characterized by a unique hue (indicated by dots) that has no apparent admixture of the other three (i.e., a color experience that cannot be seen or imagined as a mixture of any other colors). These four colors are considered primary because of their perceptual uniqueness, which is very different from the "primary" paint colors taught in art classes.

Like lightness/brightness perceptions, the colors we see are strongly influenced by the rest of the scene. For example, a stimulus patch generating exactly the same distribution of light energy at various wavelengths can appear quite different in color depending on its surroundings—a phenomenon called color contrast (Figure 3.17A). Conversely, patches in a scene returning different spectra to the eye can appear to be much the same color—an effect called color constancy (Figure 3.17B).

Color contrast and color constancy effects present much the same problem for understanding color processing as do contextual lightness/brightness effects. Together, these phenomena have led to a debate about color percepts that has lasted more than a century. The key issue is how global information about the spectral context in scenes is integrated with local spectral information to produce color percepts. There is as yet no consensus about how central visual processing integrates local and global spectral information to produce the remarkable phenomena apparent in color perception. The answer may again
The answer may again SENSORY SYSTEMS AND PERCEPTION: VISION 75 (A) “Blue” “Yellow” Contrast (B) “Red” “Red” Constancy Figure 3.17 Color contrast and color constancy (A) The four blue patches on the top surface of the cube in the left panel and the seven yellow patches on the cube in the right panel are actually identical gray patches. In this demonstration of color contrast, these identical stimulus patches are made to appear either blue or yellow by changes in the spectral context in which they occur. (B) Patches that have very different spectra can be made to look more or less the same color (in this case, red) by contextual information—a phenomenon that demonstrates color constancy. (From Purves and Lotto 2011.) be that the colors we see are determined empirically for the same reasons that lightness and brightness appear to be generated in this way—that is, to meet the challenge presented by the inherent ambiguity of light stimuli. Information about central color processing has come from studies in non- human primates, clinical observations in patients with cortical lesions, and noninvasive brain imaging in normal subjects. This work suggests that extrastriate area V4 is especially important in color processing (see Figure 3.7). Particularly revealing have been neuropsychological and imaging studies of individuals suffering from a condition called cerebral achromatopsia. In effect, such patients lose the ability to see the world in color, although other aspects of vision, such as lightness, brightness, and form, remain intact. A good example described by the neurologist and essayist Oliver Sacks is a patient who, following lesions 76 CHAPTER 3 (A) Figure 3.18 Damage to the ventral occipital cortex affects color vision A person with brain damage in this region (which includes visual cortical area V4) often suf- fers from an inability to perceive color (achromatopsia), despite being able to see lightness, brightness, and form more or less normally. 
(A) Degree of overlap in the location of lesions in a series of 46 patients with achromatopsia who also had other visual problems, such as difficulty in face recognition. Given the anatomy of the primary visual pathway (see Figure 3.5), such patients are often blind to stimuli of any sort in the contralateral visual field. (B) Degree of overlap in 11 patients in this series whose primary symptom was achromatopsia. The narrower overlap in these patients is consistent with the conclusion that the integrity of the cortex in the gen- eral vicinity of V4 is important for color vision. The inset in (A) shows the level of the horizontal sections shown. (From Bouvier and Engel 2006.) Percent overlap in the region of V4, saw objects as all being “dirty” shades of gray. When asked to draw from memory, he had no difficulty reproducing 80 relevant shapes or shading, but he was unable to appropriately color the objects he represented. A meta-analysis (see Box 1A) of a large number of cases of cerebral achromatopsia (Figure 3.18) shows that (B) damage over an extensive region of the ventral occipital cortex that 60 includes V4 can give rise to this condition, and that the region of injury typically affects other visual and cognitive functions. Thus, whereas V4 seems important to color vision, a number of related 40 extrastriate areas probably participate as well in generating color percepts. Further support for the conclusion that V4 and surround- ing regions of the extrastriate visual cortex are concerned with color processing comes from functional imaging studies in normal 20 subjects, which show activation of these same regions when subjects undertake color-processing tasks. Form A third fundamental quality of visual perception is form. 
Perceptions of form entail simple geometrical characteristics such as the length of lines, their apparent orientation, and the angles they make as they intersect other lines, and understanding the responses to such stimuli is a first step toward understanding how complex object shapes are perceived. A starting point in exploring how the visual system generates perceptions of form is examining how we perceive the distance between two points in a stimulus, as in the perceived length of a line, or the dimensions (size) of a simple geometrical shape. It is logical to suppose that the length we see should correspond more or less directly to the physical length of a line drawn on a piece of paper or on a computer screen. But, as in the case of lightness, brightness, and color, perceptions of form do not correspond to physical reality. A well-studied example of this discrepancy is the variation in the perceived length of a line as a function of its orientation. As investigators have repeatedly shown over the last 150 years, a line oriented more or less vertically in the retinal image appears to be significantly longer than a horizontal line of exactly the same length; and the maximum length is perceived, oddly enough, when the stimulus is oriented about 30 degrees from vertical. This effect is evidently a particular manifestation of a general tendency to perceive the extent of any spatial interval differently as a function of its orientation in the retinal image.

There is a rich literature on other perceptual distortions ("geometrical illusions") elicited by simple stimuli, showing in each case that measurements made with instruments like rulers or protractors are at odds with the corresponding percepts (Figure 3.19). These effects are similar to lightness, brightness, and

Figure 3.19 Examples of some much-studied geometrical illusions (A) The Hering illusion. German physiologist Ewald Hering showed that two parallel lines (red) appear bowed away from each other when presented on a background of converging lines. (B) The Poggendorff illusion. The continuation of a line interrupted by a bar appears to be displaced vertically, even though the two line segments are actually collinear. (C) The Müller-Lyer illusion. The line terminated by arrow tails looks longer than the same line terminated by arrowheads. (D) In the Ponzo illusion, the upper horizontal line appears longer than the lower one, even though, once again, the line lengths are identical. (E) All of the preceding effects are apparent in natural scenes and, as in brightness and color, can be enhanced by more complex contextual information. The tabletop illusion initially created by psychologist Roger Shepard is a good example. The two tabletops are actually identical, as is apparent when the right top is rotated 90 degrees, as shown below on the right.