
Full Transcript


Chapter 6

We can't perceive everything, and some things are not important, so it's good that we don't perceive them.
- Attention is the process of selectively concentrating on some aspects of the environment while ignoring others. Attention involves multiple neural processes and mechanisms.
- Our perceptual system has limited processing power.
  ○ Filtering: we block processing of distracting stimuli.
  ○ Facilitation: we enhance processing of desired stimuli.
- Attention is partly under our control (voluntary) and partly beyond our control (involuntary).

What Directs Our Attention?
- Visual salience: regions of a scene that are very different from the rest of the scene (color, size, shape, contrast, movement; green circle demo).
- Attention capture: properties of a stimulus grab attention, even against the person's will.

Visual Search
- Visual search: looking for a target in a display containing distracting elements.
- Target: the goal of a visual search.
- Distractor: in visual search, any stimulus other than the target.
- Set size: the number of items in a visual search display.

Scanning a Scene
- Visual scanning: looking from place to place.
  ○ Fixations and saccadic eye movements.
  ○ On average we move our eyes about 200,000 times a day.
- Overt attention involves looking directly at the attended object; covert attention is attending without looking.
- Scanning eye movements are important for collecting details of a visual scene.
- Measuring eye movements with camera-based eye trackers reveals:
  1. Saccades: small, rapid eye movements (as when watching a train pass by).
  2. Fixations: pauses in eye movements that indicate where a person is attending; approximately three fixations per second.

Attention!
- Divided attention: paying attention to more than one thing at a time. This ability is limited, which constrains how much we can process at once.
- Selective attention: focusing on specific objects and filtering out others. Selection is achieved partly through use of the fovea (i.e., it is linked to eye movements).

Attention Based on Cognitive Factors
- What is interesting to the observer: people focus on what is personally interesting.
- Scene schemas: an observer's knowledge about what a typical scene contains.
  ○ Participants focus on objects that do not belong in a scene. If something is really out of place we notice it; if it is only mildly different or odd, the chances of noticing it are lower.
- One way to show that where we look isn't determined only by saliency is to track the eye movements of a subject performing a task: the meaning of the scene or the task changes attention. Attention can be influenced by a person's goals.

Attention Speeds Responding
- Cue: a stimulus that might indicate where (or what) a subsequent stimulus will be.
- Reaction time (RT): a measure of the time from the onset of a stimulus to a response.
  ○ Participants kept their eyes on the "+" and were cued on the side of the rectangle where the target was most likely to appear.
  ○ RT decreased when cue and target matched, but was also lower for condition B than for condition C. Why?

The Physiological Basis of Attention
- Attention can alter the tuning of a neural receptive field; receptive fields are not completely fixed and can change in response to attentional demands.
- The receptive field of a neuron can shift when a monkey shifts its attention to locations that correspond to different places on the retina: attention changes the receptive field.
- The binding problem: any stimulus activates a number of different areas of the cortex.
How are these separated signals combined to create a unified object?
- Color, motion, and orientation are represented by separate neurons. How do we combine these features when perceiving, say, a ball? The features are pulled apart first, before we perceive the entire picture.

Feature Integration Theory
- Preattentive stage: the features of objects are separated.
- Focused attention stage: the features are bound into a coherent perception.
- (The "where" stream carries information about location and motion.)

Illusory Conjunctions
- Illusory conjunctions: features that should be associated with one object become incorrectly associated with another.
- Experiment by Treisman & Schmidt:
  ○ The stimulus was four shapes flanked by two numbers.
  ○ The display was flashed briefly, followed by a mask; the task was to report the numbers first, then the shapes at the four locations.
- Results:
  ○ Participants incorrectly combined features of the objects (illusory conjunctions) about 18% of the time (e.g., reporting a small red circle).
  ○ Features that should be associated with one object become incorrectly associated with another if attention is disrupted.
- Conclusion: feature binding happens at the beginning of the perceptual process.
  ○ Each feature initially exists independently of the others.
  ○ Illusory conjunctions provide evidence that some features are represented independently and must be correctly bound together with attention.

Inattentional Blindness
- Not noticing clearly visible stimuli when attention is directed elsewhere.
  ○ Overlapping scenes make it difficult to attend to all stimuli.
  ○ The gorilla video.

Change Blindness
- Change detection: when a change in the scene goes undetected.
- "Change blindness" is the failure to notice a change between two scenes.
  ○ It demonstrates that we don't encode and remember as much of the world as we might think we do.
  ○ If participants are primed (cued), they are much better at picking up the change.
  ○ Video of different people switching places at a desk.

Visual Search and Attention
- Search experiments help us determine how the visual system allocates attention.
- Feature search: search for a target defined by a single attribute, such as a salient color or orientation.
- Conjunction search: search for a target defined by the presence of two or more attributes (e.g., a red vertical target among red horizontal and blue vertical distractors).
- Spatial configuration search: finding the target requires detecting the correct spatial configuration of a horizontal and a vertical line (the letter T among Ls, in this case). These searches are very inefficient and have high reaction times.

The Parietal Lobe and Attention
- The parietal lobe plays an important role in attention; deficits in this area produce attention problems.
- Unilateral parietal lobe damage (neglect): attentional neglect of the contralateral space (vision, body, imagination).

Chapter 7 Taking Action

Does our perception change due to our actions?
- Previous research focused on lab studies.
  ○ Are they accurate? Are they ecologically valid?
  ○ Their accuracy is questionable because such studies aren't conducted in real-world settings.

Ecological Approach
- Studying phenomena as they arise in real situations, across all areas of study.
- Applies to both humans and animals (can the animal actually do what you are asking of it?).
Military & Psychophysics
- WWII brought the realization that we need to understand perception, using psychophysics and perception research.
- Gibson (beginning in the late 1950s) studied pilots' airplane landings.

The Ecological Approach to Studying How People Move Through the Environment
- An individual's own motion creates perceptual information, which guides further movement and aids in perceiving the environment.

The Moving Observer
- Optic flow: your movement relative to stationary objects causes you to see items in the environment streaming past, providing information about the speed of your motion.
- Two characteristics of optic flow:
  1. Gradient of flow: the environment flows past at different speeds; optic flow is more rapid near the observer (longer arrows nearer the observer).
  2. Focus of expansion (FOE): there is no optic flow at the destination toward which the observer is moving (at the green dot).
- Invariant information: information that remains constant even while the observer is moving; the FOE is invariant information.

Movement & Flow
- The relationship between movement and flow is reciprocal.
- Self-produced information: when a person makes a movement, that movement generates information that is used to guide further movement.
  ○ Study (Bardy and Laurent, 1998): gymnasts perform better with their eyes open, which allows in-air corrections of trajectory. Closing the eyes had no effect on novices; it only affected the experts, so learning is involved.

Using Motion Information
- Avoiding (or anticipating) imminent collision: how do we estimate the time to contact (TTC) of an approaching object?
- Tau (τ): information in the optic flow that can signal TTC without the need to estimate absolute distances or speeds.
  ○ Tau is the ratio of the retinal image size at any moment to the rate at which that image is expanding, and TTC is proportional to tau.
  ○ This calculation relies on how quickly the object's image grows on the retina as it approaches, rather than on its distance or speed. (A small worked sketch appears below, after the wayfinding notes.)
- Ecological psychology: Gibson stresses that perception and action are dynamically linked. We directly perceive important "invariants" as we interact with the world, which allows rapid guidance of movement without the need for explicit calculation.

The Senses Do Not Work in Isolation
- All modalities are involved in perception. Your ability to stand up straight relies on many systems (inner-ear structures, muscles, vision, etc.).
  ○ Vision is especially important: it provides a frame of reference for the other systems and supplies constant self-produced information.
- Study: the swinging room (Lee and Aronson, 1974).
  ○ Moving a room back and forth (walls and ceiling moving, floor stable) made toddlers (~14 months) sway with the room. Adults showed a similar response; as little as 6 mm of movement was enough to affect participants.
  ○ The vestibular signal stays the same while vision registers movement. Moving the room toward the observer creates an optic flow pattern associated with moving forward, so the observer sways backward to compensate.

Walking
- Visual direction strategy: observers keep their body pointed toward a target and correct when the target drifts to the left or right.
- "Blind walking" experiments show that people can navigate without any visual stimulation from the environment (Philbeck et al., 1997).
  ○ Red line: participants were asked to walk to a target. Even when asked to make turns first (at turning point 1 or 2) while remaining blindfolded, participants walked close to the target.

Wayfinding
- Wayfinding involves taking routes that require making turns. Landmarks are objects on the route that serve as cues indicating where to turn.
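Returning to the tau bullet above: as a hedged illustration (my own sketch with made-up numbers, not part of the original notes), for an object approaching at a constant speed, the ratio of the current retinal image size to its rate of expansion works out to the true time to contact, even though neither distance nor speed is sensed directly.

```python
# Minimal sketch of tau / time-to-contact (TTC), assuming an object of physical
# size S approaching at constant speed v from distance d (hypothetical values).

def tau(image_size: float, expansion_rate: float) -> float:
    """Tau = current retinal image size / rate at which that image is expanding."""
    return image_size / expansion_rate

S, d, v = 0.5, 10.0, 2.0        # 0.5 m wide object, 10 m away, approaching at 2 m/s
theta = S / d                   # current image size (small-angle approximation)
dtheta_dt = S * v / d ** 2      # current rate of image expansion
print(tau(theta, dtheta_dt))    # 5.0 -> TTC estimated from the optic flow alone
print(d / v)                    # 5.0 -> true time to contact, for comparison
```

The point of the sketch is that distance and speed appear only in the setup; an observer would need only the image size and its expansion rate.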
Landmark Study: Janzen and van Turennout (2004)
- Observers studied a film that moved through a "virtual museum" and were told that they should be able to act as a guide within the museum.
- Exhibits appeared both at decision points, where turns were necessary, and at non-decision points.
- Observers were then given a recognition task while in an fMRI scanner: they were presented with objects they had seen as exhibits as well as ones they had not seen.
- Results showed the greatest activation in the parahippocampal gyrus for objects that had appeared at decision points (landmarks), even for "forgotten" objects.
- Topographical agnosia: the inability to recognize landmarks, associated with damage to the parahippocampal gyrus.

Cognitive Mapping: How Do We Do It?
- Tolman's experiments with rats in mazes (1930s-40s): the rat created a cognitive map rather than relying on memory alone.
- O'Keefe (1970s): place cells.
  ○ Cells that fire when the rat is in a particular place in the box; different cells prefer different locations.
  ○ Place field: the area of the environment within which a place cell fires.
- Moser and Moser: grid cells.
  ○ Firing depends on position in the environment, but each cell has multiple firing fields arranged in grid-like patterns, coding distance and direction as the animal moves.
  ○ In the figures, the grey line is the path taken, the dots mark place-cell firing at the rat's position, and the highlighted positions within the box show where three grid cells fired.
- Head direction cells (Taube, 2007): cells whose firing depends on the direction the subject is facing.
- Border cells (Solstad et al., 2008): cells that fire when an animal is near an edge of its environment.
- Evidence of place cells in humans (Jacobs, 2013): recordings from human brains found results similar to those in rats.

Experience
- Maguire and colleagues (2006): London bus drivers vs. London taxi drivers.
  ○ Taxi drivers performed better on visual tasks and had larger hippocampi, as measured by MRI.

Mirror Neurons
- Neurons that fire when a monkey watches another person or monkey perform the same action the monkey itself has performed.
- These neurons are specialized for one type of action (e.g., grasping, and indeed a particular type of grasping); the experimenter has to perform the same (or a similar) action.
- Found in the motor (premotor) cortex; Rizzolatti (2006).
- Audiovisual mirror neurons: respond to the same action and to the sound of that action.
  ○ A neuron fires when either hearing a peanut break or seeing a peanut break; it responds to what is happening rather than to a specific pattern of movement.
- More research is needed. Possible functions of mirror neurons:
  ○ Helping to understand another animal's actions and react appropriately.
  ○ Helping to imitate the observed action.
- Humans, fMRI (Mukamel, 2010); color coding in the figure:
  ○ Turquoise: movements directed toward objects.
  ○ Orange: tool use.
  ○ Green: movements not directed toward objects.
  ○ Blue: upper-limb movements.

Predicting People's Intentions
- Mirror neurons can be influenced by different intentions (Iacoboni et al., 2005).
- Participants showed higher responses to the "intention" video than to the "non-intention" video: actions that were "expected" to happen produced larger responses. This may help guide social interactions.

Imitating Actions
- The capacity to imitate seems to be present from birth.
  ○ Meltzoff & Keith (2007): infants 12-17 days old imitate facial expressions.
- Meltzoff (1995): 18-month-olds watched videos in three groups.
  ○ Successful demonstration group: watched adults successfully perform a task.
  ○ Unsuccessful demonstration group: watched adults attempt the task unsuccessfully.
  ○ Control group: no demonstration.
  ○ The infants then performed the action from the video: at 18 months they are mimicking but already able to alter their behavior.

Gibson's Work on Taking Action
- Gibson placed emphasis on three aspects:
  1. Measuring the observer acting on the environment.
  2. Measuring the unchanging properties that the observer was using for perception.
  3. Measuring all of the senses being used as the observer moved through the environment.

Chapter 8 Perceiving Motion

Whenever action takes place, it involves motion perception.
- We are not passive observers of the motion of others; motion perception is essential for our ability to move through and work in the environment.
- Akinetopsia: blindness to motion; the person sees only still pictures of the world.
- Functions of motion detection: motion helps break camouflage, and motion provides information about objects (we perceive shapes more quickly and accurately when an object is moving).

Illusory Motion
- Apparent motion: the (illusory) impression of smooth motion resulting from objects appearing in different locations in rapid succession (more than about 14 frames per second).
  ○ First demonstrated by Sigmund Exner in 1875; the basis of animation and film.
- Peripheral drift illusion: small eye movements or blinks cause local luminance differences to trigger motion detectors in the periphery (keep your eyes perfectly still and it stops working).
- Induced motion: the motion of one object (usually a large one) causes a nearby stationary object (usually smaller) to appear to move.
  ○ Examples: clouds moving past the moon; the subway car next to yours starts to move and you feel that you are moving.
- Motion aftereffect (MAE): viewing a moving stimulus for 30 to 60 seconds causes a stationary stimulus to appear to move (the waterfall illusion).
  ○ Works on an opponent system: you fatigue the neurons tuned to downward movement, so when you look away the opposing channel dominates and things appear to move upward.
- Interocular transfer of motion aftereffects: the transfer of an effect (such as adaptation) from one eye to the other.
  ○ Therefore the MAE must occur in neurons that respond to both eyes. Input from the two eyes is first combined in area V1, so the MAE must arise in V1 or later. Recent fMRI studies confirm that adaptation in MT is responsible for the MAE.

Comparing Real & Apparent Motion
- It was once thought that real and apparent movement were processed differently, but there is evidence that they are not (Larsen et al.): the same areas of the cortex respond to both real and apparent motion.

Neural Processing of Motion
1. If we keep our eyes still, one receptor is activated after another as the image sweeps across the retina, and we perceive motion.
2. What if we move our eyes along with the walker? The image remains stationary on the retina, and yet we still perceive motion.
3. What if no one is there and we simply scan the environment? We know the scene is not moving,
even though its image moves across different receptive fields on the retina.

Different Types of Motion: the three situations above.

Neural Processing of Motion (continued)
- Local disturbance in the optic array.
  ○ Optic array: the structure created by the things in our environment.
  ○ An object covering and uncovering stationary objects in the background provides information that something is moving relative to the environment.
- Global optic flow.
  ○ Everything in the environment moves because the observer's eyes (or body) are moving, which tells us that the scene is stationary and the observer is moving.
  ○ That is, no object motion is perceived when the entire field moves or remains stationary (Gibson, 1950).

Motion Perception: Retina/Eye Information
- Reichardt detector (1969): a circuit whose neurons fire to movement in one direction. Its components are an output unit and a delay unit.
  ○ The delay unit ensures that signals from neighboring receptors reach the output unit at the same time, so the output unit can fire and motion in that direction is perceived. (A minimal numerical sketch of this delay-and-compare scheme appears at the end of this section, after the biological-motion notes.)
- What happens if your eye is also moving with the object? Retinal neural processing alone cannot be what allows us to detect motion.

Corollary Discharge Theory
- When we move our eyes to follow a moving object, motion perception is explained by corollary discharge theory, which takes into account both the retinal signal and the eye-movement signal. It distinguishes three signals (related to the table of three movement situations):
  1. Image displacement signal (IDS): produced when an image moves across the receptors of the retina.
  2. Motor signal (MS): the signal sent to the eye muscles when you move your eyes to follow a moving object.
  3. Corollary discharge signal (CDS): a copy of the motor signal that is sent to a different place in the brain to be processed.
- The MS and CDS usually occur together. The six muscles attached to each eye help us perceive motion.
- Processing the different movement situations:
  ○ Situations 1 and 2 can be explained using the IDS and CDS: when only one of the two signals reaches the cortex, motion is perceived.
  ○ Situation 3: you move your eyes and the scene doesn't seem to move. When both signals occur together, no motion is perceived.
- Comparator: the part of the brain that receives both the IDS and the CDS and determines whether motion has occurred. (A toy version of this logic also appears at the end of this section.)
- Push gently on the side of your eye and the world appears to jiggle around: whenever the visual motion signal differs from the eye-movement command, you see motion.

The Movement Area of the Brain
- Newsome and Pare (1988) studied motion perception in monkeys.
  ○ Monkeys were trained to respond to correlated (coherent) dot-motion displays; the MT area was then lesioned.
  ○ Result: the monkeys needed about ten times as many coherently moving dots to correctly identify the direction of motion.
- A moving-dot display was also used to relate the monkey's ability to judge direction to the response of a single neuron: as dot coherence increased, the monkey judged the direction of motion more accurately and the MT neuron fired more rapidly.
- Neurons in the medial temporal area (MT) are motion sensitive.

Motion From a Single Neuron's Point of View (also in MT)
- A single receptor does not get all the information it needs to correctly process motion.
- Aperture problem: the direction of motion is ambiguous when only parts of an object are visible (as is the case for V1 neurons, because of their small receptive fields).
  ○ The activity of an individual complex cell does not provide accurate information about the direction of movement.
- MT neurons combine signals from across the whole image to determine the direction of motion (akinetopsia revisited).

Using Motion Information
- Biological motion: the pattern of movement of people and other animals. Our knowledge of the degrees of freedom of joints and their ranges of motion is important here.
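Picking up the Reichardt detector referenced above: here is a hedged, minimal sketch of the delay-and-compare idea (my own illustration, not from the notes). Two neighboring receptors feed an output unit; the first receptor's signal is delayed, and the output unit responds only when the delayed signal and the second receptor's signal arrive together, which happens for motion in the preferred direction at the matched speed.

```python
# Toy Reichardt-style motion detector (illustrative only).
# Receptors A and B sample a stimulus over discrete time steps; A's signal is
# delayed by one step and multiplied with B's current signal, so the product is
# large only when the stimulus moves from A toward B at the matched speed.

def reichardt_output(a_signal, b_signal, delay=1):
    """Correlate the delayed A signal with the current B signal at each step."""
    out = []
    for t in range(len(b_signal)):
        delayed_a = a_signal[t - delay] if t >= delay else 0
        out.append(delayed_a * b_signal[t])
    return out

# A bright spot passes receptor A at t=1 and receptor B at t=2 (preferred direction).
a = [0, 1, 0, 0]
b = [0, 0, 1, 0]
print(reichardt_output(a, b))   # [0, 0, 1, 0] -> the detector fires
# The same spot moving the other way (B first, then A) produces no response.
print(reichardt_output(b, a))   # [0, 0, 0, 0] -> the detector stays silent
```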
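Likewise, the corollary discharge comparator described above can be summarized (again as my own hedged sketch, not the notes' wording) as an exclusive-or: motion is perceived when exactly one of the image displacement signal (IDS) or the corollary discharge signal (CDS) reaches the comparator, and cancelled when both arrive together.

```python
# Toy comparator for corollary discharge theory (illustrative assumption:
# motion is perceived iff exactly one of IDS or CDS reaches the comparator).

def motion_perceived(ids_present: bool, cds_present: bool) -> bool:
    return ids_present != cds_present   # exclusive-or

print(motion_perceived(True, False))   # Situation 1: eyes still, image sweeps the retina -> True
print(motion_perceived(False, True))   # Situation 2: eyes track the moving object        -> True
print(motion_perceived(True, True))    # Situation 3: eyes scan a stationary scene        -> False
```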
- Biological motion allows us to recognize familiar behavior.
- Brain mechanisms: Grossman and Blake (2001).
  ○ Brain activity was measured while participants watched biological motion (related work also used TMS).
  ○ The superior temporal sulcus responded more strongly than when the display was masked with extra (scrambled) dots.
- Be careful with interpretations: just because a structure responds to a specific stimulus does not prove that the structure is involved in perceiving that stimulus. Correlation is not causation.

Specific Brain Areas for Motion Perception
- V1 (primary visual cortex): early visual processing, e.g., orientation and local motion.
- MT (medial temporal area): integration of local motion signals into global percepts.
- STS (superior temporal sulcus): specialized for recognizing biological motion.

Motion-Induced Blindness (MIB)
- A moving surface can cause stationary objects to "disappear." There is no single clear explanation.
  ○ Related to Troxler fading: an unchanging target in peripheral vision slowly disappears while you fixate a central target.
  ○ Perceptual filling-in: Hsu et al. (2004).
  ○ Motion streak suppression: Wallis and Arnold (2009).
  ○ Perceptual scotoma: New and Scholl (2008).

Troxler's Fading: The Lilac Chaser
1. A gap running around the circle of lilac discs.
2. A green disc running around the circle in place of the gap (opponent color cells kick in).
3. The lilac discs disappearing in sequence: the afterimages essentially cancel the original images.

(Figure: brain regions involved in the perception of motion.)

Chapter 9

Importance of Color Vision
- Color vision is important to humans: we attach emotions to it ("green with envy"), we take directions from it (traffic lights), and we have favorites (blue is the most favored color).
- Color allows better discrimination: foraging, navigation, recognition, identification.
- In the animal kingdom, color can signal sex, fitness, social rank, and reproductive state.

Color Vision
- Color is not a physical property but a psychophysical property. (There is no red in a 700 nm light, just as there is no pain in the hooves of a kicking horse.)

Color and Light
- Newton showed that white light is a mixture of many colors.
- Individual colors of the spectrum are not mixtures of other colors (shown with the second prism).
- Beams from different parts of the spectrum were "bent" by the second prism to different degrees.

What Colors Do We Perceive?
- When describing the colors we perceive, we can do so with green, red, blue, and yellow, which are considered the pure or unique colors.
- Humans can perceive about 200 different colors across the visible spectrum.
  ○ We can see more when the intensity is changed, and more still by changing the saturation (desaturating by adding white; for example, adding white to red creates pink).
  ○ With these changes we can see around 2 million different colors (far more than we can name).

Light: What We See
- Light is a form of electromagnetic radiation: photons that travel as a wave (186,000 miles per second).
  ○ Amplitude: perception of brightness.
  ○ Wavelength: perception of color.
  ○ Purity (the mix of wavelengths): perception of saturation, or the richness of colors.
- Humans see light roughly between 380 and 760 nm (often given as 400-700 nm). Long wavelengths appear red; short wavelengths appear violet.

Reflectance & Transmission
- Selective transmission: only some wavelengths pass through an object or liquid.
- The color we see corresponds to the wavelengths that are not absorbed, i.e., the ones reflected (or transmitted) by the object or liquid.
- White absorbs no wavelengths: all wavelengths are reflected.

Color Mixing
- Additive color mixture: mixing lights of different wavelengths adds the wavelengths together, so all of them remain available for the observer to see and the response in all three receptor types increases. Superimposing blue and yellow lights leads to white (all three wavelength regions are included). If light A and light B are both reflected from a surface to the eye, the effects of the two lights add together in the perception of color.
- Subtractive color mixture: mixing paints with different pigments. Each additional pigment absorbs more light and reflects fewer wavelengths. Mixing blue and yellow paint leads to green.

Perceptual Dimensions of Color
- Spectral colors: red, orange, yellow, green, blue, violet (indigo is no longer included).
- Nonspectral colors: colors made by mixing spectral colors.
- Hue: the actual color of an object or light.
- Saturation: the amount of white light mixed in; desaturation gives a faded or washed-out appearance.
- Value: the light-dark dimension.
- The HSV color solid (hue, saturation, value) determines how colors combine to produce new colors; any mixture of two hues falls on a line that can be calculated.

Color Vision: Two Theories
- Trichromatic theory
  ○ Proposed by Young and Helmholtz (1800s): three different receptor mechanisms are responsible for color vision.
  ○ Behavioral evidence: color-matching experiments, in which observers adjusted the amounts of three wavelengths in a comparison field to match a test field of a single wavelength.
- Opponent-process theory
  ○ Color vision arises from opposing responses of cells (detailed below).

Evidence for the Trichromatic Theory
- Researchers measured the absorption spectra of visual pigments in the receptors (1960s) and found three pigments with maximal absorption at:
  ○ Short wavelengths: 419 nm
  ○ Medium wavelengths: 531 nm
  ○ Long wavelengths: 558 nm
- Absorption spectra of the three cone types in the retina:
  ○ S-cones are preferentially sensitive to short wavelengths ("blue" cones).
  ○ M-cones are preferentially sensitive to middle wavelengths ("green" cones).
  ○ L-cones are preferentially sensitive to long wavelengths ("red" cones).

Cone Responding & Color Perception
- Color perception is based on the responses of the three different cone types, which vary depending on the wavelengths present.
- Combinations of the responses across all three cone types lead to the perception of all colors.

Trichromatic Theory of Color Vision
- Trichromatic theory states that we need three different cone receptors in order to see all colors.
- Univariance problem: the information from a single receptor is ambiguous. We only get a signal that the receptor has fired; if we had only one kind of receptor, how would we know the light is orange? One receptor type cannot represent all colors. What if two different lights reach us and drive that one receptor? Any two wavelengths can be made to produce the same response from a single receptor simply by changing their intensities. (An illustrative sketch follows.)
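As a hedged illustration of this univariance point (my own sketch using made-up Gaussian sensitivity curves, not real cone data): a single cone type can give identical responses to two different wavelengths once their intensities are adjusted, while the pattern across three cone types still tells them apart.

```python
# Illustrative univariance demo with made-up Gaussian "cone" sensitivities
# (peaks at 440, 530, 560 nm chosen only for the example; not real data).
import math

def sensitivity(wavelength, peak, width=60.0):
    return math.exp(-((wavelength - peak) / width) ** 2)

def cone_responses(wavelength, intensity):
    return {name: round(intensity * sensitivity(wavelength, peak), 3)
            for name, peak in (("S", 440), ("M", 530), ("L", 560))}

# Two different lights: 500 nm at intensity 1.0, and 600 nm with its intensity
# raised just enough to give the M-cone the exact same response.
i2 = sensitivity(500, 530) / sensitivity(600, 530)
r1 = cone_responses(500, 1.0)
r2 = cone_responses(600, i2)
print(r1["M"], r2["M"])   # identical M-cone responses: one cone type cannot tell them apart
print(r1, r2)             # but the S:M:L pattern differs, so three cone types can
```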
- Having two or more receptor types creates a pattern of responses, a ratio that the brain can interpret as a particular color. Two wavelengths that produce the same response from one cone type (M) produce different patterns of responses across the three cone types (S, M, and L).
- Color perception depends on this pattern of activity across all three receptor types. It takes at least two cone types to see color at all; with one cone type there is univariance, because for each wavelength only the number of action potentials reaches the brain.
- It works somewhat like "color mixing": the relative contribution of each cone type determines the color we experience in the brain, with each cone type acting as a different "pigment."
- At least two receptor types for different wavelengths are necessary for color perception. One receptor type cannot support color vision because lights of different wavelengths can cause the same response (the principle of univariance). Two receptor types (as in dichromats) solve this problem, but three types (as in trichromats) allow the perception of more colors.
- This is the basis of the color-matching procedure described earlier, in which at least two different lights were needed to match a given color. The color-processing system uses information from all three receptor types to work out what color is being transmitted.

The Opponent-Process Theory of Color Vision
- Ewald Hering (1834-1918): color vision is caused by opposing physiological responses generated by blue versus yellow and by green versus red.
- Behavioral evidence:
  ○ The common types of color blindness are red/green and blue/yellow.
  ○ Color afterimages and simultaneous color contrast show the opposing pairings: red and green switch places, and blue and yellow switch places.
- Opponent-process mechanism:
  ○ Three mechanisms: red/green, blue/yellow, and white/black.
  ○ The members of each pair respond in an opposing fashion, for example positively to red and negatively to green.
  ○ These responses were believed to be the result of chemical reactions in the retina.
  ○ If you view a lot of red and then take it away, you see green; if you view a lot of yellow and take it away, you see blue.

Physiology of the Opponent Process
- Researchers performing single-cell recordings found opponent neurons (1950s).
- Opponent neurons:
  ○ Are located in the retinal ganglion cells and the LGN.
  ○ Respond in an excitatory manner to one end of the spectrum and in an inhibitory manner to the other.
- We see color with the trichromatic mechanism, and the opponent cells help us see better detail.

Trichromatic vs. Opponent Color Vision
- Each theory describes physiological mechanisms in the visual system, but at different levels.
- Trichromatic theory explains the responses of the cones in the retina, i.e., how different wavelengths of light are detected.
- Opponent-process theory explains how color information is efficiently encoded at the level of the ganglion cells and the LGN, farther along in the brain.
- Trichromatic process: retinal receptor level; three kinds of cones; S (blue), M (green), L (red).
- Opponent process: ganglion cells and LGN; three information streams; M-L, (M+L)-S, and black-white opponency.

Color in the Cortex
- Cortical areas have been found that process texture, color, shape, and all three together.

Types of Opponent Neurons in the Cortex
- Single-opponent neurons: excited when a middle (M) wavelength hits the center and inhibited when a long (L) wavelength hits the surround; they aid in perceiving colors within boundaries.
- Double-opponent neurons: a vertical bar of light makes these neurons fire; they aid in perceiving boundaries between different colors.

Color Deficiency
- Does everyone see colors the same way?
- Yes and no. There is general agreement on colors, with some variation due to age (the lens turns yellow), and males generally require a slightly longer wavelength than females to experience the same hue.
- About 8% of the male population and 0.5% of the female population has some form of color vision deficiency ("color blindness"); the relevant genes are located on the X chromosome.

Monochromatism
- Monochromats have a very rare hereditary condition: only rods and no functioning cones, or no cones at all.
- They perceive only white, gray, and black tones: true color blindness (achromatopsia).
- They have poor visual acuity and eyes that are very sensitive to bright light.

Dichromatism
- There are three types of dichromatism.
- Protanopia affects 1% of males and 0.02% of females.
  ○ Individuals see short wavelengths as blue; the neutral point occurs at 492 nm; above the neutral point they see yellow.
  ○ They are missing the long-wavelength pigment.
- Deuteranopia affects 1% of males and 0.01% of females.
  ○ Individuals see short wavelengths as blue; the neutral point occurs at 498 nm; above the neutral point they see yellow.
  ○ They are missing the medium-wavelength pigment.
- Tritanopia affects 0.002% of males and 0.001% of females.
  ○ Individuals see short wavelengths as blue; the neutral point occurs at 570 nm; above the neutral point they see red.
  ○ They are most probably missing the short-wavelength pigment.

Testing Color Deficiency
- Unilateral dichromat: a person with one trichromatic eye and one dichromatic eye (rare).
- Cerebral achromatopsia: color blindness due to damage to the occipital lobe.

Color Constancy
- The color of objects remains relatively constant under varying illuminations, even though the same surface illuminated by two different light sources generates two different patterns of activity in the S-, M-, and L-cones.
- Retinex theory (Land, 1977): color is determined by the proportion of light of different wavelengths that a surface reflects.
  ○ These relative reflectances remain constant across illuminants, so perception remains constant; more than one wavelength is needed for the comparison.
  ○ Knowledge of objects also helps us define color.

Lightness Constancy
- Achromatic colors are perceived as remaining relatively constant.
- The perception of lightness is not related to the absolute amount of light reflected by an object, but to the relative (percentage) amount of light it reflects.
- The ratio principle: two areas that reflect different amounts of light look the same if the ratios of their intensities to their surroundings are the same. This works when objects are evenly illuminated, but they often are not.

Color Detection: Night & Day
- Photopic: light intensities that are bright enough to stimulate the cone receptors and bright enough to saturate the rod receptors. Sunlight and bright indoor lighting are both photopic conditions.
- Scotopic: light intensities that are bright enough to stimulate the rod receptors but too dim to stimulate the cones. Moonlight and extremely dim indoor lighting are both scotopic conditions.

Chapter 10

The problem: to infer a three-dimensional world from a two-dimensional, curved retinal image.
- The house is farther away than the tree, but (b) the images of point F on the house and point N on the tree both fall on the two-dimensional surface of the retina, so (c) these two points, considered by themselves, do not tell us the distances of the house and the tree.
- Two points on the retina from two objects cannot, by themselves, provide depth or distance information. (A small sketch of this ambiguity follows.)
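As a hedged illustration of that ambiguity (my own sketch, not from the notes): under a simple pinhole-projection assumption, every point along a single line of sight lands on the same retinal location, so image position alone cannot recover distance.

```python
# Toy pinhole projection: a point at lateral offset x and distance z projects
# to retinal position x * f / z.  f ~ 17 mm is a rough eye-model value used
# here only for illustration.

def project(x: float, z: float, f: float = 0.017) -> float:
    """Perspective projection of a point onto the image plane."""
    return x * f / z

near_tree = project(x=1.0, z=10.0)    # 1 m off-axis, 10 m away
far_house = project(x=10.0, z=100.0)  # 10 m off-axis, 100 m away
print(near_tree, far_house)           # both 0.0017 -> identical retinal positions
```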
Cue Approach to Depth Perception
- Depth cue: information about the third dimension (depth) of visual space.
- Oculomotor cues: based on our ability to sense the position of our eyes and the muscle tension in the eyes.
- Monocular depth cue: a depth cue that is available even when the world is viewed with one eye alone.
- Binocular depth cue: a depth cue that relies on information from both eyes.

Monocular Cues
- Pictorial cues: sources of depth information that can be depicted in a scene.
- Occlusion: a cue to relative depth order in which, for example, one object obstructs the view of part of another object.
- Relative height: below the horizon, objects higher in the visual field appear farther away; above the horizon, objects lower in the visual field appear farther away.
- Relative size: all things being equal, we assume that smaller objects are farther away from us than larger objects.
- Perspective convergence: lines that are parallel in the three-dimensional world appear to converge in a two-dimensional image as they extend into the distance.
  ○ Vanishing point: the apparent point at which parallel lines receding in depth converge.
- Familiar size: a cue based on knowledge of the typical size of objects; our knowledge of an object's size influences our perception of that object's distance.
- Atmospheric perspective: a depth cue based on the implicit understanding that light is scattered by the atmosphere. More light is scattered when we look through more atmosphere, so more distant objects are subject to more scatter and appear fainter, bluer (scattering of short wavelengths), and less distinct.
- Texture gradient: elements that are equally spaced in a scene appear to be packed more closely together as distance increases; there is more visible detail in closer objects than in farther objects.
- Shadows: help us understand the location of objects relative to other surfaces and emphasize contours.

Motion-Produced Cues
- Motion parallax: images of objects closer to the observer move faster across the visual field than images of objects farther away. The brain uses this information to calculate the distances of objects in the environment.
  ○ Closer objects sweep farther across the retina: an eye moving past (a) a nearby tree and (b) a faraway house; because the tree is closer, its image moves farther across the retina than the image of the house.

Oculomotor Feedback
- Oculomotor cues are based on sensing the position of the eyes and the muscle tension in the eyes.
- Convergence: the ability of the two eyes to turn inward, used when focusing on nearer objects.
- Divergence: the ability of the two eyes to turn outward, used when focusing on farther objects.
- Accommodation: the process by which the eye changes its focus (the lens gets thicker as gaze is directed toward nearer objects).
- We do not rely on just one cue; we use them in combination.

Binocular Depth Information
- Binocular disparity: depth perception created by input from both eyes.
- Corresponding retinal points: image points of an object that are formed at the same distance from the fovea in both eyes.
- Horopter: an imaginary circle of points in space whose images fall on corresponding retinal points.
- Objects that do not fall on the horopter form images on noncorresponding points.
- Angle of disparity: the angle between these noncorresponding points.
- Disparity is the basis for stereopsis: a vivid perception of the three-dimensionality of the world that is not available with monocular vision. The slightly different views from the two eyes provide very important information about depth.
- Binocular summation: the combination (or "summation") of signals from each eye in ways that make performance on many tasks better with both eyes than with either eye alone.
- The two retinal images of a three-dimensional world are not the same.
  ○ Objects located in front of the horopter have crossed disparity; objects located beyond the horopter have uncrossed disparity.
  ○ Disparity angles from noncorresponding retinal points provide depth information: the closer the object, the larger the angle of disparity.
  ○ Disparity-selective neurons detect (are tuned to) distinct disparity angles (Hubel and Wiesel).

Binocular Vision
- Stereoblindness: an inability to make use of binocular disparity as a depth cue.
  ○ Can result from a childhood visual disorder, such as strabismus, in which the two eyes are misaligned; most people who are stereoblind do not even realize it.
- Free fusion: the technique of converging (crossing) or diverging (uncrossing) the eyes in order to view a stereogram without a stereoscope. "Magic Eye" pictures rely on free fusion.
- Binocular disparity alone can support shape perception.
  ○ Random dot stereogram (RDS): a stereogram made of a large number of randomly placed dots, containing no monocular cues to depth.
  ○ RDSs are significant because they show that stereopsis can be achieved without monocular depth cues.
- Stereoscope: a device for presenting one image to one eye and another image to the other eye. Stereoscopes were a popular item in the 1900s.
- Binocular rivalry: the competition between the two eyes for control of visual perception, which is evident when completely different stimuli are presented to the two eyes.

Size Perception & Size Constancy
- Visual angle: the angle subtended by an object at the observer's eye. It depends on both the physical size of the object and its distance (a person moving closer creates a larger visual angle).
- Higher visual processing and size perception: size constancy. The perceived size of an object remains relatively constant even when the size of the object's image on the retina changes.
- Size-distance scaling equation: S = R x D, where perceived size (S) = retinal size (R) x perceived distance (D).
  ○ Example: as a person walks away from you, the size of the person's image on your retina (R) gets smaller, but your perception of the person's distance (D) gets larger, so perceived size stays roughly constant. (A small worked example appears at the end of these notes.)
- Perceptual constancy provides a stable visual world.
  ○ Size constancy: with good distance information, the apparent size of an object remains remarkably stable, especially for highly familiar objects that have a standard size.
  ○ Other examples: color constancy; white paper stays white in moonlight or daylight (lightness constancy); a chair remains a chair no matter where we view it from (viewpoint invariance).

Visual Illusions
- Ambiguous distance information can lead to size illusions; illusions can be used to find out how the brain accomplishes size constancy.
- Ponzo illusion: a perceived increase in distance increases the perceived size of an object.
- Müller-Lyer illusion: why does this illusion occur?
- One explanation is inappropriately applied size constancy: no depth or distance cues are intended, but observers unconsciously perceive the fins as belonging to outside and inside corners. Outside corners would be closer and inside corners would be farther away.
- Without obvious distance cues, we judge size by visual angle.

The Moon Illusion
- The moon appears larger on the horizon than when it is higher in the sky.
- One possible explanation is the apparent-distance theory: the horizon moon is surrounded by depth cues while the moon higher in the sky has none, and the horizon is perceived as farther away than the overhead sky (the "flattened heavens").
- Another possible explanation is the angular size-contrast theory: the moon appears smaller when surrounded by larger objects, so the large expanse of sky makes it appear smaller.
- The actual explanation may be a combination of a number of cues.
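As a hedged worked example of the size-distance scaling equation above (my own numbers, not from the notes): multiplying the shrinking retinal size of a receding person by their growing perceived distance keeps perceived size constant, and in the apparent-distance account of the moon illusion the same retinal size paired with a larger perceived distance yields a larger perceived size.

```python
# Size-distance scaling: perceived size S = retinal size R x perceived distance D.
# All numbers below are illustrative only.

def perceived_size(retinal_size: float, perceived_distance: float) -> float:
    return retinal_size * perceived_distance

# A 1.8 m tall person walking away: retinal (angular) size shrinks as 1/distance...
for d in (2.0, 4.0, 8.0):
    r = 1.8 / d                              # small-angle approximation of retinal size
    print(d, perceived_size(r, d))           # ...but S stays 1.8 at every distance

# Moon illusion (apparent-distance account): same retinal size, but the horizon
# moon is judged to be farther away, so it is perceived as larger.
r_moon = 0.009                               # ~0.5 degrees expressed in radians
print(perceived_size(r_moon, 100.0))         # moon high in the "closer" sky -> smaller
print(perceived_size(r_moon, 300.0))         # moon at the "farther" horizon -> larger
```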
