Higher Visual Functions 1 Lecture 23 & 24 PDF


Uploaded by RightfulByzantineArt4168

University of Melbourne

Tags

visual perception, human vision, primate vision, cognitive neuroscience

Summary

This document presents lecture notes on higher visual functions, focusing specifically on how the brain processes motion and depth within the visual system. It introduces concepts like error signals and predictions in visual processing and details the operation of different visual pathways in primate vision. The material is relevant to the study of visual neuroscience, cognitive psychology, and related academic disciplines.

Full Transcript


**[LECTURE 23 -- WEEK 8 -- HIGHER VISUAL FUNCTIONS 1 -- PERCEPTION OF OBJECTS, MOTION, AND DEPTH]**

**Lecture concept:** *How do we detect motion within the visual system, and which cells are responsible for this?*

**PRIMATES:**

- Primates need to be able to forage for fruits and nuts that usually grow at the ends of tree branches -\> this requires reaching and grasping.
- They need to identify shapes and 3D objects, and then move toward them to pick them; this requires: **object recognition, motion perception, and depth perception.**

The brain uses '**error signals**' and '**predictions**' as feedback from higher cortical processing to the early visual areas (V1), in order to determine what the input is and what to do about it.

[FOR EXAMPLE...] If a car is moving in a certain direction, your sight can confirm to higher cortical areas that, via prediction, the car is indeed moving to the right at roughly 30 km/h. However, if the car suddenly stops, an **error signal** is produced by the lower visual areas (V1) **in order to correct the original prediction** toward a more likely interpretation of what is visually happening. This takes time and shows up as delays in cognition while the brain processes the difference.

**GESTALT PRINCIPLES:**

- Late (\>100 ms) signal processing in V1 is sensitive to the global organization of a scene, due to feedback from higher-order areas (V4, IT, or MT) -\> V1 response modulation.

**[{ V1 } -- Contour integration for object perception]**

- **Long-range horizontal connections in V1 can span several mm. They connect co-oriented cells, and the chain of linked co-oriented cells within V1 traces out a very clear straight line (contour).**
- These horizontal connections are EXCITATORY and align one another's responses.
- A lesion here impairs conscious vision (vision in general and conscious vision are slightly different things) = **blindsight**.

**[{ V2 } -- Borders, figure-ground segregation and illusory contours]**

- Objects may be partially occluded in natural scenes, yet the human visual system enables seamless completion of these objects; illusory edges have been used to study the responses of V2 neurons in primates.
- Even when we don't have the full picture in terms of edges etc., our brain fills in the gaps to create those edges/shapes.

**[{ V4 } -- Integration of local cues into global shapes, object-based representation]**

- Neurons in primate V4 respond to particular curvatures of an object presented anywhere in their RF, and demonstrate **position invariance** within their RFs.
- V4 neurons encode features in a visual scene that provide information about the true shape of objects.
- **Lesion =** *severe disruption of object discrimination.*

**What is the next step?** Once V4 has done its processing, it passes information on to the -\> **LOC (lateral occipital complex) & IT.**

**[{ LOC }]**

- Representation of complex shapes.
- Results are captured using **fMRI** -- in one study, humans were presented with real-life objects, degraded images, and textures.
- **LOC** responded selectively to objects, both familiar and unfamiliar, and showed ***[size invariance]***.
- **It showed greater responses to the actual objects (familiar or not) than to the patterns without objects.**

**So...**

- **LOC** demonstrates ***[form-cue invariance]***.
- The lateral occipital complex plays a key role in visual object recognition, and one of its important properties is *form-cue invariance*: the ability of the LOC to recognize an object regardless of the type of visual cue (such as colour, texture, motion, or shading) used to represent it.
In essence, the brain can identify an object based on its shape or form, even if its appearance changes due to different lighting, material properties, or partial occlusion.

**VISUAL AGNOSIA** -- *the inability to recognize visually presented objects.*

**[Apperceptive Agnosia -- A LESION]**

- Lesion toward the back of the temporal lobe, near the occipital lobe (in the more primitive visual processing areas, closer to V1).
- Cannot integrate the components' visual features into a global whole.

**[Associative Agnosia -- A LESION]**

- Lesion toward the front of the temporal lobe (in a more higher-order cortical processing area).
- Cannot identify the object with the required knowledge of it. May be able to copy it, but does not recognise it.

**[Prosopagnosia -- FACE BLINDNESS]** **=** *Damage to the **IT** area. Within **IT** is an area called the **fusiform gyrus**, lying along the middle (bilaterally) of the bottom of the brain, **which is responsible for face recognition.***

- *People with damage here can still identify the individual parts that make up a face, like the nose and eyes, but cannot integrate them into a complete facial percept.*

**MOTION PERCEPTION: [Real vs Apparent motion]**

- **Apparent motion:** *a perception of motion that arises from the presentation of [stationary images/objects].*
- **Biologically important forms of motion -- types of motion:**

1. **Smooth motion** = throwing a ball
2. **Optic flow** = the pattern of retinal motion produced as objects move closer to or further away from oneself (or as the observer moves)
3. **Complex biological motion** -- specific movements that result from the actions of animated objects (e.g. people)

**STREAM for Motion Perception:**

- **The key area for motion perception is MT/V5.**
- **V1** can directly innervate **V2, V4 or MT/V5!**
- Many V1 cells show direction selectivity in their responses.

**V1 \> MT/V5 (middle temporal) \> MST (medial superior temporal) \> PPC (posterior parietal cortex)**

**Summation of EPSPs =** when a light stimulus moves in a cell's preferred direction (left to right, for example), the neurons activated as the stimulus passes over them send signals that converge on the target cell **AT THE SAME TIME** (e.g. because earlier-activated inputs have longer delays) **= summation of EPSPs** (excitatory postsynaptic potentials), driving the target cell above threshold.

**[LECTURE 24 -- WEEK 8 -- HIGHER VISUAL FUNCTIONS 2 -- PERCEPTION OF OBJECTS, MOTION, AND DEPTH]**

**Lecture concept:** *How do we detect motion within the visual system, and which cells are responsible for this? A continuation.*

**Motion processing: [MT (V5) --] [Middle Temporal]**

- *Neurons have large receptive fields.*
- *They are sensitive to moving stimuli and show motion adaptation.*
- *This is where global integration occurs: the process of assembling all objects and stimuli in the environment to determine what their sum means.*

**Motion processing: [MST --] [Medial Superior Temporal]**

- *Also involved in motion processing; sits **right next to the MT (V5)** area within the brain, roughly on the sulcus separating the dorsal occipital lobe from the rear of the parietal lobe.*
- *Neurons have large receptive fields too.*
- *MST integrates motion information across larger visual fields, allowing for the detection of more sophisticated patterns like **expansion, contraction, and rotation**, which occur during self-motion.*
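The EPSP-summation scheme above is essentially a delay-line coincidence detector: signals from positions crossed earlier are delayed more, so for motion in the preferred direction all inputs reach the target cell in the same instant, while motion in the opposite direction spreads the arrivals out. Below is a minimal, purely illustrative sketch of that idea; the function name, delays, and threshold are invented for this example, not taken from the lecture:

```python
# Delay-line coincidence detector: input i is delayed by (n_inputs - 1 - i)
# time steps, so a stimulus sweeping left-to-right (one position per step)
# makes all delayed EPSPs coincide at the target cell.

def target_cell_response(activation_times, n_inputs=4, threshold=None):
    """activation_times[i] = time step at which input neuron i fires.
    Each input's EPSP arrives at: firing time + that input's delay."""
    if threshold is None:
        threshold = n_inputs  # require all EPSPs to coincide
    arrivals = [t + (n_inputs - 1 - i) for i, t in enumerate(activation_times)]
    # Largest number of EPSPs landing in the same time step:
    peak = max(arrivals.count(t) for t in arrivals)
    return peak >= threshold

# Preferred direction (left to right): neuron i fires at time i.
print(target_cell_response([0, 1, 2, 3]))   # all EPSPs arrive at t = 3 -> True
# Null direction (right to left): neuron i fires at time 3 - i.
print(target_cell_response([3, 2, 1, 0]))   # arrivals spread out -> False
```

This is the same logic as the classic delay-and-coincidence models of direction selectivity: only the preferred sweep lines the delayed signals up, so only it summates above threshold.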
**SO, we know what MT and MST encode (motion detection), but what happens if they are lesioned bilaterally?**

**Answer:**

- The person can't pour tea, as they cannot see the movement of the cup or the kettle, etc.
- *Life becomes very hard, and all things involving movement become restricted.*

**There are multiple pathways through which motion can be perceived and interpreted that differ from the regular pathway (LGN -\> V1 -\> MT).**

**[PERCEPTION OF DEPTH -- FRONTAL VS SIDE EYES]**

- Mammals sometimes have front-facing eyes, which provide overlapping retinal images from the two eyes **=** they can rely on **stereopsis**.
- Side eyes (on horses, for example) provide limited or no overlap between the two retinal images, which makes depth harder to compute, **but** they provide **better peripheral vision**.
- **Binocular disparity =** the difference between the two retinal images.
- The eyes occupy different positions in space, resulting in different views.

**Horopter:**

- The **horopter** is a crucial concept in binocular vision and depth perception. It is an imaginary curve or surface in space where objects project images that fall on corresponding points of the two retinas (the same relative locations in both eyes). Objects located on the horopter are perceived as being at the same depth or distance from the observer, because their images stimulate the same retinal locations, producing no disparity between the two eyes' views.

**Depth Perception and the Horopter:** depth perception relies heavily on **binocular disparity**, which occurs because each eye sees a slightly different image of the same scene due to the eyes' horizontal separation. The brain uses these differences (disparities) to calculate the relative depth of objects. Here's how the horopter fits into this process:

1. **Corresponding retinal points:**

- When an object lies on the horopter, its image falls on **corresponding points** of the two retinas. Each eye sees the object in exactly the same position on its retina, and the brain interprets the two views as coming from the same depth. As a result, the object appears single and clear, and no disparity is generated.

2. **Disparity and depth perception:**

- Objects **in front of or behind** the horopter project to **non-corresponding points** on the retinas. This creates **binocular disparity** -- a small difference in the positions of the object's images in the two eyes -- which the brain interprets as a cue for depth.
- The greater the disparity, the farther the object is from the horopter, and the more depth is perceived. This disparity allows the brain to compute how close or far objects are from the observer, contributing to the three-dimensional perception of space.
- **Objects in front of the horopter** (closer to the observer) produce **crossed disparity**: their images fall toward the temporal side of each retina.
- **Objects behind the horopter** (farther away) produce **uncrossed disparity**: their images fall toward the nasal side of each retina.

Since V1 is the first point in the visual pathways where neurons receive binocular input, its neurons are sensitive to binocular disparity. Areas **V1, V2, V3, MT & MST** all show high levels of ***disparity tuning***, with cells tuned either near or far (relative to the horopter).
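The disparity-versus-depth relationship above can be illustrated with simple viewing geometry: for a point on the midline, the vergence angle it subtends at the two eyes shrinks with distance, and horizontal disparity is the difference between the object's vergence angle and the fixation point's. A minimal sketch (illustrative only; the function names and the 6.5 cm interocular distance are assumptions, not from the lecture):

```python
import math

# Binocular disparity for a point on the midline, relative to fixation.
# Positive -> object nearer than the horopter (crossed disparity),
# negative -> object farther than the horopter (uncrossed disparity).

INTEROCULAR_DISTANCE = 0.065  # metres (typical human value, assumed)

def vergence_angle(distance, iod=INTEROCULAR_DISTANCE):
    """Angle (radians) subtended at the two eyes by a midline point."""
    return 2.0 * math.atan((iod / 2.0) / distance)

def disparity_deg(object_distance, fixation_distance):
    """Horizontal disparity (degrees) relative to the fixation distance."""
    d = vergence_angle(object_distance) - vergence_angle(fixation_distance)
    return math.degrees(d)

print(disparity_deg(0.5, 1.0))   # nearer than fixation -> positive (crossed)
print(disparity_deg(2.0, 1.0))   # farther than fixation -> negative (uncrossed)
print(disparity_deg(1.0, 1.0))   # on the horopter -> zero disparity
```

Note how disparity falls off with distance: the same step away from the horopter produces a much smaller disparity at 2 m than at 0.5 m, which is one reason stereopsis is most useful at near range.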
**[Monocular cues] to depth include:**

- **Relative size**
- **Occlusion** (when something is partially hidden by something else, it seems further away -- like a square partially hidden by a circle)
- **Cast shadows** (the size and angle of a shadow define the distance between objects)
- **Shading**
- **Aerial perspective**
- **Linear perspective**
- **Texture gradient**
- **Motion parallax**
- **Blurring**

Monocular cues **=** cues available to each eye individually, which help us see depth even if one eye does not function correctly. They are also required to compute absolute distances, since disparity sensitivity is only relative to the horopter. **=** *Very important for animals with eyes on the sides of their heads, for example.*

- **Motion parallax:** objects move at different speeds across the retina depending on their distance from the observer -- objects that are closer to you appear to move faster than objects that are farther away.

SHORT ESSAY:

**Q: What is akinetopsia, and what kind of cortical damage most often leads to akinetopsia? (4)**

**Ans:**

- Akinetopsia is the loss of motion perception (2) that most often results from bilateral damage to area MT (V5) (2).

**Explain the function(s) of this cortical area (2) and its anatomical input connections involved in that function(s) (4):**

**Ans:**

- Neurons of area MT are sensitive to motion and are involved in the global integration of visual information (2).
- They receive input from area V1, especially from direction-selective cells in V1 (2); however, some of their input bypasses V1: MT receives projections from koniocellular LGN cells (the geniculo-extrastriate pathway) (1) as well as from the superior colliculus/pulvinar (the colliculo-cortical pathway) (1).
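The motion parallax cue listed among the monocular cues above follows directly from geometry: for an observer translating sideways at speed v, a stationary object at distance d sweeps across the retina at an angular velocity of roughly v/d radians per second, so nearer objects move faster. A minimal sketch (illustrative; the function name and the chosen speeds/distances are invented for this example):

```python
import math

# Motion parallax: approximate retinal angular speed of a stationary object
# located roughly perpendicular to a sideways-translating observer's path,
# using the small-angle approximation: angular speed ~ v / d.

def retinal_angular_speed_deg(observer_speed, object_distance):
    """Approximate angular speed (degrees/second) of an object at
    object_distance (m) for an observer moving sideways at observer_speed (m/s)."""
    return math.degrees(observer_speed / object_distance)

# Walking past a scene at 1.5 m/s: nearby objects sweep much faster.
for d in (2.0, 10.0, 50.0):
    print(f"object at {d:5.1f} m -> {retinal_angular_speed_deg(1.5, d):6.2f} deg/s")
```

The inverse relationship between distance and retinal speed is exactly what makes parallax a usable depth cue for animals with laterally placed eyes and little binocular overlap.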
