PSY 205 Cognitive Psychology


Summary

These lecture notes cover visual perception and related concepts in cognitive psychology, focusing on Gestalt principles, bottom-up and top-down processing, templates, feature detection theory, and facial perception. The notes also discuss the interaction between bottom-up and top-down processing. The document is part of a course at Hong Kong Shue Yan University.

Full Transcript


Hong Kong Shue Yan University
Department of Counselling and Psychology
PSY 205 - Cognitive Psychology
Lecture: Visual Perception

1. Gestalt principles
2. Bottom-up and top-down processing
3. Interaction between bottom-up and top-down processing
4. Templates
5. Feature detection theory
6. This is a 3D world
7. Facial perception
8. How many visual perceptual systems do we have?

A large number of studies have examined how an image is perceived, and different theories have been proposed, even though some of them are contradictory. Before looking into these theories, several concepts have to be introduced first.

Gestalt principles

Gestalt principles concern the basic rules that govern how an object is seen. Several rules govern our perception:
- Figure-ground perception: a tendency to detect a foreground-background relationship
- Similarity: a tendency to group similar objects (in shape, location, or other physical dimensions) into one category
- Closure: an automatic filling-in process that completes a shape
- Proximity: things that are physically close are grouped into the same category
- Continuity: a strong preference for a contour with no interruption

Gestalt psychology raises a debate in modern psychology because it does not fit well with the processing algorithms of a number of computational models of visual processing.

Bottom-up and top-down processing

The object in the external world is called the distal object. What we perceive, via proximal stimulation, is regarded as the perceptual object. If our perceptual images were purely and directly constructed from distal objects, the whole process could be referred to as bottom-up processing. When experiences and expectations affect how a perceptual image is constructed, a discrepancy between the distal object and the perceptual object is therefore expected; this is top-down processing.

Note: Contextual information is explained by Gibson's direct perception, which arguably rejects the idea of top-down processing.

Examples of top-down processing can be illustrated with the concepts of constancy and the object superiority effect. Top-down processing seems to be so automatic that no one can consciously control it. Its advantage is a reduction in processing time, since some features can be filled in from experience. Sensible guessing becomes important: perception can draw on what is stored in memory to predict what is present in the external world. The drawback of top-down processing is overlooking, which occasionally makes us fall into traps, e.g., visual illusions.

Discussion: Is it possible that top-down processing plays no role at the level of sensation?

Interaction between bottom-up and top-down processing

Interaction is necessary. Otherwise, our processing would be either too slow to handle the incoming stimuli (i.e., empiricism) or too unrealistic compared with the actual world (i.e., idealism). No one can explicitly explain the actual processing behind this interaction.

Templates

Template matching can be treated as an analogue of the 'grandmother cell'. Hypothetically, every object has an image (i.e., a template) in memory. When an object is encountered, we search for a stored template that fits what we see. If the search is successful, the object is recognized; if not, we guess based on the closest template. Because recognition is described as a data-driven process, it is regarded as bottom-up processing. Any shortcomings, then?
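The template idea above can be illustrated with a rough sketch (not part of the original notes; the binary patterns, labels, and the 0.8 similarity threshold are invented for the example): the input is compared against every stored template, a sufficiently good match counts as recognition, and otherwise we fall back on the closest template.

```python
# Minimal sketch of template matching. Templates and the input are assumed to be
# same-sized binary arrays; all patterns and the threshold are illustrative.
import numpy as np

def match_template(image, templates, threshold=0.8):
    """Return the best-matching template label and whether it counts as recognition."""
    scores = {label: np.mean(image == tmpl) for label, tmpl in templates.items()}
    best = max(scores, key=scores.get)
    if scores[best] >= threshold:
        return best, "recognized"                 # a stored template fits well enough
    return best, "guess (closest template)"       # fall back on the closest template

# Toy example: 3x3 patterns standing in for stored templates of 'T' and 'L'
templates = {
    "T": np.array([[1, 1, 1],
                   [0, 1, 0],
                   [0, 1, 0]]),
    "L": np.array([[1, 0, 0],
                   [1, 0, 0],
                   [1, 1, 1]]),
}
noisy_T = np.array([[1, 1, 1],
                    [0, 1, 0],
                    [0, 0, 0]])   # one cell missing from the 'T' pattern
print(match_template(noisy_T, templates))  # ('T', 'recognized')
```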
Feature detection theory

This theory suggests that objects can be decomposed into simple features, e.g., a line with a certain orientation, and that every object shares a different degree of perceptual similarity with every other object. A hierarchical model with layers of processing nodes (demons) is therefore established. Beyond the detection layer that captures the presented image, the first layer contains nodes that are activated by the presence of simple features. For example, the letter 'H' can be seen as having two vertical lines and one horizontal line, so the nodes at that layer which are sensitive to vertical and horizontal lines are activated. The activation results are then sent to the nodes on the next layer, which are sensitive to a higher level of complexity, for example, different combinations of lines. The results of this layer are passed to another, higher layer, and the process repeats until a decision layer is reached, where a number of candidates are formed and a decision is made, i.e., what letter do I see? In the example of 'H', at the decision layer we possibly come up with the letters 'H', 'A', 'F', and 'L'. Each letter carries a value, and the letter with the highest value is taken as the most likely letter in the external world. This model can also be used to explain results in other visual tasks (such as visual search) and in auditory processing. Although it is a popular model of object perception, the rise of the notions of global features and local features demands an alternative explanatory account.
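The layered "demon" model above can be made concrete with a small sketch (not part of the lecture notes; the per-letter feature lists and the scoring rule are illustrative assumptions): feature nodes report which simple features are present, each candidate letter is scored against its assumed feature list, and the decision layer picks the letter with the highest value.

```python
# Toy sketch of the hierarchical "demon" model. The feature lists per letter
# and the scoring rule are illustrative assumptions, not from the lecture.
FEATURE_LISTS = {
    "H": {"vertical": 2, "horizontal": 1},
    "A": {"oblique": 2, "horizontal": 1},
    "F": {"vertical": 1, "horizontal": 2},
    "L": {"vertical": 1, "horizontal": 1},
}

def decision_layer(detected):
    """Score each candidate letter against the detected simple features and
    return the letter with the highest value (plus all scores)."""
    detected_total = sum(detected.values())
    scores = {}
    for letter, features in FEATURE_LISTS.items():
        # how many of this letter's features the feature layer actually detected
        overlap = sum(min(detected.get(f, 0), n) for f, n in features.items())
        # divide by the larger feature count so unexplained features lower the score
        scores[letter] = overlap / max(sum(features.values()), detected_total)
    return max(scores, key=scores.get), scores

# The feature layer reports two vertical lines and one horizontal line (as for 'H')
best, scores = decision_layer({"vertical": 2, "horizontal": 1})
print(best)    # 'H' carries the highest value among the candidates H, A, F, L
print(scores)  # e.g., H: 1.0, F: 0.67, L: 0.67, A: 0.33
```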
This is a 3D world

At first glance, the theories mentioned above mostly focus on 2D perception, yet our eyes and the corresponding visual processing are well prepared to receive information in a 3D context. Binocular disparity makes use of the difference between the left and right visual fields to create a three-dimensional perspective, whereas binocular convergence refers to the detection of the stretch of the eye muscles when estimating the depth of the distal object. Even if we are left with only one eye, monocular cues are able to provide adequate information for a sense of depth.

Considering structural description theory: an object can be decomposed into different parts, and it is believed that there is a set of fundamental parts adequate to make up all objects, i.e., geons. One feature of geons: because we live in a three-dimensional world, geons are also three-dimensional parts. Still, there is no conclusive answer as to the exact number of geons needed for perception.

Facial perception

How is a face perceived? Eyes matched with a nose and hair of a certain colour? A face is first perceived as a whole, i.e., holistic perception, which is quite different from what we see in old-school Hong Kong detective movies. This also explains why we can hardly tell at first glance whether someone is wearing a new pair of glasses. Further evidence comes from patients with prosopagnosia. People with prosopagnosia are unable to recognize faces, and sometimes familiar objects. They can outperform normal subjects in recognizing facial components, but not the face as a whole. Like normal subjects, these patients are able to perceive other common objects. Emotional expression is another factor that may facilitate the recognition of faces: some studies suggest that, for the elderly, a happy face is rated as more familiar than control faces and faces with negative emotion. Several neurological studies have found more activation in the fusiform gyrus (in the temporal lobe), but it is uncertain whether such increased activation relates exclusively to face perception or to other expertise behaviours.

How many visual perceptual systems do we have?

From clinical cases, we can roughly sort out three types of disorder:
- The inability to perceive faces and highly familiar objects, i.e., prosopagnosia
- The inability to perceive letters and words, i.e., alexia
- The inability to perceive common objects, i.e., associative visual agnosia

Reference: Sternberg, R. J., & Sternberg, K. (2017). Cognitive Psychology. Canada: Wadsworth. Ch. 3 (pp. 82-95, 97-102, 111).
