Cognition & Face Recognition

Summary

This document covers the field of face recognition, from the development of an infant's ability to recognize faces to adult face perception. It discusses research findings, including studies with monkeys and humans, and examines how different age groups differ in their perception of facial features.

Full Transcript


WHAT IS SPECIAL ABOUT A FACE?
Like other common objects:
- Faces have component parts
- Their parts are in a common configuration
Unlike many objects:
- Faces are non-rigid and constantly changing
- Faces convey different 'meanings' (e.g., emotions)

FACE RECOGNITION/PERCEPTION RESEARCH QUESTIONS
- Is face recognition a specialized system in the human brain or an example of a general object recognition system?
- How do we recognize faces?
- How early in life do we prefer to look at faces?
- What do we look for in a face?
- How do faces communicate emotional states?
- What is attractive about a face?
- What does prosopagnosia tell us about face recognition?

Perception of Faces: Development

Infant perception: Acuity
- Visual acuity (resolution): the smallest spatial detail that the visual system can resolve
- Acuity is quantified using distance: 20/20 vision means that, standing 20 feet away, you can read the line on the Snellen chart that an average person can read from 20 feet
- 10/20 vision: you need to be 10 feet away to read what the average person can read at 20 feet
- 20/20 acuity corresponds to resolving detail that subtends about 1 arc minute (1/60 of a degree) of visual angle; at 20 feet that is roughly 1.8 mm of detail
- Newborn acuity is measured as the finest lines an infant can discriminate from a uniform gray field at a distance of 1 foot

Infant perception: Discrimination
- Face discrimination: only coarse features (low spatial frequencies) are resolved within the first 3 months
- The visual cliff experiment: children (typically 7-month-old babies) are placed on a glass-covered table with a board in the middle; there is a "shallow" side and a "drop-off" side. Most babies do not crawl to meet their mothers on the drop-off side, even when the mothers are smiling.

Children: Preference for faces
- Face recognition: within the first month, a baby moves its eyes and head to follow a face-like pattern
- Children focus on different features of human faces, shifting from outer to inner features within one month
- A preference for faces is shown early on (within the first days or weeks)

Experiment: Face Perception & Discrimination in Monkeys
- Monkeys were separated from their mothers at birth and reared by humans wearing masks
- Face deprivation lasted 6 to 12 months
- The group was then split: half were exposed to human faces, half to monkey faces, and both were later presented with new and old faces
- After deprivation, the monkeys spent as much time looking at monkey faces as at human faces
- New human faces were better discriminated after one month of exposure to human faces
- New monkey faces were better discriminated after one month of exposure to monkey faces
- It seems that face processing is biologically wired in monkeys too

Face Proprioception
- Infants (45 minutes to 3 weeks old) imitate facial expressions
- This suggests a hardwired mechanism of face proprioception: the reception of stimuli arising within the organism

WHAT DO WE LOOK FOR IN FACES?
- The face triangle: both eyes and the nose

DO WE FOCUS ON INNER OR OUTER FEATURES?
- It appears that children and adults fixate more on the internal features of the face

A case study: Famous face recognition in different age groups
- Different age groups were shown pictures of famous people
- Each subject saw only one member of each picture pair
- All subjects saw half outer-feature-blurred and half inner-feature-blurred pictures
- Task: recognize the face
- Which version do you think was recognized better? (A sketch of how such blurred stimuli can be built follows below; the results come next.)
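Before the results, here is a minimal sketch of how inner-blurred and outer-blurred stimuli of this kind could be constructed with Pillow. The file name, blur radius, and mask coordinates are illustrative assumptions, not parameters from the original study.

```python
# Minimal sketch: build "inner-blurred" and "outer-blurred" versions of a
# face photograph. All specifics (file name, radius, ellipse) are assumptions.
from PIL import Image, ImageDraw, ImageFilter

face = Image.open("face.jpg").convert("RGB")              # hypothetical input image
blurred = face.filter(ImageFilter.GaussianBlur(radius=12))

# Elliptical mask roughly covering the inner features (eyes, nose, mouth).
w, h = face.size
mask = Image.new("L", face.size, 0)
draw = ImageDraw.Draw(mask)
draw.ellipse([w * 0.25, h * 0.25, w * 0.75, h * 0.80], fill=255)

# Inner blurred (outer intact): paste the blurred image only inside the mask.
inner_blurred = face.copy()
inner_blurred.paste(blurred, (0, 0), mask)

# Outer blurred (inner intact): keep the original image only inside the mask.
outer_blurred = blurred.copy()
outer_blurred.paste(face, (0, 0), mask)

inner_blurred.save("inner_blurred.jpg")
outer_blurred.save("outer_blurred.jpg")
```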
Results:
Adults: preference for inner features in face recognition
- Inner blurred (outer intact): 59% correctly identified
- Outer blurred (inner intact): 74% correctly identified
Children / adolescents: preference for outer features in face recognition
- Better recognition for inner-blurred (outer intact) pictures
- The tendency to prefer inner features appears at about 14-15 years of age
- Children fail to disregard external features in face identity judgments
- Looking times: more to internal than to external features (same as adults)
- Judgments of similarity: based on external features

Summary
- Babies show a preference for faces (and face-like patterns) early on
- Face discrimination in monkeys suggests that there is a biologically determined mechanism specialized for face discrimination
- Some studies suggest a developmental trend for face recognition, from outer to inner features

Recognition

Modularity of face perception
- Face recognition appears to be a domain-specific cognitive system (a "module")
- Module: an informationally encapsulated system
- It is a "vertical" faculty, not a horizontal, holistic one
- Its operations are autonomous and not affected by other systems' operations
- Its operation is autonomous, usually fast, and mandatory
- It is neurologically implemented as an anatomically distinct mechanism

Principles

1. SPATIAL FREQUENCY
- Low spatial frequency (LSF): coarse features, layout
- High spatial frequency (HSF): fine lines, details
Spatial frequency and face perception
- Methods: 50 ms presentation of 112 stimuli; categorization task
- Results: no gender differences detected; expressiveness (vs. neutral) was better detected in HSF; the actual expression (happy / sad) was better detected in LSF

2. LUMINANCE
Male or female?
- The contrast between skin and facial features is greater in females, independent of ethnicity and pigmentation
- Judgments of masculinity and femininity are highly reliable
- Facial contrast was positively correlated with the rated femininity of female faces

3. ENHANCED FEATURES (= BETTER RECOGNITION)
- Enhanced, even exaggerated, features lead to better and faster recognition than unaltered pictures (e.g., caricatures)
Enhanced features, better recognition
- Methods: subjects were exposed to 16 pictures of 'celebrities': the average celebrity (a composite of 108 pictures), the veridical (true) picture, and an anti-caricature and a caricature (50% less or more different from the average)
- Results: recognition accuracy was best for caricatures; anti-caricatures were recognized significantly less often
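To make the caricaturing manipulation concrete, here is a minimal numerical sketch of the idea: represent a face as a set of landmark coordinates and shift it away from (or toward) the average face by a fixed proportion of its deviation. The landmark values and the `exaggerate` helper are hypothetical; real caricature studies work on measured landmarks and warped images.

```python
import numpy as np

def exaggerate(veridical: np.ndarray, average: np.ndarray, k: float) -> np.ndarray:
    """Shift a face away from (k > 0, caricature) or toward
    (k < 0, anti-caricature) the average configuration."""
    return average + (1.0 + k) * (veridical - average)

# Made-up 2-D landmark positions (x, y) for the average face and one face.
average   = np.array([[100, 120], [140, 120], [120, 150], [120, 180]], dtype=float)
veridical = np.array([[ 96, 118], [146, 119], [121, 155], [120, 186]], dtype=float)

caricature      = exaggerate(veridical, average, k=0.5)   # +50% of the difference
anti_caricature = exaggerate(veridical, average, k=-0.5)  # -50% of the difference
print(caricature)
print(anti_caricature)
```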
WHAT DO WE ENCODE ABOUT A FACE?
We encode features.
- Margaret Thatcher illusion (evidence): when the eyes and mouth are in a right-side-up orientation, the face is perceived as normal, even when the rest of the face is inverted
- The eyes and mouth are the features that convey the main characteristics of the face

Masked face recognition
- As expected, masks hinder face identification
- The mouth and eye regions are most important for face identification
- They are also key to detecting expressiveness and gender
- These effects have been known since at least the 'bubbles' experiment

The Face Area
- The "middle face patch" in monkeys corresponds to the Fusiform Face Area (FFA) in humans
- The FFA shows greater activation for faces than for other objects
- Location: right fusiform gyrus in the occipito-temporal area; the right FFA shows stronger activation for faces
Areas involved in face identification:
- Fusiform face area (FFA): responds best to faces, or to context that implies faces
- Superior temporal sulcus (STS)
- LOC (lateral occipital cortex) & IT (inferotemporal cortex): objects, faces, places

Some evidence for the modularity of face recognition
- Primates: different cortical areas are activated for objects and for faces; face recognition and identification are hard-wired
- Babies: young babies show a strong preference for face-like patterns
- Adults: the FFA is strongly activated by faces, both functionally and anatomically
- Brain damage is selective: prosopagnosia (a deficit in face recognition) occurs without inanimate object agnosia (a deficit in object recognition)
- There are patients who fail to recognize a face but can recognize its emotional expression (and vice versa)

Prosopagnosia: GENERAL SYMPTOMS
- Impaired recognition of family members' faces, of one's own face in a mirror or in pictures, and of famous faces
- Familiar people are recognized by other cues (voice, context, description, etc.)
- Emotion recognition is preserved
- No visual/perceptual disability: patients have an intact visual system but have a lesion in brain areas responsible for the integration of features or for access to semantic properties related to faces (what or who?)
- The ability to recognize facial features is preserved
- A common symptom: inability to recognize the faces of animals
- Object recognition is intact (this varies): a different subsystem
- Reading is intact: a different subsystem
- Lesion: generally in the right occipital/temporal lobes, in the region of the Fusiform Face Area (FFA)
- The ability to recognize the expression of emotions in faces is preserved
- Some studies (using priming, EEG, eye tracking) have found covert face recognition in the absence of explicit, overt recognition

ALZHEIMER'S DISEASE
- Dementia: loss of intellectual functions
- General deterioration of semantic systems
- Non-localized, but affecting the temporal, parietal, and frontal lobes
- Recognition of emotions by probable Alzheimer's disease patients: do they show similar patterns of dissociation between face identity and emotion expression?
- The thinning cortex of Alzheimer's disease (AD) patients: the areas affected most are mainly "semantic" (medial temporal cortex, inferior temporal gyrus, temporal lobe)

Faces and Emotion Expression
- Test 1: Facial identity (same person?). Subjects (N = 31) were shown cards with 2 photographs showing the same emotion expression; in half of the cards the person was the same. Is it the same person or two different persons? Results: impaired.
- Test 2: Discrimination of facial emotion. Is the emotion in both photos the same or different? Results: normal.
- Test 3: Emotion identification. Subjects are shown one picture at a time: is the person happy, sad, angry, or indifferent?
  Subjects are also shown a card with four photographs, each depicting an emotion, and asked to point to the happy/sad/angry/indifferent face. Results: impaired.

Expression of Emotional States
- The communication of emotions is the most fundamental form of facial expression
- Barring the enigmatic Mona Lisa, facial expressions of emotions are easily identifiable
- Emotion identification occurs even when face recognition is impaired

How do we perceive facial expressions of emotions?
A study on normal (unimpaired) face perception and emotion identification:
- Line drawings of faces were computer-generated from photographs depicting emotions; only points on the mouth, eyebrows, and eyelids were manipulated
- Tasks: (1) comparison of three faces presented successively (A - B - X), where subjects responded whether X was similar to A or to B; (2) identification, where subjects pressed a button to name the emotion
- Results: no linear identification (except for surprise); a category boundary effect was obtained, suggesting that emotions are perceived categorically
- Conclusion: perception of emotions is categorical, not linear. Discrimination between A and B is harder when they belong to the same emotional category and easier when they straddle the boundary between different categories. Subjects see intermediate faces as either happy or sad, not as expressing an intermediate emotion. Differences at the boundary between categories are detected more accurately than differences within a category when the facial features receive the same amount of transformation; i.e., if two faces are both within the sad category, their differences are harder to detect than if they lie at the boundary between happy and sad.

Face and sexual attraction
What makes a face attractive?
- Methods: several face images were morphed together; subjects rated each image for attractiveness
- Results: the closer an image is to the average (and the more faces in the set), the higher its rating

Face and attraction
- High-conception (HC) period: attraction to features that are more masculinized
- HC, short-term relationships: preference for a less feminized face
- Low-conception (LC) period: attraction to features that are more feminized

Are attractive faces only average?
- Stimuli: A, a prototype face; B, the prototype transformed toward the average shape of a set of faces rated highly attractive; C, the difference between A and B, exaggerated
- Results: C was rated more attractive than A
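As a rough illustration of the averaging manipulation behind these attractiveness findings, the sketch below pixel-averages a set of face photographs. Real morphing studies first mark facial landmarks and warp each face to a common shape before blending; that alignment step is omitted here, and the file names are placeholders.

```python
import numpy as np
from PIL import Image

# Hypothetical input files, assumed to be the same size and roughly aligned.
paths = ["face_01.jpg", "face_02.jpg", "face_03.jpg"]

# Load as grayscale float arrays and stack into one (n, height, width) array.
stack = np.stack([np.asarray(Image.open(p).convert("L"), dtype=float) for p in paths])

# Pixel-wise mean gives the composite ("average") face of the set.
average_face = stack.mean(axis=0)
Image.fromarray(average_face.astype(np.uint8)).save("average_face.jpg")
```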
