Test 1 Psycholinguistic Review

Summary

This document is a review of psycholinguistics, covering mental processes and mechanisms related to language processing, including production and comprehension models. It details the study of human language, grammar, and various linguistic phenomena.

Full Transcript


Test 1 Psycholinguistic Review

Chapter 1 - What is Psycholinguistics?

Psycholinguistics: The mental processes involved in producing and understanding language; the mechanisms by which language is processed and represented in the mind and brain.
Linguistics: The scientific study of human language.
➔ GOAL: Build a model of the system that allows speakers to speak and understand their native language.

Mental Grammar = System of rules
➔ We have unconscious knowledge of the patterns and rules of our own language.

Rudimentary Model of Production
Thoughts/meanings are encoded by the speaker into a particular form:
1. Intended meaning/thought
2. Select words
3. Put words in order - order matters
4. Translate words into instructions for pronunciation

Rudimentary Model of Comprehension
The listener receives the encoded message and decodes it to get the intended meaning:
1. Hear sounds
2. Identify/discriminate speech sounds
3. Recognize the words those sounds make up
4. Figure out the structure of the phrase/sentence

Psycholinguistic Phenomena
1. Continuous Speech: There are no pauses between spoken words; we need to figure out where one word ends and the next begins, using prior knowledge about what is a word.
   - Identifying/discriminating speech sounds - Hypothesis: use ALL potentially relevant sensory information.
2. Perceiving Speech Sounds: Listeners use a mix of auditory and visual information to decide/decode what sounds they're "hearing."
3. Understanding Sentences: Understanding a sentence requires building a syntactic structure for that sentence.
➔ We have a strong bias to perceive only one of the possible meanings at first.

Experimental Design
Independent Variable: What we MANIPULATE. Ex. word frequency
Dependent Variable: What we MEASURE. Ex. reaction time
Conditions: A PARTICULAR VALUE of an independent variable. Ex. whether real words are low frequency or high frequency
- Low frequency: Condition 1
- High frequency: Condition 2
Task/Method/Procedure: What participants are asked to do. Ex. lexical decision

Grainger Experiment: Lexical Decision Task
Does a word's (written) frequency affect how quickly people recognize that word?
Correct hypothesis: Frequency DOES affect recognition - it takes participants longer to access their mental lexicon to recognize low-frequency words. (A minimal analysis sketch follows below.)
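As a minimal sketch of how lexical-decision data of this kind might be analyzed, assuming the usual mean-RT-per-condition comparison (the condition labels, RT values, and trial list below are invented for illustration, not Grainger's actual data):

```python
# Hypothetical lexical-decision analysis sketch: mean reaction time (RT)
# per frequency condition. All values are invented for illustration.

from statistics import mean

# Each trial: (condition, RT in ms, response correct?)
trials = [
    ("high_frequency", 520, True),
    ("high_frequency", 495, True),
    ("low_frequency", 610, True),
    ("low_frequency", 655, True),
    ("low_frequency", 700, False),  # error trials are typically excluded
]

def mean_rt(trials, condition):
    """Average RT over correct trials in one condition."""
    rts = [rt for cond, rt, ok in trials if cond == condition and ok]
    return mean(rts)

high = mean_rt(trials, "high_frequency")
low = mean_rt(trials, "low_frequency")
print(f"High-frequency mean RT: {high:.0f} ms")
print(f"Low-frequency mean RT:  {low:.0f} ms")
print(f"Frequency effect: {low - high:.0f} ms slower for low-frequency words")
```

A slower average for the low-frequency condition is the pattern the frequency effect predicts; real studies average over many participants and items.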
Chapter 2.6, 3.1, 3.2, 3.3 - Language and the Brain

Domain Specific: Processes and representations involved in language are specific to language (not shared with or by other domains).
Domain General: Processes and representations involved in language are NOT specific to language (shared with other domains, e.g., memory, vision).

What is evidence for domain specificity of language?

Specific Language Impairment (SLI)
Impaired language, intact general cognition
- Unknown cause; hereditary
- Difficulty at multiple levels of linguistic representation/processing
- Often exhibit delayed language acquisition
- No neurological damage or disorder
- No hearing loss

Characteristic Difficulties
Sound level:
- Consonant cluster production (two or more adjacent consonants), e.g., spectacle -> pectale
- Perceiving subtle differences between speech sounds ([ba] vs. [pa])
- Phonological decomposition: breaking words down into individual sounds
- Speech perception: difficulty categorizing speech sounds; weak or distorted categorical perception
Morphology (words):
- Tagging words with the right grammatical markers for plural, tense, etc. - "Yesterday I fall over" vs. "Yesterday I fell over" (hard to mark tense)
Syntax (sentences):
- Understanding the meaning of sentences with multiple participants or complex structures

Other deficits:
- Oral-motor control (dyspraxia): planning of complex oral motor programs is impaired in a handful of cases
- Working memory: shorter working-memory spans
- Analogical reasoning: "cat is to kitten as dog is to puppy"
- Visual imagery: mental rotation

Williams Syndrome (WMS)
Impaired cognition, intact language
- Deletion of multiple genes on chromosome 7
- Significant impairment of general cognitive abilities; mild to moderate intellectual disability
- Difficulty with numerical terms (50 pennies vs. 5 dollars)
- Difficulty with spatial organization of parts of objects into coherent wholes
- Difficulty with visual-spatial tasks not seen with other types of intellectual disability
- Facial characteristics: puffiness around the eyes, short nose, wide mouth

Williams Syndrome (WMS) vs. Down Syndrome (DNS)
WMS:
- Speak fluently and grammatically
- Errors decrease with age
- Equal meaning between a word's primary and secondary associates
- Do NOT under-perform on linguistic tasks relative to their mental-age-matched normal controls
DNS:
- Understanding of a word's primary associate is different than its secondary associate
- DNS participants under-perform on linguistic tasks relative to their mental-age-matched normal controls

Double Dissociation
"Standard" view: The simultaneous existence of SLI and Williams syndrome is evidence of a double dissociation.
- At least some language abilities are dissociated from general cognition
- Evidence for some domain specificity
- Not seen with every language ability

Angelo Mosso's Human Circulation Balance
- The brain needs more blood when it works harder

Phrenology (Franz-Joseph Gall)
Now discredited: the view that protrusions on the scalp, or the shape of the skull/cranium, were associated with personality traits.
- Introduced the idea that cognitive abilities and traits were connected to localizable regions in the brain
- Stressed that the brain, not the heart, was the center for emotions, actions, etc.
Phineas Gage
- Survived a 3 ft railroad spike piercing and destroying part of his left frontal lobe
- Drastic personality changes
  - Before: responsible, well-mannered, well-liked, efficient worker, pious
  - After: lack of self-restraint, impulsive, excessive profanity, hyper-sexual
- First case suggesting that damage to specific parts of the brain might induce specific mental changes
- Evidence that aspects of personality and social aptitude are linked with the frontal lobe
- Language ability was intact

Brain Lobes

Louis Victor Leborgne
- Epileptic fits, stroke
- Lost the ability to speak except for a single syllable: "tan" (= temps)
- When enraged, he could produce a single swear word or short phrase
- Unable to speak or write; his intellect was normal
- Paralysis of his right-side limbs; able to communicate by gestures with his left hand and facial expressions
- Able to move his left side; tongue and mouth not paralyzed, voice normal

Paul Broca
- Another patient, Lelong, could only speak 5 words: "yes", "no", "three", "always", "lelo" (a mispronunciation of his name)
- Lelong had a similar lesion in the same area as Leborgne
- Broca reasoned this region was associated with speech production

Broca's Aphasia (Expressive Aphasia - lack of expression) - PRODUCTION
- Trouble producing language (speech or signs) but comprehension reasonably intact
- Caused by a lesion to the left inferior frontal gyrus (Broca's area)
- Associated with a speech production deficit; the region where speech motor commands are stored
- Perception/comprehension remain intact

Characteristics:
- Halting, disfluent speech; syntactic deficits: "It's hard to eat with a spoon" -> [...har it...wit..pun]
➔ Omission of function words and grammatical affixes: "The boy ate it up" vs. *"The boy ate up it"
➔ Unable to reliably judge the acceptability of sentences
- Described as telegraphic speech
- Simplification of consonant clusters: /spun/ -> /pun/
- Stopping: fricatives become stops: /θ/ -> [t], e.g., bath -> bat
- Little or no comprehension deficit
- Frustrating for patients: they know what they want to say but cannot say it

Carl Wernicke
- Found a patient whose speech sounded fluent but was nonsensical (the opposite of Broca's)
- Such patients didn't understand anything being said to them; difficulty comprehending language (both written and spoken)
- Patients are unaware of the deficit
- Wernicke reasoned this is the region where speech production monitoring processes are stored -> mapping conceptual representations of word meanings onto phonological codes

Wernicke's Aphasia (Receptive Aphasia) - PERCEPTION
- Could produce signs rapidly but had trouble understanding them
- Caused by a lesion to the posterior part of the left superior temporal gyrus (Wernicke's area)
- Assumed to be a language comprehension deficit

Characteristics:
- Retention of function words; syntax is generally intact
- OK phonological production (prosody, consonant/vowel production)
- Speech is meaningless: lexical errors, nonsense words (MAIN CHARACTERISTIC)
- Comprehension difficulty
- Inability to appropriately select, monitor, and organize their language production
- Neologisms: new, made-up words
➔ Toothbrush -> "slunker"
➔ Shirt -> "glimbop"

Broca's area is critical for speech production, Wernicke's area subserves language comprehension, and the necessary information exchange between these areas (e.g., reading aloud) is done via the arcuate fasciculus, the fiber bundle that connects the two language areas. (An illustrative sketch of the production patterns above follows below.)
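As a deliberately simplified illustration of the two Broca's production patterns described above (consonant-cluster simplification and stopping), here is a toy transform over phoneme strings; the rule set and word list are hypothetical teaching examples, not a clinical model:

```python
# Toy sketch of two production patterns noted in Broca's aphasia:
# consonant-cluster simplification (/spun/ -> /pun/) and stopping
# (fricative /θ/ realized as stop [t], e.g., bath -> bat).
# Purely illustrative; real phonological processes are far richer.

CONSONANTS = set("pbtdkgszfvmnlrwjhθð")
STOPPING = {"θ": "t", "ð": "d"}  # fricative -> corresponding stop

def simplify_clusters(phonemes: str) -> str:
    """Drop the first consonant of a word-initial two-consonant cluster."""
    if len(phonemes) >= 2 and phonemes[0] in CONSONANTS and phonemes[1] in CONSONANTS:
        return phonemes[1:]
    return phonemes

def apply_stopping(phonemes: str) -> str:
    """Replace fricatives with their stop counterparts."""
    return "".join(STOPPING.get(p, p) for p in phonemes)

for word in ["spun", "baθ"]:  # /spun/ 'spoon', /baθ/ 'bath'
    print(word, "->", apply_stopping(simplify_clusters(word)))
# spun -> pun
# baθ -> bat
```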
Phoneme identification experiment on Broca's and Wernicke's aphasics (Basso et al. 1977)
- Stimuli: [da] - [ta] on a continuum of VOT
- Broca's aphasics showed difficulty with phoneme identification (they were not able to determine whether they heard [da] or [ta])
- Only half of the Wernicke's aphasics did
- This was not expected if Broca's patients have intact perception
- If patients had phonemic errors in speech production, they also had difficulty with phoneme identification

Broca's patients perform at chance on comprehension questions about complex syntactic structures:
Sentences they CAN comprehend:
- Doer - V - Recipient (active)
- Passive - plausible
- Subject relative clause
Sentences they can NOT comprehend:
- Implausible (active)
- Reversible passive
- Passive - implausible
- Object relative clause
Grodzinsky: it does not seem like a production deficit would cause this.

Laterality of Language
- Lesions to Broca's and Wernicke's areas in the left hemisphere almost always result in some language deficit
- Lesions to the right-hemisphere analogs of Broca's and Wernicke's areas rarely result in language deficits
- CONCLUSION: Language is left lateralized

Contralateral Organization
- The cerebrum (and thalamus) are split into right and left hemispheres
➔ Sensory input from the right side of our body is sent to the left hemisphere
➔ Sensory input from the left side of our body is sent to the right hemisphere
Ipsilateral Organization
➔ Same side of the body

Dichotic Listening (Studdert-Kennedy & Shankweiler, 1970)
- Dichotically presented: /pa/, /ta/, /ka/, /ba/, /da/, /ga/ and /i/, /ɛ/, /a/, /æ/, /u/
- Participants had to identify which sound it was; accuracy of identification was measured for each ear
- Accuracy: right ear 45%, left ear 29% (a right-ear advantage)

Left Hemisphere Functions:
- Language (right-ear advantage)
- Writing
- Right visual field
Right Hemisphere Functions:
- Emotional prosody (left-ear advantage)
- Visual-spatial processing
- Left visual field

Handedness
Right-ear advantage = left lateralized
- 90% of right-handers
- 70% of left-handers

Split-Brain Paradigms
- Corpus callosum: connects the two hemispheres
- A common treatment for epilepsy is a corpus callosotomy
- Split-brain patients: the two hemispheres can no longer communicate with one another
*Dichotic listening and split-brain studies support the idea that language is left lateralized

Non-Invasive Recording From the Human Brain - Functional Brain Imaging

Hemodynamic Techniques: Functional Magnetic Resonance Imaging (fMRI)
- Excellent spatial resolution
- Poor temporal resolution
  ○ Why? Blood flows very slowly, on the order of seconds
- Measures blood-oxygenation flow from outside
➔ Brain regions with higher levels of blood flow and blood oxygen are more active
➔ Good for mapping brain areas
➔ Bad for investigating incremental processes (e.g., speech perception)

Electromagnetic Techniques: Electroencephalography (EEG) & Magnetoencephalography (MEG)
- Reasonable spatial resolution
- Excellent temporal resolution
- EEG measures the brain's electrical activity from outside the scalp
➔ Language processing is fast

Event-Related Potential (ERP)
- A particular linguistic stimulus causes a change in electrical voltage (potential) in the brain

What is the N400 and what is it a marker of? What triggers it?
- Negative voltage peaking at around 400 ms after the odd word (stimulus) is presented
- A marker of the processing of semantic information
- Trigger: when the content of a word is unexpected
(See the ERP averaging sketch below.)
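A minimal sketch of how an ERP component like the N400 is obtained from raw EEG, assuming the standard approach of averaging many stimulus-locked trials so random noise cancels; the sampling rate, epoch values, and analysis window here are hypothetical:

```python
# Minimal ERP sketch: average EEG epochs time-locked to word onset, then
# measure mean amplitude in a window around 400 ms (the N400 region).
# Data are simulated; real ERP work uses many channels, filtering, etc.

import random

SAMPLE_RATE_HZ = 250          # hypothetical sampling rate (4 ms per sample)
EPOCH_SAMPLES = 200           # 0-800 ms after word onset

def simulated_epoch(unexpected: bool) -> list[float]:
    """One noisy epoch; unexpected words get a negative dip near 400 ms."""
    epoch = []
    for i in range(EPOCH_SAMPLES):
        t_ms = i * 1000 / SAMPLE_RATE_HZ
        signal = -3.0 if unexpected and 300 <= t_ms <= 500 else 0.0
        epoch.append(signal + random.gauss(0, 5))  # noise swamps single trials
    return epoch

def grand_average(epochs: list[list[float]]) -> list[float]:
    """Average across trials: noise cancels, the ERP emerges."""
    return [sum(samples) / len(samples) for samples in zip(*epochs)]

def mean_amplitude(erp, start_ms, end_ms):
    window = [v for i, v in enumerate(erp)
              if start_ms <= i * 1000 / SAMPLE_RATE_HZ <= end_ms]
    return sum(window) / len(window)

erp_odd = grand_average([simulated_epoch(True) for _ in range(100)])
erp_expected = grand_average([simulated_epoch(False) for _ in range(100)])
print("300-500 ms mean, unexpected words:", round(mean_amplitude(erp_odd, 300, 500), 2))
print("300-500 ms mean, expected words:  ", round(mean_amplitude(erp_expected, 300, 500), 2))
# The unexpected-word average comes out reliably more negative:
# an N400-like effect. A P600 analysis would use a later window instead.
```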
What is the P600 and what is it a marker of? What triggers it?
- Positive voltage peaking at around 600 ms after the odd word (stimulus) is presented
- A marker of the processing of syntactic structure
- Trigger: processing (and trying to repair) an unexpected syntactic structure

Is there a brain region that is specialized for syntax? (Humphries et al. 2006)
➔ The superior temporal sulcus (STS) in the left anterior temporal lobe (ATL) was more activated by sentences than by word lists, regardless of semantic plausibility.

ASL - American Sign Language Aphasics
- The same regions of the brain are used regardless of whether the language is spoken or signed
- Both are housed in the left hemisphere
- Domain specific
- The left hemisphere is specifically adapted for abstract grammatical processing (phonology, syntax) regardless of modality (spoken, signed)

Contemporary View of Brain Areas that Contribute to Language Function

Chapter 4.3 - Phonetics, Phonology

Phonetics: The study of the physical properties and production of speech sounds
Phonology: The study of the mental representation and categorization of speech sounds

2 Classes of Speech Sounds
- Consonants: Some form of obstruction of air, created by the articulators
- Vowels: Relatively unobstructed vocal tract, allowing air to pass freely

Features Describing Articulation

Voicing: Whether the vocal cords vibrate or not
- Voiced = vocal fold vibration
- Voiceless = no vocal fold vibration; the folds are pulled apart, leaving them open

Place of Articulation (POA): Where the airflow is obstructed
- Bilabial: made through the articulation of the upper and lower lips - [p], [b], [m]
- Alveolar: the tongue touching or coming close to the alveolar ridge - [t], [d], [s], [z], [n], [l], [r], [ɾ]
- Alveopalatal: the tongue between the alveolar ridge and the hard palate - [ʃ], [ʒ], [t͡ʃ], [d͡ʒ]
- Velar: the tongue against the velum (soft palate) - [k], [g], [ŋ]

Manner of Articulation (MOA): The manner in which the airflow is obstructed
- Stops: complete closure - [p, b, t, d, k, g, ʔ] - air is stopped
  - Oral stops: raised velum, with no air passing through the nasal cavity
- Fricatives: continuous airflow through the mouth - [f, v, θ, ð, s, z, ʃ, ʒ, h]
  - Continuous audible noise; a narrow constriction causes friction

We mentally represent some sounds as if they are the same even though they are acoustically different.

Phoneme: An abstract mental representation of a distinctive sound in a language - / /
➔ The smallest unit that makes a difference in meaning
➔ Exists abstractly in the speaker's mind
➔ Separate phonemes - evidence: minimal pairs, which vary in only one sound but have different meanings

Allophone: The set of predictable phonetic variants of a phoneme - [ ]
➔ Physical realization
➔ Allophones of the same phoneme

Aspiration: The puff of air, or period of voicelessness, after the release of a stop and before the following vowel

VOT - Voice Onset Time: The time between the release burst and the onset of vocal fold vibration
- Stops are characterized by complete occlusion in the oral cavity followed by an abrupt release, resulting in a sharp burst onset in the waveform/spectrogram

English VOT Production: The boundary in English between voiced and voiceless stops is approximately 30 ms of VOT. (A toy classifier sketch follows below.)
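As a toy sketch of what that ~30 ms boundary implies, the function below labels a stop as voiced or voiceless from its VOT alone; the sample VOT values are invented for illustration:

```python
# Toy VOT classifier: English listeners behave roughly as if stops with
# VOT below ~30 ms are voiced (/b, d, g/) and above it voiceless (/p, t, k/).
# The boundary value and example measurements are illustrative.

VOT_BOUNDARY_MS = 30.0

def classify_stop(vot_ms: float) -> str:
    """Label a stop category from voice onset time (VOT)."""
    return "voiced" if vot_ms < VOT_BOUNDARY_MS else "voiceless"

for vot in [5, 15, 25, 35, 55, 80]:  # hypothetical VOT measurements in ms
    print(f"VOT {vot:3d} ms -> perceived as {classify_stop(vot)}")
# Note the all-or-nothing output: tokens at 25 and 35 ms differ by only
# 10 ms of VOT yet fall into different categories, while 35 and 80 ms
# (a 45 ms difference) fall into the same one - categorical perception.
```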
Chapter 7.1, 7.2, 7.3 - Speech Perception

Categorical Perception

Gradient (Continuous) Perception {tall-short: "levels"}
- Continuous changes in a stimulus are perceived as gradual; no sharp break

Categorical Perception {pass-fail: "this or that"}
- Continuous changes in a stimulus are perceived not as gradual, but as having a sharp break between discrete categories
- Differences between members of the same category are minimized
- Differences between items belonging to different categories are magnified

Does categorical perception mean that people lose sensitivity to all low-level acoustic properties?
- No. Categorical perception seems to reflect a (subconscious) "filter" that is imposed on low-level acoustic input in the context of language-related decisions
- We retain the ability to hear small acoustic differences between members of the same category

Perceptual Invariance: The ability to perceive highly variable stimuli as instances of the same category

How do we perceive VOT differences?
- If phone perception were gradient, small differences in VOT would result in a continuous, steady shift in our perception between phones
- Humans instead perceive phonemes categorically
- The mind imposes discrete, abstract categories, which do not exist in the physical world

A-X Discrimination Task
- Play two sounds side by side and ask people whether they were the same or different: were the two stimuli presented the same or different?

Word Identification Task (McMurray et al. 2008)
- A listener's degree of uncertainty or sensitivity about what sound they heard may depend on the experimental task

fMRI Study on Categorical Perception of VOT (Myers et al. 2009)
- Left superior temporal gyrus: activation upon hearing a different variant of the same category (within-category change)
  - Acoustic processing of speech sounds (sensitivity to within-category differences)
- Left inferior frontal sulcus: activation for between-category changes, but NOT for within-category changes
  - Insensitivity to within-category differences
  - Higher-order computation, goal-directed actions

Repetition Suppression: Repetition of the same stimulus over time leads to a decrease in neural activity in the regions of the brain that process that type of stimulus. Conversely, a subsequent change in the stimulus leads to an increase in neural activity when the stimulus is perceived as different from the repeated stimulus.

Contextual Cues
- Top-Down Processing: When external, "higher" knowledge influences how the input is processed
- Bottom-Up Processing: Processing of the direct sensory input (e.g., auditory or visual information)
  - All systems that take external input have bottom-up processing

What kinds of information (contextual cues) might facilitate speech perception?
1. Lexical knowledge
2. Sentence context
3. Visual information about articulation (McGurk effect)

Ganong Effect: An effect in which listeners perceive the same ambiguous sound differently depending on the word it is embedded within.
Example: A sound that is ambiguous between /t/ and /d/ will be perceived as /t/ when it appears in the context of _ask but as /d/ in the context of _ash.
➔ Lexical information DOES have an influence on phoneme recognition. (A simulation sketch follows below.)
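A minimal simulation sketch of the Ganong effect, assuming identification along a /t/-/d/ VOT continuum follows a logistic curve and that lexical context shifts the category boundary toward whichever reading makes a real word; every parameter below is invented:

```python
# Ganong-effect sketch: the /t/-/d/ category boundary (in VOT ms) shifts
# toward the phoneme that yields a real word. Parameters are invented.

import math

NEUTRAL_BOUNDARY_MS = 30.0
LEXICAL_SHIFT_MS = 8.0   # hypothetical size of the lexical pull
SLOPE = 0.5              # steepness of the identification curve

def p_voiceless(vot_ms: float, frame: str) -> float:
    """Probability of reporting /t/ for a token in a given lexical frame."""
    boundary = NEUTRAL_BOUNDARY_MS
    if frame == "_ask":    # 'task' is a word, 'dask' is not: pull toward /t/
        boundary -= LEXICAL_SHIFT_MS
    elif frame == "_ash":  # 'dash' is a word, 'tash' is not: pull toward /d/
        boundary += LEXICAL_SHIFT_MS
    return 1 / (1 + math.exp(-SLOPE * (vot_ms - boundary)))

ambiguous_vot = 30.0  # a token right at the neutral boundary
print("p(/t/) in _ask:", round(p_voiceless(ambiguous_vot, "_ask"), 2))  # > .5
print("p(/t/) in _ash:", round(p_voiceless(ambiguous_vot, "_ash"), 2))  # < .5
# The same acoustic token is mostly heard as 'task' in one frame and as
# 'dash' in the other: lexical knowledge biases phoneme identification.
```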
Phoneme Restoration Effect: An illusion in which a non-speech sound (white noise or a cough) replaces a speech sound within a recognizable word, with the result that people perceive the non-speech sound while also "hearing" the missing speech sound.
- Example: the [s] in "legislatures" replaced with a cough; we "hear" a sound that was NOT actually present

McGurk Effect: An illusion in which a mismatch between auditory and visual information about a sound's articulation results in altered perception of that sound.
- Example: When people hear an audio recording of a person uttering the syllable "ba" while viewing a video of the speaker uttering "ga", they often perceive the syllable as "da".

fMRI Study on the McGurk Effect (Hasson et al. 2007): Do context cues completely override acoustic cues?
➔ Contextual information cannot completely override basic acoustics

Individual Variation
What can we tell about a speaker from their speech?
- Gender
- Age
- Physical size
- Social class/geographic location
- Race (sometimes)

Can evidence of discrimination be shown based on this identification? (Purnell, Idsardi & Baugh 1999)
- Called for housing appointments in different neighbourhoods of the San Francisco area using different dialects: SAE (Standard American English) and AAVE (African American Vernacular English)
- Neighbourhood demographics - white favoured: San Francisco, Palo Alto, Woodside; white and black favoured: Oakland, East Palo Alto
- In white neighbourhoods it was harder to get an appointment: discrimination against AAVE speakers

What are the differences between [s] and [ʃ]?
- [s] is alveolar; [ʃ] is alveopalatal
- Main acoustic difference: FREQUENCY

Kraljic & Samuel (2007) Study: Speaker-Specific Adaptation
- Can we learn "talker-specific" phoneme boundaries through exposure to different individuals' pronunciations of "the same phoneme"? Can listeners adjust their phonemic boundaries in response to specific talkers' pronunciations?
- Exposure phase: participants listen to two different speakers, a male and a female, pronouncing the same set of words
  - Some words contained /s/ and others contained /ʃ/
  - Participants were placed in different groups
- Categorization phase: participants judge non-word syllables containing tokens from an [s]-[ʃ] continuum
  - Forced-choice task: did you hear "S" or "SH"?
- Do participants' category boundaries shift for the ambiguous phoneme-voice pairs?
  - When participants heard a speaker use the ambiguous phone in place of [ʃ] in the exposure phase, they were more likely to identify ambiguous phones as [ʃ] when produced in that same speaker's voice
  - They did NOT change their category boundaries when judging sounds produced by the other voice
  - Listeners can adjust their phonemic boundaries in response to specific talker pronunciations (see the sketch at the end of this chapter)

Speaker-Specific Adaptation
- Does NOT happen for the voicing continuum
  ➔ Voicing normally does NOT reliably signal a particular speaker
- Happens for the [s]-[ʃ] place-frequency continuum
  ➔ Frequency often distinguishes one speaker from another (male vs. female)

Does our phonological knowledge warp our perception?
- Yes: categorical perception affects linguistic judgement
- But categorical perception does NOT mean that people lose sensitivity to low-level acoustic properties

How do contextual cues influence identification of speech sounds?
- Hearers use contextual cues in addition to basic acoustic cues when identifying speech sounds
- Lexical knowledge, sentence context, visual information
- Context cues can't completely override acoustic cues

How do we deal with variation between different talkers?
- We can adapt speaker-specific identification boundaries for SOME phonemes
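A minimal sketch of the talker-specific-boundary idea, under the assumption that a listener stores one [s]/[ʃ] boundary per talker and nudges it after exposure to that talker's ambiguous productions; the frequency values, starting boundary, and update rule are all invented:

```python
# Sketch of speaker-specific perceptual adaptation for the [s]-[sh] contrast.
# Assumption: the listener keeps a separate frication-frequency boundary per
# talker and shifts it toward that talker's ambiguous productions.
# Frequencies (Hz), starting boundary, and learning rate are invented.

DEFAULT_BOUNDARY_HZ = 5000.0  # at/above -> [s], below -> [sh]
LEARNING_RATE = 0.3

boundaries: dict[str, float] = {}  # talker -> personal boundary

def categorize(talker: str, frication_hz: float) -> str:
    boundary = boundaries.get(talker, DEFAULT_BOUNDARY_HZ)
    return "s" if frication_hz >= boundary else "sh"

def expose(talker: str, frication_hz: float, intended: str) -> None:
    """Hearing a talker use this token for /s/ or /sh/ nudges their boundary."""
    boundary = boundaries.get(talker, DEFAULT_BOUNDARY_HZ)
    # Move the boundary so the heard token falls on the intended side.
    target = frication_hz - 200 if intended == "s" else frication_hz + 200
    boundaries[talker] = boundary + LEARNING_RATE * (target - boundary)

ambiguous = 5000.0  # a token right at the default boundary
for _ in range(5):
    expose("speaker_A", ambiguous, intended="sh")  # A uses it where /sh/ belongs

print("speaker_A:", categorize("speaker_A", ambiguous))  # now 'sh' for A...
print("speaker_B:", categorize("speaker_B", ambiguous))  # ...still 's' for B
```

The key design point, matching the finding above, is that the shifted boundary is stored per talker: exposure to speaker A leaves judgments of speaker B unchanged.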
Chapter 8.1, 8.2, 8.3 - Word Recognition

How many words do we produce/process per minute/second on average?
➔ We produce more than 150 words per minute
➔ We process about 2.5 words per second
➔ So we identify a word roughly every 0.4 seconds (1 / 2.5 = 0.4 s)

How many milliseconds do we have on average to find a word in the lexicon?
➔ About 400 ms to "find" a word in the lexicon

Mental lexicon = mental dictionary

What kind of information is stored in the mental lexicon?
- Semantic information: the "meaning" of a concept
- Syntactic information: part of speech (noun, verb, ...); whether a verb requires an object
  Ex. "John bugged Mary" - "bugged" requires an object to be grammatical, which in this case is "Mary"
- Phonological information

Lexical Decision Task
- Decide as quickly and accurately as possible whether a stimulus is a word or not
➔ Measure the time from the onset of the stimulus to when the button is pressed
- Method: create different lists of words
  - Words starting with "A"
  - Words starting with "W"
  - Non-word fillers, to divert attention and prevent participants from recognizing the true purpose of the experiment
- Test: have MANY participants respond to ALL words in randomized order; compare AVERAGE reaction times to words in different groups
- Dependent variables: reaction time, accuracy
- Independent variables: onset letter; real words vs. non-words
- Results: "A" words were NOT recognized faster than "W" words

Is the lexicon arranged alphabetically?
- NO: the mental lexicon is NOT arranged as an alphabetical list that must be searched serially
*Supported by the lexical decision task experiment

Frequency Effect
- Compare lists of words of different frequency
- Lexical decision times for high-frequency words (words that people encounter a lot in everyday life) are faster on average than for low-frequency words
- Access to the mental lexicon is influenced by word frequency: high-frequency words are easier to access than low-frequency words

Resting Activation: How activated a lexical entry is at baseline
- When a word is accessed in the mental lexicon, it becomes more activated
- The more often a lexical item is accessed, the higher its resting activation, and the easier it is to retrieve
- Frequency is not an organizational property but a property of words that affects how they are accessed

Semantic Effects
- Some words are related to others, or share similarities: doctor - nurse, cat - dog
- Semantic Priming: Hearing/reading a word partially activates other words that are related in meaning to it, making the related words easier to recognize
  - Present a "prime" briefly before a target, in related or unrelated conditions
  - Priming effect = 940 - 870 = 70 ms of facilitation (faster); see the sketch below
- Semantic priming suggests our lexicon is organized according to semantic relatedness

Masked Priming
- The prime word is presented subliminally: too quickly to be consciously recognized
- Our unconscious mind has processed a stimulus even though our conscious mind doesn't know we have done so

Eye-Tracking
- Records eye movements continuously in real time
- Displays objects on a screen and measures the location of eye gazes in response to an auditory stimulus
- Measures how eye-gaze location changes as linguistic information is processed in real time
- Good temporal resolution
- When hearing "hammer", participants look more at a nail than at unrelated controls. Why? Related meaning.
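A small sketch of the priming-effect arithmetic above, generalized from two condition means to lists of trial RTs; the individual trial values are fake and were chosen to reproduce the 940 ms vs. 870 ms means in the notes:

```python
# Priming-effect sketch: facilitation = unrelated mean RT - related mean RT.
# Positive values mean the prime sped up recognition (facilitation);
# negative values would mean it slowed recognition (inhibition). Fake data.

from statistics import mean

related_rts = [860, 875, 865, 880]    # target after related prime (doctor -> NURSE)
unrelated_rts = [945, 930, 940, 945]  # target after unrelated prime (table -> NURSE)

effect = mean(unrelated_rts) - mean(related_rts)
kind = "facilitation" if effect > 0 else "inhibition"
print(f"Priming effect: {effect:.0f} ms ({kind})")
# With means of 870 ms vs. 940 ms this prints 70 ms of facilitation,
# matching the worked example in the notes.
```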
Lexical Organization

Semantic Network: Lexical items are arranged in a network of interconnected units
- "Activation" determines whether a node/unit is accessed
- Units have baseline levels of activation
- Words must reach an activation threshold to be retrieved
- Activation can increase or decrease
  - It increases when features in the stimulus match the unit/node
  - It decreases (goes down gradually) over time

Spreading Activation: Activation from one word can spread to semantically related words
- If a word is pre-activated by an associated word (before lexical access), it will be easier to retrieve

Phonological Neighbourhood Effect
How does spreading activation from individual phonological units to non-targets affect word recognition?
- A word's phonological neighbour = a word that differs from it in exactly 1 phoneme
  - "beat" (many neighbours): meat, heat, beam, bean
  - "death" (few neighbours): meth
- Phonological neighbourhood effect = inhibition (slower): words with many phonological neighbours are responded to more slowly, all else being equal (matched frequency and length)
  - Words with many similar-sounding words (sound-alikes): dense neighbourhoods, high density -> slower to recognize
  - Words with few sound-alikes: sparse neighbourhoods, low density -> faster to recognize

Lexical Competitors
- "Pick up the beaker": when there is a competitor ("beetle"), people take longer to locate the beaker

Inhibitory Connections: Connections that lower the activation of connected units
- Decrease the activation of linked units
Excitatory Connections: Connections along which activation is passed from one unit to another
- Increase the activation of linked units

Our lexicons are organized semantically and phonologically: there is evidence that phonologically and semantically related words are connected to one another.

Spreading Activation Models
A web of interconnections (a toy sketch appears at the end of this chapter):
1. Semantic connection/association (lion and tiger)
2. Neighbourhood effect: how many connections a word has
3. Frequency: stronger connections for frequent words -> more activation, a higher baseline

Ambiguity
- Homophones: words that have separate meanings but sound the same
- Words need to be disambiguated to be understood
- Certain meanings are more relevant or likely in specific contexts; context can help us figure out which sense to use
- Example: "He brought her money to the bank" (likely a financial bank) vs. "He lay in the grass along the bank" (likely a river bank)

"Bottom-Up" vs. "Top-Down" Processing
Modular System:
- Uses only bottom-up processing first, then uses top-down knowledge later
- Accesses all meanings, and later uses context to pick the appropriate one
Interactive System:
- Employs both bottom-up and top-down processing
- Uses context to block out inconsistent meanings and access only one

Evidence of Modularity
- Lexical access is modular
- Access-Selection Model: access all lexical items/meanings consistent with the input, then select the one congruent with the context
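To make the network ideas above concrete, here is a toy spreading-activation sketch: units carry frequency-scaled resting activation, excitatory links pass activation to semantic associates, inhibitory links let phonological neighbours compete, and a unit counts as retrieved once it crosses a threshold. Every number, word, and connection below is invented for illustration:

```python
# Toy spreading-activation lexicon: excitatory links between semantic
# associates, inhibitory links between phonological neighbours, and
# frequency-scaled resting activation. All values are invented.

THRESHOLD = 1.0

class Unit:
    def __init__(self, word: str, frequency: float):
        self.word = word
        self.activation = 0.1 * frequency  # resting level scales with frequency
        self.excites: list["Unit"] = []    # semantic associates
        self.inhibits: list["Unit"] = []   # phonological competitors

def connect(a: Unit, b: Unit, kind: str) -> None:
    """Link two units symmetrically, either excitatory or inhibitory."""
    links = (a.excites, b.excites) if kind == "excitatory" else (a.inhibits, b.inhibits)
    links[0].append(b)
    links[1].append(a)

def stimulate(unit: Unit, energy: float = 1.0) -> None:
    """Hearing/reading a word boosts it and spreads activation to its links."""
    unit.activation += energy
    for other in unit.excites:
        other.activation += 0.5 * energy   # partial spread to associates
    for other in unit.inhibits:
        other.activation -= 0.3 * energy   # competitors are suppressed

def retrieved(unit: Unit) -> bool:
    return unit.activation >= THRESHOLD

doctor, nurse, beaker, beetle = (Unit(w, f) for w, f in
    [("doctor", 5), ("nurse", 4), ("beaker", 1), ("beetle", 1)])
connect(doctor, nurse, "excitatory")   # semantic associates
connect(beaker, beetle, "inhibitory")  # phonological neighbours compete

stimulate(doctor)                      # prime: 'doctor'
print("nurse pre-activated:", round(nurse.activation, 2))  # easier to retrieve next
stimulate(beaker)                      # target in a (tiny) neighbourhood
print("beaker:", round(beaker.activation, 2), "retrieved:", retrieved(beaker))
print("beetle suppressed:", round(beetle.activation, 2))
```

The sketch reproduces the two effects from the notes in miniature: stimulating "doctor" pre-activates "nurse" (semantic priming/facilitation), while the beaker-beetle inhibitory link shows how a dense neighbourhood of competitors would slow recognition.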
