Phonology Lecture Notes PDF
Summary
These lecture notes cover various aspects of phonology, including the study of speech sounds, neural tracking, and prelinguistic speech development. Topics such as EEG frequency bands and the evolution of speech production are also discussed.
Phonology Lecture 3

Clarification! Points to clarify from last time:
- What exactly "split-brain" means and how lateralization ties into the brain's ability to compensate for damage.
- The different frequency bands (delta, theta, beta, gamma) and their roles in language processing.
- Why do kids with aphasia recover better?
- The bioprogram hypothesis.
Topics: bioprogram hypothesis, split brain, EEG bands.

EEG bands
1. Delta (0.5-4 Hz)
   - Associated with: deep sleep, unconscious states, and certain types of slow-wave cognitive processes.
   - Speech relevance: delta waves are less directly associated with speech processing in awake individuals, but can be observed in studies involving deep sleep and its impact on speech and language processing.
2. Theta (4-8 Hz)
   - Associated with: drowsiness, light sleep, meditation, and memory retrieval.
   - Speech relevance: theta waves are linked to memory encoding and retrieval, which are crucial for language comprehension and speech production. They play a role in semantic processing and the integration of contextual information during speech.
3. Alpha (8-12 Hz)
   - Associated with: relaxation, calmness, and an idle, wakeful state.
   - Speech relevance: alpha waves are often observed when individuals are in a relaxed state but not actively engaged in language processing. However, changes in alpha activity can indicate shifts in attention and cognitive workload during speech tasks.
4. Beta (12-30 Hz)
   - Associated with: active thinking, focus, problem-solving, and motor control.
   - Speech relevance: beta waves are prominent during active speech processing, including language comprehension, speech production, and auditory processing. They are often linked to the motor planning required for speech articulation and the cognitive load associated with complex linguistic tasks.
5. Gamma (30-100 Hz)
   - Associated with: high-level information processing, perception, and consciousness.
   - Speech relevance: gamma waves are crucial for higher-level cognitive functions, including the integration of sensory input during speech perception and the synchronization of neural networks involved in language comprehension and production. They are also associated with the binding of phonological and semantic information during speech.

Example: theta
Syllables are units of organization for sequences of speech sounds. Theta oscillations help the brain segment continuous speech into smaller, manageable units, like syllables. This segmentation is crucial for understanding the rhythmic and intonational structure of speech. EEG work has shown that theta activity increases during tasks that involve syllable discrimination and repetition. These studies provide evidence that the brain uses theta rhythms to organize and process syllabic information.
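To make the band ranges above concrete, here is a minimal sketch in Python (a toy signal and an assumed sampling rate, not a real EEG pipeline): it averages FFT power within each band listed in the notes, and a 5 Hz "syllable-rate" oscillation comes out strongest in the theta band.

```python
import numpy as np

# EEG band limits as listed in these notes (Hz).
BANDS = {"delta": (0.5, 4), "theta": (4, 8), "alpha": (8, 12),
         "beta": (12, 30), "gamma": (30, 100)}

def band_power(signal, fs, low, high):
    """Mean spectral power of `signal` between `low` and `high` Hz."""
    freqs = np.fft.rfftfreq(len(signal), d=1 / fs)
    power = np.abs(np.fft.rfft(signal)) ** 2
    in_band = (freqs >= low) & (freqs < high)
    return power[in_band].mean()

# Toy example: a 5 Hz oscillation (roughly the syllable rate discussed above)
# plus noise should carry most of its power in the theta band.
fs = 250                                   # sampling rate in Hz (assumed)
t = np.arange(0, 10, 1 / fs)
toy_signal = np.sin(2 * np.pi * 5 * t) + 0.3 * np.random.randn(len(t))

powers = {name: band_power(toy_signal, fs, lo, hi) for name, (lo, hi) in BANDS.items()}
print(max(powers, key=powers.get))         # expected: theta
```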
Neural tracking

Student question: I am unclear about how gamma activity reflects the brain's ability to distinguish subtle phonemic differences, such as "cat" versus "pat". Specifically, I struggled to understand what is meant by "tracking the speech signal with higher fidelity" and how this coherence with the speech rhythm translates into effective speech processing. Additionally, while I grasp the idea of multiple frequency bands working together, I found it challenging to understand how the brain prioritizes or enhances specific bands like gamma over others during tasks requiring phonemic precision.

Subtle phonemic differences like "cat" vs. "pat" require distinguishing specific acoustic features, such as voice onset time (VOT) or formant transitions. Gamma activity helps track these rapid acoustic changes. Gamma oscillations enhance temporal resolution: they enable precise temporal alignment between the neural response and subtle phonemic cues.

"Tracking with higher fidelity" refers to how closely the brain's neural oscillations (in multiple frequency bands) synchronize with the dynamic features of speech (e.g., syllable timing, pitch contour, phoneme transitions). During tasks requiring fine acoustic discrimination, gamma activity is upregulated because it is suited for high-resolution, short-timescale processing. In contrast, lower-frequency bands (e.g., theta, delta) track slower modulations like syllable or word rhythms.

The brain uses a hierarchical organization of frequency bands, and different groups of neurons or brain regions can oscillate at different frequencies at the same time:
- Delta (0.5-4 Hz): tracks sentence structure and intonation.
- Theta (4-8 Hz): tracks syllables.
- Gamma (30-100 Hz): tracks phonemes and fine acoustic features.
Depending on the speech task, higher-level (delta/theta) or lower-level (gamma) oscillations may dominate.

To summarise: the brain doesn't only "favor" gamma oscillations but adjusts the balance across frequencies based on what's needed for the task. During phonemic processing, more gamma oscillations are synchronized with the speech signal, reflecting more focused neural resources for fine auditory discrimination.
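One common way to quantify this kind of tracking is speech-brain coherence: how strongly the neural signal and the speech envelope co-vary at each frequency. Below is a minimal sketch with synthetic, assumed data (not a real analysis pipeline), using scipy's coherence function; the toy "neural" signal partly follows a ~5 Hz envelope, so coherence peaks in the theta range.

```python
import numpy as np
from scipy.signal import coherence

fs = 500                                   # sampling rate in Hz (assumed)
t = np.arange(0, 60, 1 / fs)

# Toy "speech envelope": a syllable-rate (~5 Hz, i.e. theta) modulation.
speech_env = 1 + np.sin(2 * np.pi * 5 * t)

# Toy "neural" signal: partly follows the envelope, plus unrelated noise.
neural = 0.6 * speech_env + np.random.randn(len(t))

f, cxy = coherence(speech_env, neural, fs=fs, nperseg=2048)

# High coherence near 5 Hz = the signal "tracks" the syllable rhythm with
# high fidelity; low, flat coherence elsewhere = weaker tracking at those rates.
theta = (f >= 4) & (f <= 8)
print(cxy[theta].max())                    # close to 1 for this toy signal
```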
Kids with aphasia

Student question: Why do kids with aphasia recover better? Shouldn't their vulnerability to injuries and illnesses be stronger at birth?
Children recover better from aphasia because their brains are still developing, highly adaptable, and not yet as specialized. While their brains are sensitive to injury, this plasticity allows other brain regions to step in and compensate for damaged areas, particularly in the context of language. Adults, with their more rigid and specialized neural networks, lack this same degree of flexibility, making recovery slower and less complete.

Does it have anything to do with the left hemisphere of their brain?
In most people, the left hemisphere dominates for language. Damage to this area in adults often causes severe, long-lasting aphasia. In children, the right hemisphere can take over many language functions if the left hemisphere is damaged early in life. This "backup" ability diminishes with age as the left hemisphere becomes more specialized for language and the right for other functions.

Bioprogram hypothesis

Student question: The Bioprogram Hypothesis was confusing, particularly the evidence supporting it. What are the main differences between child language development and creole languages that support or refute this hypothesis?
Bickerton argued that the systematic structure of creoles reflects a universal grammatical framework that is biologically hardwired, similar to what drives child language acquisition.
- Similarities: language emerges from limited input, gap filling, universality, a critical period.
- Differences: children are exposed to a full grammar while creole learners are not; child language acquisition is predictable while creole formation is not; child language is only as complex as its input.

Infant-directed speech
- Rhythmic
- Repeats words frequently
- Has a hyperarticulated vowel space
- Higher pitch
- Exaggerated intonation
- Slower tempo
- Simpler vocabulary
- Diminutives ("doggy" instead of "dog")

So many different speech sounds!
Speech sounds = the acoustic signals that languages use to express meaning. There are roughly 200 different speech sounds; English uses about 45. Some languages also use tone, duration, or clicks to make new speech sounds. Tonal languages use prosody to mark different words.

Which sounds signal meaning?
- Distinctive feature: an acoustic characteristic that makes a difference in meaning → a distinctive feature is linguistically contrastive ([t] vs. [d] in English).
- Phones: the different sounds that can be produced in a language.
- Allophones: phones that do not differentiate meaning ([p] and [pʰ] are allophones of the same phoneme in English).
- Phonemes: meaningfully different sounds in a language (aspiration in some languages, e.g. Hindi, or [s] vs. [ʃ] in English).
Example: [p] vs. [pʰ] in English – "spill" vs. "pill". Anyone have any other examples?

Allophones + phonemes
In English, [p] and [pʰ] are allophones of the /p/ phoneme. Switching allophones of the same phoneme won't change the meaning of the word: [spʰɪt] still means 'spit'. Different languages can have different groupings for their phonemes. [p] and [pʰ] belong to the same phoneme in English, but to different phonemes in Chinese: in Chinese, switching [p] and [pʰ] does change the meaning of the word.

Summary
If two sounds CONTRAST in a particular language (e.g. [t] and [d] in English):
- The sounds are separate phonemes in that language.
- There are minimal pairs distinguishing the two sounds. Example: in English, we have the minimal pair [tɹejn] vs. [dɹejn] (train vs. drain).
If two sounds DO NOT CONTRAST in a particular language (e.g. light [l] and dark [ɫ] in English):
- The sounds are allophones of a single phoneme in that language. Example: [l] and [ɫ] are allophones of the English phoneme /l/.
- The sounds are in complementary (or non-overlapping) distribution, meaning that where one shows up, the other never shows up. Example: in English, [l] only shows up before vowels ('love'), and [ɫ] never shows up before vowels ('ball').
- There are no minimal pairs distinguishing the two sounds.
** Minimal pair = two words that differ in only one sound.
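The contrast test above can be made concrete with a toy sketch: given a small lexicon of rough transcriptions (made up for illustration, with each character treated as one segment), two sounds contrast if swapping one for the other in the same position turns one word of the lexicon into another, i.e. produces a minimal pair.

```python
# Toy lexicon: rough IPA-ish transcription -> ordinary spelling (assumed data).
words = {"tɹejn": "train", "dɹejn": "drain", "pɪl": "pill",
         "spɪl": "spill", "kæt": "cat", "pæt": "pat"}

def minimal_pairs(lexicon, sound_a, sound_b):
    """Return word pairs that differ only in sound_a vs. sound_b at one position."""
    pairs = []
    for w in lexicon:
        for i, segment in enumerate(w):
            if segment == sound_a:
                candidate = w[:i] + sound_b + w[i + 1:]
                if candidate in lexicon:
                    pairs.append((lexicon[w], lexicon[candidate]))
    return pairs

print(minimal_pairs(words, "t", "d"))   # [('train', 'drain')] -> /t/ and /d/ contrast
print(minimal_pairs(words, "k", "p"))   # [('cat', 'pat')]     -> /k/ and /p/ contrast
```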
Example: allophones, phonemes, phones
/ra/ vs. /la/:
- Japanese: allophones
- English: phonemes
- Both: phones
Retroflex vs. dental /da/ sounds:
- Hindi: phonemes
- English: allophones ("your doll" vs. "this doll")
- Both: phones

Phonological structure of words
Onset + rime. "Cat": onset = /k/ + rime = /æt/.
Evidence that we implicitly know this structure of words? Rhyming! "A man has a plan", or "sat on a wall, had a great fall".
Language play: spoonerisms. "Dear old queen" → "queer old dean"; "glad puppy" → "plad guppy".
Phonotactic knowledge: we know 'kpakali' and 'zloty' are not words in English.

Can you decode these messages?
- Pig Latin: Itay isay lmostay ridayfay!
- Eastern Canadian: Wabe abare sabo abexcabitabed
- Horse Latin: Whabat abare yobou doboibing thibis weebeekebend
What's your name in all three?
- Pig Latin: Ollyhay
- Eastern Canadian: Habolly
- Horse Latin: Hobolly
Can you come up with some good spoonerisms? (Switch the onsets of two words.)
- jelly bean → belly jean
- bad salad → sad ballad
- doing the chores → chewing the doors

Phonological rules
How do you make something plural? Add an s? Try it: "bug", "bike".
English rule (voicing assimilation): a voiced final sound gets /z/, an unvoiced final sound gets /s/. Kids know these rules by age 4.
Assimilation: when two consonants are together in a word, they match in terms of voicing.
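Here is a minimal sketch of that plural rule, applied to assumed broad transcriptions rather than spelling (spelling would mislabel "bike"); the voiced-sound set is a simplified assumption, and the /ɪz/ form that real English uses after sibilants (buses, wishes) is ignored.

```python
# Voicing-assimilation plural rule: voiced final sound -> /z/, voiceless -> /s/.
# VOICED is a toy set of voiced consonants plus a few vowels (assumed, incomplete).
VOICED = set("bdgvzmnlrwj") | set("aeiouʌɪæ")

def plural_suffix(transcription: str) -> str:
    """Pick the plural allomorph from the voicing of the word's final sound."""
    return "/z/" if transcription[-1] in VOICED else "/s/"

print(plural_suffix("bʌg"))    # /z/ -> "bugz": final /g/ is voiced
print(plural_suffix("baɪk"))   # /s/ -> "bikes" ends in /s/: final /k/ is voiceless
```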
Describing speech sounds
Phonetics: one symbol per sound ("letters of the alphabet are not adequate for describing speech sounds"), written in brackets, e.g. [pʰ].
Phonemics: indicates just the meaningfully different sounds (no extra information about, e.g., aspiration), written between slashes, e.g. /p/.
Task: https://www.ipachart.com – write a word or sentence using the IPA. Use the link to help you learn the sounds.

Speech sounds can be described in terms of physical properties: frequency + amplitude. Speech sounds can also be described in terms of where they are produced: articulatory phonetics. Articulatory phonetics can help to describe all sounds of all languages.
Consonants:
- Place of articulation: where the vocal tract is closed
- Manner of articulation: how the vocal tract is closed
- Voiced vs. voiceless: do the vocal cords help produce the sound?

Manner of articulation
- Stop: a consonant sound produced by completely stopping airflow, e.g. /p/, /t/, /k/.
- Nasal: a speech sound in which the airstream passes through the nose as a result of the lowering of the soft palate (velum) at the back of the mouth, e.g. /m/, /n/.
- Fricative: a consonant sound, such as English /f/ or /v/, produced by bringing the mouth into position to block the passage of the airstream, but not making complete closure, so that air moving through the mouth generates audible friction.
- Affricate: a consonant that begins as a stop and releases as a fricative, generally with the same place of articulation ("chair").
- Glide: a sound that is phonetically similar to a vowel sound but functions as the syllable boundary, rather than as the nucleus of a syllable, e.g. /y/, /w/.
- Liquid: a consonant sound in which the tongue produces a partial closure in the mouth, resulting in a resonant, vowel-like consonant, e.g. /l/, /r/.

Articulatory phonetics: voicing
Voicing is determined by vocal fold position.
- Voiceless = vocal folds pulled apart
- Voiced = vocal folds close together; they vibrate

Practice – voicing! Name the manner (not necessarily the place) of articulation for each: stop/plosive, nasal, fricative, affricate, liquid, or glide (from one position to another).
- Voiced phonemes: /b/, /m/, /w/, /v/, /TH/ (as in "this"), /d/, /l/, /j/, /y/, /z/, /n/, /r/, /g/
- Unvoiced phonemes: /p/, /wh/, /s/, /sh/, /ch/, /k/, /f/, /th/ (as in "thin"), /h/

Now how do we learn these sounds? And how do we put these sounds together? Before children produce meaningful speech, they have to develop the ability to produce speech sounds.

Prelinguistic speech development
1) Reflexive crying: cry, burp, sneeze (vegetative sounds)
2) Cooing and laughing (happy sounds)
3) Vocal play: increase in consonant- and vowel-like sounds, squeals, growls
4) Reduplicated/canonical babbling: "dadadadada"
5) Non-reduplicated babbling: wider range of consonant-vowel combinations, more prosody
Vocal development: pre-speech + prosody. Happy babbling baby!

Babbling drift: babies babble differently (esp. before 10 months) depending on the language they're learning, but these are subtle differences.
Deaf infants babble, too! They babble with their hands, making signs. Some may babble vocally, but later than hearing babies. Typically, it is at canonical babbling where deaf and hearing infants diverge in pre-speech sounds. Canonical babbling: the same sound repeated over and over, e.g. "dadadadada".

Evolution of a word
https://www.ted.com/talks/deb_roy_the_birth_of_a_word (start at 40 seconds to hear about the method; 4:30 for the evolution of a word over ~6 months)
Protoword: a word that does not resemble the referenced word, e.g. "yumyum" or "guhguh".

What allows for these huge changes during the first year?
- Newborns: the vocal tract is small and the tongue fills the whole mouth.
- Vocal play period: skeletal changes = the tongue has more room to articulate!
- Muscle maturation
- Sensory receptors in the throat give better perception of what's going on in the vocal tract
- Brain development – maturation of higher-level cortices
- Experience (hearing speech): babies with frequent ear infections babble later (Polka et al., 2007); hearing their own voice and imitating
- Social interaction

Word recognition
What's involved in word recognition? Matching the sound to an internal representation of the sound of the word and to an external representation of the object of that word. What aspects do infants include in that? The speaker's accent? The speaker's gender? Words that typically happen around those other words?
Words are detailed in early learning: babies can hear mispronunciations (Swingley & Aslin, 2000), e.g. BABY vs. VABY – it takes them longer to look to the picture of the baby with a mispronunciation. They don't recognize words spoken by different genders (high vs. low pitch?) until 10.5 months, and don't recognize words spoken in different accents until 12 months+. Babies are still working on figuring out which sounds are meaningfully different (/b/ vs. /p/) and which are not (male vs. female voice pitch) through their first and into their second year (minimal pairs example). 4.5-month-olds can tell apart different speakers (monolingual vs. bilingual differences; visual fixation paradigm).

Word production
Simple: single or reduplicated syllables. Sounds that are easier to pronounce are more likely to come first (/m/, /b/, /d/ > /th/, /l/, /r/). At 18 months, children begin to systematically transform heard language into sounds that they can produce (phonological processes): "bottle" → /baba/, "church" → /turts/.
First words: simple syllable structure (single syllables or reduplicated syllables); lack of consistency in the way children produce sounds at this age. Phonological idioms: words that children produce in a very adultlike way, while still incorrectly producing other words that use the exact same sounds.

Speech errors – match the word to the speech error:
- Gisgusting ("disgusting")
- Dross ("gross")
- Caw ("car")

Task: think of some more examples you might hear from a child using the following phonological processes (Menti: 9410 7431, https://www.mentimeter.com/app/presentation/blgqac6wj9ofdeg7ku283c4ij7d9cshw/edit?question=ehiy6ag7fy8n):
- Consonant cluster reduction
- Weak syllable deletion
- Reduplication
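For illustration, here is a toy sketch of the three processes above, using rough orthographic approximations (assumed representations, not IPA; real child forms vary from child to child and depend on the phoneme inventory).

```python
import re

def cluster_reduction(word: str) -> str:
    """Reduce every run of two or more consonant letters to its first letter,
    e.g. 'blanket' -> 'banet' (children vary in which consonant survives)."""
    return re.sub(r"([^aeiou])[^aeiou]+", r"\1", word)

def weak_syllable_deletion(syllables, stressed):
    """Drop the syllables before the stressed one, e.g. ba-NA-na -> 'nana'."""
    return "".join(syllables[stressed:])

def reduplication(word: str) -> str:
    """Repeat the word's first consonant+vowel chunk, e.g. 'bottle' -> 'bobo'
    (the notes' target form for 'bottle' is the phonological /baba/)."""
    cv = re.match(r"[^aeiou]*[aeiou]", word)
    return cv.group(0) * 2 if cv else word

print(cluster_reduction("blanket"))                    # banet
print(weak_syllable_deletion(["ba", "na", "na"], 1))   # nana
print(reduplication("bottle"))                         # bobo
```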
Production development
There is a general developmental timeline for certain words, but it depends on many factors:
- Where the sound occurs in the word
- Articulatory complexity
- Functional load (how often it is used in the language)
- Cross-linguistic differences (e.g., English vs. Swedish/Estonian/Bulgarian)
- Individual differences (preference)

Perception/production link
Toddlers don't like it when you repeat their mistakes… suggesting that they have a good mental representation of the word; their articulatory gestures are just still developing.

Phonological awareness: an understanding of the sounds of one's language.
- Rhyme
- Syllable counting
- Listing words with the same onset sound (e.g., /g/)
Begins around age 2!

Phonological development is related to lexical development: kids with lots of different babble sounds start producing words more quickly. Later, the more words a child knows, the more refined their phonological awareness becomes.

Theories of phonological development
- Behaviorist (from the 1950s, before the nativist approach): babies imitate the sounds they hear and are rewarded for it, so they keep doing it. But phonology is more than just building a database of sounds; it involves mental representations of phonotactics, etc. Not wrong, but insufficient.
- Universalist (nativist – Universal Grammar approach): there is a pre-specified list of all possible sounds, and the baby's job is to learn which are important for their language. Babies do this by developing a ranking of universal constraints specific to their language.
- Biologically-based theories: anatomical and physiological factors play a critical role in phonological development, which is constrained by motor capacity (e.g., vocal tract development). Early-produced sounds are common among all languages: 97% of languages have an /m/ sound and it is among the first produced; /r/ occurs in only 5% of the world's languages (https://phoible.org/parameters).
- Usage-based: input is critical for guiding phonological development; statistical patterns in the input influence children's phonological development. One type: connectionist. (A toy sketch of this kind of statistical computation appears at the end of these notes.)

Multisensory integration
Visual information is important in phoneme perception. When infants are first learning language (~8 months), they look much more often at the mouth than at the eyes (Lewkowicz & Hansen-Tift, 2012). Once they have learned many of the sounds, they look again to the eyes (~12 months). https://www.youtube.com/watch?v=2k8fHR9jKVM
McGurk effect: see "va", hear "ba", and the brain registers it as /fa/. Your brain integrates information from the visual and auditory senses. Normally they don't conflict, but when they do, you can hear something that isn't actually there!
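Finally, returning to the usage-based idea above: a minimal sketch (a made-up syllable stream with two assumed "words") of one statistical pattern a learner could track, the transitional probability between adjacent syllables, which tends to be high inside a word and lower across word boundaries.

```python
from collections import Counter

# Toy stream: the assumed "words" baduko and tilame repeated in sequence.
stream = ["ba", "du", "ko", "ti", "la", "me", "ba", "du", "ko",
          "ba", "du", "ko", "ti", "la", "me"]

pair_counts = Counter(zip(stream, stream[1:]))   # counts of adjacent syllable pairs
first_counts = Counter(stream[:-1])              # how often each syllable starts a pair

def transitional_probability(s1, s2):
    """P(s2 follows s1) among all occurrences of s1 in the stream."""
    return pair_counts[(s1, s2)] / first_counts[s1]

print(transitional_probability("ba", "du"))   # 1.0   (within-word pair)
print(transitional_probability("ko", "ti"))   # ~0.67 (spans a word boundary)
```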