Hearing Measurement 2025 - Introduction PDF
Document Details
Dalhousie University
2025
Dr. Steve J Aiken
Summary
This is an introductory lecture on hearing measurement and audiometric testing for SLP and AUD students. It covers physiologic versus rehabilitative diagnosis, course format and grading, the roles of audibility and fine structure in speech perception, and an overview of the auditory system and types of audiologic assessment.
Full Transcript
Hearing Measurement, Lecture 01
Dr. Steve J Aiken, Dalhousie University, 2025

Course Goals
For SLP students: providing a professional foundation for understanding audiometric results.
For AUD students: providing a basis for more advanced training.
Understanding of basic principles and techniques:
– what are the basic tests
– how to perform the basic tests
– what information does each test provide
– how do I use this information

Types of Diagnosis
Physiological: what might be wrong, such as diagnosing a condition, or whether surgery is a better option than a hearing aid, etc.
Rehabilitative: how to help people overcome obstacles.

Perspective
We work like otologists when we inspect a TM or assess the middle ear, we work like neurologists when we perform electrophysiologic tests or check neural reflexes, and we work like engineers when we adjust the parameters of a hearing aid to achieve optimal gain, but none of these things define us – our focus and goal is the assessment and rehabilitation of receptive communication.

International Classification of Functioning, Disability and Health
Disability is "biopsychosocial":
– The Body: function / impairment ("impairment" is not typically used in audiology)
– The Person: activity limitations
– The Person in Context

Physiologic Diagnosis
Body level (structure and function). Issues:
– normal/abnormal
– site of lesion
– cause
– prognosis

Rehabilitative Diagnosis
Person level (activities and participation). Issues:
– problematic or unproblematic
– hearing capacity (how well can they do)
– is this person having trouble understanding speech
– can we mitigate the effects of the loss
– can we help this person overcome any limitations

Physiologic Perspective
Different types of hearing loss require different types of treatment.
Rehabilitative Perspective
Hearing loss is often normal (e.g., presbyacusis) and is usually not a medical concern. But hearing impairments almost always interfere with communication ("Activity Limitation"), and they generally have a negative impact on life in general ("Participation Restriction").

Audiological Diagnosis: Body and Person
Making Sense of Audiological Data

Topics (organised into four units)
– Pure-tone air- and bone-conduction measures
– Audiometric masking
– Speech testing
– Pseudohypoacusis
– Hyperacusis and tinnitus
– Hearing loss prevention
– The immittance battery
– Otoacoustic emissions
– Paediatric assessment
– Electrophysiologic measures
– Hearing aids and implants

Course Format
– Lectures / integrated lab demonstrations
– Readings with workbook
– Case quizzes
– Hands-on laboratory experiences
– Midterm examination (Unit One)
– In-class tests (Units Two and Three)

Grading / Evaluation
– Case-study quizzes (5): 15%
– Laboratory report: 15% (guide/manual 10%; lab data 5%)
– Mid-term examination: 30%
– Tests: 40% (Test 1: 15%; Test 2: 15%; Test 3: 10%)

Case Quizzes
Five short (~5-10 minute) case-study quizzes will be given in class. Each quiz will consist of a clinical audiogram followed by 3 multiple-choice questions (e.g., describe the loss, likely cause, appropriate follow-up). Each quiz will be worth 3% of the final grade.

Lab Structure
Nine groups of 4 and one group of 3:
– group assignments will be posted by next week
– let me know if there are any potential group issues
One member of each group will come to a training session (led by the TA) to learn how to do the lab, and that person will become the trainer for the group. Each trainer will prepare a guide for their group (details to follow). Each group will complete the lab, led by their trainer (support will be available if needed). Lab guides and personal lab data (from all four labs) will be due April 9th.

Tests
A midterm exam will be given on all material from the first half of the course (behavioural measures; 30%). The second half of the course will cover physiologic measures as well as hearing aids and implants. There will be three in-class tests to assess learning:
– immittance and emissions (15%)
– electrophysiology and paediatrics (15%)
– hearing aids and implants (10%)
There will be no final exam.

Activity Limitations in Audiology
What is the main activity limitation in audiology? Trouble hearing or understanding speech.
Why? Is it just because speech is too soft? People usually still have trouble even when speech is audible! Loss of tuning on the basilar membrane? Precise tuning in the ear is likely not important.

Details Don't Seem to Matter for Speech
Aoccdrnig to rscheearch at Havrard uinervtisy, it deosn't mttaer in waht oredr the ltteers in a wrod are, the olny iprmoetnt tihng is taht the frist and lsat ltteer is in the rghit pclae. The rset can be a toatl mses and you can sitll raed it wouthit a porbelm. Tihs is bcuseae we do not raed ervey lteter by itslef but the wrod as a wlohe. Initsereg!!

Information in Speech: Features
[Figure: spectrograms and waveforms of /b/ /ɑ/ /d/ /i/, 0-4000 Hz.]
– Glottal pulses (opening and closing of the vocal folds).
– Plosive bursts (air release after consonant closure).
– The harmonic structure of the voice (important for listening to a single speaker in noise, and for intonation).
– The formants (resonances created by the shape of the vocal tract) provide speech information (e.g., this is a particular vowel).

But what is the "real" feature? Formants?
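Before following that question up, note that the features just listed (glottal pulses, plosive bursts, harmonics, formants) are easiest to appreciate on a spectrogram like the ones on these slides. As a minimal, illustrative sketch (not part of the lecture materials), the Python below plots a 0-4000 Hz spectrogram of a short recording; the filename `badi.wav` is a hypothetical placeholder for any speech clip.

```python
# Minimal spectrogram sketch (illustrative only): visualize formants,
# harmonics, and plosive bursts in a short speech recording.
import numpy as np
import matplotlib.pyplot as plt
from scipy.io import wavfile
from scipy.signal import spectrogram

fs, x = wavfile.read("badi.wav")    # placeholder file, e.g. a /b/ /ɑ/ /d/ /i/ utterance
x = x.astype(float)
if x.ndim > 1:                      # keep one channel if the file is stereo
    x = x[:, 0]

f, t, Sxx = spectrogram(x, fs=fs, nperseg=512, noverlap=384)
Sxx_db = 10 * np.log10(Sxx + 1e-12)  # convert power to dB

plt.pcolormesh(t, f, Sxx_db, shading="auto")
plt.ylim(0, 4000)                    # same 0-4000 Hz range as the slides
plt.xlabel("Time (s)")
plt.ylabel("Frequency (Hz)")
plt.title("Spectrogram of a short speech recording")
plt.show()
```

On such a plot, formants appear as broad, slowly moving energy bands, while individual harmonics show up as fine horizontal striations.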
Sinewave Speech
[Figure: spectrogram of sinewave speech, 0-4000 Hz.]
– Extracted the formants, took away the other speech variables, and found that people were still able to understand.
What is the "real" feature?

Another Look at Formants
Across-frequency shape (formants) (Shannon et al., Science, 1995).
[Figure: repeated spectrogram panels, 0-4000 Hz, showing across-frequency shape.]

Information in Speech: the Beginning
Harvey Fletcher
– Experimented by removing parts of the spectrum and measuring how intelligible the speech was.
– Foundational research important for the telephone, audiometry, and hearing aid fitting.

1/3rd Octave-Band Importance Function (ANSI S3.5)
[Figure: band importance values (roughly 0.01 to 0.1) plotted at 1/3-octave band centre frequencies from 160 Hz to 8000 Hz.]

Information in Speech
[Figure: spectrograms illustrating the importance of the 0-800 Hz, 800-1500 Hz, 1500-2200 Hz, and 2200-4000 Hz regions.]

General Finding
Fletcher found that in order to understand speech, you need to hear speech features (e.g., formants), which are distributed across the spectrum. This led to the development of the Articulation Index (now the Speech Intelligibility Index), which predicts intelligibility from a weighted sum of the audibility (or S/N ratio) in a number of frequency bands.

How Does this Relate to Hearing Measurement?
– Fletcher divided the frequency spectrum into 20 bands.
– Each band corresponded to about 1 mm on the basilar membrane, or about 114 IHC per mm (1 band).
– Each speech feature required 4.65 bands, or 4.65 mm (i.e., 1 feature every 4.65 mm, or about 530 hair cells).
– An octave is about 5 mm on the basilar membrane, so there is about 1 feature per octave (e.g., 1 at 250 Hz, 1 at 500 Hz, etc.).
– You need about 4 features per speech sound, so you need about 4 audible octaves per speech sound.
– Speech information is distributed: each speech sound (i.e., phoneme) requires 4 audible octaves or 4 speech features (it usually doesn't matter which 4).
– Intelligibility is best predicted by audibility across a wide range.
The audiogram was designed for this.

Quick Summary
1. Speech perception requires audibility across a wide range of frequencies.
2. Very little detail is needed in any frequency region, so the comparison across frequencies is most important.

Formants vs. Harmonics
/a/ vowel: formant or "shape" information reflects the movements of the articulators, and this information is distributed across frequency.
Mouth Movements (Arnt Maasø, U Oslo).

Wide Patterns, Not Local Frequency Details
The auditory system mostly processes sound within narrow frequency channels. This doesn't make sense if what matters are large differences between frequency channels.

Tonotopicity Persists from Cochlea to Cortex (PAC)
Optical imaging of changes in blood flow with tonal stimulation in the chinchilla (Harrison et al., 1998).

A Simple Auditory Processing Schematic
Tonotopic processing channels run from the cochlea through the brainstem to the cortex (PAC). Across-frequency processing must therefore occur above primary auditory cortex!
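To make the "weighted sum of audibility" idea above concrete, here is a simplified, illustrative sketch of an Articulation Index / SII-style calculation in Python. The band frequencies, importance weights, and levels are made-up placeholders rather than the ANSI S3.5 values, and the audibility rule (scaling band SNR from -15 to +15 dB onto 0-1) captures only the core idea of the standard, not a full implementation.

```python
# Simplified Articulation Index / SII-style sketch (illustrative values only).
# Intelligibility index = sum over bands of (importance weight x band audibility).
import numpy as np

# Placeholder octave-band centre frequencies and importance weights (sum to 1.0).
bands_hz   = np.array([250, 500, 1000, 2000, 4000])
importance = np.array([0.15, 0.25, 0.25, 0.20, 0.15])

# Example band levels (dB): speech spectrum level and the listener's effective
# noise floor (masking noise and/or hearing threshold, whichever is higher).
speech_db = np.array([55, 60, 58, 52, 48])
noise_db  = np.array([40, 45, 55, 60, 70])

snr_db = speech_db - noise_db
# Core SII idea: audibility in each band grows linearly from 0 at -15 dB SNR
# to 1 at +15 dB SNR.
audibility = np.clip((snr_db + 15.0) / 30.0, 0.0, 1.0)

sii = float(np.sum(importance * audibility))
print(f"Band audibilities: {np.round(audibility, 2)}")
print(f"Speech intelligibility index (0-1): {sii:.2f}")
```

With these example levels, audibility is lost in the high-frequency bands and the index falls to about 0.6, illustrating how intelligibility is predicted from audibility summed across bands rather than from detail within any one band.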
Across-Frequency Integration – Association Cortex
– Likely the planum temporale (association cortex).
– This area is always involved with tasks that require spectrotemporal integration (Griffiths & Warren, 2002, Trends in Neurosciences).
– This area gives rise to evoked potentials that depend on spectrotemporal integration (the MMN) (Hickok & Poeppel, 2005, Nature Neuroscience).
– The superior temporal sulcus is important for phonology; the planum temporale processes across-frequency changes.

Functions associated with the planum temporale (Griffiths & Warren, Trends Neurosci, 2002)
– duration sequences (vs. silence)
– harmonic complexes (vs. pure tones)
– frequency modulated (vs. pure tones)
– amplitude modulated (vs. steady noise)
– spectral motion (vs. stationary)
– pitch sequences (vs. silence)
– melodies (vs. noise)
– speech (vs. noise)
– speech (vs. complex non-speech)
– speech (vs. tones)
– consonant-vowel (vs. vowel)
– unvoiced consonant (vs. voiced)
– dichotic (vs. diotic)
– lip-reading (vs. meaningless facial movement)
– moving versus stationary sources
The PT is enlarged in humans on the left side. Speech sounds are processed in the brain, not the ear.

If what we need for speech is the overall formant pattern across frequency, why is the auditory system arranged to provide fine details in narrow frequency channels?

The Detail Puzzle
Why does the ear have such precise within-channel information (fine spectral and temporal resolution) if it isn't necessary for speech understanding? It allows hearing in difficult environments and lets us discriminate sounds.

Auditory Chimeras (Smith et al., Nature, 2002)
Shape across-frequency versus details: a chimera pairs the coarse detail (envelope) of one sound with the fine detail (fine structure) of another, e.g., the envelope of Sentence #1 with the fine structure of Sentence #2, two pieces of music, or speech paired with music.
The brain is for speech; the ear is for music. Seems like a lot of trouble just for music! (Chimpanzee forest sounds.)
What happens in noise when the details are missing? Chimeras pairing sentence envelopes with noise fine structure. And when the details are present: sentence envelopes with matching sentence fine structure.

Sound is All Around Us
Details are required for disentangling and tracking voices: auditory scene analysis.

The Healthy Ear
Detail provided by the healthy ear:
– fundamental frequency: information about voices and intonation
– spatialization: information about source location
– harmonicity: the gluing together of a voice
Does music exploit our scene analysis system?
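The chimera construction described above (Smith et al., 2002) can be sketched in a few lines of signal processing: split each sound into frequency bands, use the Hilbert transform to separate each band into a slow envelope and a rapidly varying fine structure, then recombine the envelope of sound A with the fine structure of sound B. The band edges and filter order below are arbitrary choices for illustration, not the parameters used in the original study, and the two "sentences" are stand-in test tones.

```python
# Sketch of an auditory chimera: envelopes of sound A carried on the
# fine structure of sound B (band-by-band Hilbert decomposition).
import numpy as np
from scipy.signal import butter, sosfiltfilt, hilbert

def chimera(a, b, fs, band_edges=(80, 500, 1500, 4000)):
    """Return a signal with A's band envelopes imposed on B's fine structure."""
    n = min(len(a), len(b))
    a, b = a[:n], b[:n]
    out = np.zeros(n)
    for lo, hi in zip(band_edges[:-1], band_edges[1:]):
        sos = butter(4, [lo, hi], btype="bandpass", fs=fs, output="sos")
        a_band, b_band = sosfiltfilt(sos, a), sosfiltfilt(sos, b)
        env_a = np.abs(hilbert(a_band))              # coarse detail (envelope) of A
        fine_b = np.cos(np.angle(hilbert(b_band)))   # fine structure of B
        out += env_a * fine_b
    return out

if __name__ == "__main__":
    fs = 16000
    t = np.arange(fs) / fs
    # Stand-ins for two recorded sentences: amplitude-modulated tones.
    sent1 = np.sin(2 * np.pi * 300 * t) * (0.5 + 0.5 * np.sin(2 * np.pi * 4 * t))
    sent2 = np.sin(2 * np.pi * 900 * t) * (0.5 + 0.5 * np.sin(2 * np.pi * 7 * t))
    chi = chimera(sent1, sent2, fs)
    print("Chimera length (samples):", len(chi))
```

Swapping which input supplies the envelope and which supplies the fine structure reproduces the demonstration above: with enough bands, the envelope (shape) determines which speech is heard, while the fine structure carries the pitch and music-like qualities.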
Consequences of Hearing Loss
Hearing loss adversely impacts the encoding of fine details:
– OHC damage reduces tuning of the peripheral auditory system (Evans & Harrison, 1976, J Physiol, 252, 43-44).
– OHC damage can reduce the range of frequencies at which phase-locking can occur, and its precision (Woolf, Ryan & Bone, 1981, Hear Res, 4, 335-346).
Hearing loss produces a disproportionate problem with hearing in noise. Disentangling and tracking sounds is called "scene analysis". Hearing loss creates two problems:
– an audibility problem
– a scene analysis problem

Understanding Speech Perception
– Speech information is largely "shape" information (e.g., formants), which tells us about mouth movements.
– Each phone requires 4 audible octaves or 4 speech features.
– Intelligibility in quiet is best predicted by audibility across the frequencies (details are not necessary).
– Fine details are required for hearing in noise, tracking voices in space, and disentangling overlapping sounds.
– This is where hearing loss creates the greatest problems; it is an auditory scene analysis problem.

Going beyond English
Is it really true that fine structure (e.g., pitch and harmonics) is only important for hearing in noise, localizing sounds, and music perception? Consider linguistic diversity:
– There are nearly 7000 languages in the world.
– More than half (including many in Africa and Southeast Asia) use pitch distinctions for lexical identity, e.g., Mandarin, spoken by more than one billion people.
– This has an impact on CI programming and on …

Diagnosis of Speech Limitations
Primarily based on audibility (pure tone tests). What are other ways to determine speech understanding problems? Speech tests; asking the patient. Is this the whole purpose of audiometric testing?

Part Two: Physiologic Diagnosis
The audiologist as a point of entry in health care. Diagnosis of impairment; the non-impaired system.

Understanding Structure and Function
The auditory system is complex, and many types of problems can lead to hearing dysfunction:
– acoustic system (external ear)
– mechanical system I (middle ear)
– mechanical system II (cochlea): active and highly non-linear
– metabolic system (cochlea)
– neural system (hair cell transduction, auditory nerve, brainstem, cortex)

The Outer Ear
[Figure: labelled diagram showing the manubrium, umbo, pars flaccida, light reflex, pars tensa, incus, and annular ligament.]

Figure-only slides (titles): Outer Ear Effects; Role of Middle Ear; MAP and MAF; Human Hearing; In the Cochlea; The Travelling Wave; Basilar Membrane Pulled Upwards; Tip Links Open Ion Channels; Action Potential is Generated; How Frequency Selective?
Role of Outer Hair Cells; Neural Tuning Curves; Cochlear Nerve Responses to Pure Tones (figure-only slides).

Introduction to Diagnosis
Diagnosis is Detective Work
Example 1:
– used to get dizzy, but no longer
– tinnitus in the right ear
– asymmetric hearing loss (right worse)
– reflexes absent with sound to the right ear
– ABR I-III and I-V intervals extended (III-V normal)
– speech perception poorer at high levels
– reflex decays by more than half of its strength within 3 seconds (also suggests neural abnormality)
Example 2:
– young female athlete
– hearing suddenly poorer in the left ear after trauma
– very high tympanic compliance
– word discrimination corresponds to the hearing loss
Example 3:
– young woman with an infant
– hearing deteriorating over the last few years (left ear only)
– moderate mixed loss left (normal right), although BC thresholds normal everywhere except circa 2 kHz
– ipsilateral reflexes present right, elevated left
– word discrimination corresponds to the hearing loss
Example 4:
– hears own footsteps
– hears eyeballs moving
– dizzy when loud sounds are presented
– difficulty began after mild trauma to the head
– otoacoustic emissions present bilaterally
Example 5:
– rural Nova Scotia
– unilateral right hearing loss with a notch at 4 kHz
– no difficulty with speech except in noise; scores good at low and high levels
– reflexes present bilaterally, elevated right
– prolonged I-V interval in the ABR on the right side
– no abnormal reflex decay
Example 6:
– mild to moderate high-frequency hearing loss
– low tympanic compliance
– chronic otitis media
– draining ears
– vertigo
Example 7:
– child doing poorly in school
– word discrimination scores normal
– thresholds normal
– ABR normal
– reflexes normal
– no abnormal reflex decay
– abnormal cortical evoked response
– difficulty with competing speech

Types of Assessment: Overview
Assessment types:
– Acoustic immittance: tympanometry; acoustic reflexes
– Electrophysiology: EcochG; ABR, PAMR; MLR, P1-N1-P2, MMN, P300; ASSR
– Audiometry: pure tone audiometry; speech audiometry; masking; psychoacoustic tests
– Otoacoustic emissions: TEOAE; DPOAE