Hearing, Speaking, and Making Music


Summary

This document is a presentation on hearing, speaking, and making music. It provides an overview of the human auditory system, including the anatomy, physiology, and processes involved in the perception and understanding of sound, and it addresses the relationship between music and language processing, exploring the roles of different brain regions and pathways in each.

Full Transcript

TOPIC 6: Hearing, Speaking, and Making Music
AUDITORY AND LANGUAGE PROCESSES

How Do We Hear, Speak, and Make Music?
- Sound Waves: Stimulus for Audition
- Functional Anatomy of the Auditory System
- Neural Activity and Hearing
- Anatomy of Language and Music
- Disorders of Language

Hearing
Hearing (audition), the ability to construct perceptual representations from pressure waves in the air, includes:
a. Sound localization – identifying the source of air-pressure waves;
b. Echolocation – identifying and locating objects by bouncing sound waves off them; and
c. Complexity – the ability to detect complex sounds, which allows us to hear speech and music.
- Changes in air pressure cause sound waves.
- Auditory receptors detect the frequency, amplitude, and complexity of air-pressure waves.

Language and Music
- The oral language of every known culture follows similar basic structural rules, and people in all cultures make and enjoy music.
- Language and music allow us to organize ourselves and to interact socially.

Physical Properties of Sound Waves

1. Frequency
- The number of cycles that a wave completes in a given amount of time
- Measured in hertz (Hz), or cycles per second
- Corresponds to our perception of pitch:
  - Low pitch = low frequency (fewer cycles per second)
  - High pitch = high frequency (many cycles per second)
- Differences in frequency are heard as differences in pitch.
- Each note in a musical scale has a different frequency.

Hearing Ranges Among Animals
The frequency ranges of whales, dolphins, and dogs are extensive. Humans' hearing range is broad, but we do not perceive many sound frequencies that other animals can both make and hear.
- Very low-frequency sound waves travel long distances in water. Whales produce them for underwater communication over hundreds of miles.
- High-frequency sound waves echo and form the basis of sonar. Dolphins produce them in bursts, listening for echoes from objects.
- Bats use echolocation to navigate and to find insects (food). They produce sound waves above the human hearing threshold (ultrasound), which bounce off objects in the environment and return to their ears.

2. Amplitude
- The intensity, or loudness, of a sound, usually measured in decibels (dB)
- The magnitude of change in air-molecule density
- Corresponds to our perception of loudness:
  - Soft sound = low amplitude
  - Loud sound = high amplitude

Sound Wave Amplitude
The human nervous system is sensitive to soft sounds. People regularly damage their hearing through exposure to very loud sounds or through prolonged exposure to loud sounds. Prolonged exposure to sounds louder than 100 decibels is likely to damage human hearing.
- Rock bands routinely play music that registers higher than 120 decibels and sometimes as high as 135 decibels.
- Because of the high sound levels, hearing loss is common in symphony musicians.
- Prolonged listening to loud music through headphones or earbuds is responsible for significant hearing loss in many young people.

3. Complexity
- Pure tones: sounds with a single frequency
- Complex tones: sounds with a mixture of frequencies
- Complexity corresponds to our perception of timbre, or uniqueness: how we can distinguish between a trombone and a violin playing the same note. (A short sketch illustrating all three properties follows below.)
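To make the three physical properties concrete, here is a minimal sketch (not from the slides) that synthesizes a pure tone and a complex tone sharing the same pitch, and converts a sound pressure to decibels. The 440 Hz fundamental, the harmonic weights, and the 20-micropascal decibel reference are standard illustrative values, assumed for this example.

```python
import numpy as np

SAMPLE_RATE = 44_100   # samples per second (CD quality)
DURATION = 1.0         # seconds
t = np.linspace(0, DURATION, int(SAMPLE_RATE * DURATION), endpoint=False)

# Pure tone: a single frequency (here 440 Hz, concert A).
pure = np.sin(2 * np.pi * 440 * t)

# Complex tone: the same 440 Hz fundamental plus weaker harmonics.
# The mixture of frequencies is what we perceive as timbre; different
# harmonic weights would sound like different instruments.
harmonic_weights = {440: 1.0, 880: 0.5, 1320: 0.25}  # illustrative values
complex_tone = sum(w * np.sin(2 * np.pi * f * t)
                   for f, w in harmonic_weights.items())
# Both arrays share the same pitch (440 Hz fundamental) but differ in
# timbre; they could be written to a WAV file or inspected with an FFT.

# Amplitude -> decibels: sound pressure level is 20 * log10(p / p0),
# with p0 = 20 micropascals, the usual threshold-of-hearing reference.
def spl_db(pressure_pa: float, p0: float = 20e-6) -> float:
    return 20 * np.log10(pressure_pa / p0)

print(spl_db(2.0))    # ~100 dB: the level the slides flag as damaging
print(spl_db(20e-6))  # 0 dB: the threshold of hearing
```

Note the logarithmic scale: every tenfold increase in pressure adds 20 dB, which is why the jump from 120 dB to 135 dB at a rock concert is far larger than it sounds numerically.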
Perception of Sound
- A pebble hitting water (making waves) is much like a tree falling to the ground: the waves that emanate from the pebble's entry point are like the air-pressure waves that emanate from the place where the tree strikes the ground.
- The frequency of the waves determines the pitch of the sound heard by the brain.
- The height (amplitude) of the waves determines the sound's loudness.

Perception of Sound
- The auditory system converts the physical properties of sound-wave energy into electrochemical neural activity that travels to the brain.
- Sounds are products of the brain.
- Our sensitivity to sound waves is extraordinary: we can detect the displacement of air molecules of about 10 picometers (1 picometer = one-trillionth of a meter).

Perception of Sound
- Each frequency change in air pressure (each different sound wave) stimulates different neurons in your auditory system.
- Your brain interprets sounds to obtain information about events in your environment, and it analyzes a sound's meaning.
- Your use of sound to communicate with other people through language and music clearly illustrates these processes.

Properties of Language and Music as Sounds
- Language and music both convey meaning and evoke emotion.
- The left temporal lobe analyzes speech for meaning.
- The right temporal lobe analyzes musical sounds for meaning.
- Language facilitates communication.
- Music helps us to regulate our emotions and to affect the emotions of others.

Functional Anatomy of the Auditory System
- The ear collects sound waves from the surrounding air.
- It converts mechanical energy to electrochemical neural energy.
- This neural activity is routed through the brainstem to the auditory cortex.

Structure of the Ear
Three anatomical divisions of the human ear:
- Outer ear – pinna and external ear canal
- Middle ear – eardrum and the ossicles: the hammer, anvil, and stirrup
- Inner ear – oval window and cochlea
The cochlea contains the:
- Hair cells – the sensory receptor cells
- Basilar membrane
- Organ of Corti

Structure of the Ear: Processing Sound Waves
- Pinna: a funnel-like external structure designed to catch sound waves in the surrounding environment and deflect them into the ear canal.
- External ear canal: amplifies sound waves somewhat and directs them to the eardrum, which vibrates in accordance with the frequency of the sound wave.

Transducing Sound Waves into Neural Impulses
- Pressure waves in the air are amplified and transformed a number of times in the ear.
- The frequency of a sound is transduced (converted) by the basilar membrane:
  - Thick at the base; tuned for high frequencies
  - Thin and wide at the apex; tuned for low frequencies

In short…
- Sounds are caught in the outer ear and amplified by the middle ear.
- In the inner ear they are converted to action potentials on the auditory pathway going to the brain.
- We interpret those action potentials as our perception of sound.

Hearing
- Tonotopic representation is the spatial arrangement of where sounds of different frequencies are processed in the brain. Tones close to each other in frequency are represented in topologically neighbouring brain regions.
- Different points on the basilar membrane and in the auditory cortex represent different sound frequencies. (A sketch of this position-to-frequency mapping follows below.)
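The basilar membrane's position-to-frequency mapping is often summarized with Greenwood's empirical function, a standard psychoacoustic model that the slides do not name; the human constants used below (A = 165.4, a = 2.1, k = 0.88) are the commonly cited values and should be read as an illustrative assumption.

```python
import numpy as np

def greenwood_frequency(x: float) -> float:
    """Characteristic frequency (Hz) at fractional distance x along the
    human basilar membrane, from the apex (x = 0) to the base (x = 1).

    Greenwood's empirical function: f = A * (10**(a*x) - k), with the
    commonly cited human constants A = 165.4, a = 2.1, k = 0.88.
    """
    return 165.4 * (10 ** (2.1 * x) - 0.88)

# Tonotopic map: low frequencies at the apex, high frequencies at the base.
for x in np.linspace(0.0, 1.0, 6):
    print(f"{x:.1f} of the way to the base -> ~{greenwood_frequency(x):,.0f} Hz")
# 0.0 -> ~20 Hz (apex, low pitch) ... 1.0 -> ~20,700 Hz (base, high pitch)
```

The exponential form is the point: equal steps along the membrane correspond to equal ratios of frequency, which matches the slides' claim that tones close in frequency are handled by neighbouring regions.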
Auditory Pathways
- We are designed to pick up sound frequencies: we have hairlike receptor cells in the cochlea of the inner ear.
- As the middle ear responds to external sound waves, it causes vibration in the fluid (and therefore of the receptors) of the inner ear.
- These receptors connect with the auditory nerve.
- The auditory nerve from each ear extends ipsilaterally to the cochlear nuclei of the medulla (the lowest part of the brainstem).
- From here, each pathway branches to project auditory information to both the ipsilateral and the contralateral superior olivary nuclei of the medulla.
- So each hemisphere receives input from both ears, giving us bilateral representation of sound.
- The auditory pathways then course through the lower brainstem and ascend through the thalamus (a relay station).
- From the thalamus, sounds are projected to the primary auditory cortex in the temporal lobe, commonly referred to as Heschl's gyrus.
- REMEMBER: Heschl's gyrus is sometimes larger in the right hemisphere.
- The primary auditory cortex processes several elements of sound, e.g., frequency, loudness, duration, and change.

Detecting Loudness
- The greater the amplitude of the incoming sound waves, the higher the firing rate of bipolar cells in the cochlea.
- More intense sound waves trigger more intense movements of the basilar membrane, causing more shearing action of the hair cells, which leads to more neurotransmitter release onto the bipolar cells.

Detecting Location
- We estimate the location of a sound both by taking cues derived from one ear and by comparing cues received at both ears.
- Each cochlear nerve synapses on both sides of the brain to locate a sound source.
- Neurons in the brainstem compute the difference in a sound wave's arrival time at each ear: the interaural time difference (ITD).

Detecting Location
- Another mechanism for source detection is relative loudness on the left and the right: the interaural intensity difference (IID).
- The head acts as an obstacle to higher-frequency sound waves, which do not easily bend around it. As a result, higher-frequency sound waves on one side of the head are louder than on the other.

Detecting Location
- Cells in each hemisphere receive inputs from both ears and calculate the difference in arrival times between the two ears.
- It is more difficult to compare the inputs when sounds move from the side of the head toward the middle, because the difference in arrival times becomes smaller.
- When we detect no difference in arrival times, we infer that the sound is coming from directly in front of us or directly behind us.

Locating a Sound
- The interaural time difference (ITD) may be short, but the auditory system can discriminate it and fuse the dual stimuli so that we perceive a single clear sound coming from the left. (A worked sketch of the ITD calculation follows below.)
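As a rough illustration of how small the interaural time difference is, here is a sketch using the classic Woodworth spherical-head approximation, which is not from the slides; the head radius (8.75 cm) and speed of sound (343 m/s) are assumed, typical values.

```python
import math

SPEED_OF_SOUND = 343.0  # m/s in air at about 20 degrees C
HEAD_RADIUS = 0.0875    # m; a typical adult value (assumption)

def interaural_time_difference(azimuth_deg: float) -> float:
    """Approximate ITD in seconds for a distant source at the given
    azimuth (0 deg = straight ahead, 90 deg = directly to one side),
    using the Woodworth spherical-head model:
        ITD = (r / c) * (theta + sin(theta))
    """
    theta = math.radians(azimuth_deg)
    return (HEAD_RADIUS / SPEED_OF_SOUND) * (theta + math.sin(theta))

for az in (0, 15, 45, 90):
    itd_us = interaural_time_difference(az) * 1e6
    print(f"azimuth {az:>2} deg -> ITD ~ {itd_us:.0f} microseconds")
# 0 deg  -> 0 us (no difference: straight ahead or directly behind)
# 90 deg -> ~656 us, roughly the maximum ITD for a human head
```

At 0° the ITD vanishes, matching the slides' point that arrival time alone cannot distinguish a sound in front from one behind; even the maximum, near 90°, is only about 650 microseconds, yet the auditory system discriminates it and fuses the two inputs into a single perceived sound.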
Detecting Patterns in Sound
- Music and language are perhaps the primary sound-wave patterns that humans recognize: music is processed mainly in the right hemisphere, language mainly in the left.
- There are ventral and dorsal cortical pathways for audition:
  - The ventral pathway decodes spectrally complex sounds (auditory object recognition), including the meaning of speech sounds for people.
  - The dorsal auditory stream integrates auditory and somatosensory information to control speech production (audition for action).

Anatomy of Language and Music

Processing Language
- Musical ability is generally a right-hemisphere specialization complementary to language ability, which is in the left hemisphere in most people.
- Does the brain have a single system for understanding and producing any language, regardless of its structure, or are very different languages processed in different ways?

Processing Music
- Music processing is largely a right-hemisphere specialization.
- The left hemisphere plays some role in certain aspects of music processing, such as those involved in making music: recognizing written music, playing instruments, and composing.

Music as Therapy
- Music is used as a treatment for mood disorders such as depression.
- The best evidence of its effectiveness lies in studies of motor disorders, such as stroke and Parkinson disease.
- Listening to rhythm activates the motor and premotor cortex and can improve gait and arm training after stroke.
- Parkinson patients who step to the beat of music can improve their gait length and walking speed.

Uniformity of Language Structure
- All languages have common structural characteristics stemming from a genetically determined constraint (Chomsky, Pinker):
  1. Language is universal in human populations.
  2. Humans learn language early in life and seemingly without effort. There is likely a sensitive period for language acquisition that runs from about 1 to 6 years of age.
  3. Languages have many structural elements in common, for example syntax and grammar.

Localization of Language in the Brain
- Broca's area: the anterior speech area in the left hemisphere that functions with the motor cortex to produce the movements needed for speaking.
- Wernicke's area: the posterior speech area at the rear of the left temporal lobe that regulates language comprehension; also called the posterior speech zone.

Neurology of Language
Wernicke's model of speech recognition: stored sound images are matched to spoken words in the left posterior temporal cortex. Speech is produced via the arcuate fasciculus, which connects Wernicke's area and Broca's area.

Higher Auditory Processing: Speech and Language
- To speak requires that we differentiate between speech sounds such as vowels and consonants (e.g., vowels have slightly different frequencies from consonants).
- Speech also means we need to be understood: to speak in a manner such that the sounds we produce make sense.
- Language also means we have to make sense of what we hear: understanding word fragments as well as semantics.

Read this slide for interest
- Once the primary auditory cortex has registered a sound, the secondary auditory processing area, known as Wernicke's area, makes sense of that sound.
- This secondary auditory processing area connects sound from the primary auditory areas to word meanings stored in the cortex.
- We also need other cortical areas to help integrate the understanding of individual words into phrases, and to link spoken words to symbols so we can understand what we read.
- We also need to understand tone, humour, puns, sarcasm, and so forth.

Expressive speech links to Broca's area, which is concerned with aspects of speech planning and plays a role in the grammatical arrangement of words. Broca's and Wernicke's areas are linked by a band of white-matter fibres called the arcuate fasciculus, which allows interaction between the two areas. The left hemisphere is dominant for speech: the left cerebral cortex processes speech sounds, while the right cerebral cortex processes non-speech and environmental sounds. The left hemisphere understands rhythm for both speech and music, as it codes for the sequence of sounds.
Right-Hemisphere Contributions to Language
- Good auditory comprehension of language.
- Hemispherectomy (removal of a hemisphere):
  - If the left hemisphere is removed early in life, the right hemisphere can acquire language.
  - If the left hemisphere is removed in adulthood, the result is severe deficits in speech, but auditory comprehension remains good.
  - Removal of the right hemisphere produces subtle changes in language comprehension.

For language processing, the left hemisphere specialises in processing word sounds, semantics, and the grammatical rules of language; the right hemisphere plays a role in the emotional intention of both vocalisation and understanding. Some people have a bilateral representation of speech (more women than men, for example). So a right-hemisphere stroke may manifest as someone being unable to understand metaphors and jokes.

Neural Connections Between Language Zones
- Lesion studies in humans (stroke patients)

Wernicke–Geschwind Model
The three-part model proposes that comprehension is (1) extracted from sounds in Wernicke's area and (2) passed over the arcuate fasciculus pathway to (3) Broca's area to be articulated as speech.

Dual Language Pathway
- The double-headed arrows on both paired pathways indicate that information flows both ways between temporal and frontal cortex.
- Information from vision enters the auditory language pathways and contributes to reading.
- Information from body-sense regions of the parietal cortex also contributes to touch-based language such as Braille.

DISORDERS OF LANGUAGE: APHASIA
- Aphasia is a disturbance of language usage or comprehension. It may impair speaking, writing (agraphia), reading (alexia), gesture, and comprehension.
- It is a disturbance linking speaking to thinking.
- It does not include disorders that result from:
  - loss of sensory input, especially vision and hearing;
  - motor paralysis or incoordination of the mouth (anarthria) or hand (for writing).
- It is most often caused by vascular disorders such as strokes/CVAs, or by tumour or brain trauma.

Disorders of Language
- Aphasia: the inability to speak or comprehend language despite having normal comprehension or intact vocal mechanisms.
  - Broca's aphasia is the inability to speak fluently despite having normal comprehension and intact vocal mechanisms.
  - Wernicke's aphasia is the inability to understand or produce meaningful language even though the production of words is still intact.

Three Categories of Aphasia
1. Fluent aphasia (Wernicke's aphasia): fluent speech but difficulties either in auditory verbal comprehension or in the repetition of words, phrases, or sentences spoken by others.
2. Nonfluent aphasia (Broca's aphasia): difficulties in articulating but relatively good auditory verbal comprehension.
3. Pure aphasia: selective impairments in reading, writing, or recognizing words in the absence of other language disorders.

Fluent Aphasias
- Impairment in the input or reception of language.
- Wernicke's aphasia, or sensory aphasia:
  1. Deficits in classifying sounds or comprehending words
  2. Word salad: the person can speak but confuses phonetic characteristics, so intelligible words are strung together randomly
  3. Cannot write, because the person cannot discern the phonemic characteristics of words

Nonfluent Aphasias
- Broca's aphasia, or expressive aphasia:
  - Can understand speech
  - Labors hard to produce speech
  - Can be mild or severe

Pure Aphasias
- Alexia: inability to read
- Agraphia: inability to write
- Word deafness: cannot hear or repeat words

Localization of Lesions in Aphasia
Why is studying the neural basis of language complex?
1. Most of the brain takes part in language in one way or another.
2. Most patients who add information to studies of language have had strokes, usually of the middle cerebral artery.
3. Immediately following a stroke, symptoms are generally severe, but they improve considerably as time passes.
4. Aphasia syndromes described as nonfluent (Broca's) or fluent (Wernicke's) have many varied symptoms, each of which may have a different neural basis.
