Physio C8 – Acoustic System
Summary
This document discusses the acoustic system, focusing on sound waves, their characteristics, and how humans perceive sound. It covers the identification and localization of sounds, their frequency, intensity, and timbre. The document also touches upon the mechanisms of sound perception in the human ear.
PHYSIO C8 – Acoustic system

1. A sound wave
The somatic sensory system allows the animal world to perceive the immediate surroundings directly, while the special senses (e.g., vision, hearing, …) allow perception even from quite far away. Hearing basically consists of detecting a special type of energy in the form of a wave, in other words a progressive change in pressure leading to a sinusoidal movement of air molecules. Sound waves are made up of alternating regions of compression and rarefaction of air molecules, and the 'hearing' process is the neural perception of this sound energy. This process involves two main aspects:
- The identification of the sound (what exactly we are listening to)
- The localization of the sound (where it comes from)
Sound intensity is measured in decibels (dB). In the wave there are two extremes, a peak of compression and a trough of rarefaction, with the actual modulation of air-molecule density in between. This peculiar form of energy needs to be an adequate stimulus for the inner sensory system, and so it must be transduced. The classic example involves a tuning fork, an instrument capable of producing a pure sound. Its oscillation induces the movement of the air particles: even though the apparent perturbing movement is simply on or off, microscopically the particles start from a homogeneous concentration and then follow a continuous sinusoidal trend. In this way the change in pressure enters the ear and the elaboration process takes place.

2. The characterization of a sound wave
The three main features of a sound are:
- The PITCH (or tone), which depends on the frequency of the wave. It is the major attribute of the sound and is clearly perceivable. Additionally, each voice has its own natural pitch.
- The INTENSITY (or loudness), which depends on the amplitude of the wave. It is expressed in decibels.
- The TIMBRE (or quality), which corresponds to the additional overtones contained in a complex sound wave.
Sounds are usually complex, composed of a predominant wave standing for the pitch and known as the fundamental frequency (FF), and of the overtones, i.e., the multiples of the FF, which have a lower amplitude and build up the timbre. Overtones are the additional frequencies that are superimposed on the fundamental pitch or tone. A tuning fork has a pure tone, but most tones lack purity. The overtones are responsible for the characteristic differences between voices. Timbre enables the listener to distinguish the source of the sound, because each source produces a different pattern of overtones.
The difference between a sound and a noise:
- in a sound, the overtones have frequencies that are multiples of the fundamental one (the pitch tone, which can be recognized because it has the maximum amplitude);
- a noise has no periodicity, and thus cannot be analyzed with a spectral analysis.

3. The audible spectrum
Humans with normal hearing are able to detect sounds that fall within a frequency range from about 20 Hz to 20 kHz, with the upper limit dropping off somewhat in adulthood. Within this range it is possible to stimulate our nervous system, but the most sensitive band lies between 1000 and 4000 Hz. Ultrasounds and infrasounds cannot be detected without the appropriate instruments. Humans need, for example, more decibels to perceive the lowest frequencies (such as 20 Hz): different sensitivities correspond to different frequencies of the sound in Hz.
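Before moving on, a minimal sketch of how intensity in decibels relates to the physical pressure amplitude of the wave. The formula (sound pressure level relative to a 20 µPa reference) is the standard acoustic definition; the example pressure values are rough illustrative assumptions, not measurements from this lecture.

```python
import math

P_REF = 20e-6  # Pa, standard reference pressure (roughly the mid-frequency hearing threshold)

def sound_pressure_level(p_pa):
    """Sound pressure level in dB SPL for a pressure amplitude p_pa (pascals)."""
    return 20.0 * math.log10(p_pa / P_REF)

# Illustrative pressure amplitudes (rough orders of magnitude, assumed for the example):
examples = [
    ("reference pressure, 20 uPa", 20e-6),
    ("conversation-level pressure, ~0.02 Pa", 0.02),
    ("very loud sound, ~2 Pa", 2.0),
]
for label, p in examples:
    print(f"{label}: {sound_pressure_level(p):6.1f} dB SPL")
# 0 dB SPL corresponds to the reference pressure itself, and every 20 dB
# step is a 10-fold increase in pressure amplitude. Because sensitivity
# depends on frequency, a 20 Hz tone must reach a much higher SPL than a
# 1000-4000 Hz tone before it is perceived.
```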
4000 Hz is the value at which humans possess the highest sensitivity, and this is also the reason why the speech area lies around that value (humans probably developed speech in that range as a consequence). The graph above shows the tuning (audibility) curve: it demonstrates that human ears can detect sound waves with frequencies from 20 to 20,000 cycles per second, or hertz (Hz), but are most sensitive to frequencies between 1000 and 4000 Hz. 0 dB is the minimum intensity detectable by our acoustic system; from then on there can be higher intensities, which will be perceived as increasing loudness. The range of audible frequencies is not the same at every dB level: to hear frequencies at the extremes of the range, the loudness must also be increased. So, humans are more sensitive to some frequencies and less sensitive to others.

I. The acoustic system
The acoustic system is made of different structures:
- External ear: collection of sounds; sound localization
- Middle ear: amplification; protection
- Internal ear: transduction
- Superior olive: the structure thanks to which it is possible to localize the sound source
- Auditory cortex: the final target, where humans actually experience the stimulus

1. External ear
The external ear, which consists of the pinna, concha, and auditory meatus, gathers sound energy and focuses it on the eardrum, or tympanic membrane. It boosts the sound pressure 30- to 100-fold for frequencies around 3 kHz via a passive resonant effect occurring in the auditory meatus. Due to the shape of the pinna and the concha, the external ear filters different sound frequencies, providing cues about the "elevation" of the sound source.

2. Middle ear
The specialized receptor cells for sound are located in the fluid-filled inner ear. The airborne sound waves must be channeled and transferred into the inner ear in a way that compensates for the loss of sound energy that naturally occurs as sound passes from air to water. This function is performed by the external ear and the middle ear. The middle ear, also referred to as the tympanic cavity, consists of an irregularly shaped air-filled chamber embedded in the petrous portion of the temporal bone. It contains:
- Three small articulating bones, the auditory ossicles:
  o malleus
  o incus
  o stapes
- Two miniature skeletal muscles:
  o the tensor tympani muscle, which attaches to the malleus
  o the stapedius muscle, which attaches to the stapes
- Two membrane-covered foramina in the bone:
  o the oval window
  o the round window
- Two additional openings:
  o the Eustachian (pharyngeal, auditory) tube, which permits communication between the middle ear and the nasopharynx
  o the communication to the mastoid air cells
- The chorda tympani nerve: a branch of the facial nerve (CN VII), which passes through but has no function in the middle ear.
The mechanical arrangement of the ossicles increases the force exerted on the oval window by about 20 times (specific total amplification factor = 21.6 times) compared with what it would be if the airborne sound wave struck the oval window directly (a rough check of this factor is sketched below).

Protective theory
The afferent stimulus arrives with a high intensity of 85 dB (if in the frequency range of 500-4000 Hz) or 65 dB (higher-band sound). The efferent action is the contraction of the stapedius, with a 40-80 ms delay with respect to the sound arrival. The contraction stiffens the stapes, so that the amplification of the sound is modulated.
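Returning to the ossicular amplification factor quoted above, here is the rough check promised there. The two classically cited mechanisms are the tympanic-membrane-to-oval-window area ratio and the lever action of the ossicular chain; the numeric values used are common textbook approximations added here as assumptions, not figures from this lecture.

```python
# Back-of-envelope check of the ossicular amplification factor.
# The numerical values below are common textbook approximations
# (assumptions added for illustration; not given in the lecture).

TYMPANIC_AREA_MM2 = 55.0      # effective area of the tympanic membrane
OVAL_WINDOW_AREA_MM2 = 3.2    # area of the oval window (stapes footplate)
OSSICULAR_LEVER_RATIO = 1.3   # lever advantage of the malleus-incus arm

def middle_ear_pressure_gain(a_tm=TYMPANIC_AREA_MM2,
                             a_ow=OVAL_WINDOW_AREA_MM2,
                             lever=OSSICULAR_LEVER_RATIO):
    """Pressure gain = (area ratio) x (ossicular lever ratio)."""
    return (a_tm / a_ow) * lever

print(f"Estimated gain: ~{middle_ear_pressure_gain():.1f}x")
# Prints ~22x, in line with the 20x / 21.6x amplification factor quoted above.
```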
The muscles are always potentially active, as they allow the filtering of sounds from the outside environment (cars, …) but also from inside the body (movement of blood, coughing, …).

Eustachian tube
The Eustachian tube connects the middle ear to the nasopharynx. It serves to equalize the pressure in the middle ear with the atmospheric pressure present in the outer ear. At higher elevations, the atmospheric pressure in the external auditory meatus is lower than that of the middle ear cavity and causes the tympanic membrane to curve outwards (towards the external auditory meatus), stimulating pain receptors of the tympanic membrane. Since the wall of the Eustachian tube is normally collapsed, pressure differences can be relieved by swallowing, chewing, yawning, or coughing, which open the Eustachian tube, allowing equalization of pressure on the two sides of the tympanic membrane. The Eustachian tube also allows any fluid accumulation in the middle ear to drain into the nasopharynx.

3. Inner ear
As a sound of a particular frequency is set up in the cochlea by the oscillation of the stapes, the wave travels to the region of the basilar membrane that naturally responds maximally to that frequency. Looking at arrows 1 and 2 in the figure, it is possible to observe that the vibration takes two paths. Following path 1, it goes directly through the fluid to the round window, where it dissipates at the target point, inducing a vibration of the basilar membrane. Following path 2, it takes a sort of 'shortcut': the dashed lines stand for a specific region of the basilar membrane vibrating at a higher amplitude, finally transmitting to the round window as well. The energy of the pressure wave is dissipated by this vigorous membrane oscillation, so the wave dies out at the region of maximal displacement.
The cochlea is a small (~10 mm wide) coiled structure which, were it uncoiled, would form a tube about 35 mm long. Both the oval window and the round window, another region where the bone surrounding the cochlea is absent, are at the basal end of this tube. The cochlea is bisected from its basal end almost to its apical end by the cochlear partition, a flexible structure that supports the basilar membrane and the tectorial membrane.
- There are fluid-filled chambers on each side of the cochlear partition, called the scala vestibuli and the scala tympani.
- A distinct channel, the scala media, runs within the cochlear partition.
The cochlear partition does not extend all the way to the apical end of the cochlea. Instead, an opening known as the helicotrema joins the scala vestibuli to the scala tympani, allowing their fluid, known as perilymph, to mix. One consequence of this structural arrangement is that inward movement of the oval window displaces the fluid of the inner ear, causing the round window to bulge out slightly and deforming the cochlear partition.
The manner in which the basilar membrane vibrates in response to sound is the key to understanding how hearing is initiated. Measurements of the vibration of different parts of the basilar membrane, as well as of the discharge rates of individual auditory nerve fibers that terminate along its length, show that both are tuned: although they respond to a broad range of frequencies, they respond most intensely to a specific frequency.
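To make this notion of tuning concrete, here is a toy numerical model in which one point of the basilar membrane is treated as a simple damped resonator. This is a didactic simplification (the 2000 Hz characteristic frequency and the quality factor are arbitrary assumptions), not the actual cochlear biomechanics.

```python
import math

def resonator_response(f, f0, q=10.0):
    """Relative amplitude of a driven damped resonator with characteristic
    frequency f0 and quality factor q (toy model of one basilar-membrane place)."""
    r = f / f0
    return 1.0 / math.sqrt((1.0 - r * r) ** 2 + (r / q) ** 2)

place_cf = 2000.0  # hypothetical characteristic frequency of this place (assumed)
for f in (500, 1000, 2000, 4000, 8000):
    print(f"{f:>5} Hz -> relative response {resonator_response(f, place_cf):6.2f}")
# The place responds (weakly) to every frequency but far more strongly at its
# characteristic frequency: this is the sense in which the basilar membrane and
# the auditory nerve fibers attached to it are 'tuned'.
```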
Frequency tuning within the inner ear is attributable in part to the geometry of the basilar membrane, which is wider and more flexible at the apical end and narrower and stiffer at the basal end. Georg von Békésy, working at Harvard University, showed that a membrane that varies systematically in its width and flexibility vibrates maximally at different positions as a function of the stimulus frequency. Using models and human cochleas taken from cadavers, von Békésy found that an acoustical stimulus such as a sine tone initiates a traveling wave in the cochlea that propagates from the base toward the apex of the basilar membrane, growing in amplitude and slowing in velocity until a point of maximum displacement is reached. The point of maximum displacement is determined by the frequency of the stimulus, and the membrane keeps vibrating in that pattern as long as the tone endures. The points responding to high frequencies are at the base of the basilar membrane, and the points responding to low frequencies are at the apex, giving rise to a topographical mapping of frequency (i.e., tonotopy; a rough numerical sketch of this place-frequency map is given further below). Spectrally complex stimuli cause a pattern of vibration equivalent to the superposition of the vibrations generated by the individual tones making up that complex sound, thus accounting for the decompositional aspects of cochlear function mentioned earlier. This process of spectral decomposition appears to be an important strategy for detecting the various harmonic combinations that distinguish natural sounds with a periodic character, such as animal vocalizations, including vowels and some consonants in speech. The cochlea ends up being a 'frequency analyzer'.
Overtones of varying frequencies cause many spots along the basilar membrane to vibrate simultaneously, but less intensely than the fundamental tone, enabling the CNS to distinguish the timbre of the sound. This process is known as timbre discrimination. In the end, the brain collects all the components of the sound and finally builds it up.
The parameters of the stimulus to be coded are:
- Discrimination of the pitch, which depends on the portion of the basilar membrane vibrating at resonance.
- Discrimination of the intensity (loudness), which depends on the amplitude of vibration.
- Discrimination of the timbre, which depends on the varying frequencies of the overtones that cause many points along the basilar membrane to vibrate simultaneously but less intensely than the fundamental tone.

4. The cochlear mechano-electrical transduction
The cochlear mechano-electrical transduction occurs thanks to the organ of Corti, contained in the cochlea and constituting the sense organ for hearing. It is also important to highlight that the receptor cells contained in this area are not neurons: they are only able to 'talk' to sensory afferents and so communicate. The traveling wave initiates sensory transduction by displacing the sensory hair cells that sit atop the basilar membrane. Because the basilar membrane and the overlying tectorial membrane are anchored at different positions, the vertical component of the traveling wave is translated into a shearing motion between these two membranes. This motion bends the tiny processes, called stereocilia, that protrude from the apical ends of the hair cells, leading to voltage changes across the hair cell membrane. How the bending of stereocilia leads to receptor potentials in hair cells is considered in the following section.
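Before that, here is the rough place-frequency sketch referenced above. It uses Greenwood's function with the commonly quoted human constants; those values are assumptions added here for illustration and do not come from the lecture.

```python
def greenwood_cf(x):
    """Approximate characteristic frequency (Hz) at relative position x along
    the basilar membrane (0 = apex, 1 = base), using Greenwood's human
    constants (assumed values, not from the lecture)."""
    A, a, k = 165.4, 2.1, 0.88
    return A * (10 ** (a * x) - k)

for x in (0.0, 0.25, 0.5, 0.75, 1.0):
    print(f"position {x:4.2f} (0 = apex, 1 = base): ~{greenwood_cf(x):8.0f} Hz")
# Yields roughly 20 Hz at the apex and ~20 kHz at the base, matching the
# audible range and the base-to-apex tonotopy described above.
```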
a. Tectorial membrane
The tectorial membrane (horizontal) is fundamental for the transduction and is separate from the basilar membrane (oblique). Its attachment point (pivot point) is not the same as that of the basilar membrane, so it stays relatively still with respect to the basilar membrane. The tectorial membrane touches the outer hair cells, not the inner ones. The fluid movements in the inner ear induce the deflection of the basilar membrane, causing the mechanical interaction of the hair cells with the tectorial membrane. The receptor cells are bent back and forth when the oscillating basilar membrane shifts their position in relation to the tectorial membrane with which they are in contact.
Endolymph production occurs in the stria vascularis, an organ located rather far from the vessels surrounding the organ of Corti and from the receptors themselves; this remote location prevents vascular noise from mixing with the sounds to be detected. The stria vascularis works in the opposite manner compared with the sodium-potassium pumps of ordinary plasma membranes, since it maintains a high concentration of potassium outside (in the endolymph) and a high concentration of sodium inside. Once potassium gets out it cannot enter again, so the endolymph, unlike ordinary interstitial fluid, stays rich in K+. The stereocilia of the hair cells thus protrude into the endolymph, which is high in K+ and has an electric potential of +80 mV relative to the perilymph.

b. Hair cells
When the hair bundle is deflected toward the tallest stereocilium, cation-selective mechanotransduction channels open near the tips of the stereocilia, allowing K+ to flow into the hair cell down its electrochemical gradient. The resulting depolarization of the hair cell opens voltage-gated Ca2+ channels in the cell soma, allowing Ca2+ entry and the release of neurotransmitter onto the nerve endings of the auditory nerve. The hair cell generates a sinusoidal receptor potential in response to a sinusoidal stimulus, thus preserving the temporal information present in the original signal up to frequencies of around 3 kHz: the response follows the stimulus as an AC component. Hair cells can still signal at frequencies above 3 kHz, although without preserving the exact temporal structure of the stimulus: the response becomes a sustained DC offset. Like photoreceptors, hair cells do not fire action potentials. Instead, graded potential changes in the hair cell lead to changes in the rate of action potentials in the afferent nerve fibers that make up the cochlear (auditory) nerve. The back-and-forth mechanical deformation of the hair cells alternately opens and closes the mechanically gated cation channels; the result is an alternating depolarizing and hyperpolarizing receptor potential at the same frequency as the sound stimulus.

II. Innervation
1. Afferent fibers
Afferent nerve fibers form the cochlear nerve. From the perspective of the receptors, this innervation is characterized by a high divergence (one receptor feeds many fibers), guaranteeing the transmission of the stimulation.

2. Efferent nerve fibers
Efferent nerve fibers originate from the superior olivary complex and stimulate the outer hair cells. These efferent fibers are responsible for pitch discrimination. The outer hair cells contract in response to the sound-induced depolarization, moving the whole organ of Corti apparatus nearer to the tectorial membrane and increasing ciliary deflection.
Since the contractions follow the frequency of the sound, the result is a summation of the mechanical effect of the sound with the mechanical effect of the contraction, which in turn amplifies the basilar membrane oscillations. It is important to detect the pitch precisely. To do so, there is usually a strong stimulation of the central receptor and a weaker one of the neighboring receptors; the olivary system then receives the stimulus and sends back efferent fibers to make the most stimulated cell 'speak louder'. This process is called lateral inhibition.

3. Tuning curves
These tuning curves show that a recording from an axon of the auditory nerve demonstrates different sensitivities to different pitches. This happens not because of any special feature of the axon itself, but because that axon comes from a specific receptor cell stimulated by that pitch. There is a minimum sound level required to increase the firing rate of a fiber above its spontaneous firing level. The lowest point of the plot corresponds to the weakest sound intensity to which the neuron will respond, and the frequency at which this occurs is the frequency to which the neuron responds best: its characteristic frequency. In the picture on the right, the modulation of the spikes is observable, with the discharge frequency following the incoming sound in perfect correspondence.

4. Processing auditory input
The auditory central pathways form a bilateral/binaural system, because from one ear two ascending tracts can be detected, heading to the cortex both contralaterally and ipsilaterally with respect to the entry point. In the image above, the pathway is represented in red, finally reaching the auditory cortex, where the different features/pitches composing the stimulus are discriminated, and connecting to the memory system, where the various sounds are stored (e.g., the parents' voices). Sounds also reach the part of the limbic system dedicated to emotions (e.g., music that is evocative of different feelings). The fibers from the cochlear nuclei can also go to the reticular formation, in connection with aspecific thalamic nuclei and, in turn, with the limbic cortex or the hypothalamus. In some cases there is a strict relationship between sounds and the endocrine system. One example is when a baby cries and the mother releases oxytocin, needed for lactation: a familiar sound induces the release of a hormone.
Two ears are needed to correctly localize the sound source. The angle of the incoming sound matters as well, since one ear receives the sound before the other. The elaboration of all this information is accomplished by the olivary nucleus, a matrix of nuclei, among which the one reached at the same time by the stimulus from both ears gives the information about the delay between the two ears (a worked example of this interaural delay is sketched below).
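As a minimal sketch of the interaural delay exploited by the superior olive, the code below estimates the time difference with which a sound reaches the two ears for a given source angle. The simple d·sin(angle)/c geometry and the numeric values (inter-ear distance ~0.2 m, speed of sound ~343 m/s) are textbook approximations assumed here, not figures from this lecture.

```python
import math

SPEED_OF_SOUND = 343.0  # m/s in air (approximate)
EAR_DISTANCE = 0.20     # m, approximate distance between the two ears (assumed)

def interaural_time_difference(angle_deg):
    """Approximate extra time (seconds) the sound needs to reach the far ear,
    for a source at angle_deg from straight ahead (0 = front, 90 = side)."""
    return EAR_DISTANCE * math.sin(math.radians(angle_deg)) / SPEED_OF_SOUND

for angle in (0, 15, 45, 90):
    itd_us = interaural_time_difference(angle) * 1e6
    print(f"source at {angle:>2} degrees -> delay of ~{itd_us:4.0f} microseconds")
# A source straight ahead reaches both ears simultaneously (delay ~0); a source
# at 90 degrees gives the maximum delay, roughly 0.6 ms. The olivary 'matrix of
# nuclei' described above reads out which delay actually occurred.
```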