Document Details

Uploaded by DazzlingDifferential8762

Tags

phonetics speech production linguistics language

Summary

This document provides an introduction to phonetics, focusing on the process of speech production. It covers the three stages of conceptualization, formulation, and articulation, explaining how thoughts are translated into speech. The document also touches on the significance of phonetics to language teaching.

Full Transcript

Phonetics I-II-III: Overview of Speech Production & Mechanisms in Linguistics

- What is Phonetics?
- How is Phonetics significant to teaching language?
- How to apply Phonetics?
- Phonetics vs Phonology
- What do Phoneticians study? (Focus)
- History & Development
- IPA
- Methodology
- Phonetics & other disciplines

Linguistics, the scientific study of human language, is divided into six sub-branches based on the part of a language being studied: phonetics, phonology, morphology, syntax, semantics, and pragmatics. Linguists interested in language structure consider the formal properties of language, including word structure (morphology), sentence structure (syntax), speech sounds and the rules and patterns between them (phonetics and phonology), and meaning in language (semantics and pragmatics).

Overview of Speech Production

Speech production is the process by which thoughts are translated into speech. This includes the selection of words, the organization of relevant grammatical forms, and then the articulation of the resulting sounds by the motor system using the vocal apparatus. Speech production can be spontaneous, such as when a person creates the words of a conversation; reactive, such as when they name a picture or read aloud a written word; or imitative, such as in speech repetition. Speech production is not the same as language production, since language can also be produced manually, by signs.

In ordinary fluent conversation, people pronounce roughly four syllables, ten to twelve phonemes, and two to three words per second, drawn from a vocabulary that can contain 10,000 to 100,000 words. Errors in speech production are relatively rare, occurring at a rate of about once in every 900 words in spontaneous speech. Words that are commonly spoken, learned early in life, or easily imagined are quicker to say than ones that are rarely said, learned later in life, or abstract.

Speech production involves three levels:

- Conceptualization
- Formulation
- Articulation

Speech production is a remarkable process that involves multiple intricate levels. From the initial conceptualization of ideas to their formulation into linguistic forms and the precise articulation of sounds, each stage plays a vital role in effective communication. Understanding these levels helps us appreciate the complexity of human speech and the incredible coordination between the brain and the vocal tract. By honing our speech production skills, we can become more effective communicators and forge stronger connections with others.

1- Conceptualization

Conceptualization is the first level of speech production, where ideas and thoughts are born in the mind. At this stage, a person identifies the message they want to convey, decides on the key points, and organizes the information in a coherent manner. This process is highly cognitive and involves accessing knowledge, memories, and emotions related to the topic. During conceptualization, the brain's language centers, such as Broca's area and Wernicke's area, play a crucial role. Broca's area is involved in the planning and sequencing of speech, while Wernicke's area is responsible for understanding and accessing linguistic information. For example, when preparing to give a presentation, the conceptualization phase involves structuring the content logically, identifying the main ideas, and determining the tone and purpose of the speech.
2- Formulation

The formulation stage follows conceptualization and involves transforming abstract thoughts and ideas into linguistic forms. In this stage, the brain converts the intended message into grammatically correct sentences and phrases. The formulation process requires selecting appropriate words, arranging them in a meaningful sequence, and applying the rules of grammar and syntax. At the formulation level, the brain engages the motor cortex and the areas responsible for language production. These regions work together to plan the motor movements required for speech. During formulation, individuals may face challenges such as word-finding difficulties or grammatical errors; however, with practice and language exposure, these difficulties can be minimized. Continuing with the previous example of a presentation, during the formulation phase the speaker translates the organized ideas into spoken language, ensuring that the sentences are clear and coherent.

3- Articulation

Articulation is the final level of speech production, where the formulated linguistic message is physically produced and delivered. This stage involves the precise coordination of the articulatory organs, such as the tongue, lips, jaw, and vocal cords, to create the specific sounds and speech patterns of the chosen language. Smooth and accurate articulation is essential for clear communication. Proper articulation ensures that speech sounds are recognizable and intelligible to the listener. Articulation difficulties can lead to mispronunciations or speech disorders, impacting effective communication.

Overview of Speech Mechanisms

The speech mechanism is a complex and intricate process that enables us to produce and comprehend speech. It involves a coordinated effort of speech subsystems working together seamlessly. The speech mechanism comprises five subsystems:

- Respiratory System
- Phonatory System
- Resonatory System
- Articulatory System
- Regulatory System

I. Respiratory System: The Foundation of Speech

Speech begins with respiration, where the lungs provide the necessary airflow. The diaphragm and intercostal muscles play a crucial role in controlling the breath, facilitating the production of speech sounds.

II. Phonatory System: Generating the Sound Source

Phonation refers to the production of sound by the vocal folds in the larynx. As air from the lungs passes through the vocal folds, they vibrate, creating the fundamental frequency of speech sounds. As the air passes, the folds rapidly open and close, generating vibrations that produce sound waves; these sound waves then resonate in the vocal tract, which shapes them into distinct speech sounds. The process of phonation involves a series of coordinated movements. When we exhale, air is expelled from the lungs while the vocal folds close partially. The buildup of air pressure beneath the closed vocal folds pushes them open, releasing a burst of air. As the air escapes, the vocal folds quickly close again, repeating the cycle of vibration and producing a continuous sound stream during speech.

III. Resonatory System: Amplifying the Sound

The sound produced in the larynx travels through the pharynx, oral cavity, and nasal cavity, where resonance occurs. This amplification process adds richness and depth to the speech sounds.
IV. Articulatory System: Shaping Speech Sounds

Articulation involves the precise movements of the tongue, lips, jaw, and soft palate to shape the sound into recognizable speech sounds, or phonemes. When we speak, our brain sends signals to the muscles responsible for controlling these speech organs, guiding them to produce different articulatory configurations that result in distinct sounds. For example, to form the sound of the letter "t", the tongue makes contact with the alveolar ridge (the ridge behind the upper front teeth), momentarily blocking the airflow before releasing it to create the characteristic "t" sound.

The articulation process is highly complex and allows us to produce a vast array of speech sounds, enabling effective communication. Different languages use different sets of speech sounds, and variations in articulation lead to various accents and dialects. Efficient articulation is essential for clear and intelligible speech, and any impairment or deviation in the articulatory process can result in speech disorders or difficulties. Speech therapists often work with individuals who have articulation problems to help them improve their speech and communication skills. Understanding the mechanisms of articulation is crucial in studying linguistics, phonetics, and the science of speech production. Articulators are the organs and structures within the vocal tract that are involved in shaping the airflow to produce specific sounds.

V. Regulatory System: The Role of the Brain and Nervous System

Phonetics is a fundamental building block not just in linguistics but also in fields such as communication disorders. What happens when we talk? How do we make the sounds of spoken language? What does the human talker do to make those sounds? What physical laws link the actions of the talker to the specific noises that come out of our mouth and nose? Of course, making sounds is not the only thing that happens when we talk. There is the process of ideation, thinking of what we want to say. There is the process of converting what we want to say into the units of language (words, sentences, and so on). And there is the process of selecting the sounds we intend to say, based on these linguistic units.

What is Phonetics?

Simply put, phonetics, from the Greek word phōnḗ, is the branch of linguistics that deals with the physical production and reception of sound. We call these distinct sounds phones. Phonetics is not concerned with the meaning of sounds but instead focuses on the production, transmission, and reception of sound. It is a universal study and is not specific to any particular language.

Phonetics is the study of speech sounds and their physiological production and acoustic qualities. It deals with the configurations of the vocal tract used to produce speech sounds (articulatory phonetics), the acoustic properties of speech sounds (acoustic phonetics), and the manner of combining sounds so as to make syllables, words, and sentences (linguistic phonetics). Phonetics focuses on the production and classification of the world's speech sounds. The production of speech looks at the interaction of different vocal organs, for example the lips, tongue, and teeth, to produce particular sounds.
By classification of speech, we focus on the sorting of speech sounds into categories, which can be seen in what is called the International Phonetic Alphabet (IPA). The IPA is a framework that uses a single symbol to describe each distinct sound in a language, and it can be found in dictionaries and textbooks worldwide. For example, the noun "fish" has four letters, but the IPA presents it as three sounds: f ɪ ʃ, where "ʃ" stands for the "sh" sound.

Phonetics as an interdisciplinary science has many applications. These include its use in forensic investigations when trying to work out whose voice is behind a recording, and its role in language teaching and learning, whether acquiring a first language or learning a foreign one. This section will look at some of the branches of phonetics, the transcription of speech, and some of the history behind phonetics.

Phonetics vs. Phonology: the key differences

Phonetics looks at the physical production of sounds, focusing on which vocal organs are interacting with each other and how close these vocal organs are in relation to one another. Phonetics also looks at the concept of voicing, which occurs at the vocal folds housed in the voice box, or larynx (the front of which forms the Adam's apple). If the vocal folds are vibrating, this creates voicing, and any sound made in this way is called a voiced sound, for example "z". If the vocal folds are not vibrating, there is no voicing and the result is a voiceless sound, e.g. "s". You can observe this yourself by placing two fingers on your voice box and saying "z" and "s" repeatedly. You should feel vibrations against your fingers when saying "z" but none when saying "s".

Phonology, however, is associated more with the abstract properties of sounds, as it is about how these categories are stored in the mind. Phonetics also describes certain properties as being gradient, such as voicing, where we can compare the length of voicing between two sounds. For example, in French, [b] is voiced for longer than English [b]. In phonology, these segments are simply defined categorically as being voiced or voiceless, regardless of these subtle differences (see the sketch below).

Overall, phonetics is the study and classification of speech sounds; the word derives from the Greek phōnḗ, "sound, voice".

How is studying phonetics significant to teaching English?

Phonetics is fundamental as it helps learners grasp the correct pronunciation of words. When someone learns a new language, they encounter unfamiliar sounds that may not exist in their native tongue. For instance, the difference between the English sounds "th" as in "thin" and "th" as in "this" can be challenging for non-native speakers. Phonetics provides the tools to accurately produce these sounds by understanding the articulatory properties involved.

For language teachers, a solid understanding of phonetics and phonology is indispensable. It enables them to diagnose and correct pronunciation errors, design effective pronunciation drills, and tailor lessons to address the specific phonological challenges of their students.
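Returning to the voicing contrast described above: to make the categorical-versus-gradient distinction concrete, here is a minimal Python sketch. The feature values and millisecond figures are illustrative assumptions for the sketch, not measurements from the text.

```python
# Phonology treats voicing as a category; phonetics measures it as a gradient.
# All values below are illustrative assumptions.

# Phonological view: each segment is simply voiced or voiceless.
PHONOLOGICALLY_VOICED = {"z": True, "s": False, "b": True, "p": False}

# Phonetic view: gradient detail, e.g. hypothetical voicing durations in ms
# (French [b] voiced for longer than English [b], as noted above).
VOICING_DURATION_MS = {("b", "French"): 85, ("b", "English"): 60}

def is_voiced(segment: str) -> bool:
    """Categorical (phonological) answer: voiced or not, no in-between."""
    return PHONOLOGICALLY_VOICED[segment]

def voicing_duration(segment: str, language: str) -> int:
    """Gradient (phonetic) answer: how long voicing lasts, in milliseconds."""
    return VOICING_DURATION_MS[(segment, language)]

print(is_voiced("z"), is_voiced("s"))        # True False
print(voicing_duration("b", "French") >
      voicing_duration("b", "English"))      # True: same category, more voicing
```

Both functions answer a question about voicing, but only the phonetic one preserves the subtle differences that phonology deliberately ignores.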
Courses such as those offered by Vidhyanidhi Education Society emphasize practical phonetic training to empower educators with the necessary skills to enhance their teaching methodologies. Phonetics and phonology are essential components of language learning and teaching. They provide learners with the tools to master pronunciation, understand the structure of language sounds, and improve overall communication skills. Educators who are proficient in phonetics and phonology can make a significant impact on their students' language acquisition journey.

1. Builds confidence: When learners can decode sounds on their own and relate them to the pronunciation of letters and letter combinations in words, communication becomes a natural process for them. Even when words seem unfamiliar, instead of getting overwhelmed they can associate the words with a clear conceptualization.

2. Helps in recognition and interpretation: Be it young learners or adults, once they know how to use phonetics in everyday life, they can easily recognize the sound each letter makes and how letters must be pronounced in combination with each other. One of the core objectives of learning phonetics is to make learners capable of interpreting words even when listening to a person with a different accent.

3. Helps to spell words correctly: Phonetics not only guides the learner in decoding sounds; it also helps them know how a word must be spelt in writing. The written representation of a phoneme is called a grapheme: a letter or group of letters that represents the sound (a small decoding sketch follows at the end of this list). Effective communication is complete only when learners can use the language appropriately in both reading and writing.

4. Improves fluency: When it comes to the fluency of a speaker, two things matter the most: how fast a person can recognize words, and how accurate the pronunciation is. Phonetics takes care of both. Fluency indicates the ease with which one can read text. Moreover, when learners can decode words, it builds a memory dictionary in their minds, and with time this helps to build up comprehension skills.

Without an understanding of phonetics, one cannot effectively read and spell. It is required to understand how speech sounds are formed by paying attention to the shape and feel of the mouth when speaking or reading, and to have the knowledge to remember, separate, combine, and manipulate phonemes, and to do so rapidly and without effort (Moats, 2020). Once there is a sense of how phonetics works, one can form a filing system of rules for how to create words using the speech sounds.

Furthermore:

- You know (can hear and pronounce) all the sounds, so the whole learning process goes faster overall.

Sounds are the basis of a spoken language. When you speak, you use a combination of words, and these are nothing more than combinations of sounds joined together. When you use a grammatical structure, you use words, and these too are nothing more than combinations of sounds. When you want to change meaning slightly by changing your intonation, you use a different pitch by using your muscles (face, tongue, etc.); if you want to change from a more formal to a more informal register, you very often alter sounds or perform a phonetic alternation of some kind. And so on, and so forth.
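As a concrete illustration of the grapheme-to-phoneme decoding described in point 3 above, here is a minimal Python sketch. The correspondence table is a toy assumption covering a handful of graphemes, not a full English spelling system.

```python
# Toy grapheme-to-phoneme table: a grapheme (one letter or a letter group
# such as "sh") maps onto a single phoneme. Illustrative entries only.
GRAPHEME_TO_PHONEME = {
    "sh": "ʃ", "th": "θ", "ch": "tʃ",          # two-letter graphemes
    "f": "f", "i": "ɪ", "n": "n", "s": "s", "p": "p",
}

def decode(word: str) -> list[str]:
    """Greedy left-to-right decoding, trying two-letter graphemes first."""
    phonemes, i = [], 0
    while i < len(word):
        pair = word[i:i + 2]
        if pair in GRAPHEME_TO_PHONEME:
            phonemes.append(GRAPHEME_TO_PHONEME[pair])
            i += 2
        else:
            phonemes.append(GRAPHEME_TO_PHONEME[word[i]])
            i += 1
    return phonemes

print(decode("fish"))   # ['f', 'ɪ', 'ʃ'] -- four letters, three sounds
print(decode("thin"))   # ['θ', 'ɪ', 'n']
```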
Now, when you learn a language, your brain has to deal with many things simultaneously: the right word or set of words depending on the context, the right grammatical structure, the right intonation (a secondary thing), and the right register (also secondary when one is at a beginner's level). So, as you can see, there is quite a lot to think about in a very short time. What do all the things I just mentioned have in common? That's right: they are made of sounds. Now, what if you don't know all the sounds? Does that make it easier or harder to say something in a foreign language?

The most important goal of learning a language is to speak (communicate), that is, to physically create sounds. Out of the four main language competencies (reading, speaking, listening, and writing), speaking and listening need sounds. When you read or write, you don't need sounds. The question is: do you need a foreign language only to write and/or read? Only in that case are sounds not the most important thing to master. But learners generally want to communicate.

- You control what you're saying.

Many languages, English in particular, have many words where only a single sound makes the difference, and in English there are many minimal pairs where one word is very common and the other is a curse word (see the sketch below). Now imagine that someone says the curse word and doesn't know it, because nobody has taught them the difference. I know many examples of this kind of mistake. Many of my students used to say the curse word instead of the correct version before they learned better during my classes; they said that they had never been taught properly (people in their 30s, 40s, 50s, and even 60s). Correct language is often required at work, especially in particular professions; not always, but I'd say very often. Of course, you have to distinguish between correct word pronunciation (normal word vs. curse word) and the willingness to reduce a foreign accent, which is a secondary thing and depends on one's need for correct speech, and this, in my opinion, depends on one's character.

Application

Phonetics is an important foundation for many areas of linguistics. Think about this. Without the study of phonetics...

- How could you study a child's development in their production and perception of speech sounds? – Child Language Acquisition.
- How would you be able to understand and treat speech and hearing disorders? – Clinical Phonetics.
- How would a computer system be able to turn text into speech correctly? – Speech Synthesis.
- How would a mobile phone be able to recognise what you say to it? – Speech Recognition.
- And, in a criminal trial, how could you prove whether a voice recording is or isn't the suspect's voice? – Forensic Phonetics.

Phonetics and Phonology

Phonology and phonetics both study the sounds of language, but they focus on different aspects of sound production and perception. Phonetics is the study of the physical properties of speech sounds, including their production and acoustic properties. It looks at the articulatory and acoustic aspects of speech, such as how sounds are formed by the human vocal apparatus and how they are transmitted through the air as waves. Phonology, on the other hand, is the study of the abstract, cognitive aspects of speech sounds, including how they are organized and used in language. It focuses on the patterns and systems of sounds in language, such as how different sounds can change the meaning of a word or how they interact with each other in speech.
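Returning to the minimal pairs mentioned earlier: here is a minimal Python sketch of the idea, i.e. two words whose transcriptions differ in exactly one segment. The four-word lexicon is a toy assumption, not a real pronouncing dictionary.

```python
from itertools import combinations

# Toy lexicon of phonemic transcriptions (illustrative entries only).
LEXICON = {
    "ship":  ["ʃ", "ɪ", "p"],
    "sip":   ["s", "ɪ", "p"],
    "sheep": ["ʃ", "i", "p"],
    "zip":   ["z", "ɪ", "p"],
}

def is_minimal_pair(a: list[str], b: list[str]) -> bool:
    """True if the transcriptions are equally long and differ in one segment."""
    return len(a) == len(b) and sum(x != y for x, y in zip(a, b)) == 1

# Every pair of words whose pronunciations differ by a single sound.
for (w1, t1), (w2, t2) in combinations(LEXICON.items(), 2):
    if is_minimal_pair(t1, t2):
        print(w1, "/", w2)   # e.g. ship / sip, contrasting ʃ with s
```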
In terms of similarities, both phonology and phonetics are concerned with the sounds of language and seek to understand and describe the properties of speech sounds. They also both use similar methods of analysis, such as examining spectrograms or articulatory descriptions. Overall, phonetics and phonology are closely related fields that complement each other in their study of speech sounds, with phonetics focusing on the physical properties of sounds and phonology focusing on the abstract, cognitive aspects of sound organization in language.

Phonology vs. Phonetics

Phonology: Analyzes the sound pattern of a particular language by determining which phonetic sounds are significant and explaining how these sounds are interpreted by the native speaker.
Phonetics: Analyzes the production of all human speech sounds, regardless of language.

Phonology: The study of how sounds are organized and used in natural languages. The phonological system of a language includes an inventory of sounds and their features, and pragmatic rules which specify how sounds interact with each other.
Phonetics: The study of human speech sounds. Phonetics studies which sounds are present in a language.

Phonology: Studies how these sounds combine and how they change in combination, as well as which sounds can contrast to produce differences in meaning.
Phonetics: Simply describes the articulatory and acoustic properties of speech sounds.

What do phoneticians study?

Phonetics is all about studying the sounds we make when we talk. Here is a summary of the three branches of this discipline:

- Acoustic Phonetics: the study of the sound waves made by the human vocal organs for communication and how the sounds are transmitted. The sound travels from the speaker's mouth through the air to the hearer's ear, in the form of vibrations in the air. Phoneticians can use visualisations such as oscillograms and spectrograms to analyse the frequency and duration of the sound waves produced.

- Auditory Phonetics: the study of how we perceive and hear sounds, and how the ear, auditory nerve, and brain receive them. This branch deals with the physiological processes involved in the reception of speech.

- Articulatory Phonetics: the study of the movement of various parts of the vocal tract during speech. The vocal tract comprises the passages above the larynx where air passes in the production of speech. In simpler terms, it is understanding which part of the mouth moves when we make a sound.

History and development

Much of phonetic structure is available to direct inspection or introspection, allowing a long tradition in phonetics (see also articles in Asher & Henderson, 1981). The first true phoneticians were the Indian grammarians of about the 8th or 7th century BCE. In their works, called the Prātiśākhyas, they organized the sounds of Sanskrit according to places of articulation, and they also described all the physiological gestures that were required in the articulation of each sound.

Every writing system, even those not explicitly phonetic or phonological, includes elements of the phonetic systems of the languages denoted. Early Semitic writing (from Phoenician onward) primarily encoded consonants, while the Greek system added vowels explicitly. The Chinese writing system includes phonetic elements in many, if not most, characters (DeFrancis, 1989), and modern readers access phonology while reading Mandarin (Zhou & Marslen-Wilson, 1999). The Mayan orthography was based largely on syllables (Coe, 1992).
All of this required some level of awareness of phonetics. Attempts to describe phonetics universally are more recent in origin, and they fall into the two domains of transcription and measurement.

For transcription, the main development was the creation of the International Phonetic Alphabet (IPA) (e.g., International Phonetic Association, 1989). Initiated in 1886 as a tool for improving language teaching and, relatedly, reading (Macmahon, 2009), the IPA was modified and extended both in terms of the languages covered and the theoretical underpinnings (Ladefoged, 1990). This system is intended to provide a symbol for every distinctive sound in the world's languages. The first versions addressed languages familiar to the European scholars primarily responsible for its development, but new sounds were added as more languages were described. The 79 consonantal and 28 vowel characters can be modified by an array of diacritics, allowing greater or lesser detail in the transcription. There are diacritics for suprasegmentals, both prosodic and tonal, as well. Additions have been made for the description of pathological speech (Duckworth, Allen, Hardcastle, & Ball, 1990). It is often the case that transcriptions for two languages using the same symbol nonetheless have perceptible differences in realization. Although additional diacritics can be used in such cases, it is more often useful to ignore such differences for most analysis purposes. Despite some limitations, the IPA continues to be a valuable tool in the analysis of languages, language use, and language disorders throughout the world.

For measurement, there are two main signals to record, the acoustic and the articulatory. Although articulation is inherently more complex and difficult to capture completely, it was more accessible to early techniques than were the acoustics. Various ingenious devices were created by Abbé Rousselot (1897–1908) and E. W. Scripture (1902). Rousselot's devices for measuring the velum (Figure 1) and the tongue (Figure 2) were not, unfortunately, terribly successful. Pliny Earl Goddard (1905) used more successful devices and was ambitious enough to take his equipment into the field to record dynamic air pressure and static palatographs of such languages as Hupa [ISO 639-3 code hup] and Chipewyan [ISO 639-3 code chp]. Despite these early successes, relatively little physiological work was done until the second half of the 20th century. Technological advances have made it possible to examine muscle activity, airflow, tongue-palate contact, and the location and movement of the tongue and other articulators via electromagnetic articulometry, ultrasound, and real-time magnetic resonance imaging (see Huffman, 2016). These measurements have advanced our theories of speech production and have addressed both phonetic and phonological issues.

Acoustic recordings became possible with the Edison disks, but the ability to measure and analyze these recordings was much longer in coming. Some aspects of the signal could be somewhat reasonably rendered via flame recordings, in which photographs were taken of flames flickering in response to various frequencies (König, 1873). These records were of limited value because of the limitations of the waveform itself and the difficulties of the recordings, including the time and expense of making them.
Further, the ability to see the spectral properties in detail was greatly enhanced by the declassification (after World War II) of the spectrograph (Koenig, Dunn, & Lacy, 1946; Potter, Kopp, & Green, 1947). New methods of analysis are constantly being explored, with greater accuracy and refinement of data categories being the result.

Sound is the most obvious carrier of language (and is etymologically embedded in "phonetics"), and the recognition that vision also plays a role in understanding speech came relatively late. Not only do those with typical hearing use vision when confronted with noisy speech (Sumby & Pollack, 1954), they can even be misled by vision with speech that is clearly audible (McGurk & MacDonald, 1976). Although the lips and jaw are the most salient carriers of speech information, areas of the face outside the lip region co-vary with speech segments (Yehia, Kuratate, & Vatikiotis-Bateson, 2002). Audiovisual integration continues as an active area of research in phonetics.

Sign language, a modality largely devoid of sound, has also adopted the term "phonetics" to describe the system of realization of the message (Goldin-Meadow & Brentari, 2017; Goldstein, Whalen, & Best, 2006). Similarities between reduction of speech articulators and American Sign Language (ASL) indicate that both systems allow for (indeed, may require) reduction in articulation when content is relatively predictable (Tyrone & Mauk, 2010). There is evidence that unrelated sign languages use the same realization of telicity, that is, whether an action has an inherent ("telic") endpoint (e.g., "decide") or not ("atelic", e.g., "think") (Strickland et al., 2015). Phonetic constraints, such as maximum contrastiveness of hand shapes, have been explored in an emerging sign language, Al-Sayyid Bedouin Sign Language (Sandler, Aronoff, Meir, & Padden, 2011). As further studies are completed, we can expect to see more insights into the aspects of language realization that are shared across modalities, and to be challenged by those that differ.

IPA

IPA stands for "International Phonetic Alphabet", an alphabet developed in the 19th century to accurately represent the pronunciation of languages. One aim of the IPA was to provide a unique symbol for each distinctive sound in a language, that is, every sound, or phoneme, that serves to distinguish one word from another. It is the most common example of phonetic transcription.

In the declining years of the 19th century, the landscape of linguistics was marked by a significant development: the inception of the International Phonetic Alphabet (IPA). This initiative was spearheaded by a collective of linguists and language experts, motivated by the urgent need for a unified system to document the diverse sounds of human speech. Before the advent of the IPA, the academic world was fragmented by an array of phonetic notations, each with its own set of rules and symbols, complicating the study and comparison of languages across different families. The drive behind the creation of the IPA was to eliminate these inconsistencies, offering a single, comprehensive phonetic system that could accurately depict any language's phonemic nuances. This groundbreaking endeavour was aimed not only at facilitating linguistic research but also at supporting language teaching, speech therapy, and other disciplines where precise phonetic transcription is indispensable.
The resulting alphabet was a testament to the collaborative effort of these pioneers, transcending linguistic boundaries and setting a new standard for phonetic notation. Through their visionary work, these scholars laid the foundation for a tool that would profoundly influence the study of languages, enabling an unprecedented level of cross-linguistic analysis and understanding. The development of the IPA was, therefore, a pivotal moment in linguistic history, reflecting a convergence of expertise and a shared commitment to advancing our comprehension of language as a universal human faculty.

The concept of the IPA was first broached by Otto Jespersen in a letter to Paul Passy of the International Phonetic Association and was developed by A. J. Ellis, Henry Sweet, Daniel Jones, and Passy in the late 19th century. Its creators' intent was to standardize the representation of spoken language, thereby sidestepping the confusion caused by the inconsistent conventional spellings used in every language. The IPA was also intended to supersede the existing multitude of individual transcription systems. It was first published in 1888 and was revised several times in the 20th and 21st centuries. The International Phonetic Association is responsible for the alphabet and publishes a chart summarizing it.

The IPA primarily uses Roman characters. Other letters are borrowed from different scripts (e.g., Greek) and are modified to conform to Roman style. Diacritics are used for fine distinctions in sounds and to show nasalization of vowels, length, stress, and tones.

The IPA can be used for broad and narrow transcription. For example, in English there is only one t sound distinguished by native speakers, so only one symbol is needed in a broad transcription to indicate every t sound. If there is a need to transcribe narrowly in English, diacritical marks can be added to indicate that the t's in the words tap, pat, and stem differ slightly in pronunciation.

The IPA did not become the universal system for phonetic transcription that its designers had intended, and it is used less commonly in America than in Europe. Despite its acknowledged shortcomings, it is widely employed by linguists and in dictionaries, though often with some modifications.

The versatility of the International Phonetic Alphabet extends across numerous professional and educational landscapes, proving itself indispensable in a variety of settings. In the realm of language education, instructors utilise the IPA to illuminate the pronunciation patterns of foreign tongues for their students, thereby facilitating a more accurate and rapid acquisition of linguistic proficiency. This approach not only aids in the reduction of accent barriers but also enriches the learner's understanding of phonetic distinctions across languages. Similarly, speech and language therapists harness the precision of the IPA to document and analyse the speech patterns of individuals encountering communicative challenges. Through detailed transcription of speech sounds, therapists can devise targeted intervention strategies to address specific phonetic or phonological issues, enhancing the efficacy of therapeutic outcomes. Furthermore, the IPA serves as a fundamental tool for researchers in linguistics, enabling a detailed comparison of phonetic elements across diverse language systems.
By providing a uniform set of symbols for sound representation, the IPA facilitates the exploration of phonological structures, dialectal variations, and the evolutionary dynamics of language sounds. Additionally, the application of the IPA transcends academic research, finding utility in the crafting of dictionaries and language-learning materials, where precise phonetic guides significantly aid in the correct pronunciation and comprehension of lexical items. The International Phonetic Alphabet, with its comprehensive and adaptable framework, thus remains a cornerstone for professionals engaged in the nuanced analysis and teaching of spoken language, highlighting its enduring relevance in linguistics and beyond.

The following tables list the IPA symbols used for American English words and pronunciations. Please note that although the IPA is based on the Latin alphabet, it contains some non-Latin characters as well.

Consonants

  IPA   Examples
  p     pit, lip
  b     bit, tub
  t     tip, sit
  d     dig, sad
  k     cup, sky, click
  g     guy, bag
  m     my, jam
  n     not, ran
  ŋ     sing, finger, link
  tʃ    check, etch
  dʒ    just, giant, judge, age
  f     fish, cuff
  v     vowel, leave
  θ     thigh, breath
  ð     thy, father, breathe
  s     sip, mass
  z     zip, jazz
  ʃ     shop, wish
  ʒ     genre, pleasure, beige
  h     house, ahead
  w     wit, swap
  j     yes, young
  r     rip, water, write
  l     lap, pull

Vowels

  IPA   Examples
  i     feet, seat, me, happy
  ɪ     sit, gym
  e     late, break, say
  ɛ     let, best
  æ     cat, mad
  ʌ     but, trust, under (stressed positions)
  ə     comma, bazaar, the (unstressed positions)
  u     goose, rude, cruel
  ʊ     foot, took
  oʊ    boat, owe, no
  ɔ     frog, bought, launch
  ɑ     not, father
  aɪ    buy, aisle, isle
  aʊ    cow, mouth
  ɔɪ    soil, boy

Why do we need the International Phonetic Alphabet?

In English, the same letters in a word can represent different sounds, or have no sound at all. Therefore, the spelling of a word is not always a reliable representation of how to pronounce it. The IPA shows the letters in a word as sound-symbols, allowing us to write a word as it sounds rather than as it is spelt. For example, tulip becomes /ˈtjuːlɪp/ (a small lookup sketch appears at the end of this subsection). The IPA is very helpful when studying a second language. It can help learners understand how to pronounce words correctly, even when the new language uses a different alphabet from their native language.

How is Phonetics studied?

The phonetician Raymond Stetson wrote: "Speech is rather a set of movements made audible than a set of sounds produced by movements." The field of phonetics can be roughly divided into the study of the speaker (articulatory), the sound (acoustic), and the listener (auditory). Each of these divides down further. There is a useful diagram on page 10 of Hewlett & Beck's "Introduction to the Science of Phonetics".

→Methods in Articulatory Phonetics

There are various instruments to help us look at the vocal apparatus during speech. A real-time or recorded MRI lets us actually watch the vocal tract and see how it changes during speech. Other methods are a bit more abstract:

- Ultrasound Tongue Imaging involves sending ultrasound waves through the tongue from various angles and comparing the time taken to receive the echo. A gap between the tongue and palate will show up in the image as a line.

- Palatography involves using a colouring agent (such as dye) on a speaker's tongue or the roof of their mouth to identify which part of the mouth is used when producing different sounds. This method has been extensively used at UCLA.
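Returning briefly to transcription before moving on to acoustics: here is a minimal Python sketch of broad transcription lookup. Only /ˈtjuːlɪp/ and the fish example come from the text; the other entries are illustrative assumptions built from the consonant and vowel tables above, not a real pronouncing dictionary.

```python
# Toy broad-transcription dictionary; real work would use a full
# pronouncing dictionary. Entries other than "tulip" and "fish" are
# illustrative assumptions.
BROAD = {
    "tulip": "ˈtjuːlɪp",
    "fish":  "fɪʃ",
    "judge": "dʒʌdʒ",
    "think": "θɪŋk",
}

def transcribe(word: str) -> str:
    """Return the broad IPA transcription, slash-delimited by convention."""
    return "/" + BROAD[word.lower()] + "/"

print(transcribe("tulip"))   # /ˈtjuːlɪp/
print(transcribe("think"))   # /θɪŋk/
```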
When we begin to analyse the frequencies of the sounds we produce, we are moving out of articulatory phonetics and into acoustic phonetics.

→Methods in Acoustic Phonetics

Acoustic phonetics is the study of the sound in the air: the way it travels from speaker to listener. We record speech and try to investigate its acoustic characteristics. Even recorded speech is difficult to study, though; sound is temporary whether it is recorded or not. We need visual representations of the sound, the simplest of which is the oscillogram (a way of visualising sound waves), and with these graphs we can analyse and compare the frequencies and other properties of speech sounds.

However, speech is very complex, because it consists of many signals, each with its own frequency. Even so, there is still a regularly repeating period. It will repeat with a particular frequency, which we call the fundamental frequency. This is what we recognise as the "pitch" of an utterance, and it depends on the rate of vibration of the vocal cords. There are many complex methods for finding the fundamental frequency of an utterance, but all have some degree of error, especially because the vocal cords do not give a perfectly periodic signal. Still, if the vocal cords open and close 150 times in a second, the fundamental frequency will be 150 Hz. Changing this frequency is how we make "He's late?" into a question. But it is not important for giving meaning to the sounds in other ways; that is why an /a/ sound is the same sound whether said by a man or a woman.

→Methods in Auditory Phonetics

Now we get to the listener. The hearing mechanism is quite well understood, but it is difficult for phoneticians to get a look at it "in action" as it receives a sound. The instruments are generally too invasive to use, so when they are needed, we have to use cadavers. But it is good to be reminded that hearing is not quite as simple as just using our ears. We can feel vibrations (even if we are deaf), and even vision plays a part. For example, we find it easier to understand people in person than on the phone, and not being able to see somebody's mouth can be disorienting, especially in a noisy environment or in a foreign language. EEGs and other ways of directly measuring the brain are important, just as in speech production research, but a lot of study is still done by exposing subjects to sounds in large quantities and analysing what they say they can hear. By graphing the results of tests like these, researchers can get a picture of where people perceive one vowel as "turning into" another.

Phonetics and other disciplines

Phonetics is closely related to other levels of language, such as semantics, morphology, and syntax, in these ways:

→Semantics: Phonetics can contribute to the meaning of words through the use of prosody, which refers to the intonation, stress, and rhythm of speech. For instance, the stress pattern of a word can affect its meaning. In English, the word "record" is a noun (such as a vinyl record or a written record) when stressed on the first syllable, and a verb ("to record") when stressed on the second.

→Morphology: Phonetics can help to distinguish between the different realizations of a single morpheme (the smallest unit of meaning in a language). For instance, in English, the plural "-s" morpheme is pronounced differently depending on the preceding sound: the "-s" in "dogs" is pronounced as [z] because [g] is voiced, while the "-s" in "cats" is pronounced as [s] because [t] is voiceless.
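Here is a minimal Python sketch of that alternation, stated over the stem-final phoneme. The text mentions only [s] and [z]; the full English rule also uses [ɪz] after sibilant sounds (as in "buses"), which is included here for completeness.

```python
# Choose the pronunciation of the plural "-s" from the stem-final phoneme.
SIBILANTS = {"s", "z", "ʃ", "ʒ", "tʃ", "dʒ"}   # hissing/hushing sounds
VOICELESS = {"p", "t", "k", "f", "θ"}           # voiceless non-sibilants

def plural_allomorph(final_phoneme: str) -> str:
    """Return the allomorph of the plural morpheme -s."""
    if final_phoneme in SIBILANTS:
        return "ɪz"          # bus -> buses
    if final_phoneme in VOICELESS:
        return "s"           # cat -> cats
    return "z"               # voiced sounds, including vowels: dog -> dogs

print(plural_allomorph("t"))   # s  (cats)
print(plural_allomorph("g"))   # z  (dogs)
print(plural_allomorph("s"))   # ɪz (buses)
```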
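Circling back to the fundamental frequency discussed under acoustic methods above: here is a minimal sketch of one standard estimation technique, the autocorrelation method (my choice of illustration, not a method named in the text). On a synthetic 150 Hz signal it recovers roughly 150 Hz, matching the vocal-fold arithmetic given earlier; real speech is messier, which is why, as the text notes, all such methods carry some error.

```python
import numpy as np

def estimate_f0(signal: np.ndarray, sample_rate: int,
                fmin: float = 60.0, fmax: float = 400.0) -> float:
    """Estimate fundamental frequency (Hz) from the autocorrelation peak."""
    signal = signal - signal.mean()
    # Autocorrelation at non-negative lags.
    ac = np.correlate(signal, signal, mode="full")[len(signal) - 1:]
    lo = int(sample_rate / fmax)      # shortest plausible period, in samples
    hi = int(sample_rate / fmin)      # longest plausible period
    period = lo + int(np.argmax(ac[lo:hi]))
    return sample_rate / period

# Synthetic test: a 150 Hz pulse-like wave plus one harmonic, 0.2 s at 16 kHz.
sr = 16000
t = np.arange(int(0.2 * sr)) / sr
wave = np.sign(np.sin(2 * np.pi * 150 * t)) + 0.3 * np.sin(2 * np.pi * 300 * t)
print(round(estimate_f0(wave, sr), 1))   # close to 150.0
```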
→Syntax: Phonetics can affect the structure of sentences through the use of intonation and stress. For instance, a rising intonation at the end of a sentence can indicate a question. Conversely, a falling intonation can indicate a statement. Moreover, stress can be used to emphasize particular words or phrases within a sentence.
