ALS426 Language And The Brain Part 1 PDF

Summary

This document covers speech production, comprehension, and perception, and introduces linguistic processes and concepts such as normalisation, categorical perception, lexical access, syntactic parsing, and speech errors.

Full Transcript

LANGUAGE AND THE BRAIN Part 1 (ALS426, Week 11)

Agenda: 1. The Human Mind at Work

Learning Outcome: Understand the processes involved in speech production and comprehension.

The Human Mind at Work

Psycholinguistics

Psycholinguistics is concerned with linguistic performance or processing, that is, with how we use our linguistic knowledge (competence) in speech production and comprehension. When we speak, we access our lexicon to find words, and we use the rules of grammar to construct novel sentences and to produce the sounds that express them. When we listen to speech, we also access the lexicon and grammar to assign a structure and meaning to the sequence of words we hear.

Other psychological processes are also involved in the production and comprehension of language. Various mechanisms enable us to break the continuous stream of speech sounds into linguistic units such as phonemes, syllables and words in order to comprehend and compose a message. Other cognitive mechanisms determine how we pull words from the mental lexicon, while still others explain how we assemble these words into a structural representation. Ordinarily we have no difficulty understanding or producing sentences; both are done without effort or conscious awareness. However, we have all had the experience of making speech errors or of failing to understand a perfectly grammatical sentence.

Let's look at these three sentences:

1) The horse raced past the barn fell.
2) The bus driven past the school stopped.
3) *The baby seems sleeping.

Upon hearing sentence 1, many people will judge it to be ungrammatical, yet they will judge sentence 2, which has the same syntactic structure, to be grammatical. Conversely, when hearing sentence 3, people will understand it easily although it is an ungrammatical sentence. This mismatch between grammaticality and interpretability tells us that language processing involves more than grammar. A theory of linguistic performance tries to detail the psychological mechanisms that work together with the grammar to facilitate language production and comprehension.

Comprehending the Speech Signal

Understanding a sentence involves analysis at many levels. How do we understand the individual speech sounds we hear? Comprehension begins with the perception of the acoustic speech signal. The speech signal can be described in terms of its:

- fundamental frequency (perceived as pitch)
- intensity (loudness)
- quality (the differences among speech sounds)

The speech wave can be displayed visually as a spectrogram, sometimes called a voiceprint.

Speech Perception

Speech is a continuous signal. In natural speech, sounds overlap and influence each other, yet listeners have the impression that they are hearing discrete units such as words, morphemes, syllables and phonemes. A central problem of speech perception is to explain how listeners carve the continuous speech signal up into meaningful units; this is referred to as the segmentation problem. Another challenge is to understand how the listener manages to recognise particular speech sounds when they are spoken by different people and when they occur in different contexts. For example, how can a listener tell that a [d] spoken by a man with a deep voice is the same unit of sound as the [d] spoken in the high-pitched voice of a child?
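The acoustic description above (fundamental frequency, intensity, quality, and the spectrogram display) can be made concrete with a short script. This is only a sketch: it assumes numpy, scipy and matplotlib are available, and the file name utterance.wav is a hypothetical mono recording used purely for illustration.

```python
# Minimal sketch: displaying a speech wave as a spectrogram ("voiceprint").
# Assumes a mono WAV file at the hypothetical path "utterance.wav".
import numpy as np
from scipy.io import wavfile
from scipy.signal import spectrogram
import matplotlib.pyplot as plt

fs, samples = wavfile.read("utterance.wav")           # sampling rate (Hz) and amplitude values
f, t, Sxx = spectrogram(samples, fs=fs, nperseg=512)  # frequency bins, time bins, power

plt.pcolormesh(t, f, 10 * np.log10(Sxx + 1e-12))      # power in dB; small offset avoids log(0)
plt.xlabel("Time (s)")
plt.ylabel("Frequency (Hz)")
plt.title("Spectrogram of the utterance")
plt.show()
```

In a spectrogram of running speech there are no neat gaps between words or segments; the formant bands of neighbouring sounds blend into one another, which is exactly the continuity and overlap that the segmentation problem refers to.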
Despite these problems, listeners are usually able to understand what they hear because our speech perception mechanisms are designed to overcome the variability and lack of discreteness in the speech signal. Experimental results show that listeners calibrate their perceptions to control for speaker differences and can quickly adapt to foreign-accented or distorted speech. When listening to distorted speech, for example, listeners need to hear only two to four sentences to adjust, and can then generalise to words they have never heard before. It takes about a minute to adapt to non-native accents. These normalisation procedures enable the listener to understand a [d] as a [d] regardless of the speaker or the speech rate. Listeners can exploit various acoustic cues in the signal, as well as relationships among different acoustic elements, to get around the lack-of-invariance problem.

The units we perceive depend on the language we know, especially its phonemic inventory. For example, the initial consonants in [di], [da] and [du] are physically distinct from one another because of the formant transitions from the consonant into the different vowels, a coarticulation effect. Nevertheless, speakers perceive these [d]s as instances of the same phonological unit, namely the phoneme /d/. This phenomenon is known generally as categorical perception: speakers perceive physically distinct stimuli as belonging to the same category because their perceptions are assisted by knowledge of the underlying classificatory system.

Apart from normalisation and categorical perception, stress and intonation can also cue syntactic constituents in the speech stream. For instance, the different meanings of the sentences He lives in the white house and He lives in the White House can be signalled by differences in their stress patterns. It is also true that syllables at the end of a phrase are longer in duration than those at the beginning, and intonation contours mark clause boundaries. In addition, listeners use their lexical knowledge to identify words in the signal. This process is called lexical access or word recognition.

Language Comprehension

Language comprehension is very fast and automatic. Successful comprehension requires that many operations take place at once, in what is called parallel processing, including the following:

- segmenting the continuous speech signal into phonemes, morphemes, words and phrases
- looking up the words and morphemes in the mental lexicon
- finding the appropriate meanings of ambiguous words
- placing the words in a constituent structure
- choosing among different possible structures when syntactic ambiguities arise
- interpreting the phrases and sentences
- making a mental model of the discourse and updating it
- factoring in the pragmatic context

Psycholinguists suggest that perception and comprehension involve both top-down processing and bottom-up processing. In top-down processing, the listener relies on higher-level semantic, syntactic and contextual information to analyse the acoustic signal. In bottom-up processing, by contrast, the listener uses the acoustic information to build a phonological representation of the words, looks these up in the lexicon, and finally constructs a semantic interpretation.

(Adapted from Edwards (2007) in Stenfelt & Ronnberg (2009).)
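As a toy illustration of how these two streams of information might combine, the sketch below scores candidate words for an ambiguous stretch of signal (say, "?eel", with the first consonant obscured by noise) against the sentence frame "The ___ was on the axle." The candidate set and every number are invented for this example; real models of spoken word recognition are far more elaborate.

```python
# Toy sketch: combining bottom-up (acoustic) and top-down (contextual) evidence.
# All candidates and scores are invented for illustration only.

# Bottom-up: how well each candidate matches the ambiguous signal "?eel".
acoustic_match = {"wheel": 0.25, "heel": 0.25, "peel": 0.25, "meal": 0.25}

# Top-down: how plausible each candidate is in "The ___ was on the axle."
context_fit = {"wheel": 0.90, "heel": 0.05, "peel": 0.03, "meal": 0.02}

def recognise(acoustic, context):
    """Score each candidate by combining the two sources of evidence."""
    scores = {word: acoustic[word] * context[word] for word in acoustic}
    best = max(scores, key=scores.get)
    return best, scores

best, scores = recognise(acoustic_match, context_fit)
print(best)  # "wheel": the acoustics alone cannot decide, but the context can
```

The point of the sketch is only that neither source settles the word by itself: the bottom-up evidence leaves the four candidates tied, and the top-down contribution of the sentence context does the disambiguating.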
Lexical Access

Lexical access, or word recognition, is the process by which listeners obtain information about the meaning and syntactic properties of a word from their mental lexicon. Psycholinguistic experiments have shown that lexical access depends on a word's frequency of usage: commonly used words such as car are responded to more quickly than rarely encountered words such as cad. The longer it takes to respond, that is, to make a lexical decision, the more processing is involved.

The speed with which a listener can retrieve a particular word also depends on the size of the word's phonological neighbourhood. A neighbourhood comprises all the words that are phonologically similar to the target word. A word like pat has a dense neighbourhood because there are many similar words (bat, pad, pot, pit, etc.), while a word like crib has far fewer neighbours. Words with larger neighbourhoods take longer to retrieve than words from smaller ones because more phonological information is required to single out a word in a denser neighbourhood.

Psycholinguists believe that each word in the mental lexicon is associated with a resting level of activation, with some words more active than others. Each time the listener accesses a word, its level rises a little. Thus, more frequently used words have a higher resting level of activation, and listeners respond faster to these words in lexical decision tasks. Indeed, in reading tasks, subjects appear to skip over short, high-frequency function words, so quickly are they accessed.

Words can also be activated by hearing semantically related words, an effect known as semantic priming. A listener will be faster at making a lexical decision on the word doctor if he has just heard nurse than if he has just heard a semantically unrelated word such as flower; the word nurse is said to prime the word doctor. When we hear a priming word, related words are awakened and become more readily accessible for a few moments. The priming effect might arise because semantically related words are near each other, or linked to each other, in the mental lexicon.

A kind of semantic priming in which a morpheme of a multimorphemic word primes a related word is called morphological priming. For example, sheepdog primes wool because of the morpheme sheep. Even when one morpheme is free and the other bound, as in runner, the free morpheme run primes words like race.

Syntactic Processing

Understanding a sentence involves more than merely recognising its individual words. The listener must also determine the syntactic relations among the words and phrases. This mental process, referred to as parsing, is largely governed by the rules of the grammar and strongly influenced by the sequential nature of language. Listeners actively build a structural representation of a sentence as they hear it, so they must decide, for each incoming word, what its grammatical category is and how it fits into the structure being built.

For example, the string The warehouse fires… could continue in one of two ways:

1. …were set by an arsonist.
2. …employees over sixty.

Fires is a noun in sentence 1 and a verb in sentence 2.
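A small sketch of this incremental decision process is given below. It is not a real parser: the category hypotheses and the disambiguating cues are hand-coded for these two sentences only, purely to illustrate how both readings of fires can be held until later words settle the matter.

```python
# Toy sketch: tracking category hypotheses for the ambiguous word "fires".
# The disambiguating cues below are hand-picked for these two sentences only.

def fires_hypotheses(sentence):
    """Return the grammatical categories still possible for 'fires'."""
    hypotheses = {"NOUN", "VERB"}          # both readings start out available
    words = sentence.lower().split()
    after = words[words.index("fires") + 1:]
    for word in after:
        if word == "were":                 # a following auxiliary: 'fires' headed the subject NP
            hypotheses &= {"NOUN"}
        elif word == "employees":          # a following object NP: 'fires' was the main verb
            hypotheses &= {"VERB"}
    return hypotheses

print(fires_hypotheses("The warehouse fires were set by an arsonist"))  # {'NOUN'}
print(fires_hypotheses("The warehouse fires employees over sixty"))     # {'VERB'}
```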
Experimental studies of such sentences show that both meanings and categories are activated when a subject encounters the ambiguous word, and that the ambiguity is quickly resolved on the basis of the syntactic and semantic context.

Speech Production

Although sounds within words and words within sentences are linearly ordered, speech errors, or slips of the tongue, show that the pre-articulation or planning stages involve units larger than the single phonemic segment or even the word. Errors of this kind are called spoonerisms, named after William Archibald Spooner, a distinguished head of an Oxford college in the early 1900s, who is reported to have referred to Queen Victoria as “That queer old dean” instead of “That dear old queen”. He also berated his class of students by saying “You have hissed my mystery lecture. You have tasted the whole worm,” instead of the intended “You have missed my history lecture. You have wasted the whole term.”

Thank you
