Semantics PDF
Summary
This document outlines the concept of semantics, investigating its historical context and the role of semiotics in communicating meaning. It explores the different types of signs and how language uses them, examines issues related to the communication of meaning, and covers concepts including morphology, phonology, and different aspects of meaning.
Full Transcript
1. WHAT IS SEMANTICS?

1.1. Definition: According to Kreidler, semantics refers to how languages organize and express meaning, and meaning is that 'something' that was in the speaker's mind and moves to the hearer's mind.

1.2. History: Around the 4th century BC, Aristotle's reflections on grammar, as well as Panini's grammar, included questions about meaning in language. For most of the 20th century semantics was shunned in linguistic studies, especially by Bloomfieldian Structuralism and Chomskyan Generativism. By the end of the century, however, some scholars, such as Langacker and Robert Wilensky, rebelled against this.

1.3. Problems with semantics: the main problem with semantics is that its main object of study, meaning, is elusive; it cannot be directly observed.

1.4. How can meaning be communicated? The study of meaning is carried out by semiotics, which studies how something, a sign, can stand for something else (a signifier stands for a signified). C. S. Peirce distinguishes three types of signs: a. Icon: there is a relationship of similarity between a sign and what it represents (a portrait of a person). b. Index: there is a cause-effect relationship between a sign and its meaning (smoke > fire). c. Symbol: there is an arbitrary, or conventional, relationship between a sign and its meaning (red flag > danger). Meaning in linguistics is symbolic, but some aspects of language can be considered iconic. Semantics can be seen as part of semiotics.

1.5. How is meaning communicated through language?

PHONOLOGY: Regarding phonology, we find a phenomenon known as sound symbolism, also known as phonosemantics, which states that there is a certain association between the sound of an utterance and its meaning. For instance: a. Diminutives: there seems to be an association between the sound /i/ and small things. The reason for this is that to pronounce /i/ the tongue is raised and only a small space is left, in contrast with /o/. b. Synaesthesia: another example of sound symbolism; it refers to mixing information from different modalities. For instance, in an experiment, participants associated the shape of the mouth with the shape of the thing being described ('takete' with a spiky, angular object, 'maluma' with a round, soft object). c. Onomatopoeia: the linguistic mimicking of non-linguistic sounds, for instance the 'meow' of a cat. d. Phonesthesia: the sound of a word reminds us of the object or action it describes (plunge, crack). e. Phonesthemes: the association of a combination of sounds with a particular meaning. For instance, /gl/ is associated with verbs related to light or vision: glisten, gleam, glow. f. Prosody: prosody is described as 'suprasegmental' and includes phenomena such as intonation. Intonation refers to changing the pitch of an utterance. These changes might lead to changes in meaning, even in non-tonal languages such as English. In tonal languages, pitch is used in a more defined manner; it involves a change in lexical meaning (Chinese). Intonation is essential for recovering the intention of the speaker and the emotions or attitude of the speaker; it is also connected with grammatical form and communicative intention; and it helps us differentiate between old and new information, segment sentences into phrases, and understand big chunks.

MORPHOLOGY: Morphology studies the structure of words.
It is mainly focused on bound morphemes, which can be inflectional (they do not change the grammatical category of a word) or derivational (they change the grammatical category of a word): girl/girls vs. work/worker. Derivational morphemes sometimes do not change the category of a word but alter its meaning (treasure/treasurer).

1. In English, the meanings associated with inflectional endings are limited. Regarding nominal morphemes, inflectional endings can indicate: a. Plurality: indicated with the morpheme -s. b. Gender: can be indicated with an inflectional morpheme (steward/stewardess); other times we have a completely different word, and sometimes there is no feminine form (doctor). c. Size: it can also be indicated with a morpheme, as in book/booklet or pig/piglet. However, the most frequent meaning of diminutives is affection, not size. d. Possession: indicated with the genitive 's. On the other hand, verbal inflectional morphemes mark tense (-ed), person and number (when we add -s we have third person singular), and aspect (-ing indicates progressive aspect).

2. The range of derivational meanings is broader and their number higher. For instance, -er: the one that does X; -less: without X. However, the semantic contribution of a morpheme is not always transparent. For instance, -ful does not always mean 'full of X' as in wonderful; this does not apply to 'spoonful', because it means the quantity that the spoon holds, not that something is 'full of spoons'. Besides, -ful can also attach to a verb: helpful, forgetful.

LEXICON: The lexicon deals with free morphemes, those that do not have to get attached to anything. There is a distinction between open-class words, also known as lexical or content words, and closed-class words, also known as function words. Unlike closed-class words, open-class words are more numerous, individually less frequent, longer, and acquired earlier. Besides, most neologisms are open-class words. Open-class words: there is no clear limitation to the meanings that can be expressed with open-class words: nouns (things), verbs (actions), adverbs (modify actions and states, as well as properties) and adjectives (express qualities of things). Closed-class words: the range of meanings that closed-class words express is limited: prepositions indicate relations of place, time, space, manner, etc.; determiners indicate reference; conjunctions connect chunks of meaning.

SYNTAX: Syntax refers to how meaning is expressed by ordering words in a specific way. Although language is compositional, its compositionality is not straightforward. Syntactic bootstrapping is a phenomenon first mentioned by Roger Brown. It refers to how children use syntactic information to infer the meaning of unknown words. This is because syntactic categories are linked to more or less broad types of meaning (mass nouns are substances, while count nouns are concrete). Children exploit these connections to narrow down the hypothesis space when guessing the meaning of new words. Adele Goldberg explored how grammatical constructions per se, without any lexical context, can convey meaning of their own; this approach is known as construction grammar.

2. SOME METHODS

2.1 Methods: introspective methods are the most obvious way to analyse what is going on in our minds; however, these methods, which are based on the judgement of the analyst, are subjective and not externally visible.
In semantics, there are a series of strategies that can be used to extract meaning from linguistic expressions in order to describe and analyse semantic phenomena. These strategies are based on novel methodologies found in a discipline known as Cognitive Science, a multidisciplinary study of cognition. It combines insights from linguistics, anthropology, cognitive psychology, neuroscience, philosophy, and artificial intelligence in order to arrive at a wider understanding of cognition. A key element in cognitive science is convergent evidence, which refers to the fact that scholars gather evidence from different sources in order to see whether a series of experiments point in the same direction, and thus offer more support for a given explanation.

2.2 Introspective methods: Semantic feature analysis is an approach to the analysis of the meaning of words that describes them as formed by 'meaning components'. This system states that it is possible to analyse the meaning of a word by identifying the components of meaning that it shares with other words, as well as those that distinguish it from others. Components of meaning are known as semantic features. For example, the terms man and woman share (HUMAN), and what distinguishes them is the gender feature: (MALE) vs. (FEMALE). Binary semantic feature analysis comes from the theory of 'semantic fields', related to the structuralist approach that analysed the sounds of a language by comparing them with one another. According to semantic field theory, a semantic field is a group of words that are related in meaning. Each word is formed by semantic features that distinguish it from the others within the semantic field. An advantage of feature analysis is that it offers a transparent method of capturing the meaning structure of groups of words by combining several semantic features into a feature matrix.

PROBLEMS OF BINARY SEMANTIC FEATURES: one problem is that there are some words that cannot be analysed with this method (such as light or intelligence); another problem is subjectivity: the semantic features proposed by one analyst may differ from those proposed by another. Besides, semantic features capture an incomplete portion of the meaning of words (we cannot capture all aspects of a word with binary features). There might also be disagreement regarding what a semantic feature is. Finally, semantic features do not capture imagistic information (a duck has feet, but what about its shape?).

2.3 Statistical methods: Latent Semantic Analysis (LSA) and Hyperspace Analogue to Language (HAL) are highly computational methods. They are based on Wittgenstein's idea that how words co-occur with each other is enough to construct the representation of the meaning of a word; thus the meaning of a word depends on the use we make of that word. This approach to meaning is known as 'meaning as use'. These models can be considered 'computational models of human semantic memory'. LSA counts the frequency with which a word appears in a given context, while HAL moves a ten-word window across a text base and computes all pair-wise distances between words within the window at a given point (the idea behind this is that words that are similar in meaning appear in the same contexts; a toy sketch of this kind of windowed counting is given below).

2.4 Psychological tasks: psychological tasks examine the cognitive processes that take place when we understand or produce language. The idea is to relate the change in behaviour of a subject to the key concept being studied.
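To make the 'meaning as use' idea behind LSA and HAL (2.3) concrete, here is a minimal, hypothetical sketch of windowed co-occurrence counting in Python. It is only a toy illustration under simplifying assumptions (a tiny made-up corpus, raw symmetric counts, no distance weighting or dimensionality reduction), not the actual LSA or HAL implementations.

from collections import Counter, defaultdict

def cooccurrence_vectors(tokens, window=10):
    # For each word, count how often every other word appears within
    # `window` tokens to its right (a crude, unweighted HAL-style pass).
    vectors = defaultdict(Counter)
    for i, word in enumerate(tokens):
        for neighbour in tokens[i + 1:i + 1 + window]:
            vectors[word][neighbour] += 1
            vectors[neighbour][word] += 1  # make the counts symmetric
    return vectors

# Toy corpus: 'doctor' and 'nurse' occur in similar contexts,
# so they end up with similar co-occurrence vectors.
text = ("the doctor treated the patient in the hospital "
        "the nurse treated the patient in the hospital").split()
vectors = cooccurrence_vectors(text, window=5)
print(vectors["doctor"])
print(vectors["nurse"])

Words that are similar in meaning ('doctor', 'nurse') end up with similar co-occurrence vectors, which is precisely the intuition these statistical models exploit.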
There is a distinction between online measures, which examine the activity that goes on while participants are actively processing the stimuli (this is the case of lexical decision tasks and priming, reading times, and naming tasks), and offline measures, which are based on information collected once the processing of the stimuli has ended (this is the case of memory tasks and feature-listing tasks).

2.4.1 Lexical decision tasks: in lexical decision tasks participants are presented with a string of letters and have to decide, as quickly as possible, whether it exists in their language as a word. An important phenomenon in these tasks is priming. When participants were presented with two strings of letters, they identified related words such as 'nurse-doctor' faster than unrelated ones such as 'nurse-feet'. The reason is that the two terms are connected in our mental lexicon: when we encounter the first term, it becomes activated, and part of that activation passes on to related words, in what is known as the spreading activation model.

2.4.2 Memory measures: memory measures are another type of task; they examine the kind of information that people remember after processing the experimental stimuli.

2.4.3 Reading times: these tasks measure the time that participants take to read a text, which gives information about the type of processing that takes place. Since participants control the speed by pressing a key, these tasks are also known as self-paced reading tasks.

2.4.4 Eye-tracking methods: eye trackers are devices that follow the gaze of participants as they perform an activity and record some parameters of our eye behaviour. This is interesting because our eyes do not look at a scene in a fixed manner; instead they rest on a specific spot for 200-300 milliseconds and then jump to another place in rapid movements known as 'saccades'. Eye trackers record different parameters of this behaviour: fixations (the spots where we stop), fixation duration (the time spent on a specific spot), scanpath (the path that our eyes follow), saccade latency (the lapse between a stimulus and the moment the saccadic movement starts), and backtracks (the occasions in reading where our eyes move backwards).

2.5 Methods from neuroscience: there are a variety of methods that can be used to measure what is happening in our brains (ERPs and TMS).

2.5.1 Event-related Potentials (ERPs): ERPs are methods from neuroscience that try to link language use to the patterns of neuronal firing in our brains. P600 and N400 are the two ERP components that are relevant for linguistics, since they are sensitive to syntax and semantics, respectively.

2.5.2 Hemodynamic methods: MRIs and PETs.

2.5.3 Transcranial Magnetic Stimulation (TMS): TMS is used to study how the brain carries out certain cognitive processes. It consists of applying a brief magnetic pulse to a specific brain area, creating a 'reversible temporary lesion'; this way it is possible to know the function of a brain area in a given cognitive process. TMS can test causally whether a brain area is involved in the process being investigated.

2.6 Methods from computer science: computational modelling is the construction of a computer model that captures the main features of a theory or explanation and informs us about how well the model fits with reality. It is widely used in the hard sciences.

3. LANGUAGE AND THOUGHT

3.2 The Formal Approach: Meaning as Amodal.
Formal semantics, also known as symbolic or amodal semantics, tries to describe the meaning of language by using the apparatus of formal logic. This type of semantics is concerned with how words are related to objects in the world (referential semantics). It is based on the notion of truth conditions, which assumes that the role of semantics is to describe the conditions that the world would have to meet for an expression to be true. For example, in order to know the meaning of 'John is crying', we need to know which conditions must obtain in reality for that expression to be true. Once those conditions are represented in our minds, they become symbols, since they 'stand for' what they represent. These representations are amodal or arbitrary, since they are not connected to any perceptual or motor modalities. This type of semantics follows Frege's principle of compositionality, which states that 'the meaning of the whole is a function of the meaning of the parts'. This way, syntax is important for this type of analysis (our brains are symbol systems resembling computer programs).

3.2.1 The Language of Thought Hypothesis (Jerry Fodor): the language of thought hypothesis is associated with formal semantics. It states that thought is linguistic in nature. When we understand a sentence, what we do is translate it into an internal language of thought known as mentalese. Mentalese is an inner universal language that allows us to express any meaning in any language in the world. There are a number of features of thought that are also found in language: thought is productive (we are able to entertain an infinite number of thoughts, as well as sentences), systematic, and compositional (we understand complex thoughts because we understand their components and how to combine them).

3.2.2 Problems of the Formal Approach: the symbol grounding problem. The symbol-grounding problem is an argument against symbolic formal approaches. It questions how it is that words (and symbols in general) get their meanings; how the connection between a symbol and the thing it refers to is established. This problem is exemplified by the 'Chinese Room Argument' proposed by John Searle, which claims that pure syntactic manipulation is not enough to give meaning to symbols.

3.3 The Embodied or Cognitive Approach: the Embodied approach, also known as the cognitive approach, focuses on the relationship between the meaning structures that are activated in our minds when we understand language and our body's biological characteristics. Thus, understanding a sentence entails the activation of simulations in our brain, which consequently lead to the re-enactment of sensorimotor, proprioceptive, and introspective information elicited by the referents of these expressions. The main idea is that the format in which we store meaning is not amodal or abstracted from reality; instead it is related to the different motor, perceptual, and introspective systems of our brain.

3.3.3 Glenberg's Indexical Hypothesis: The Indexical Hypothesis (IH) states that people understand language by simulating the actions described by phrases or sentences. It relies on a key idea in embodied approaches: the notion of affordance. Affordances are the possible actions that a given object offers to a given organism. An important fact about affordances is that they are body-specific and thus species-specific. For instance, chairs afford sitting for humans but not for animals.
Besides, the affordances that an object offers depend on our main goal: a chair can be used for sitting, for standing on, etc. In some cases, it is not possible to think of an object without thinking of the actions associated with it. It is also possible to derive creative affordances. According to the IH, sentences are transformed into action-based meaning in three stages: first, words are mapped onto their perceptual symbols; second, affordances are derived from these symbols; finally, there is a smooth combination or meshing of the affordances of the different words, following the instructions provided by constructions.

3.3.4 Problems of the embodied approach: abstract concepts are a problem for embodied approaches because they do not leave any sensorimotor traces in our brain, and since we cannot smell, taste, touch or feel them in any way, there is no sensory memory to re-enact. Despite this, some supporters claim that we do activate simulations of specific domains and transfer that information to help us conceptualize abstract concepts. For instance, in order to conceptualize the abstract domain of 'importance', we use sensorimotor domains like physical 'size' to help us deal with it. An important theory dealing with this is Conceptual Metaphor Theory.

SUMMARY OF DIFFERENCES: Symbolic approaches: symbols as amodal, abstract, and arbitrary; classical Artificial Intelligence; digital treatment of information; the language of thought hypothesis; software matters, hardware doesn't; syntax matters, semantics is less important. // Embodied approaches: perceptual symbol systems; symbols as modal and embodied; connectionism and neural networks; analogical treatment of information; linguistic relativity; hardware and software both important: details matter; syntax is not enough, semantics is also needed.

4. WORD MEANING

4.2 How words mean: reference. A word has meaning because we use it to refer to an object in the world. The object picked out by a particular word is known as its referent. The action by which a speaker picks out an object (referent) in the world is known as reference, which can be variable, since it depends on context, user, etc. There are two technical terms related to reference: denotation and extension. Denotation is the relationship that exists between a word and a set of objects, its potential referents (this relationship is stable and not user-dependent), while extension refers to all the possible objects in the world that can be picked out by a word.

4.2.1 Referring and non-referring expressions: types of expressions. It can be said that grammatical words are non-referring while nouns are referring, although this is not always the case. a. Generic reference: refers to a class of things ('A chair is the most basic piece of furniture'). b. Specific reference: refers to a particular object ('The chair was incredible'). c. Indefinite reference: refers to an exemplar of the class ('I need to buy a chair').

4.3 How words mean: sense. Gottlob Frege was the first to comment on the fact that two expressions could have one and the same referent. E.g. the expressions 'the Evening Star' and 'the Morning Star' have the same referent, the planet Venus, seen at different times of the day. What differentiates two expressions with the same referent is 'sense'. The sense of a word is related to its relationship with other words in the system, and to our knowledge of the word itself. Sense can also be defined as the 'defining properties of a word'.
While reference is the operation that identifies which object we are talking about, sense covers the rest of the information. Another term related to sense is 'intension', which can be defined as the set of properties shared by all members of its extension.

4.4 Categorization and prototypes. What is a category? A category refers to the idea that when we interact with objects or events we notice their similarities and differences and group them in a manner that is useful for us. These collections are what we call categories. Thus, categorizing corresponds to 'chunking experience'. For example, animate entities of medium size that have long tails and say 'meow' are grouped under the category CATS. This is useful because it allows us to generalize and transfer to a particular entity the knowledge we have about its class.

4.4.2 The classical view of categories: there is a series of characteristics that all members of a category share; there is a fixed set of necessary and sufficient conditions defining the members of each category; all members of a category have equal status; all non-members of a category have equal status; all necessary and sufficient features defining a category have equal status; categories have clear and well-defined boundaries; and categorizing maximizes both within-category similarities and between-category differences.

4.4.3 A new view on categories: Wittgenstein and Labov. The classical view of categorization was questioned by different scholars. Wittgenstein, for instance, introduced the notion of family resemblance, which suggests that when we try to characterize all the members of a category, we are going to find distributed, partial, and sectorial similarities rather than universal, necessary and sufficient conditions. Another scholar who contributed to the new view on categorization was Labov, who argued that context does affect the way in which we categorize objects.

4.4.4 Prototypes: Rosch. Eleanor Rosch claimed that there are good and bad examples of categories. She argued that categories have a graded structure: the position of each member depends on its similarity to the prototype. The best example within a category is known as the prototype, and the rest of the category is organized around it. Central members share many features with the prototype, while peripheral members (e.g. penguins among birds) share fewer. A high-cue-validity feature is one that makes an exemplar more likely to belong to a category; in the opposite case we speak of low cue validity. Besides, hedges allow speakers to indicate whether an expression is to be construed as a prototypical or a peripheral member of a category. For instance, we can use expressions such as 'loosely speaking', 'technically speaking', 'sort of' or 'kind of' to suggest that a member of a category is peripheral.

4.5 Semantic domains and frames. Thematically related elements create structures known as schemas, or frames. A word can form part of different schemas and thus acquire different meanings (e.g. a dog for pulling a sled, a dog used as a model, a police dog). By considering the role of schemas, we can see how concepts can be stored along with their role in a typical context, which might be cultural and even user-dependent. This context might be necessary to explain the meaning of a concept: we need a background, and that background is what we call a 'frame' (e.g. to understand 'breakfast', we need the bigger context of the daily meal system).

Lawrence Barsalou distinguished between concept and category.
For him, all the information that we have about a concept forms our categorical information. Each time we use a word, certain parts of this information become activated depending on the context; not all of it is active at the same time. Thus, Barsalou proposes that we can define the information that becomes active as the 'concept'. This way, one category can give rise to many concepts: 'piano' can evoke different concepts related to its sound, its status as furniture, or its material. Barsalou also commented on ad-hoc categories. These categories have several distinguishing features. The main one is that they are not well established in long-term memory and not conventional; instead they are created for a specific purpose (they are also called goal-directed categories). They also tend to hold members that have little in common. Some ad-hoc categories are: 'things you would take out of your house in case of fire', 'ways to escape being killed by the Illuminati'. All categorizing is a bit 'ad hoc' and dependent on context. Thus, the idea is that, more than things we have in our minds, concepts are things we create with our minds, and the meaning of words is modulated by context.

4.6 Denotation and connotation. Denotation is equated with reference, while connotation is more closely related to sense and intension. Denotation can be linked with the 'primary' meaning of a word, and connotation with other 'secondary' meanings: a colour term has a chromatic meaning, but red is also associated with danger. The connotations of a word are always evolving. Besides, very often connotation is linked to emotional overtones. In this sense, words carry connotational information that can be measured as positive, negative, or neutral. A way to measure connotation is Osgood's Semantic Differential Technique. This technique is used to find out about attitudes toward objects, events, or persons (questionnaires, behavioural effects). Participants are asked to rate a given word on a number of scales, each framed by bipolar opposing adjectives (attractive-unattractive). Euphemisms are attempts at neutralizing the connotations of some expressions by using an alternative. Connotation can also be seen as part of embodied simulation (the right hemisphere; the affect primacy hypothesis).

5. MEANING RELATIONS

5.1 Types of association: semantic, associative, and thematic. There are several types of relationships at work within the lexicon. We establish semantic relations among words that share a common core (intra-word similarities), for example synonymy, antonymy, hyponymy. On the other hand, we establish associative relations between words that are found together in discourse: if we hear 'rolling', we say 'stones'. Besides, a thematic relation occurs when two words are associated because they co-occur in a given event or situation. For instance, food, menu, wine and waitress are linked by their participation in 'eating in a restaurant'.

5.2 Words and senses: polysemy and homonymy.

5.2.1 Polysemy: polysemy is when different but related concepts are grouped together under the same name. Polysemy is quite frequent in language. There is a relation between the frequency of a word and the number of its senses: the more frequent a word is, the more senses it has. Polysemy helps explain many phenomena, such as the diachronic evolution of the meaning of words. Words typically change their meanings over time, and that evolution proceeds incrementally, by polysemous links.
Polysemy is also interesting because of the ambiguity it creates: a word in one language might correspond to different translation equivalents depending on its senses. All types of words can be polysemous; for example, prepositions are polysemous. There is also polysemy in morphology: for instance, the different meanings of -ful. Another example is the past tense morpheme -ed: it can mark the past, indicate unreality (I thought this would be interesting, but it's not), and act as a pragmatic softener (I wanted to tell you…). Grammatical constructions can be polysemous too: for instance, yes/no questions can mean several things (can you play the piano?). Polysemy is also found in intonation: 'coffee' might mean different things depending on context and intonation. The study of polysemy is complicated, because agreeing on how many senses a word has is complex; even dictionaries might disagree.

5.2.2 Homonymy: homonymy occurs when two words happen to arrive at the same form, but each has a different meaning (ball: a round object to play with, or a dance). In dictionaries, the different senses of polysemous items are listed within the same entry, while homonymous senses are listed as separate entries. In polysemy the senses of a word are related; in homonymy they are not. Two ways to differentiate between polysemy and homonymy are speakers' intuition and etymological information. It seems that polysemous words are processed faster, while homonymous ones take more time. This hints at the fact that polysemous senses are represented in a similar way in our mind, while homonymous senses have distinct representations. Since the distinction is difficult to draw, some authors have suggested that it should be treated as graded rather than binary. Another issue is how the different senses are stored in our minds. The representation of homonyms is not problematic: the senses belong to different words, so they are stored separately in our minds. In polysemy, on the other hand, we might keep the different meanings separately, or we might store only a general meaning from which the remaining senses are generated online. There are mechanisms that extend meanings in predictable ways: this is what is known as regular polysemy. In irregular polysemy, by contrast, we have patterns of extension that apply only in certain cases.

5.3 Semantic relations:

5.3.1 Synonymy: two words or expressions that have the same meaning. Some believe that total synonyms do not exist: no two words have exactly the same meaning. Thus, what we find are near-synonyms or partial synonyms, for instance quick and fast. In order to be complete synonyms, two words must: I. be synonyms across all contexts, which means having the same collocational range (big/large mistake?); II. be systematically equivalent on all dimensions of meaning, including register, style and connotations (bike/bicycle and boy/lad are not the same); III. be equivalent in all their senses. The lexical database WordNet uses synonymy as its main organizational relation: it groups nouns, verbs, adjectives and adverbs into sets of synonyms (synsets). It also uses other types of relations, such as antonymy, hyponymy and meronymy (a short sketch of how these relations can be queried is given below).

5.3.2 Antonymy: antonymy is part of a wider family of relations, that of opposites. Opposites are words that are similar in most respects but differ on one respect that makes them contrast with each other.
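As an illustration of the relations just listed (synonymy and synsets, antonymy, hyponymy, meronymy), WordNet can be queried directly. The sketch below uses the NLTK interface; it assumes nltk is installed and the WordNet data has been downloaded (e.g. via nltk.download('wordnet')), and the commented outputs are indicative rather than exact.

from nltk.corpus import wordnet as wn

# Synonymy: WordNet groups words into synsets (sets of synonyms).
dog = wn.synsets("dog")[0]            # first nominal sense of 'dog'
print(dog.lemma_names())              # synonyms within that synset

# Hyponymy / hyperonymy: more general and more specific synsets.
print(dog.hypernyms())                # e.g. a 'canine' synset (more general)
print(dog.hyponyms()[:3])             # a few more specific kinds of dog

# Meronymy: parts of a whole.
print(wn.synsets("computer")[0].part_meronyms())

# Antonymy is recorded on lemmas rather than on whole synsets.
good = wn.synset("good.a.01").lemmas()[0]
print(good.antonyms())                # the 'bad' lemma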
Antonyms establish a 'binary' relation between two words, which become associated by a clear contrast in their meaning. Antonymy mainly functions with adjectives, which are simpler than other words. As with polysemy, it is not clear whether the relation is stored in memory or the opposition arises contextually from specific instances of discourse use.

I. Canonical antonyms are those antonymic pairs whose association has become maximally conventional and entrenched; they are stored in that dual configuration in our minds. Some examples of canonical antonyms are slow-fast, weak-strong, bad-good. On the other hand, non-canonical antonyms are words that are construed as opposed to each other in a context-dependent way (for instance white-black, but white contrasts with red in the case of wine, or with brown if we talk about sugar).

II. Antonyms can also be classified depending on the type of opposition that they establish: a. Gradable antonyms: those that allow different levels of a particular quality. These allow the use of comparative morphemes like -er or -est. A gradable antonym can be unmarked, when it does not imply anything special and is neutral (how old is your sister?), or marked (how young is your sister?). b. Ungradable antonyms: also known as complementaries. They do not allow variation along a scale (dead-alive, odd-even) and do not take comparatives or any other word that implies gradation. c. Converses: also known as reciprocals. These are relational terms, such as husband-wife, that signal a relationship between two entities; depending on which side you want to highlight, you get one or the other. d. Reversives: words that describe a process of change between two states, in one direction or the other. For instance: dress-undress, tie-untie. e. Motion antonyms: push-pull, rise-fall. *Gradable vs. ungradable applies to adjectives, although it is also possible with verbs (love-hate) or nouns (friend-enemy). *Converse antonyms can be nouns or prepositions (above-below). *Reversives are appropriate for verbs.

5.6 (classified antonym pairs): arrive-depart (M), bring-take (M), brother-sister (C), doctor-patient (C), enter-exit (M), fast-slow (G), fat-skinny (G), foolish-wise (G), full-empty (U), happy-sad (G), hard-soft (G), hired-fired (U), identical-different (U), lengthen-shorten (R), pack-unpack (R), parent-child (C), pass-fail (U), predator-prey (C), raise-lower (M), rise-fall (M), servant-master (C), single-married (U), teach-learn (C), tie-untie (R). [Key: G (gradable), U (ungradable), C (converses), R (reversives), M (motion)]

5.3.3 Hyponymy and hyperonymy: basic-level categories. Hyponymy and its counterpart, hyperonymy, are the prototypical taxonomic relations. The notion of taxonomy is related to abstraction. Humans are able to conceptualize an entity or a situation at very different levels of specificity, for instance: entity > organism > animal > mammal > dog > hound > beagle > Snoopy. In this taxonomic hierarchy, the degree of specificity becomes progressively greater as one reads from left to right. These taxonomic hierarchies are related to basic-level categories, which refer to a level of abstraction that is special: 'dog', rather than mammal or chihuahua, conveys the right level of information most effectively. There are three levels of specificity: a. Superordinate level: words that group together members that are diverse, bearing little similarity to each other, such as furniture, vehicle, animal. This is an abstract level.
b. Subordinate level: subordinate categories provide finer-grained knowledge about a category, like hound or sports car. c. Basic level: it provides an intermediate level of specificity. It is the highest level of abstraction at which we can generate detailed lists of attributes. Characteristics: names are shorter, children learn them first, their members have a common gestalt, we have a motor program for them, most of our knowledge is stored at this level, and they are identified faster than the other levels. We say that a word is a HYPONYM of another if it is a more specific term: dog is a hyponym of animal (more specific to more general). A word is a HYPERONYM of another if it is a less specific term: animal is a hyperonym of dog (more general to more specific).

5.3.4 Meronymy. We say that a word X is a meronym of a word Y if X is a part of Y. For instance: screen, keyboard, and mouse are meronyms of computer.

5.4 Associations among words:

5.4.1 Definitions: a. Collocation: the co-occurrence relationship between a word and other individual words. b. Colligation: the co-occurrence relationship between a word or a word class and grammatical categories. c. Semantic preference: the relationship between a word and a semantic set of collocates. d. Semantic prosody: the connotative or affective meanings of a word with its typical collocates.

6. ACQUISITION OF MEANING AND CROSS-LINGUISTIC MEANING

6.2 Gavagai: the problem of word learning. The philosopher W. V. O. Quine argued that learning the correct meaning of a word is quite complex by proposing a thought experiment: someone is trying to learn a particular language and, in the middle of a conversation with a native speaker, a rabbit suddenly appears and the native speaker shouts 'Gavagai'. 'Gavagai' might refer to an unlimited number of alternative meanings: a rabbit, that concrete rabbit, animal, a noise, the colour of the rabbit, running, etc. There might be an unlimited number of alternative meanings related to a word.

6.3 How the problem is solved.

6.3.1 Initial preconditions: children must possess certain abilities before they start acquiring language. In natural speech, words are fused together, and it is not easy to distinguish how many words there are in an utterance. Children solve this by using their statistical learning abilities; these abilities are important for segmenting speech into relevant units and for other tasks, such as identifying the correct repertoire of phonemes in their language and their most typical combinations (phonotactics), or keeping track of co-occurrences. Scholars also mention some biases in how we attend to the world. One of the most invoked is the whole-object bias: objects are very salient in real life, which explains why our first guess when establishing the meaning of a word is that it corresponds to a whole object, and not to a part of it or to the material it is made of (it is easier to learn the term table than wood). Children also have some kind of innate 'protoknowledge' of dialogic interactions with adults.

6.3.2 Joint attention: children start learning words when they are between 10 and 12 months of age. Children acquire attention-sharing abilities; in other terms, they acquire the ability to establish joint attentional frames: they become aware of when their attention and the adult's are simultaneously focused on the same thing. This sets up an initial 'common ground'.

6.3.3 Intention reading: children also have the ability to infer the intentions of others.
This is based on the recognition that other people are sentient beings, with goals, desires, and beliefs similar to their own. This is known as theory of mind. Children use their theory-of-mind capacities to infer the communicative intention of other people, regarding what they do and what they say.

6.3.4 Emergence of the linguistic sign: the linguistic sign is established when someone manipulates the attention of the addressee and makes him attend to the object or event of interest.

6.3.5 Catalysts: once children have started learning language, there are a series of strategies, known as 'catalysts', that can be used to accelerate the process. At first, they use the strategy known as lexical contrast: once they associate a meaning with a specific word, their initial guess is that any other word must have a different meaning. This can be explained by the idea that words cannot be complete synonyms, together with children's understanding of the communicative intentions of the speaker. Children also use their conceptual knowledge of the world to choose the attribute that should serve as a basis for extending the meaning of a word to other possible members of the category. Another strategy that increases the learning rate of word meanings is syntactic bootstrapping: the syntactic environment that a new word appears in constrains the hypothesis space; meanings are constrained by the syntactic environment. Syntactic bootstrapping was first mentioned by Roger Brown to refer to how children use syntactic information to infer the meaning of unknown words. They use syntactic connections to narrow down the hypothesis space when guessing the meaning of new words.

6.4 Cognitive development vs. lexical acquisition. One idea that has generated controversy is: do children first develop concepts and, once they have them in place, learn a name for them, or is language somehow responsible for the formation of some concepts? The first idea (cognitive development and then language acquisition) is common-sensical: we first establish the concept of 'chair', and then learn its correct name. The second idea, that language guides conceptual development, is also possible. In fact, there are many studies that show how word learning influences the conceptual organization of children. Words can be considered 'invitations to form categories', that is, a suggestion that we should focus on certain features of the members of the category and disregard others. The case of superordinates is especially relevant here: in order to form the category ANIMAL or FURNITURE, we have to skip over the many differences existing among the members of the category and focus on what they have in common. In conclusion, cognition and language develop in close interaction, in a cyclical fashion.

6.5 Linguistic relativism: the Sapir-Whorf Hypothesis. Quite often the formation of categories is guided by language; in many cases it is the existence of a word which makes us pay attention to the similarities existing among members of its category, and to the differences with items which lie outside it. To the extent that the categories we perceive and identify are a reflection of linguistic categories, it would follow that our languages make us see reality in a specific way: our conceptual world is shaped by the categories embedded in language.
This is what the Linguistic Relativity Hypothesis (LRH) holds: different languages, with different categories, will make us see the world in different ways and thus will influence our reasoning. This is also known as the Sapir-Whorf hypothesis: language is described as a 'net' through which we perceive; words are the different 'holes' in the net, which segment the world in different ways. Nowadays scholars tend to distinguish two versions of this argument, depending on their strength. Linguistic relativism: different languages favour different ways of seeing the world, though these ways are compatible and complementary; they just emphasize different features. Each language has to pay attention to the meanings that are grammatically coded in it. Linguistic determinism: language determines what we see and conceive, and therefore it is impossible to escape the mode of thought imposed by our language. In short, the structure of a language affects its speakers' world view or cognition, and thus people's perceptions are relative to their spoken language.

6.7 The return of the LRH (Linguistic Relativity Hypothesis). For a long time, the main opponents of the LRH have been generativists, who propose a universalist view of language deemed incompatible with linguistic relativism. Scholars such as John Lucy, Boroditsky, or Stephen Levinson are all firm supporters of the thesis that languages influence thought. Domains that have been investigated are: colour, space, motion, gender, tense, time, and agency.

6.8 Within-speaker relativity effects: the case of bilinguals and L2 speakers. What happens when the same person switches from one language to another? What happens with bilinguals? This has been found in many experiments: depending on the language you use during the experiment, you will function like other speakers of that language. Relativity effects have also been examined in the case of second language speakers. For instance, when speakers are using a foreign language and are faced with a moral dilemma, they take less emotional and more rational decisions than when they are using their native language. Researchers attribute this to the psychological distance imposed by the use of a foreign language: emotional effects are especially strong in our native language. This has also been demonstrated in a different area: swearing in a second language is not half as effective as in your native language.

6.9 Within-language relativity effects: the case of 'framing'. The conscious use of language in order to present the perspective of the world most favourable to the speaker's goals receives the name of 'framing'. In a way, this type of effect could be considered an 'internal-language' relativity effect. For example, when talking about sensitive topics such as abortion, followers of a particular opinion may call themselves 'pro-life' (highlighting the role of the fetus), while those of a different opinion call themselves 'pro-choice' (highlighting women's right to choose). Framing refers to highlighting a specific perspective. For instance, one experiment showed that the way in which we construe an event, and specifically the attribution of blame, can be influenced by the linguistic choices we make in its description, even in the case of eyewitnesses. How something is presented (the frame) influences people's choices. Linguistic framing can be established by a number of different mechanisms, from lexical choices to grammatical choices or the use of different metaphors.
In the lexical realm we have mechanisms like euphemisms. We also have 'politically correct language'; the idea behind this is to use language to get rid of undesirable connotations and shape how people think about issues such as race or gender. Instead of fireman or policeman we say firefighter or police officer, in order to reduce the negative impact of certain notions.

7. FIGURATIVE MEANING

7.1 Literal vs. figurative language: figurative language is a type of language associated with poetry, song lyrics, or literature, and does not correspond to the way people speak; there is a clear separation between what constitutes 'normal' or 'literal' language and what is figurative language. Some examples of figurative language are: understatement, hyperbole, oxymoron, idioms, irony, sarcasm, tautology, proverbs, and indirect speech acts.

7.2 What is a conceptual metaphor? A conceptual metaphor is not a linguistic phenomenon; rather, it is a cognitive phenomenon in which one semantic area or domain is represented conceptually with the help of another one. The domain from which we take the information is known as the 'source domain', and the 'target domain' is the semantic domain that is structured and understood metaphorically in terms of the source domain: TARGET DOMAIN IS SOURCE DOMAIN (the abstract is understood in terms of the concrete, something we can smell, touch, or feel; cf. the embodied approach).

7.2.1 Metaphorical expression vs. conceptual metaphor: a conceptual metaphor is the connection established between two domains, which transfers information from one to the other. It is this mental schema of two domains connected by a number of projections which receives the name of conceptual metaphor. When communication is linguistic, we produce a number of different 'metaphorical expressions', all based on the same metaphor. For instance, time is an abstract domain, so we need another domain in order to conceptualize it; one of them is the domain of money. This might lead to the expression 'time is money'. Other expressions are: 'I have no time to waste', 'can I steal two minutes of your time?'. These are not different metaphors; they are different metaphorical expressions of one conceptual metaphor: TIME IS MONEY.

7.3 Types of metaphor.

7.3.1 Depending on the structure of the domains: metaphors can be classified according to whether the domains involved have a lower or higher degree of inner structure. A. IMAGE METAPHORS: image metaphors are the simplest type of metaphor; the external appearance of an object is connected to the form of another object. For instance, ITALY IS A BOOT; another example is the computer mouse. B. SPATIAL METAPHORS: also known as orientational metaphors. The source domain corresponds to a spatial domain such as VERTICALITY (up/down) or PROXIMITY (near/far), and the target domain corresponds to a scalar concept such as HAPPINESS (happy/sad) or QUANTITY (more/less). For instance, in the metaphor MORE IS UP, verticality helps us to structure the domain of quantity. C. CONCEPTUAL METAPHORS: two different and complex domains are put into contact. For instance, LOVE IS A JOURNEY, which is behind metaphorical expressions such as: 'he wanted to go too fast', 'look how far we've come', 'the relationship isn't going anywhere'.

7.3.2 Depending on complexity criteria: Grady introduced the distinction between 'primary' and 'complex' metaphors: complex metaphors are those formed by simpler metaphors, while primary metaphors function as 'atomic' elements that can be combined to form complex metaphors.
We acquire hundreds of these primary metaphors during our earliest years. For instance, MORE IS UP is a primary metaphor, because it cannot be decomposed into simpler metaphors, and because it emerges from the correlation between two domains that we experience bodily. Another example is AFFECTION IS WARMTH. On the other hand, a metaphor such as ANGER IS A HOT FLUID IN A CONTAINER, responsible for expressions such as 'you're making my blood boil', is made of a combination of simpler metaphors: 'emotions are substances', 'our body is a container', and 'intensity is heat'.

7.4 Empirical evidence for the existence of metaphorical projections: conceptual metaphors were postulated as mental patterns able to organize a large amount of textual material. However, there is an often-quoted problem with this explanation: the problem of circularity. There is a certain scepticism about the psychological existence of these metaphorical patterns, and the main reason is the method used to propose them: linguistic evidence. The circular argument can be constructed as follows: I. there are many similar expressions; II. they can be organized by source and target domain; III. the resulting mental metaphor explains the presence of these sentences; IV. we know that this metaphor exists at a mental level of organization because there are many similar expressions (back to I). The solution to escape this vicious circle is to look outside language: non-linguistic expressions, gesture studies, or psycholinguistic studies.

7.5 Where do metaphors come from? Three possibilities have been proposed in the literature: I. embodied experience; II. our cultural systems; and III. language itself. Another basic notion from our embodied experience that serves as a sort of 'bridge' linking source and target domain is that of the image schema. Image schemas are very abstract and basic internal structures that are derived from our interaction with the world. An example is the container image schema, which corresponds to a very basic structure with an interior, an exterior, a boundary, and an entity that moves in or out of it. This structure is abstracted from experiences such as going in or out of houses or rooms, getting out of your car, etc.

7.6 What is a conceptual metonymy? Metonymies are just as ubiquitous as metaphors; many of the expressions we use in our colloquial language are based on metonymies. They are so normal and frequent that it is sometimes hard to think of them as 'figurative language'. For example: a. 'I'm parked outside' (it is the car that is parked outside, not the speaker), and b. 'buses are on strike' (the workers are on strike, not the buses). Recent studies ascribe metonymy to the realm of cognition, like metaphor. Basically, in this view of metonymy one conceptual entity, termed the vehicle or reference point, provides mental access to another, named the target. This is possible because both entities enjoy a certain conceptual contiguity within the same domain. For example: 'the ham sandwich just left without paying'. In this case the food is the vehicle that grants access to the target, the person. A difference between metaphor and metonymy: in metaphor two distinct domains are involved (food and ideas, time and money), while in metonymy there is one single domain; both target and vehicle belong to the same domain.

7.7 Applications of conceptual metaphor and metonymy theory. Metonymy: using one entity to refer to another that is related to it.
Depending on the relationship between the VEHICLE and the TARGET (the vehicle provides access to the target), metonymies are classified as: A. PART FOR WHOLE: a portion of an object, place, or concept is used to represent its entirety. When we say that we need some 'good heads' on the project, we are using 'good heads' to refer to 'intelligent people'. B. WHOLE FOR PART: something is named after something of which it is only a part. E.g. 'Obama was the first Afro-American president of America': America is used to refer to a small part of it, the U.S. C. PART FOR PART: one part of the domain activates another part; the ham (sandwich) activates the customer.

8. SENTENTIAL MEANING

8.2 Predicates and arguments: there are two types of expressions. On the one hand, there are words that denote 'individuals', things that are independent and can stand on their own. Thus, the reference of the word 'London' can be understood regardless of any other consideration of time, person, or circumstance; it can be used without being attributed to anything or anyone (arguments). On the other hand, there are words that make reference to 'relations' between entities and cannot be understood except by association with an individual. For instance, 'on' cannot be understood on its own; we need to specify the two entities that are put into contact (predicates). Words that indicate relationships, and are thus inherently dependent on other words, are called predicates, and the independent individuals are called arguments. Predicates are associated with relations and properties; arguments are associated with the individual objects that complete the meaning of predicates. For instance, in 'Sara is eating chocolate', Sara and chocolate are arguments, while 'eating' is the predicate. Linking a predicate to its arguments is known as predication. Verbs are predicates, and nouns are arguments. Each predicate 'needs' or 'selects' a different number of arguments. For example, intransitive verbs like sleep or snore need only one argument, while transitive verbs like eat, break or attack select two arguments. Some verbs, like put, select three arguments. There are also verbs that select no arguments, such as weather verbs like rain or snow. The number of arguments that a verb takes is known as its adicity.

8.3 Semantic roles. A semantic role is the semantic relation that holds between an argument and its predicate. Semantic roles are key elements in establishing the basic information in a sentence: the 'who did what to whom'. Possible relationships between an argument and its predicate are:

1. Agent: the active causer of an action. Normally the agent is human, and therefore agency is connected with will and intentionality. Bart pushed Lisa.
2. Instrument: the thing used by an agent or experiencer in order to do something to a patient or to perceive a content. Instruments exert no action of their own. Voldemort attacked Harry with a killing curse.
3. Stimulus: whatever causes a psychological response in an experiencer; it can be positive or negative. The situation scared me. Bernadette turns Howard on.
4. Patient: the argument that is changed or affected by an action instigated by an agent, or the undergoer of a process. The Rebel Alliance destroyed the Death Star.
5. Experiencer: the sentient being that experiences something; it is aware of the action or state described by the verb but does not control it. Homer saw the boys. Homer likes beer.
6. Content: an idea, state or mental representation perceived. Woody heard the bark of a dog.
7. Beneficiary: the element for (or against) whose benefit the action is performed. I will buy you a beer.
8. Theme: the entity which is located somewhere or that changes place, moving from one place to another. The ring was in Frodo's pocket.
9. Source: the starting point of a trajectory. Jojo left his home.
10. Goal: the endpoint of a trajectory. Sam returned home safely.
11. Path: the trajectory that lies between the source and the goal. Steve walks down the street.
12. Location: where a situation takes place or where an object is located. He was gambling in Las Vegas.

AGENT, INSTRUMENT and STIMULUS function as 'initiators' of actions. PATIENT, EXPERIENCER and BENEFICIARY function as 'logical patients'. THEME, SOURCE, PATH, GOAL and LOCATION are spatial concepts. Another broad distinction can be established within semantic roles: participant roles vs. non-participant roles. Participant roles are more central and answer the question 'who did what to whom'. Non-participant roles are optional and answer questions such as 'why, where, when, how'. Obligatory elements are arguments, while optional elements are adjuncts (non-participant roles).

8.3.1 Jackendoff's two-tier approach. For the assignment of semantic roles, Jackendoff suggested adding phrases like 'deliberately', 'on purpose' or 'in order to' as a test for agenthood: only those entities that can be combined with such phrases can be considered agents. For instance: 'Sara opened the door in order to enter', but not 'the key opened the door in order to enter'. In the case of motion events, especially when we include human participants, there is a conflict: we can think of these participants in different terms. For instance, in 'Iron Man flew to the top of the building', Iron Man is both the agent and the theme. Jackendoff solves this by positing two tiers of roles, one having to do with actions and the other with movement. Example: a. Thor threw his hammer: spatial tier, source (Thor) and theme (the hammer); action tier, agent (Thor) and patient (the hammer).

8.4 Propositions as units of meaning. The union of a predicate with its arguments (and adjuncts) forms a proposition. A proposition is the smallest unit of meaning that can be put in predicate-argument form; propositions are said to capture the abstract, deep, and explicit meanings of sentences. They summarize the ideas expressed in sentences. For instance, sentences like 'Mary bakes a cake', 'A cake is being baked by Mary', 'Mary's baking of a cake', etc. correspond to the proposition (BAKE, MARY, CAKE). The information captured in this propositional form corresponds to the sentence's 'deep meaning' or 'gist': we tend to forget the specific form in which we heard something and remember only the gist. Besides, sentences with fewer propositions are understood more quickly than sentences with more propositions. Nevertheless, propositions are amodal, abstract and arbitrary representations of the meanings of language and fall prey to the same problems as symbolic accounts, including circularity and 'lack of grounding'.

8.5 Linking: semantic role + grammatical function. How are semantic roles expressed linguistically? The most usual way to describe this is by associating them with a given grammatical function. Thus, with a verb such as 'write', the AGENT is normally linked with the grammatical function 'subject', which tells us how to express it linguistically. In most cases agents are Subjects, Patients are Direct Objects, and Instruments are Obliques, but this is not always the case.
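To make the predicate-argument notation and the default role-to-function linking just described more concrete, here is a toy Python sketch; the Proposition class and the linking table are hypothetical illustrations, not a formalism taken from the source.

from dataclasses import dataclass

@dataclass
class Proposition:
    # A predicate together with its arguments, labelled by semantic role.
    predicate: str
    roles: dict

    def tuple_form(self):
        # The flat (PREDICATE, ARG1, ARG2, ...) notation used for the 'gist'.
        return (self.predicate.upper(), *(a.upper() for a in self.roles.values()))

# 'Mary bakes a cake', 'A cake is being baked by Mary', 'Mary's baking of
# a cake' all share the same gist:
bake = Proposition("bake", {"agent": "Mary", "patient": "cake"})
print(bake.tuple_form())              # ('BAKE', 'MARY', 'CAKE')

# Default linking of roles to grammatical functions, as stated above;
# the real mapping is many-to-many, so this table is only a default.
DEFAULT_LINKING = {"agent": "subject",
                   "patient": "direct object",
                   "instrument": "oblique"}

for role, argument in bake.roles.items():
    print(argument, "->", DEFAULT_LINKING.get(role, "unspecified"))

Because the real mapping between roles and grammatical functions is many-to-many, a fixed table like this is not enough; the paragraphs below introduce Linking Theory and the Subject Hierarchy as ways of deciding which role surfaces as the subject.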
8.5 Linking: semantic role + grammatical function
How are semantic roles expressed linguistically? The most usual way to describe this is by associating them with a given grammatical function. Thus, for a verb such as 'write', the AGENT is normally linked with the grammatical function 'subject', which tells us how to express it linguistically. In most cases agents are Subjects, Patients are Direct Objects and Instruments are Obliques, but this is not always the case. For example, the same grammatical function can be associated with different semantic roles: Table 8.2 shows one grammatical function, Subject, linked to several semantic roles (agent, experiencer, patient, theme, instrument, stimulus). At the same time, the same semantic role can be associated with different grammatical functions. Since the mapping between semantic roles and grammatical functions is not one-to-one but many-to-many, the precise way in which the correct correspondences are chosen must be specified; this problem is addressed by Linking Theory. One popular way to predict how a given semantic role will surface syntactically is to use what has been called the SUBJECT HIERARCHY. What this hierarchy does is tell you which semantic role will be linked to subject: that grammatical function is assigned to the semantic role that appears furthest to the left on the hierarchy.
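A toy version of subject-hierarchy linking might look like the sketch below. The notes mention the hierarchy but do not spell out its exact ordering, so the ordering used here is only an assumption for illustration; choose_subject simply returns the leftmost role actually present in the clause.

```python
# Illustrative sketch: link semantic roles to the Subject function using a
# subject hierarchy. The ordering below is assumed, not taken from the notes.
SUBJECT_HIERARCHY = ["AGENT", "EXPERIENCER", "INSTRUMENT", "THEME", "PATIENT"]

def choose_subject(roles):
    """Return the role linked to Subject: the leftmost role on the hierarchy
    that is present in the clause."""
    for role in SUBJECT_HIERARCHY:
        if role in roles:
            return role
    return None

# 'Bart pushed Lisa': an AGENT is present, so it surfaces as Subject.
print(choose_subject({"AGENT": "Bart", "PATIENT": "Lisa"}))      # AGENT
# 'The key opened the door': no AGENT, so the INSTRUMENT is promoted.
print(choose_subject({"INSTRUMENT": "key", "PATIENT": "door"}))  # INSTRUMENT
```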
8.7 Commercial transaction scene frame
In a commercial transaction scene we have a person who has something wanted by someone else, and money is involved. Different verbs can be used to activate this frame, and each of them imposes a slightly different perspective. E.g. with BUY, the buyer is the subject, the goods are the object, the seller is introduced by 'from' and the money by 'for'.
9. DISCOURSE MEANING AND PRAGMATICS
9.1 Discourse, context, and use
Language is based on our capacity for engaging in collaborative activities: language is a cooperative activity, and meaning lies not in the speaker's words alone but in the interaction between speaker and hearer ('what words mean is a matter of what people mean by them').
According to the decoding view, when speakers want to convey a message, they encode it using a linguistic code and then produce the appropriate vocal signals; the addressee hears these signals and, using the same linguistic code, 'decodes' them back into the original message. This is not the way language works.
In the inferential view of language, linguistic material plays only a partial role in the whole process of meaning construction. Language is just a cue to the complete meaning to be recovered. It is the hearer who, using his knowledge of the situation (context), must make an educated guess about what the other person wants to communicate. The hearer will thus add whichever inferences are needed in order to cross the gap between what is said and what is meant. The overarching goal is the recognition of the communicative intention of the speaker: speaker and hearer must have shared knowledge and use it to 'guess' what the other person might be thinking. Context permeates all semantic phenomena, to the extent that there is no agreement on what type of semantic information is 'context-free'. In the inferential view, word meanings are always dynamic, context-dependent and ad hoc.
9.2 Spatial-temporal context: deixis
A basic communicative event typically takes place in some place and at some given time. Certain linguistic elements make reference to this immediate context: these elements are known as deictic (deixis means 'to point out'). These expressions normally take as a reference point the act of speaking and derive considerations of space and time from this focal point. The main deictic categories involve information about three elements of this spatio-temporal scene: the physical scene where the communication is taking place, the time at which it is taking place, and the persons involved in it.
9.2.1 Space deixis
The speaker is the most important element in deixis and is called the deictic centre. Thus, we take into account the place where the speaker is, the time at which the speaker speaks, who the speaker is, and to whom s/he is speaking. In English there are two notions related to space that we can point out: distance from the speaker (near or far) and direction (towards or away from the speaker). Spatial deixis shows up in different types of words:
A. Adverbs: here, there, left, right… 'Here' could be paraphrased as 'close to the speaker' and 'there' as 'away from the speaker'. (Additional words: yonder, thither, and thence.)
B. Demonstratives: this/these, that/those. These elements serve to point out locations which are either close to the speaker (this/these) or away from the speaker (that/those).
C. Verbs: come, go. Verbs can indicate spatial deixis by encoding whether a given movement is produced towards or away from the speaker: 'come' means 'movement towards the place where the speaker is' and 'go' means 'movement away from the speaker'.
This description can be elaborated further, because sometimes the reference point can be moved away from the place where the utterance takes place. For instance, 'Will you come to my party?' takes as its focal point the future location of the party, even if the speaker is not there yet. This is known as deictic projection. Displacement towards the past is also possible: we can say 'Yesterday, while I was in my office, Jane came to say hello'. Sometimes 'come' involves not just the place where the speaker is but also the place where the hearer is: the previous sentence can be uttered if the speaker is at the office at the moment of speaking, if the hearer is at the office, or if either the speaker or the hearer was in the office the day before. Deictic projection is also used in story-telling: we can easily switch the deictic centre and locate ourselves inside the story being told ('I met this girl at the bar, and she said "Hey girl, can I get you some coffee?"'). Systems of spatial deixis can also be projected onto other domains; this is what happens in discourse or textual deixis. For instance, in 'Here we find again the example seen in chapter 1' we are using deictic spatial terms to locate textual material.
9.2.2 Time deixis
There are different ways of expressing time deixis in English, that is, of locating some event with respect to the deictic centre created by the moment of speaking. There are words or expressions that are only used in a temporal context: 'now, later, then, tomorrow, soon, currently'. Other expressions involve measurements of time used with demonstratives: 'this month, next year, two centuries ago'. Grammatical tense itself is deictic, since notions such as present, past or future depend on the moment of speaking. We can also use temporal deixis to indicate distance from reality: to indicate that something is impossible or unreal, we use the subjunctive mood, which coincides with the past tense ('If I were a rich man').
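To tie 9.2.1 and 9.2.2 together, here is a toy sketch (my own, with invented helper names and dates) of the basic point that deictic expressions only get a referent relative to the deictic centre: the speaker's location and the moment of speaking.

```python
from datetime import date, timedelta

# Toy sketch: a few space and time deictics resolved against the deictic
# centre (where the speaker is and when the utterance happens).
def resolve(expression, speaker_place, speech_date):
    """Return the referent of a deictic expression for a given centre."""
    table = {
        "here": speaker_place,                        # close to the speaker
        "now": speech_date,                           # the moment of speaking
        "yesterday": speech_date - timedelta(days=1),
        "tomorrow": speech_date + timedelta(days=1),
    }
    return table[expression]

# The same words pick out different referents once the centre changes.
print(resolve("here", "London", date(2024, 5, 1)))      # London
print(resolve("tomorrow", "London", date(2024, 5, 1)))  # 2024-05-02
print(resolve("here", "Madrid", date(2024, 5, 1)))      # Madrid
```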
9.3 Discourse/text context
We can also use as context whatever has already been said in a communicative event (i.e. the previous discourse). This sometimes receives a different, specific name: cotext.
9.3.1 Anaphor and cataphor
Anaphor (pointing back) is a way of making reference to something already mentioned without having to repeat the same word(s); personal pronouns are usually used for this: 'Mary Poppins was a big hit in the 60s. She won everybody's heart with her umbrella'. There are very complicated linguistic analyses that try to show how syntax constrains the range of possible co-reference between an anaphor and the entity it refers to, which is called its antecedent. Other theories include a wider range of possible strategies to link an anaphor with its antecedent. Cataphor (pointing forward) is a related phenomenon, much less frequent than anaphor: it refers to the case in which we have to look for the reference of a linguistic item in the forthcoming discourse: 'It bugs me that you don't want to go to the cinema tonight'.
9.3.2 Ellipsis: read
9.4 Shared/common knowledge as context
This shared knowledge is called 'common ground'. Among the sources of this common ground is the fact that we are members of a community.
9.5 Communicative intention: read
9.6 Grice's cooperative principle
The philosopher Paul Grice identified a general Cooperative Principle: the assumption, held by the participants in a communicative exchange, that both are trying to be cooperative in their utterances. He distinguished four maxims:
A. The maxim of Quality: be truthful. Do not say what you believe is false; do not say that for which you do not have adequate evidence.
B. The maxim of Quantity: be informative. Make your contribution as informative as is required (no more, no less).
C. The maxim of Relation: be relevant. Make your contribution relevant to the topic of the exchange. This maxim is singled out as the basis for Relevance Theory: everything we say carries with it a presumption of its own relevance.
D. The maxim of Manner: be clear. Avoid ambiguity and obscurity; be brief and orderly.
(Side notes in the source: these maxims also underlie linguistic strategies for courtesy, and implicatures attached to a single word are conventional implicatures.)
9.7 Speech acts
The intentions of speakers can be classified into different types or categories, and this is where speech act theory comes in: the theory of speech acts examines the intentions of speakers. The theory started in the fifties with the philosopher John Austin. Austin referred to performative utterances as those cases that create some change of state in the world. The circumstances that have to be present for a given speech act to be properly performed and recognized are known as felicity conditions. In speech act theory, three different but related acts can be recognized in the production of any utterance:
1. Locutionary act: whenever we produce a meaningful linguistic expression, we are performing a locutionary act (the words themselves).
2. Illocutionary act: the communicative force (the speaker's intention) when producing an utterance. The same sentence can be uttered with different communicative intentions: 'I'll be back' can be a promise, a threat…
3. Perlocutionary act: the effect that we create with an utterance. For example, in order to use the verb 'convince' properly, somebody's words have to have an effect on the hearer: the hearer has to believe you and change his/her mental state accordingly.
A CLASSIC WAY OF CLASSIFYING SPEECH ACTS (Searle and Vanderveken):
A. Declaratives: these speech acts bring about a change in the world via their utterance. The relevant felicity conditions must be met, especially the special institutional role of the speaker and a specific context. 'Referee: you're out.'
B. Assertives: these speech acts state what the speaker believes to be the case or not. Statements of fact, assertions, descriptions and conclusions are all examples of speakers' beliefs about the state of the world. 'The earth is flat.'
C. Expressives: speech acts that state what the speaker feels. They express psychological states and can be statements of pleasure, pain, etc. 'I feel so lonely.'
D. Directives: the speech acts that speakers use to get someone else to do something; they are commands, orders, requests, suggestions. 'Don't lose my number.'
E. Commissives: the speech acts that speakers use to commit themselves to some future action. They express what the speaker intends: promises, threats, refusals. 'I'll be back.'
9.7.1 Direct and indirect speech acts
There are three forms of a sentence (declarative, interrogative and imperative) and three more general communicative intentions (statement, question and command). We normally use declaratives to make statements, interrogatives to ask questions and imperatives to give commands. When there is a direct relationship between the sentence form and its function, we have a direct speech act; when there is not, we have an indirect speech act. For instance:
a. A declarative used to make a request: 'It's a bit cold here' (a request to close the window).
b. A declarative used to give a command: 'Officers will wear evening dress'; 'Students will not speak during the class'.
c. An interrogative used to make an assertion or a command: 'Do you think I'm stupid?'; 'Why don't you shut up?'
CONVERSATIONAL IMPLICATURES
Whenever two people hold a conversation, they follow a given code of behaviour that makes the conversation possible: the Co-operative Principle (make your contribution such as is required).
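As a closing sketch (illustrative only; the labels and the classify helper are my own, not part of the notes), the direct/indirect distinction can be pictured as a check of whether the speaker's intended function matches the default function of the sentence form.

```python
# Illustrative sketch: direct vs. indirect speech acts as a (mis)match between
# sentence form and communicative function.
DEFAULT_FUNCTION = {
    "declarative": "statement",
    "interrogative": "question",
    "imperative": "command",
}

def classify(form, intended_function):
    """A speech act is direct when the form's default function matches the
    speaker's intention, and indirect otherwise."""
    return "direct" if DEFAULT_FUNCTION[form] == intended_function else "indirect"

# 'It's a bit cold here': declarative form used to request closing the window.
print(classify("declarative", "request"))  # indirect
# 'Don't lose my number': imperative form used as a command.
print(classify("imperative", "command"))   # direct
```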