An Introduction to English Semantics and Pragmatics (PDF)



Patrick Griffiths and Chris Cummins


Summary

This book provides a comprehensive introduction to the study of meaning in English, covering both semantics, which examines the literal meaning of words and sentences, and pragmatics, which studies meaning in context. It covers topics including sense relations; nouns, adjectives and verbs; tense and aspect; modality; figurative language; and more.

Full Transcript


An Introduction to English Semantics and Pragmatics

Edinburgh Textbooks on the English Language

General Editor: Heinz Giegerich, Professor Emeritus of English Linguistics, University of Edinburgh

Editorial Board: Heinz Giegerich (University of Edinburgh, General Editor), Laurie Bauer (University of Wellington), Olga Fischer (University of Amsterdam), Willem Hollmann (Lancaster University), Marianne Hundt (University of Zurich), Rochelle Lieber (University of New Hampshire), Bettelou Los (University of Edinburgh), Robert McColl Millar (University of Aberdeen), Donka Minkova (UCLA), Edgar Schneider (University of Regensburg), Graeme Trousdale (University of Edinburgh)

Titles in the series include: An Introduction to English Syntax, Second Edition (Jim Miller); An Introduction to English Phonology (April McMahon); An Introduction to English Morphology: Words and Their Structure (Andrew Carstairs-McCarthy); An Introduction to International Varieties of English (Laurie Bauer); An Introduction to Middle English (Jeremy Smith and Simon Horobin); An Introduction to Old English, Revised Edition (Richard Hogg, with Rhona Alcorn); An Introduction to Early Modern English (Terttu Nevalainen); An Introduction to English Semantics and Pragmatics (Patrick Griffiths); An Introduction to English Sociolinguistics (Graeme Trousdale); An Introduction to Late Modern English (Ingrid Tieken-Boon van Ostade); An Introduction to Regional Englishes: Dialect Variation in England (Joan Beal); An Introduction to English Semantics and Pragmatics, Second Edition (Patrick Griffiths and Chris Cummins); An Introduction to English Phonetics, Second Edition (Richard Ogden); An Introduction to English Morphology, Second Edition (Andrew Carstairs-McCarthy); An Introduction to English Phonology, Second Edition (April McMahon); An Introduction to English Semantics and Pragmatics, Third Edition (Patrick Griffiths and Chris Cummins)

Visit the Edinburgh Textbooks on the English Language website at www.edinburghuniversitypress.com/series/ETOTEL

An Introduction to English Semantics and Pragmatics, Third Edition
Patrick Griffiths, revised by Chris Cummins

Edinburgh University Press is one of the leading university presses in the UK. We publish academic books and journals in our selected subject areas across the humanities and social sciences, combining cutting-edge scholarship with high editorial and production values to produce academic works of lasting importance. For more information visit our website: edinburghuniversitypress.com

© in the first edition of this work, Literary Estate of Patrick Griffiths, 2006
© in the revisions and additional third edition material, Chris Cummins, 2023

Cover design & illustration: riverdesignbooks.com

Edinburgh University Press Ltd, The Tun – Holyrood Road, 12(2f) Jackson’s Entry, Edinburgh EH8 8PJ

Typeset in 10.5/12 Janson by Cheshire Typesetting Ltd, Cuddington, Cheshire, and printed and bound in Great Britain

A CIP record for this book is available from the British Library

ISBN 978 1 3995 0460 7 (hardback)
ISBN 978 1 3995 0461 4 (paperback)
ISBN 978 1 3995 0462 1 (webready PDF)
ISBN 978 1 3995 0463 8 (epub)

The right of Patrick Griffiths and Chris Cummins to be identified as the authors of this work has been asserted in accordance with the Copyright, Designs and Patents Act 1988, and the Copyright and Related Rights Regulations 2003 (SI No. 2498).
Contents

List of figures and tables  viii
Preface to the third edition  ix

1 Studying meaning  1
  Overview  1
  1.1 Sentences and utterances  4
  1.2 Types of meaning  5
  1.3 Semantics vs pragmatics  10
  Summary  14
  Exercises  14

2 Sense relations  16
  Overview  16
  2.1 Propositions and entailment  16
  2.2 Compositionality  19
  2.3 Synonymy  21
  2.4 Complementarity, antonymy, converseness and incompatibility  23
  2.5 Hyponymy  26
  Summary  30
  Exercises  30
  Recommendations for reading  31

3 Nouns  32
  Overview  32
  3.1 The has-relation  32
  3.2 Count nouns and mass nouns  40
  Summary  42
  Exercises  42
  Recommendations for reading  43

4 Adjectives  44
  Overview  44
  4.1 Gradability  44
  4.2 Combining adjective meanings with noun meanings  47
  Summary  52
  Exercises  53
  Recommendations for reading  53

5 Verbs  55
  Overview  55
  5.1 Verb types and arguments  55
  5.2 Causative verbs  57
  5.3 Thematic relations  61
  Summary  64
  Exercises  64
  Recommendations for reading  65

6 Tense and aspect  66
  Overview  66
  6.1 Talking about events in time  66
  6.2 Tense  69
  6.3 Aspect  73
  Summary  79
  Exercises  79
  Recommendations for reading  80

7 Modality, scope and quantification  82
  Overview  82
  7.1 Modality  82
  7.2 Semantic scope  87
  7.3 Quantification  90
  Summary  98
  Exercises  98
  Recommendations for reading  100

8 Pragmatic inference  101
  Overview  101
  8.1 Some ways of conveying additional meanings  103
  8.2 The Gricean maxims  106
  8.3 Relevance Theory  112
  8.4 Presuppositions  115
  Summary  119
  Exercises  119
  Recommendations for reading  120

9 Figurative language  121
  Overview  121
  9.1 Literal and figurative usage  121
  9.2 Metaphor  123
  9.3 Metonymy  125
  9.4 Simile  127
  9.5 Irony  128
  9.6 Hyperbole  129
  Summary  131
  Exercises  131
  Recommendations for reading  131

10 Utterances in context  133
  Overview  133
  10.1 Tailoring utterances to the audience  133
  10.2 Definiteness  134
  10.3 Given and new material  136
  10.4 The Question Under Discussion  143
  Summary  145
  Exercises  146
  Recommendations for reading  146

11 Doing things with words  148
  Overview  148
  11.1 Speech acts  148
  11.2 Indicators of speech acts  151
  Summary  159
  Exercises  159
  Recommendations for reading  160

Suggested answers to the exercises  161
References  175
Index  179

List of figures and tables

Figures
2.1 An example of levels of hyponymy  28
2.2 Hyponym senses get successively more detailed as we go down the tree  28
2.3 Part of the hyponym hierarchy of English nouns  29
4.1 The simplest case of an adjective modifying a noun is like the intersection of sets  48
6.1 Time relations among the events described in (6.1). Time runs from left to right: open-ended boxes indicate events with no endpoint suggested  69
7.1 Venn diagrams for the meanings of (7.20a) and (7.20c)  92

Tables
3.1 Distinguishing between count and mass nouns  41
5.1 Examples of causative sentences with an entailment from each  59
5.2 Three kinds of one-clause causative with an entailment from each  61
6.1 Two-part labels for tense–aspect combinations, with examples  69
6.2 The compatibility of some deictic adverbials with past, present and future time  73
6.3 A range of sentences which all have habitual as a possible interpretation  74
10.1 A selection of indefinite and definite forms in English  137

Preface to the third edition

When Laura Williamson from Edinburgh University Press asked me whether I would like to produce a new edition of this textbook, my first reaction was to be flattered. But, of course, what Laura said was rich in possible interpretations.
I still remember being confused by just this kind of utterance at primary school, and failing to understand that the teacher’s “Would you like to sit over there?” was actually an instruction rather than just a slightly odd question. With more experience of language under my belt, naturally I under- stood Laura’s question as an indirect request. But, inevitably, I also won- dered whether there was a shade of meaning along the lines of ‘we need a new edition, because there is so much wrong with the existing edition’. I still don’t know whether that meaning was intended – I didn’t dare ask. But the opportunity to go back and revise a text like this is always welcome, and of course I found all sorts of places where what struck me as crystal clear six years earlier strikes me as a bit murky now. So I’ve welcomed the chance to try to filter out some of these issues. I’ve also attempted to plug a few gaps in coverage, while trying not to go on at unwarranted length. This book retains much of the structure of Patrick Griffiths’s original edition, and draws on many of the examples he introduced. Therefore, much of the credit goes to him, and to the people to whom he gener- ously gave credit, notably Heinz Giegerich, Anthony Warner, Kenji Ueda, and Janet and Jane Griffiths. For my part, I’d also like to thank Ronnie Cann for his detailed comments on the second edition and Hannah Rohde for her comments on and support with the third edition. And, of course, thanks go to the team at Edinburgh University Press for their encouragement, professionalism and, above all, patience. ix 1 Studying meaning Overview This book is about how the English language allows people to convey meanings. As the title suggests, semantics and pragmatics are the two main branches of the linguistic study of meaning. Semantics is the study of the relationship between units of language and their meaning. Pragmatics is concerned with how we use language in communica- tion, and so it involves the interaction of semantic knowledge with our knowledge of the world, including the contexts in which we say things. Explanations of terms Numbers in bold print in the index point to the pages where technical terms, such as semantics and pragmatics in the paragraph above, are explained. To illustrate some of the differences between semantics and prag- matics, let’s consider example (1.1). (1.1) That’s what I’m talking about! Language enables us to communicate about the world. This works because there are conventional links between expressions in language and aspects of the world (real or imagined) that language users talk and write about: things, activities, and so on. We sometimes use the word denote to describe these links: I denotes the speaker, talking denotes an activity, and so on. An expression is any meaningful language unit or sequence of meaningful units, such as a sentence, a phrase, a word, or a meaningful part of a word (such as the s of cats, which denotes plurality, as opposed to the s of is, which doesn’t have its own separate meaning). 1 2 an introduction to english semantics and pr agmatics In (1.1), that denotes something that is assumed to be obvious to the hearer in the current context of utterance: it might be an object that is physically present, or it might be a linguistic expression that has just been uttered. The expression what I’m talking about also denotes some- thing, specifically the topic that the speaker has been discussing. 
And the expression ’s, in (1.1), denotes a relationship of identity between those two denotations. So one possible interpretation of (1.1) is that something which is obvious to the hearer at the moment (that) corre- sponds (’s) to the topic that the speaker was previously addressing (what I’m talking about). But actually there’s another possible interpretation for (1.1), because it also functions as a fixed idiomatic expression meaning something like ‘That’s excellent’ or ‘That’s impressive’ or ‘That’s great news’. In this case, that would still denote something that is obvious to the hearer in the moment, what I’m talking about would denote ‘excel- lent’, and ’s would denote a relationship between these two things which attaches the quality ‘excellent’ to the denotation of that (often called a predication). The important point here is that there is an interplay between seman- tics and pragmatics. Semantics provides a set of possible meanings, and pragmatics is concerned with the choice among these possibilities. Language users can take account of context, and use general knowledge about the way the world works, to build interpretations on this semantic foundation. In understanding (1.1), we use the context of the utterance not only to understand what the speaker means by that, but also to figure out which of the possible senses of the whole utterance is relevant: the one in which they mean that that matches a previous topic of discussion, or the one in which they just mean that that is excellent. This view fits with a way of thinking about communication that was pioneered by the philosopher H. P. Grice in the mid-twentieth century (see Grice 1989 for a collection of relevant work) and which has been highly influential within linguistics. According to this view, human communication is not just a matter of encoding a signal, sending it, and decoding it at the other end. Rather, it requires active collabora- tion on the part of the addressee or hearer (the person the message is directed to). The hearer’s task is to try to guess what the speaker intends to convey, and the message has been communicated when (and only when) the speaker’s intention has been recognised. (By conven- tion, I’ll refer to the ‘speaker’ and the ‘hearer’ whether we’re actually talking about speech or written communication.) The speaker’s task is to judge what needs to be said (or written) in order for this process to work: that is, so that it’s possible for the hearer to recognise the speaker’s intention. studying me aning 3 This way of looking at communication has several important consequences: The same string of words can convey different messages, because – depending on the context – there will be different clues to help the hearer recognise the speaker’s intention. Similarly, different strings of words can be used to convey the same message, because there may be lots of different ways for the speaker to give clues to their intention. Sometimes a great deal can be communicated with very few words, because of the active participation of the hearer in recovering the speaker’s meaning. Mistakes are possible: the hearer might arrive at the wrong inter- pretation. If a speaker suspects that this has happened, they may try to repair the communication by saying more. 
This is easy in face- to-face interactions (where the speaker can monitor the hearer’s reaction for clues that they might have been misunderstood), harder in live telephone conversations or text chats (because of the lack of feedback), harder still in emails or letters, and essentially impossible in books (where the writer may never find out whether the reader understood them as intended). The rest of this chapter introduces some other concepts that are important to the study of linguistic meaning, and indicates which later chapters develop them further. It will define a few more technical terms, but (hopefully) just enough to give you a reasonable initial grasp of semantics and pragmatics and set you up for reading introductory books in this area. In everyday life, competent users of a language generally don’t give a lot of thought to the details of what they’re doing and how it works. Linguistics researchers do: they generally operate on the assumption that there are interesting things to be learned from those details. This can appear a little obsessive at times, like in the discussion of (1.1) – why should we care what that, on its own, means? And does it make a differ- ence that ’s does a slightly different thing in the two possible interpreta- tions? But the general goal of linguistic analysis is often to try to bring all the unconscious knowledge that we have about language to con- scious awareness. It turns out that close inspection of bits of language and instances of usage – even quite ordinary ones – can sometimes give us new insights into how language really works and, by extension, how we think. 4 an introduction to english semantics and pr agmatics 1.1 Sentences and utterances Just as we distinguish semantics and pragmatics, we distinguish sentences and utterances. Utterances are the things that have meaning in our immediate experience as language users. (1.2) presents three examples. (1.2) a. I agree with you. b. The departments. c. All that glisters is not gold. Utterances are the raw data for our linguistic analysis. Just as we use the term “speaker” to mean the sender of a particular message, whether that’s in speech or writing, we use the term “utterance” to refer to that message (or a part of it), regardless of whether or not it is actually spoken out loud. Each utterance is unique, having been produced by a particular speaker in a particular situation, and can never precisely be repeated. Even when someone “says the same thing twice”, by repeat- ing themselves, the two utterances differ in potentially important ways because the context has changed (it may be relevant that the previous utterance was made, for instance). It’s easy to see how the content of what a speaker means by saying (1.2a) depends crucially on the context: after all, the speaker is more likely to mean that they agree with the hearer about the particular issue under discussion at the moment than they are to mean that they always or generally agree with the hearer. Notation When it matters, I use “” double quotes for utterances, ‘’ single quotes for meanings, and italics for sentences and words that are being considered in the abstract. I also use single quotes when quoting other authors, and double quotes as “scare quotes” around expressions that are not strictly accurate but potentially useful approximations. The abstract linguistic object on which an utterance is based is a s­ entence. 
For instance, if (1.2b) is uttered in response to the question “Who is responsible for lecture scheduling?”, we could think of (1.2b) as being based on the sentence The departments are responsible for lecture sched- uling. If it had been uttered in response to some other question, it would likely correspond to a different sentence, and bear a different meaning. In the case of the utterance of the proverb (1.2c), we could argue that the underlying sentence is straightforwardly All that glisters is not gold. We studying me aning 5 don’t need to rely on context to figure this out. However, that sentence is itself ambiguous: it has two possible meanings, ‘Not everything that glisters is gold’ or ‘Nothing that glisters is gold’. Sometimes a speaker will signal which interpretation is correct – in this case, for instance, emphasising not might hint at the ‘not everything’ ­interpretation – but often things are not so clear-cut. (We’ll see more examples like this in Section 7.3.5.) However, even if the speaker is not giving us a clear indi- cation of which meaning to choose, context might still help us under- stand which one is more likely to be appropriate. In the case of this proverb, the sense that is usually relevant is the weaker ‘not everything’ meaning – the speaker is more likely to want to convey the message ‘things may not be what they appear to be’ than to convey the message ‘things are never what they appear to be’. 1.2 Types of meaning When we think about meanings, there are several slightly different ideas that we might want to distinguish, among them speaker meaning, utterance meaning, and sentence meaning. We can think of speaker meaning as the meaning that the speaker intends to convey when they produce an utterance. Assuming that, as hearers, we’re interested in figuring out what people are trying to tell us – which is probably for our own good, in general – we are constantly having to make informed guesses about speaker meaning. Speaker meaning, construed this way, is a private thing: only the speaker really knows what they meant to convey. Indeed, a speaker may not wish to admit that they intended to convey certain mean- ings. Furthermore, a speaker might not be successful in conveying a ­particular meaning – or, to put it another way, the hearer might not be successful in recovering the meaning that the speaker intended. So we might want to distinguish speaker meaning from utterance meaning, which we could think of as the meaning that an utterance could be understood as conveying when interpreted by people who know the language, are aware of the context, and have whatever background knowledge could reasonably be assumed by the speaker. Utterance meaning is, in this sense, a necessary fiction or idealisation that linguists doing semantics and pragmatics have to work with. It relies on the intuitions that we have as language users about what would be likely to happen, communicatively, as a result of a particular utterance being made under a particular set of circumstances. Because utterances are instances of sentences in use, an important first step towards understanding utterance meaning is understanding 6 an introduction to english semantics and pr agmatics sentence meaning. I’ll take this to be the meaning that people familiar with the language can agree on for sentences considered in isolation. This is a good place to start because language users have readily acces- sible intuitions about sentences. 
I appealed to a shared intuition about sentence meaning when I said that (1.2c) was ambiguous. That was a relatively subtle example, though: lots of sentences are much more obviously ambiguous, such as (1.3a), which could mean (1.3b) or (1.3c), or (1.4a), which could mean (1.4b) or (1.4c). (1.3) a. We arranged to meet yesterday. b. Yesterday, we arranged that we would meet (on some day I am not specifying here). c. We arranged a meeting that was to take place yesterday. (1.4) a. Jane saw the guy with binoculars. b. Jane, using binoculars, saw the guy. c. Jane saw the guy who was carrying binoculars. Language users’ access to the meanings of individual words – what we call lexical meaning – is less direct. We can think of the meaning of a word as the contribution it makes to the meanings of sentences in which it appears. Of course, people know the meanings of words in their language in the sense that they know how to use the words, but this knowledge is not immediately available in the form of reliable intui- tions. Speakers of English might be willing to agree that strong means the same as powerful, or that finish means the same as stop; but these judge- ments would be at least partly wrong, as shown when we compare (1.5a) with (1.5b), and (1.5c) with (1.5d). (1.5) a. This cardboard box is strong. b. ?This cardboard box is powerful. c. Mavis stopped writing the assignment yesterday, but she hasn’t finished writing it yet. d. *Mavis finished writing the assignment yesterday, but she hasn’t stopped writing it yet. Notation Asterisks, like the one at the beginning of (1.5d), are used in semantics to indicate that an example is seriously problematic as far as meaning goes – in this case, the sentence is a contradiction. Question marks, like the one at the beginning of (1.5b), are used to signal less serious but still noticeable oddness of meaning. studying me aning 7 From the above examples, we might conclude that finishing is a special kind of stopping, specifically ‘stopping after the goal has been reached’, and that strong can mean either ‘durable’ or ‘powerful’ (among other possibilities), only one of which is applicable to cardboard boxes. 1.2.1 Denotation, sense, reference and deixis Earlier in the chapter I said that expressions in a language – sentences, words, and so on – denote aspects of the world. The denotation of an expression is whatever it denotes. For many words, their denotation is a large class of things: the noun thing itself would be an extreme example. The links between language and the world are what makes it so ­indispensable – we can use language to talk about things in the world. That being the case, it is tempting to think that the meaning of an expression is just its denotation. If I wanted to explain what window meant, I might be able to do so by uttering the word and pointing to a window. And in early childhood, our first words are probably learnt through just such a process of live demonstration and pointing, known as ostension. However, this is not plausible as a general explanation of how meaning is acquired, for several reasons including the following: After early childhood, we usually acquire word meanings through the use of language rather than the use of ostension (“A sash window is a window which you can open by sliding it upwards”). Ostension isn’t a very good way of specifying what we’re talking about. 
If I just said window while pointing at the window, how would the hearer know whether I meant to label the window itself, or something I could see through it, or indeed some other property that it had (‘made of glass’, ‘transparent’, etc.)? And how would the hearer know whether I meant to refer to this particular window, or windows in general? So, in practice, when we do use ostension, it is often accompanied by explanatory utterances. There are all kinds of abstract, non-existent and relational denota- tions that cannot conveniently be explained by ostension: consider memory, absence, yeti, or instead of. If we can’t think of meaning as just a matter of denotation, how can we think about it? One approach is that taken in formal semantics (“formal” because it uses systems of formal logic to set out descriptions of meaning, and theories of how the meanings of expressions relate to the meanings of their parts; see Lappin 2001). In formal semantics, it is important to consider what kind of denotation we are dealing with. Count nouns, such as tree, may be said to denote sets of things; mass 8 an introduction to english semantics and pr agmatics nouns, like honey, denote substances; singular names denote individuals; property words, like purple, denote sets of things (the things that have the property in question); spatial relation words, like in, denote pairs of things that are linked by that spatial relation; simple sentences such as Amsterdam is in Holland denote facts or falsehoods; and so on. What is of interest is the fact that the denotation is what it is – an individual, or a set of things, or a set of pairs of things, and so on. An alternative approach takes sense to be the central concept: this is a slightly elusive idea, but we can think of it as the aspects of the meaning of an expression that give it the denotation it has. Differences in sense therefore lead to differences in denotation. Ambiguous words – of which there are many in English – have multiple possible senses, and consequently might denote different things. Consider (1.6), as part of a job advertisement. (1.6) a. The applicant should list his three most recent employers. b. The applicant should list her three most recent employers. c. The applicant should list their three most recent employers. One sense of his in (1.6a) could be loosely expressed as ‘of the most con- textually salient man’, while an alternative sense could be expressed as ‘of the most contextually salient person’. Given that, in this context, his is clearly supposed to mean ‘of the applicant’, the former sense results in the inference that application is restricted to men, whereas the latter sense does not. Assuming that the advertiser does not wish to suggest that there might be such a restriction, they would probably be better advised to use an alternative formulation, such as (1.6c). The idea of sense as the core of linguistic meaning offers a helpful route into the study of semantics. This book will focus on this approach, presenting it in a version that will hopefully also form a reasonable foundation for anyone who later chooses to learn formal semantics. There are different ways in which we might try to write down “recipes” for the denotations of words. One way of doing this is in terms of sense relations, which are semantic relationships between the senses of expressions. 
The idea here is that, if we can explain the interconnections between words using well-defined sense relations, then it is possible for a person who knows the denotations of some words to develop an understanding of the meanings (senses) in the rest of the system. To take a trivial example, if I know that I like corian- der, and I learn that cilantro means the same as coriander, I know that I like cilantro. This approach meshes well with the observation that we commonly use language to explain meanings. Chapter 2 will start to studying me aning 9 explain the sense relations that can hold between words, and phrases, in a more systematic way. We can usefully distinguish the idea of sense from that of reference. Reference is what speakers do when they use expressions – which we call referring expressions – to pick out particular entities – which we call referents – for their audience. These referents can be of many kinds, including, for instance (with sample referring expressions in parentheses), people (“my students”), things (“the Parthenon Marbles”), times (“midday”), places (“the city centre”) and events (“her party”). Reference is strongly reliant on context. Consider (1.7), where the speaker intends to use Edinburg to refer to the city of that name in Indiana. (1.7) We drove to Edinburg today. The speaker of (1.7) would have to be sure that the hearer knows that they are in Indiana, if the utterance is not to be misunderstood as refer- ring to the Edinburg in Illinois, or the one in Texas, or even Edinburgh in Scotland. But of course there is more context-dependence in (1.7) than just this. We in (1.7) refers to the speaker plus (usually) at least one other person. Similarly, today refers to the day on which the utterance was made. Thus, in order to understand what (1.7) means, we need to know who the speaker is and when they are speaking – which will be trivial face-to-face, but not possible if we just encounter the utterance out of its original context, such as within the pages of this book. And even in the face-to-face encounter it may not be obvious precisely who the speaker means to refer to by we. A similar problem arises with the notice (1.8), once posted on a course bulletin board. (1.8) The first tutorial will be held next week. The notice was posted in week 1 of the academic year, but not dated, and the lecturer forgot to take it down. Some students read it in week 2 and missed the first tutorial because they quite reasonably interpreted next week to mean ‘week 3’. We refer to expressions like this, which have to be interpreted in rela- tion to their context of utterance, as deictic expressions (or instances of deixis). Deixis is pervasive in language, presumably because we can often save effort by appealing to context in this way: it’s often easier to say she than to specify a person’s name, easier to say here than to give a clear description of the place of utterance, easier to say tomorrow than to remember the date, and so on. There are different kinds of deixis, relating to: 10 an introduction to english semantics and pr agmatics time:  now, soon, recently, ago, tomorrow, next week... place:  here, there, two kilometres away, that side, this way, come, bring, upstairs... persons and entities: she, her, hers, he, him, his, they, it, this, that... discourse itself: this sentence, the next paragraph, that’s what she said, this is true... Our semantic knowledge of the meanings of deictic expressions guides us as to how we should interpret them in context. 
As always, where context is concerned, these interpretations will be guesses rather than certainties: perhaps a speaker who points to an object and says this means that specific object, but perhaps they mean some property of it, or perhaps they are referring to their hand or the pointing gesture itself. Much of language is in some sense deictic: tense, which will be discussed in Chapter 6, can also be thought of this way. Reference in general is an important topic that will recur in many of the following chapters, particularly Chapter 10. 1.3 Semantics vs pragmatics As we’ve seen, the essential difference between sentences and utter- ances is that sentences are abstract, and not tied to contexts, whereas utterances are identified by their contexts. This is also an important way of distinguishing between semantics and pragmatics. If you are dealing with meaning and there is no context to consider, then you are doing semantics, but if there is a context to consider, then you are probably engaged in pragmatics. (As we shall see later in this section, there is a bit of a grey area as to whether context can sometimes intrude into semantics.) In the terms we’ve discussed so far, pragmatics is the study of utterance meaning, while semantics is the study of sentence meaning and word meaning. As an example of how semantics and pragmatics relate to one another, consider how we might interpret (1.9). (1.9) The next bus goes to Cramond. Considering this as a sentence, we can think about its sentence meaning, drawing on the semantic information that we have from our knowledge of the language. Anyone who knows English well can explain various features of the meaning of (1.9): it means that a bus goes to Cramond, and that one bus that does this is, in some sense, the ‘next bus’ (perhaps the adjacent one physically, but more usually the next one to arrive). Also, more subtly, goes to Cramond can be understood either to be making studying me aning 11 a prediction about what the next bus will do, or stating a generalisation about where that bus habitually goes. These meanings are available without considering who might be saying or writing the words, or when or where they are doing so: essentially, no context is involved. Hence, their study falls within the domain of semantics. By contrast, if we consider (1.9) as an utterance that takes place in a particular context, we can derive a richer interpretation that goes beyond the sentence meaning (which we will sometimes refer to as the literal meaning). Suppose that a passenger steps onto a bus and asks the driver whether this bus goes to Cramond, and the driver replies by uttering (1.9). Assuming that the driver is being cooperative, we can interpret (1.9) as conveying the answer ‘no’. This understanding relies on us trying to figure out why, given the contextual and back- ground information, the driver produced the utterance (1.9). The literal meaning of (1.9) is obviously relevant to this process, but on this occasion it doesn’t relate closely to the meaning that we end up deriv- ing: whether or not the next bus goes to Cramond doesn’t really tell us anything about whether this bus does so, and the utterance of (1.9) doesn’t generally convey the meaning ‘no’. So the interpretation of (1.9) as ‘no’, in this case, is a contextually driven additional meaning that goes beyond what was literally stated – a type of meaning that we call an implicature. And because it relies on context, the study of implicature falls within the domain of pragmatics. 
There are also forms of meaning which do not fall very clearly within the scope of semantics or that of pragmatics. In (1.9), the ques- tion of which bus actually is the next bus is dependent on the time and place at which (1.9) is uttered. We have already seen that the resolu- tion of deictic expressions depends on context in this way. In order to understand what the speaker of (1.9) is committing to, we therefore need to figure out a complete interpretation of the utterance, by using contextual information and world knowledge to work out what is being referred to (and how to understand potentially ambiguous expressions, like next in this case). The basic interpretation of a sentence with these details spelled out is sometimes termed an explicature. We could think of explicature as part of pragmatics because it is context-dependent, or part of semantics because it is essential to the meaning of the utterance in a way that implicature is not. I will try to sidestep that theoretical debate in this book, although the notion of explicature will be used in the discussion of figurative language in Chapter 9. 12 an introduction to english semantics and pr agmatics 1.3.1 A first outline of semantics Semantics, the study of word meaning and sentence meaning abstracted away from contexts of use, is primarily a descriptive subject. It is an attempt to describe and understand the nature of the knowledge about meaning that people have as a consequence of knowing a language. It is not, however, prescriptive: that is to say, it is not about defining what words “ought to mean” or pressuring speakers into abandoning some meanings and adopting others. Nor is it about etymology, because you can know a language perfectly well without knowing its history. It may be fascinating to explore how different meanings have become associated with historically related forms – arms, armour, army, armada, ­armadillo, etc. – but that knowledge is not essential for understanding or using present-day English, so it is not covered in this book. Nor do we focus on semantic and pragmatic change, for the same reason – it is interesting that the meanings of words change over time, and that, for instance, silly originally meant ‘blessed’ and subsequently ‘innocent’, but you do not need to know this in order to understand what it means now. Having said all that, studying semantics and pragmatics is helpful in understanding some of the processes involved in historical meaning change: for instance, meanings that were once context-dependent can end up becoming part of the semantics of a word. (A case in point: armada in Spanish just meant ‘an armed force’, and its use in English to mean specifically ‘a fleet of warships’ arose because Spanish Armada denoted an armed force from Spain that happened to comprise a fleet of warships. We might want to appeal to the notion of compositionality, introduced in Section 2.2, and ideas about adjective meanings, discussed in Chapter 4, to explain how this happened.) The process of giving a semantic description of language knowledge is also different from the encyclopedia-writer’s task of cataloguing general knowledge. The words tangerine and clementine illustrate a dis- tinction that may not be part of our knowledge of English: although an expert will be able to tell these apart, most users of English will not. But our focus in this book will be on the more abstract kinds of semantic and pragmatic knowledge that underpin our ability to use language. 
1.3.2 A first outline of pragmatics Pragmatics is concerned with characterising how we go beyond what was literally said, both in terms of what additional meanings get con- veyed by a speaker and in terms of how the speaker encodes them and the hearer recovers them. A crucial basis for making pragmatic infer- studying me aning 13 ences is the contrast between what might have been uttered and what actually was uttered. Consider (1.10), from the information provided to guests at a B&B. (1.10) Food and Drink. Breakfast is served from 7:30am to 9:00am. There is a fridge in the hallway with soft drinks and snacks. Payment for these is on an honesty basis. As no further information is provided under this heading, we are invited to infer that the establishment does not serve dinner and does not provide alcoholic drinks. However, we cannot be entirely certain about this: perhaps the proprietors simply didn’t want to promise that dinner or alcoholic drinks would be available. These pragmatic inferences do not have the same status as the information that is explicitly asserted: a guest who read (1.10) would have grounds to complain if the fridge in the hallway contained no soft drinks or no snacks, but they would not (usually) have grounds to complain if the fridge also contained beer. Another widely available pragmatic inference, often called a scalar implicature, arises when words can be ordered on a semantic scale, as for example the value-judgements excellent > good > OK. (1.11) A: What was the accommodation like at the camp? B: It was OK. Speaker A can draw an inference from B’s response, because if the accommodation had been better than merely OK, B could have used the word good or indeed excellent to describe it. As B does not do so, A can infer that the accommodation was no better than satisfactory. But, as in (1.10), B is not as committed to the accommodation being ‘no better than OK’ than they are to it being ‘at least OK’. Indeed, if B sounds sur- prised, the inference ‘no better than OK’ may be less readily available: B then might be assumed to mean something like ‘contrary to expecta- tions, it was acceptable, and maybe even better than that’. We’ll revisit this kind of inference in Section 8.2.2. Pragmatic inferences of this kind occur all the time in communica- tion: even though they are really just informed guesses, they are crucial to the smooth functioning of much of our communication. Because they are informed guesses, it is one of their defining features that they can be “cancelled”, in the sense that the speaker can typically deny a pragmatic inference without seeming to contradict themselves. In (1.11), B could continue “In fact, it was pretty good” without being self-contradictory, because what is ‘pretty good’ in this case is also ‘OK’. Pragmatics is the focus of the later chapters of this book, but will also figure in sections of most of the other chapters. 14 an introduction to english semantics and pr agmatics Summary Hearers (including readers) have the task of guessing what speakers (including writers) intend to communicate when they produce utter- ances. If the guess is correct, the speaker has succeeded in conveying the meaning. Pragmatics is about how we interpret utterances – and produce interpretable utterances – taking account of context and back- ground knowledge. Semantics is the study of the context-independent knowledge that users of a language have about the meanings of expres- sions, such as words and sentences. 
Crucially, expressions of language relate to the world outside of language. In this book, we will explore this idea through the notion of sense and of the meaning relations that hold within a language, in ways that later chapters will make clearer. Exercises 1. Here are two sets of words: {arrive, be at/in, leave} and {learn, know, forget}. There is a similarity between these two sets, in how the words relate to one another. Can you see it? Here is a start: someone who is not at a place gets to be there by arriving; what if the person then leaves? Once you have found the similarities between the two sets, answer this follow-up question: was this a semantic or a pragmatic task? 2. A student says to the tutor “How did I do in the exam?” and the tutor replies “You didn’t fail.” What the tutor opted to say allows the student to guess at the sort of grade achieved. Do you think the grade was high or low? How confident are you about this? Briefly justify your answer. In doing this, were you doing semantics or pragmatics? 3. Pick the right lock is an ambiguous sentence. State at least two mean- ings it can have. How many different propositions could be involved? 4. A common myth about the word kangaroo is that it comes from an expression in an Australian language (specifically Guugu Yimithirr) meaning ‘I don’t understand you’. This was supposedly because explorers in the eighteenth century pointed to a kangaroo and asked what it was, and a local replied kangaroo. What does this story tell us about the limits of ostension? And how would we disprove the claim that this is what kangaroo originally meant? 5. An old joke concerns someone reading a sign saying Dogs must be carried on this escalator and having to wait ages for a dog to appear so that they could use the escalator. If that sign really caused people any problems, how could you add a deictic term to it and thus resolve the ambiguity? studying me aning 15 6. Example (1.6a) is potentially problematic as an utterance because, on one interpretation of his, it suggests that only men are welcome to apply for the job. Does (1.6b) run into a similar problem? What about (1.6c)? 2 Sense relations Overview As mentioned in Chapter 1, we generally learn the meanings of our first few words through close encounters with the world, carefully mediated by our caregivers. But once we have a start in language, we learn the meanings of most other words through language itself. This might be through having meanings deliberately explained to us – for instance, we might be told that “tiny means ‘very small’”. It might also be because we draw inferences about meanings based on our knowledge of language. For instance, if we see the title of Gerald Durrell’s book My Family and Other Animals, we might infer from this that people can potentially be classified as a type of animal. In both of these cases, we are relying on the existence of relation- ships between the senses of expressions (words and phrases) within the language that we speak. In the first case, we rely on the fact that the meaning of tiny can be expressed in terms of the meanings of very and small. In the second case, we use our understanding of the meaning of other to spot the existence of a relationship between the meanings of family and animals. One task for semantics is to describe these relation- ships systematically, with the ultimate ambition of identifying exactly what it is that a competent speaker of a language – in this case, English – has to know about the meanings of sentences. 
Putting this together with the speaker’s knowledge about pragmatics – how we use sentences – we would then have a picture of what a speaker knows about linguistic meaning altogether. In this chapter, we make a start on this task, begin- ning in the following subsection with some technical preliminaries. 2.1 Propositions and entailment We need to account for sentence meaning in order to develop explana- tions of utterance meaning, because utterances are sentences put to use. 16 sense rel ations 17 The number of sentences in a human language is potentially infinite, so we cannot analyse sentence meaning just by listing every possible sen- tence and its interpretation (and nor do we learn sentence meanings this way, with the exception of sentences that are fixed idiomatic expres- sions). Both as linguistic scholars and as language learners, we have to generalise in order to discover the principles of sentence meaning. One important generalisation is that different sentences can carry the same meaning, as in (2.1a–c). (2.1) a. Sharks hunt seals. b. Seals are hunted by sharks. c. Seals are prey to sharks. Proposition is a term for the core meaning of a sentence: we could say that (2.1a–c) express the same proposition. Propositions are therefore not tied to particular words or sentences. They have the property that they are either true or false. That is not to say that the speaker or hearer (or anyone else) has to know whether the proposition actually happens to be true or false, as far as the real world is concerned – just that it must, in principle, be either true or false. The sentences in (2.1) are declaratives: that is to say, they follow the sentence pattern on which statements (utterances that explicitly convey factual information) are typically based. It is easy to see that they express propositions, because it is possible to affirm or deny or query the truth of these statements (“Yes, that’s true”, “No, that’s a lie”, “Is that really true?”). By contrast, such responses are not appropriate to utterances such as (2.2a–b), which are based on other sentence patterns. (2.2) a. What’s your name? b. Please help me. Although (2.2a–b) don’t actually express propositions, we could still try to analyse their meaning with reference to propositions. We could say that (2.2a) relates to a proposition of the form ‘the hearer’s name is ___’, and invites the hearer to supply their name to fill in the blank. Similarly, we could say that (2.2b) presents the proposition ‘the hearer will help the speaker’ and indicates that the speaker wants that proposition to be true. But it might be more useful to think about utterances like this in terms of the social actions that speakers are trying to perform when they produce them, as we shall see in Chapter 11 when discussing speech acts. An important relation that holds between propositions is entailment, which we can write as ⇒. We say that one proposition entails another (p ⇒ q, “p entails q”) if the first proposition being true guarantees that the second is also true. Entailment is also called “logical consequence” 18 an introduction to english semantics and pr agmatics and is one of the most fundamental concepts in logic: arguments are logically valid if and only if the starting points (premises) entail the conclusions. Although entailment holds between propositions, we often talk about it holding between sentences – which is fine as long as we make sure that we are considering each sentence with one specific propositional meaning. 
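For readers who like to see definitions made concrete, here is a minimal sketch in Python (an illustration of the idea only, not part of the book's formal apparatus; the situation names and truth assignments are invented for the example). It treats a proposition as the set of possible situations in which it is true: p ⇒ q then amounts to every p-situation also being a q-situation, and 'p & not-q' describes no situation at all.

# Illustrative sketch: propositions modelled as sets of possible situations.
# The situations s1-s4 and the truth assignments are invented for this example.

situations = {"s1", "s2", "s3", "s4"}

# 'Moira has arrived in Edinburgh' is true in s1 and s2;
# 'Moira is in Edinburgh' is true in s1, s2 and s3
# (she may be there without having just arrived).
arrived_in_edinburgh = {"s1", "s2"}
in_edinburgh = {"s1", "s2", "s3"}

def entails(p, q):
    """p entails q iff every situation where p is true is one where q is true."""
    return p <= q  # subset test

def negation(p):
    """not-p is true in exactly the situations where p is false."""
    return situations - p

print(entails(arrived_in_edinburgh, in_edinburgh))    # True: the entailment in (2.3c)
print(arrived_in_edinburgh & negation(in_edinburgh))  # set(): 'p & not-q' holds in no situation

Nothing hangs on the choice of Python here: the point is simply that, on this picture, entailment is set inclusion, which is why 'p & not-q' comes out as necessarily false whenever p ⇒ q.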
The examples in (2.3) illustrate some of these points. (2.3) a. Moira has arrived in Edinburgh. b. Moira is in Edinburgh. c. Moira has arrived in Edinburgh ⇒ Moira is in Edinburgh d. *Moira has arrived in Edinburgh and she is not in Edinburgh. When (2.3a) is true we can be sure that (2.3b) is also true – assuming, that is, that we are talking about the same person called Moira and the same place called Edinburgh in both sentences, as otherwise the propo- sitional meanings won’t be related. If the truth of (2.3a) guarantees the truth of (2.3b), that means that the entailment shown in (2.3c) holds. Another way of stating this is to point out that (2.3d) is a contradiction: given that (2.3a) entails (2.3b), we cannot sensibly affirm (2.3a) and the negation of (2.3b) at the same time. To put this in more general terms, if we have propositions p and q such that p ⇒ q, we know that the complex proposition ‘p & not-q’ is necessarily false. (2.3a) has other entailments, as shown in (2.4). (2.4) a. Moira has arrived in Edinburgh ⇒ Moira is not in Birmingham b. Moira has arrived in Edinburgh ⇒ Moira was travelling to Edinburgh The word arrived is an important contributor to (2.3a) having the entail- ments shown. If lived or been were substituted for arrived, the entailments would be different. If someone were to ask what arrive meant, a sen- tence like (2.3a) could be given as an example, explaining that it means that Moira travelled from somewhere else and is now in Edinburgh. However, the entailments from a sentence depend not only upon the content words in the sentence, like arrive, but also upon their grammati- cal organisation. For instance, the grammatical construction with has, sometimes called the “present perfect” construction, is crucial to the entailment in (2.3c) – if we replaced has with had, this entailment would no longer be valid, as Moira may subsequently have left Edinburgh again. There is more detailed discussion of the present perfect construc- tion, alongside other ways of expressing tense and aspect in English, in Chapter 6. sense rel ations 19 If (2.3a) is understood and accepted as true, then none of its entail- ments need to be put into words. They follow automatically, and can be inferred from (2.3a) by the hearer. It is obviously crucial to success- ful language use for speakers to make sure that the sentences that they utter have the correct entailments. In Chapter 1 I introduced the idea of sense as the aspects of the meaning of an expression that give it the denotation it has. We can think of the sense of a word in terms of the particular entailments that a sentence has as a result of containing that word. Whichever aspects of the word’s meaning are responsible for the sentences having those entailments are its sense. We can think of the entailment in (2.3c) as part of the sense of arrived, for instance: it is crucial to the meaning of arrived that if someone “has arrived” in a place, it follows that they are now “in” that place. 2.2 Compositionality Given the potentially infinite supply of distinct sentences in a lan- guage, semanticists aim to develop a compositional theory of meaning. The principle of compositionality is the idea that the meaning of a complex expression is determined by the meanings of its parts and how those parts are put together. 
The idea that human language is compo- sitional in this sense has a very long history, and is of great importance to linguistics: among other things, it offers a partial explanation of how we can comprehend the meanings of infinitely many different poten- tial sentences, just by knowing the meanings of finitely many different sentence parts (and their combining rules). The meaningful parts of a sentence are clauses, phrases and words, and the meaningful parts of words are morphemes. The idea that the meaning of a complex expression depends on the meaning of the parts and how they are put together is hopefully a rea- sonably intuitive one. It’s comparable to what happens in arithmetic. Several things affect the result of an arithmetical operation, as we see in (2.5): it makes a difference what numbers are involved (2.5a), what operations are performed (2.5b), and – where there are multiple opera- tions – also the order in which the operations take place (2.5c). (2.5) a. 3 + 2 = 5 but 3 + 4 = 7 b. 3 + 2 = 5 but 3 × 2 = 6 c. 3 × (2 + 4) = 18 but (3 × 2) + 4 = 10 The linguistic examples in (2.6) show something similar to what we see in (2.5c). Here we are considering a word that consists of three morphemes, un, lock and able. Our operations are not addition and 20 an introduction to english semantics and pr agmatics ­ ultiplication, but negation (or reversal) performed by the prefix un- m and the formation of “capability” adjectives by the suffix -able. (2.6) a. un(lockable) ‘not able to be locked’ b. (unlock)able ‘able to be unlocked’ As in (2.5c), the brackets in (2.6) indicate the scope of the operations: that is to say, which parts of the representation un- and -able operate on. In (2.6a), un- operates on lockable, whereas ‑able operates only on lock. In (2.6b), un- operates only on lock and -able operates on unlock. However, unlike in arithmetic, there isn’t a default order of precedence, and we don’t use brackets like this in ordinary writing or speech. In practice, unlockable is just ambiguous – and the same goes for various other expressions which take un- as a prefix and -able as a suffix, such as unbendable, undoable and unstickable. Of course, sometimes only one inter- pretation makes sense: as it is not usually possible to unbreak something, unbreakable must be understood as ‘not able to be broken’. In syntax too there can be differences in meaning depending on the order in which operations apply. We saw an example of this in the first chapter, repeated here as (2.7); (2.8) is another straightforward example. (2.7) Jane saw the guy with binoculars. (2.8) I didn’t sleep for two days. Both of these are ambiguous for essentially the same reason as unlockable: we can think of them as involving two possible “bracketings”. On one reading of (2.7), with binoculars can be bracketed together with the guy as a constituent of the sentence: on the other reading, with binoculars modi- fies saw. On one reading of (2.8), for two days is an adjunct to sleep and the sentence expresses the meaning that ‘For two days, the speaker did not sleep’; on the other reading, for two days is a complement to sleep and can be bracketed with it, and the sentence expresses the meaning ‘It is not the case that the speaker slept for two days’. As with unlockable, the ambiguity is not a one-off fact about these particular sentences, but occurs systematically with similar sentences. 
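The role that the order of combination plays here can also be made concrete with a small toy model (again an illustration only, in Python; the glosses it produces are rough paraphrases, not serious semantic representations). It treats the meanings of un- and -able as functions, so that the two bracketings in (2.6) correspond to applying those functions in different orders.

# Toy model of (2.6): morpheme meanings as functions applied in different orders.

def able(verb_meaning):
    """-able maps a verb meaning to an adjective meaning: 'able to be VERB-ed'."""
    return f"able to be {verb_meaning}-ed"

def un(meaning):
    """un- negates an adjective meaning and reverses a verb meaning (simplified)."""
    if meaning.startswith("able"):
        return f"not {meaning}"     # adjectival un-, as in unhappy
    return f"reverse-{meaning}"     # verbal un-, as in untie

reading_a = un(able("lock"))  # un(lockable): 'not able to be lock-ed', i.e. (2.6a)
reading_b = able(un("lock"))  # (unlock)able: 'able to be reverse-lock-ed', i.e. (2.6b)

print(reading_a)
print(reading_b)

The same parts, combined in different orders, yield different meanings: this is the compositional analogue of the arithmetic contrast in (2.5c).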
(2.9a–c) are additional examples in which, like in (2.8), a prepositional phrase could be an adjunct or a complement, giving rise to different meanings. Our account of how compositionality works is going to have to allow for the fact that, in cases like this, sentences systematically end up with multiple possible interpretations. (2.9) a. I won’t be in town until 4 o’clock. b. I refuse to see him twice a day. c. I agreed to contact her during the committee meeting. sense rel ations 21 At the same time, our language also contains expressions which appear to be compositional but in fact are not. These are expressions for which the meanings cannot be worked out by considering the mean- ings of the parts and how those parts fit together. We call them idioms. Classic examples in English are expressions such as head over heels, see eye to eye or kick the bucket. These simply have to be learned as wholes (see Grant and Bauer 2004 for more discussion). In this respect, idioms behave like individual words that consist of one morpheme: all we can do is to learn the association between the form and its meaning. As we have seen, for some words that consist of multiple morphemes, like unlockable, we can work out the meaning(s) of the word from the mean- ings of the individual morphemes and how they are combined. But there are also words that appear to consist of multiple morphemes but are idiomatic in their overall meaning: we can’t predict the meaning of greengrocer just from knowing the meanings of green and grocer, or that of greenhouse from knowing the meanings of green and house. 2.3 Synonymy Synonymy is equivalence of sense. The nouns mother, mom and mum are synonyms of each other. When a single word in a sentence is replaced by a synonym – a word equivalent in sense – then the literal meaning of the sentence is not changed: My mother/mom/mum is from London. Sociolinguistic differences – things like the fact that mum and mom are informal, and that they are used respectively in British English and North American English – are not relevant to the sense of the word here, because they do not affect the propositional content of the sen- tence (although sociolinguistics is a huge and important area of study in its own right, which we won’t attempt to get into in this book). Sentences with the same meaning are called paraphrases of one another. (2.10a) and (2.10b) are paraphrases, differing only by substitu- tion of the synonyms impudent and cheeky. (2.10) a. Andy is impudent. b. Andy is cheeky. c. Andy is impudent ⇒ Andy is cheeky d. Andy is cheeky ⇒ Andy is impudent e. *Andy is impudent but he isn’t cheeky. f. *Andy is cheeky but he isn’t impudent. As indicated by (2.10c), (2.10a) entails (2.10b) – again, assuming that we are talking about the same person called Andy at the same point in time. Similarly, (2.10b) entails (2.10a). If one of these entailments failed, 22 an introduction to english semantics and pr agmatics (2.10a) and (2.10b) would not be paraphrases of each other. Also, both (2.10e) and (2.10f) are contradictions: it’s not possible for someone to be cheeky if they are not impudent, or vice versa. This also follows from (2.10a) and (2.10b) being paraphrases. In order to produce a semantic description of English, we would typically start from judgements about sentences and try to use those to establish sense relations between words. 
If we find a pair of sentences such as (2.10a) and (2.10b) which contain a different adjective but are otherwise identical, and it transpires that each entails the other (that is, they are paraphrases), then that would be evidence that the two adjectives are synonyms. Similarly, if we find a pair of sentences such as (2.10e) and (2.10f) that are both contradictions, that would be evidence that the two adjectives contained in those sentences are synonyms: if they were not perfectly synonymous, at least one of these sentences could potentially be true. The relation of paraphrase depends upon entailment: paraphrase means that there is a two-way entailment between the sentences. We can think of this as entailments indicating the sense relations between words, and sense relations indicating the entailment potentials of words.

How can we find paraphrases? We might do this by observing language in use; but we might also invent test sentences and see whether or not particular entailments are present, according to our own judgement as language users (or the judgement of other proficient speakers of the language). To make this task easier, we might be interested in developing examples about which it is easy to have intuitions, such as (2.11).

(2.11) a. You said Andy is cheeky, so that means he is impudent.
       b. You said Andy is impudent, so that means he is cheeky.

So generally signals that an inference is being made. If both (2.11a) and (2.11b) are judged to be true, it appears that the entailments (2.10c) and (2.10d) both hold, and hence cheeky and impudent are synonyms. Note that we are interested in accessing our knowledge about the general pattern of entailment, not about the likely character of any specific person named Andy. We don’t want to know that if Andy is impudent then he is probably cheeky, or vice versa: we want to know that, if a speaker is committed to the idea that Andy is impudent, they are automatically committed to the idea that he is cheeky, and vice versa. If people accept (2.11a) and (2.11b) as reasonable arguments, they must agree with this, and we can conclude that the adjectives are synonyms.

Having said that, we might need to allow for the possibility that someone will reject (2.11a) or (2.11b) – or both – on the basis that there is a difference in register between cheeky and impudent. That is to say, the circumstances under which you would use cheeky, as a speaker, are not the same as the circumstances under which you would use impudent. Like the contrast between mum and mother, this is part of our knowledge of language, but part of our sociolinguistic knowledge rather than our semantic knowledge. So one of the challenges we have to confront when we are trying to have intuitions about entailment is whether we, as judges, are relying only on the kinds of knowledge that we are interested in as analysts.

Other pairs of synonymous adjectives include silent and noiseless, brave and courageous, and polite and courteous. But there are also many pairs of adjectives that have similar meanings without being synonyms. Consider (2.12).

(2.12) a. The building is enormous.
       b. The building is big.

Here, (2.12a) entails (2.12b), but the reverse is not true: the building could be big without qualifying as enormous. Hence, the relation between big and enormous is not one of synonymy. Synonymy not only applies to nouns and adjectives: it is also present in other word classes.
The adverbs quickly and fast are synonyms, and, in Scottish English, so are the prepositions outside and outwith. And, as shown in the mother/mom/mum example, synonymy is not restricted to pairs of words: as another example, the triplet sofa, settee and couch are all synonymous. We can tell this because each pair of words within the triplet exhibits synonymy. In fact, because of the way entailment works, synonymy is a transitive relation: that is to say, if a is a synonym of b and b is a synonym of c, then a must also be a synonym of c.

2.4 Complementarity, antonymy, converseness and incompatibility

Some pairs of adjectives, such as moving and stationary, not only apply to a broad class of objects but also divide that whole class of objects into two non-overlapping sets. Everything that is capable of moving or being stationary – that is to say, any physical object – is, at a given point in time, either moving or stationary. Some other adjectives that divide their relevant domains in this way are listed in (2.13).

(2.13) same        different
       right       wrong
       true        false
       intact      damaged
       connected   disconnected

These pairs are complementary terms – so called because the things that are not described by one term (its complement) are exactly the things described by the other. That is to say, they give rise to the pattern of entailments illustrated in (2.14).

(2.14) a. Maude’s is the same as yours.
       b. Maude’s is different from yours.
       c. Maude’s is the same as yours ⇒ Maude’s is not different from yours
       d. Maude’s is different from yours ⇒ Maude’s is not the same as yours
       e. Maude’s is not the same as yours ⇒ Maude’s is different from yours
       f. Maude’s is not different from yours ⇒ Maude’s is the same as yours

In other words – assuming that Maude’s and yours denote the same things in (2.14a) as in (2.14b) – when (2.14a) is true, (2.14b) is false, and vice versa. This is a way of expressing the obvious idea that if two things are the same, they are not different; and that if two things are different, they are not the same. Another, more formal, way of putting this is that (2.14a) entails and is entailed by the negation of (2.14b), and (2.14b) entails and is entailed by the negation of (2.14a). Hence, (2.14a) is a paraphrase of the negation of (2.14b), and (2.14b) is a paraphrase of the negation of (2.14a). So we can think of complementaries as negative synonyms. Note that we have had to change the sentence frame slightly between (2.14a) and (2.14b), so it might be more precise to say that there’s complementarity between the same as and different from (rather than just same and different).

A similar sense relation, but slightly weaker than complementarity, is that of antonymy. Sometimes this term is used just to mean any kind of “oppositeness”, but most semanticists use it to apply to one particular kind of oppositeness, exemplified by the adjectives noisy and silent, as in (2.15). Note that we use the symbol ⇏ to mean that an entailment is not valid: that is, if the sentence before ⇏ is true, it doesn’t necessarily follow that the sentence after ⇏ is true.

(2.15) a. The street was noisy.
       b. The street was silent.
       c. The street was noisy ⇒ The street was not silent
       d. The street was silent ⇒ The street was not noisy
       e. The street was not noisy ⇏ The street was silent
       f. The street was not silent ⇏ The street was noisy

Again assuming that we are referring to the same thing by the street in (2.15a) and (2.15b), we get the pattern of entailments shown above, which is different to the complementary case. If we know that (2.15a) is true then we know that (2.15b) is false, and if we know that (2.15b) is true then we know that (2.15a) is false. However, unlike the complementary case, knowing that (2.15a) is false doesn’t tell us whether or not (2.15b) is true, and knowing that (2.15b) is false doesn’t tell us whether or not (2.15a) is true. This is because there is middle ground between silent and noisy, whereas there is no middle ground between the same as and different from: to say that something is not noisy is not to say that it is silent, and to say that it is not silent is not to say that it is noisy. To put it another way, pairs of antonyms typically tap into meanings that are at opposite extremes, but unlike complementaries, they leave gaps in the middle. Under this definition, there are many pairs of antonyms: happy and sad, full and empty, early and late, and so on.

It is not a coincidence that it is easier to find pairs of antonyms than it is to find synonyms or complementaries. Synonyms can be thought of as something of a luxury: given that two synonyms (such as courteous and polite) give rise to the same entailments, we could really do without one of them in the language, and still manage to convey all the information that we need to. Having an additional term might enable us to communicate in a more expressive or sociolinguistically richer style. Having words for both members of a complementary pair is arguably something of a luxury too: we could get away with having just one, and use negation to convey the other (instead of false we could just say not true). However, this will not work with antonyms: to say that something is full is more than just saying that it is not empty. We need both full and empty in the language in order to talk about quantity in this way.

A general feature of the adjectives that form antonym pairs is that there is also a sense relation between their comparative forms. Comparatives are formed by the suffix -er for some adjectives (thicker, poorer, humbler) or more generally by the construction “more + adjective” (more patient, more obstinate). The comparative forms of an antonym pair of adjectives exhibit a sense relation called converseness, illustrated in (2.16).

(2.16) a. France is bigger than Germany.
       b. Germany is smaller than France.
       c. France is bigger than Germany ⇒ Germany is smaller than France
       d. Germany is smaller than France ⇒ France is bigger than Germany

Starting with (2.16a), if we replace bigger with smaller and swap the position of the noun phrases France and Germany, we obtain a paraphrase, (2.16b). Thus we say that bigger and smaller are converses. We can think of converseness as something like a version of synonymy that also requires the reordering of noun phrases. Converseness is also present in other word classes, including nouns (such as parent (of) and child (of)), verbs (such as precede and follow) and prepositions (such as above and below). In each of these cases, if we have two entities X and Y, and X stands in one of these relations to Y, it must be the case that Y stands in the converse relationship to X (for instance, if X is above Y, then Y is below X, and vice versa).
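The contrast between the entailment patterns in (2.14) and (2.15) can also be pictured with a toy model, sketched below in Python (again, my own illustration rather than anything in the book; the domain of three streets and the numeric noise cut-offs are invented purely for the example). Each adjective is modelled by the set of things it truly describes, and ‘P entails Q’ is modelled as P’s set being a subset of Q’s.

    # Toy domain: three streets with invented noise levels (0 = no sound).
    streets = {"A": 0, "B": 3, "C": 9}

    def extension(pred):
        # the set of streets an adjective truly describes
        return {s for s, n in streets.items() if pred(n)}

    def entails(p, q):
        return p <= q                      # subset test

    everything = set(streets)
    silent  = extension(lambda n: n == 0)
    noisy   = extension(lambda n: n >= 7)
    audible = extension(lambda n: n > 0)   # rough complementary of 'silent'

    # Antonym pattern, as in (2.15): mutual exclusion ...
    print(entails(noisy, everything - silent))    # True,  cf. (2.15c)
    print(entails(silent, everything - noisy))    # True,  cf. (2.15d)
    # ... but negating one does not give the other: street B is in between.
    print(entails(everything - noisy, silent))    # False, cf. (2.15e)
    print(entails(everything - silent, noisy))    # False, cf. (2.15f)

    # Complementary pattern, as in (2.14): no middle ground at all.
    print(entails(everything - silent, audible))  # True
    print(entails(everything - audible, silent))  # True

On this picture, synonyms are simply two adjectives with identical sets, so entailment runs in both directions, as in (2.10c–d); converses would require modelling two-place relations rather than sets of individuals, so they are left out of this sketch.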
Just as synonymy is not restricted to pairs of items, neither is antonymy. We can often identify sets of terms for which any two members are antonyms: we can say that these sets exhibit incompatibility. For instance, we can consider a set of colour adjectives such as {black, blue, green, yellow, red, white, grey} to be incompatible, in that if we say something is “blue”, it follows automatically that it is not black, green, yellow, etc. – assuming that we are dealing with objects with a single predominant colour. Similarly, a set of nouns denoting shapes, such as {triangle, circle, square}, might also exhibit incompatibility: if something is “a triangle”, it is not a circle or a square, and so on. A set has incompatibility if every member of the set exhibits antonymy with every other member of the set, so the diagnostics for incompatibility will be essentially the same as for antonymy.

2.5 Hyponymy

The relation of hyponymy is about the different subcategories of a word’s denotation. The pattern of entailment that defines hyponymy is illustrated in (2.17).

(2.17) a. There’s a house on the riverbank.
       b. There’s a building on the riverbank.
       c. There’s a house on the riverbank ⇒ There’s a building on the riverbank
       d. There’s a building on the riverbank ⇏ There’s a house on the riverbank

If it is true that there is a house on the riverbank, it follows that there is a building on the riverbank, as indicated in (2.17c). This is because a house is one kind of building. There are other kinds of building: school, church, factory, and so on. Hence, the reverse entailment does not hold, as shown in (2.17d): the building on the riverbank could be something other than a house. This pattern of entailment tells us that we are dealing with hyponymy, and that house is a hyponym of building, or equivalently building is a superordinate (sometimes called “hypernym”) of house.

Often, as in (2.17), a sentence containing a hyponym entails the corresponding sentence in which the hyponym has been replaced by its superordinate. However, if the sentence is negative, this pattern may be reversed, as shown in (2.18).

(2.18) a. There isn’t a house on the riverbank.
       b. There isn’t a building on the riverbank.
       c. There isn’t a house on the riverbank ⇏ There isn’t a building on the riverbank
       d. There isn’t a building on the riverbank ⇒ There isn’t a house on the riverbank

To put it another way, the fact of there being a building on the riverbank is a necessary condition for there to be a house on the riverbank. Hence, it would be reasonable to say that ‘building’ is a component of the meaning of house: a house is a ‘building for living in’.

2.5.1 Hierarchies of hyponyms

House is a hyponym of the superordinate building, but building is in turn a hyponym of structure, and structure is in turn a hyponym of thing. Like synonymy, hyponymy is also transitive: if a is a hyponym of b and b is a hyponym of c, then a is a hyponym of c. This means that house is also a hyponym of structure and a hyponym of thing, which is fairly obvious if we think of the definition above: if we replaced building by structure or thing, the entailment patterns in (2.17) and (2.18) would stay the same. Similarly, thing is a superordinate of house, and so on. These relations are illustrated in Figure 2.1. Incidentally, this means that we don’t have to worry about whether we’ve captured all the layers in this hierarchy.
If we discovered that there was another category between structure and thing, say artefact, which was a hyponym of thing and a superordinate of structure, that would not affect the hyponymy relation between structure and thing.

thing       superordinate of structure, building, house, ...
structure   hyponym of thing; superordinate of building, house, ...
building    hyponym of structure, thing; superordinate of house, ...
house       hyponym of building, structure, thing; superordinate of ...

Figure 2.1 An example of levels of hyponymy

As we move up a hierarchy of hyponymy, the senses of the words become less specific and their denotations become larger and more general. At lower levels, the senses are more detailed and the words denote narrower ranges of things, as illustrated in Figure 2.2. In Figure 2.2 we try to capture the meaning of each hyponym as the meaning of its immediate superordinate with a modifier (e.g. ‘for living in’). This captures the key idea that a hyponym is a special case of its superordinate: it’s an instance of the superordinate that has certain properties that not all instances of that superordinate share. In practice, it’s not always easy to provide useful hyponym definitions in this way, because sometimes we run into circularity: a dog is a type of animal, but it’s difficult to describe what type of animal it is without using the word ‘dog’ or some related word like ‘canine’. It is, in effect, ‘an animal that is a dog’.

thing       ‘physical entity’
structure   ‘thing with connections between its parts’ = ‘physical entity with connections between its parts’
building    ‘structure with walls and a roof’ = ‘physical entity with connections between its parts with walls and a roof’
house       ‘building for living in’ = ‘physical entity with connections between its parts with walls and a roof, for living in’

Figure 2.2 Hyponym senses get successively more detailed as we go down the tree

Even so, hyponym hierarchies are useful to us as language users because they guarantee the validity of a large number of inferences. If someone tells us facts about a particular house, we know that the things they are telling us are also true of at least one structure, and at least one building, and so on. To take a marginally more practical example, if we know that platypus is a hyponym of mammal, we know that (2.19a) entails (2.19b).

(2.19) a. Platypuses lay eggs.
       b. There are mammals that lay eggs.

Hyponym hierarchies exist for other parts of speech, such as verbs and adjectives, as well as nouns. Amble is a hyponym of walk, which in turn is a hyponym of move; (made of) oak is a hyponym of wooden, and so on. Moreover, these hierarchies are potentially vast. WordNet is a systematic database of English word meanings, which, at the time of writing, contains entries for more than 155,000 words. In creating WordNet, Miller and Fellbaum (1991: 204) discovered that a hyponym hierarchy with twenty-six high-level superordinates (time, plant, animal, and so on) ‘provides a place for every English noun’. Figure 2.3 shows a tiny fraction of the WordNet noun hierarchy, featuring just seven of the twenty-six superordinates (and, of course, omitting the vast majority of their hyponyms).

[Figure 2.3 Part of the hyponym hierarchy of English nouns. The tree links node labels including entity, thing, time, person1, idea, place, structure, product, plant, animal1, student, dam, building, vehicle, tool, animal2, person2, freshman, (post)graduate, barn, house, garden tool, kitchen utensil, saw and hacksaw; the branching structure of the original diagram is not reproduced here.]
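WordNet can also be queried programmatically. The short sketch below uses the WordNet interface bundled with the NLTK toolkit (a third-party resource, not something the book presupposes); it climbs the chain of superordinates – ‘hypernyms’ in WordNet’s terminology – above one sense of house, and then lists a few of its immediate hyponyms. The synset name house.n.01 and the exact output depend on the WordNet version installed.

    # Requires NLTK and its WordNet data: pip install nltk, then
    # nltk.download('wordnet') once.
    from nltk.corpus import wordnet as wn

    house = wn.synset('house.n.01')      # one sense of the noun 'house'

    # Walk up the superordinate (hypernym) chain; by transitivity,
    # 'house' is a hyponym of everything printed here.
    node = house
    while node.hypernyms():
        node = node.hypernyms()[0]       # follow the first superordinate
        print(node.name())

    # A few immediate hyponyms: more specific kinds of house.
    print([h.name() for h in house.hyponyms()][:5])

Typically the path runs up through building and structure to entity, echoing Figure 2.1, though with some extra intermediate categories (such as artifact) along the way.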
Note that some entities appear twice in the Figure 2.3 hierarchy: we distinguish two senses of person (corresponding roughly to ‘human’ and ‘psychological individual’) and two senses of animal (corresponding roughly to ‘living thing that is not a plant’ and ‘living thing that is not a plant or a human’). In addition to hyponymy itself, we might try to identify other sense relations within a hyponym hierarchy. The hyponyms of a given superordinate might be linked by incompatibility – house and factory are different hyponyms of building, so we might argue that something that is a house is not a factory – but actually this does not follow from the definition of hyponymy we have been working with here. Indeed, if house and dwelling are synonyms, then they are both hyponyms of building, but they are clearly not incompatible.

Summary

In this chapter, we discussed the notion of entailment and its relevance to semantics. Based on this, we were able to define various distinct sense relations that can obtain between expressions in a language: synonymy, antonymy, complementarity, converseness, incompatibility and hyponymy. Entailments between sentences are the evidence for sense relations between words; or, going the other way, sense relations indicate the entailment potentials of words. The idea of sense relations is helpful to us, as analysts, when trying to give a semantic description of a language, but also when we as speakers are trying to learn the semantics of a language.

Exercises

1. The word dishonest means ‘not honest’. The following five words also have ‘not’ as part of their meaning: distrust, disregard, disprove, dislike, dissuade. Write a brief gloss for the meaning of each, similar to the one given for dishonest. Thinking of sentences for the words will probably help. Use the term scope (introduced in Section 2.2) to describe the difference between the glosses.

2. Here is an unsatisfactory attempt to explain the meaning of not good enough: not good means ‘bad or average’; enough means ‘sufficiently’; so not good enough means ‘sufficiently bad or average’. With the aid of brackets, explain why the phrase actually means ‘inadequate’.

3. Someone once said to me, “You and I are well suited. We don’t like the same things.” The context indicated (and I checked by asking) that the speaker meant to convey that we are well suited because the things we don’t like are the same: but the sentence is in principle ambiguous. Explain the ambiguity, and comment on unambiguous alternatives.

4. Which of the following sentences entail which?
   (a) The students liked the course.
   (b) The students loved the course.
   (c) The rain stopped.
   (d) The rain ceased.

5. Provide example sentences and write out a pattern of entailments (comparable to (2.10a–d)) that establishes soundless, silent and noiseless as synonymous.

6. Are awake and asleep complementaries? Give reasons for your answer. Whether you have answered yes or no, how would you include half-awake, half-asleep and dozy in an account of the meanings of awake and asleep?

7. The hyponyms of footwear include shoes, sneakers, trainers, sandals, slippers, boots and galoshes. Draw up a hyponym hierarchy for these words. Is footwear the superordinate that you use for all of the hyponyms, or are there other alternatives?

Recommendations for reading

Cruse (2011) provides a thorough discussion of oppositeness in meaning, as well as hyponymy.
Lappin (2001) provides a good article-length introduction to formal semantics, and Saeed (2015) complements this by dealing in greater detail with some relevant topics. WordNet is available and browsable online at https://wordnet.princeton.edu/.

3 Nouns

Overview

Nouns form the majority of English words. They typically denote entities with rich and complex sets of properties. We can think of some of these properties as being associated with hyponymy relations, as discussed in Section 2.5: because a dog is a type of mammal, we know that dogs possess all the properties that mammals must have. But additional sense relations apply to nouns, chief among them the has-relation, which we discuss in this chapter. The has-relation captures the fact that the things denoted by nouns can have parts, whether these are physical or conceptual, and the question of which parts a noun has may be highly relevant to its meaning. This chapter also discusses the distinction between count and mass nouns: that is, between nouns that are treated as denoting entities that can be separated and distinguished from one another, and nouns that are not.

3.1 The has-relation

The everyday words square, circle and triangle are also technical terms in geometry, where they have tight definitions. We might define a square as a ‘closed figure which has four straight sides of equal length separated by 90° angles’. The fact that a square has four sides is built into its definition. We can think of the entity a square as being associated with ‘four sides’ by the has-relation. Like the relations discussed in Chapter 2, the has-relation is important to semantics because it guarantees the truth of certain entailments, as illustrated in (3.1).

(3.1) a. That figure is a square ⇒ ‘That figure has four sides of equal length’
      b. That figure is a square ⇒ ‘That figure has four internal 90° angles’

Mathematical terms are somewhat atypical of natural language because they have such unambiguous definitions. Trying to define nouns in everyday use – part of the task of linguistic semantics – isn’t always straightforward. In particular, the status of has-relations is sometimes unclear. Consider (3.2).

(3.2) a. Mary drew a face ⇒ ‘The picture that Mary drew includes eyes’
      b. Tom drew a house ⇒ ‘The picture that Tom drew includes a door’

If we think of the things that are denoted by the English word face, the examples that spring to mind probably include eyes, a nose and a mouth, among other features. That is to say, if we think of a prototype of a face – a clear, central example of the denotation of face – it probably has eyes, a nose and a mouth. Similarly, a prototype of a house probably has a door, windows, a roof, and so on. (Conversely, there are numerous features that many real faces and houses have but which are not likely to be present in the corresponding prototype – say, a goatee, or a carport, respectively.) The inferences in (3.2) are based on the relations ‘a face has eyes’ and ‘a house has a door’. But these has-relations are really only valid if we think in terms of prototypes. Something could be unambiguously a face without having eyes, and unambiguously a house without having a door. If Mary drew a picture of someone wearing shades, we would grant that “Mary drew a face”, but the inference in (3.2) wouldn’t be valid.

In (3.1), then, we’re dealing with proper entailments of the kind introduced in Section 2.1. But in (3.2) we have weakened the entailments by making them conditional on prototypicality.
It would be more appropriate to write them down as something more like (3.3).

(3.3) a. Mary drew a face ⇒ ‘If the picture that Mary drew is prototypical, it includes eyes’
      b. Tom drew a house ⇒ ‘If the picture that Tom drew is prototypical, it includes a door’

Essentially, it’s necessary to weaken things in this way because typical English words are not as tightly defined as technical words like square (in the geometric sense). But it is also useful for us to do this because knowledge about prototypical properties is an important part of our knowledge of what words mean.

In fact, for many nouns there are very few properties that are completely essential to the definition, even though a noun may have many prototypical properties. This point has been argued by a number of influential thinkers about language, so I want to dwell on it for a moment. We can say that a square must have four sides, and anything which does not have four sides cannot correctly be called a square. But can we say anything similar about a face, or a house, or many other things? The classic example in this respect is the word game. Wittgenstein (1953) argued that there were no features at all that were shared by all the things we call a game. Not all games involve competition, or skill, or physical ability; not all games have rules; not all games have playing pieces or a scoring system. So, by this token, knowing that something “is a game” doesn’t tell you anything about the has-relations that it possesses. Similarly, although we can think of a prototypical house as having windows, a door and a roof, it would not automatically cease to be a house if it had no windows, or lost its roof.

Despite this lack of obligatory properties, it still seems perfectly reasonable to identify something as being a game or not being a game, or to talk about a building being a house or not being a house. Consequently, a useful idea is that word meanings are organised (at least in part) around prototypes rather than obligatory properties – an idea called “prototype theory”, which is particularly associated with Eleanor Rosch and colleagues. Calling something a game does not guarantee that it possesses any one specific property associated with the prototypical game, but it does suggest that that thing possesses enough of the prototypical game properties, to a sufficient extent, to fall within the category. Similarly, calling something a house doesn’t guarantee that it has windows, a door or a roof, but it does strongly suggest (in the absence of indications to the contrary) that it has most or all of these features, along with other prototypical house features.

In the following sections we will look at some of the consequences of the has-relation as applied to various aspects of noun meaning. In doing this, we’ll generally be making some tacit assumptions about prototypicality when talking about word meanings, although we will also see some of the limitations of this approach, for instance in the discussion of the has-relation and hyponymy in Section 3.1.2.

3.1.1 Inferring existence from the has-relation

By appealing to the has-relation, we can infer the existence of entities that haven’t been explicitly mentioned. Consider (3.4).

(3.4) Some kids walked up to a house, knocked on the front door and ran away.

In (3.4), we have an indefinite article, a, and a definite article, the.
A noun phrase that first introduces its referent into conversation is usually indefinite, whereas subsequent mentions of the same referent will usually involve a definite noun phrase. This is why (3.5a) is reasonably natural but (3.5b) is odd (assuming that a house denotes the same house in both sentences) – (3.5b) attempts to refer to the already-established referent with an indefinite noun phrase.

(3.5) a. Some kids walked up to a house. The house was old and spooky.
      b. Some kids walked up to a house. *A house was old and spooky.

What we see in (3.4) is that, after mentioning a house, front door behaves as though it has also already been mentioned: it’s acceptable to say the front door, and it would be odd to say a front door. To put it another way, if we say a house and then say the front door, the hearer is able to infer that we probably mean ‘the front door of the just-mentioned house’. Clark (1975) introduced the term bridging inference to describe this kind of inference, as it involves connecting up the newly mentioned material to that which has been mentioned before.

The pattern shown in (3.5) holds to some extent for non-prototypical features. Earlier I mentioned that carport was a non-prototypical feature that a house might have. In a context like (3.5), we can still use the with carport, but it may also be fine to use a. That doesn’t work with front door, as we see in (3.6). (The situation with door is a little more complicated because a door might suggest that the house has multiple doors.)

(3.6) a. Some kids walked up to a house. The front door was to the right of them.
      b. Some kids walked up to a house. ?A front door was to the right of them.
      c. Some kids walked up to a house. The carport was to the right of them.
      d. Some kids walked up to a house. A carport was to the right of them.

These examples show that we are conscious of the has-relations that are associated with the nouns we mention, and these can influence how we talk about things that we subsequently mention. Prototypical parts may require the use of definite articles, whereas parts that are not prototypical can be used with indefinite articles. To put it another way, the hearer can reasonably infer from the speaker’s use of a house that the door of that house exists, and they expect to encounter the expression the door if the speaker intends to refer to that door. The hearer may also be willing to accommodate the use of an expression like the carport if they are willing to draw the inference that the house has a carport, but as this is not part of the prototype, using a carport is also fine. Chapter 10 will go into a little more detail about how the use of definite and indefinite articles feeds into our understanding of discourse.

3.1.2 Hyponymy, prototypes and the has-relation

The has-relation is obviously not quite the same thing as hyponymy, discussed in Section 2.5, but these two relations interact in important ways. Recall that hyponymy is about categories being grouped under superordinate terms. To take another geometrical example, we could say that square is a hyponym of quadrilateral, and so are rectangle, parallelogram, kite, rhombus and trapezium. Hyponyms then “inherit” the parts that their superordinates have (Miller and Fellbaum 1991: 206). By definition, a quadrilateral has exactly four sides – or, to put it another way, it is connected to the attribute “exactly four sides” by the has-relation.
This same relation is inherited by square, rectangle, parallelogram, kite, rhombus and trapezium. Quadrilateral is in turn a hyponym of polygon: by definition, a polygon has straight sides. This relation, “has straight sides”, is inherited by all the hyponyms of polygon, including quadrilateral and all its hyponyms. This kind of inheritance is important to our semantic knowledge. As a result of it, when we learn about the hyponymy relations that a word enters into, we automatically acquire knowledge about its has-relations.

However, although this idea is easy to apply to terms with clear definitions, such as mathematical shapes, it becomes more complicated when we are dealing with prototypes. The prototype of a hyponym does not generally inherit all the has-relations from the prototype of its superordinate. For instance, the Neolithic houses uncovered at Skara Brae in Orkney had no windows: if we coin the neologism skara for a house that resembles one of these houses, skara will be a hyponym of house, but the prototypical skara will have no windows.

This observation applies not only to has-relations but also to other properties. A classic example is that a property of a prototypical bird is that it “can fly”. This property is inherited by most of the hyponyms of bird, but of course not all: penguin is a hyponym of bird and the prototypical penguin cannot fly. We can’t even say with confidence that the properties of a prototypical superordinate will be inherited by most of its hyponyms. Suppose we had 10,000 words for different kinds of penguin. They would all be hyponyms of bird, but it wouldn’t alter the fact that a prototypical bird can fly and a penguin cannot. A hyponym will, of course, inherit all the obligatory properties, including has-relations.
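For readers who tried the NLTK sketch in Chapter 2, WordNet also records part–whole (‘meronym’) relations, which correspond roughly to the has-relation, and it is straightforward to gather up the parts recorded for a synset and for all its superordinates. This is only a rough illustration of inheritance: what WordNet lists as parts reflects its compilers’ decisions, which are closer to obligatory or typical structure than to a full prototype.

    # Requires NLTK with the WordNet data installed (see the Chapter 2 sketch).
    from nltk.corpus import wordnet as wn

    def inherited_parts(synset):
        """Collect the part-meronyms ('has-a' parts) recorded for this
        synset and for every superordinate above it."""
        parts = {}
        node = synset
        while node is not None:
            for part in node.part_meronyms():
                parts.setdefault(part.name(), node.name())   # note where it was listed
            hypernyms = node.hypernyms()
            node = hypernyms[0] if hypernyms else None
        return parts

    for part, source in inherited_parts(wn.synset('house.n.01')).items():
        print(f"{part}  (listed under {source})")

Comparing the output for house.n.01 with that for a more specific hyponym is one quick way to see has-relations being passed down a hierarchy, subject to the caveats about prototypes just discussed.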
