Document Details

John Paul Minda

Tags

psychology, thinking, reasoning, cognitive science

Summary

This book, The Psychology of Thinking, 2nd Edition, by John Paul Minda, explores reasoning, decision-making, and problem-solving. It examines the role of language in cognition and memory. The book also discusses conceptual metaphors and linguistic relativity.

Full Transcript

5 Language and Thought

Try to think about something commonplace. For example, try to remember what you ate for dinner last night. As you remember this, be aware of exactly how you remember this. Pay attention to what goes on in your mind while you are remembering. Try to do this right now, without distraction, and then come back to reading this chapter. What did you notice? First, did you remember what you ate? If so, how did you do it? What was the form of the memory? How did you go about probing your memory for the information? You probably thought something like this: "OK, what did I have … Oh yeah, I had rice and some of that spicy vegetable stew". Or maybe you thought: "Dinner, dinner, what did I have? I'm not really sure if I did have dinner". Whatever you thought, it probably involved some kind of inner monologue or narration. You probably asked yourself the question with your inner voice (which you know is part of your working memory system). You also probably tried to answer the question using language. The thinking and recollecting that you did as you considered different memories also involved your language. In other words, your memory retrieval was guided by language. And your memories themselves also guided your inner dialogue. Each time you considered a possible memory, you may have assessed its validity by using some language. And if you had to report the results of this memory search to someone else, you would absolutely need to use your language. Just try to think of something without using any language at all. It's possible, but not easy. Language and thinking are closely intertwined.

So far in this book I have been discussing the structure of mental representations and how those representations might affect the thinking process. The chapters on similarity, attention, semantic memory, and concepts were all concerned with this topic. You probably have a good idea about how complex thought and cognition depend very much on having well-formed mental representations. In this chapter, however, we are going to take a much closer look at how this interaction between representation and behaviour works. Specifically, this chapter will examine the interaction between language and thinking.

Objectives

On completing this chapter you should be able to achieve the following:

Be able to define human language: what are some of its features, and how is it different from other forms of communication in non-human species?

Understand the role of language in cognition, especially as it relates to memory, ambiguity resolution, and conceptual metaphors.

Gain an understanding of linguistic relativity and linguistic determinism: what are some of the predictions of the theory and how do those predictions hold up?

Language and Communication

We want to understand how language is used in thinking. The psychology of language as communication is a great place to start because language seems to be a uniquely human behaviour. And it is plausible that our use of language in thinking arose from our much earlier and more primitive use of language as communication. Of course, other animals communicate with each other. Sophisticated communication is seen in bees, for example, as they rely on a system of dances and wiggles to communicate the location of nectar to other bees in the colony (Frisch, 1967). When a honeybee returns from a nectar location, it performs a dance that corresponds precisely to the direction it flew and how long it was flying.
It is communicating something to the other bees that is necessary for their collective survival. But it is not thinking. It has little choice in terms of whether or not to do the dance. It would perform that dance even if no other bees were watching. We agree that the bees are communicating and behaving, but they do not seem to be thinking or using language. Other animals have different modes of communicating. Songbirds obviously have a well-developed and highly evolved system of mating calls and warning songs. These bird songs are unique to each species and require exposure to other bird song in order to be acquired. Dogs communicate with barks, growls, yelps, and the wagging of tails. And anyone with a dog knows that they respond to human language and non-verbal cues. Even my cat sort of responds to some verbal and non-verbal cues. These are all sophisticated means of communication, but we do not consider these to be "language" per se.

Unlike human language, communication in these non-human species is fairly limited and direct. Bee dances have only one function: signalling the location of food. Bird song has a set function related to mating, and even though birds require some exposure to other birds' song, a bird can only learn its own song. Highly intelligent birds, like the African Grey parrot, can learn to mimic the language of humans, but they are not using human language to carry on a casual conversation or advance an agenda. Even dogs, which are capable of very complex behaviours, are really not able to use their communication abilities to consider new ideas, solve complex problems, and tell stories. The great apes, specifically bonobos and orangutans, are known to be able to learn complex symbol systems. But they are not spontaneously using language to direct their behaviours in the way that humans can. In other words, non-human communication and "language-like behaviour" are used primarily to engage in direct communication or as a response to external stimuli. Non-human language-like behaviour is not tied to thinking in the way that human language is. In this way, human language is remarkable and unique.

One of the big questions that psychologists interested in this field want to answer is: Do we think in language? Like many of the other questions that we have been trying to answer, this one is simultaneously trivial and nearly unanswerable. On the one hand, of course we think in something like a natural language. We began this chapter with a consideration of how language drives our thinking and helps us navigate our memories. We plan what we are going to say, we ruminate over things that have been said to us, we consider alternatives, we plan actions and think about how those actions will affect things around us. Most of this kind of thinking takes place in explicit conscious awareness with a heavy reliance on an "inner voice" (a part of the working memory system that was covered in Chapter 3). On the other hand, many of our actions and behaviours are influenced by non-conscious computations and are the result of System 1 thought (Evans, 2003, 2008; Evans & Stanovich, 2013). Rapid, intuitive decisions are made without much, or indeed any, influence from the inner voice. For the most part, the influential dual-system approach that we discussed in Chapter 1 (also known as the default-interventionist approach) is built on the idea that non-conscious or intuitive processes drive many thinking behaviours, and these often need to be overridden by conscious, linguistically influenced thought.
What is Language?

It is clearly beyond the scope of this textbook to answer definitively the question of what language is and is not. That is not only too big a topic, but also an entire field of study with its own textbooks. However, we do need to have some basic definition of human language in order to understand the interaction between language and thinking. Early in the history of cognitive psychology, the linguist Charles Hockett described 13 characteristics of human languages (Hockett, 1960). This list of design features is a reasonable starting point. Figure 5.1 shows the complete list of the 13 characteristics, along with a brief description of each feature. These are all features of human language that suggest a unique and highly evolved system designed for communication with others and also with the self (e.g., thinking). Let's consider a few of them in more detail. For example, language is a behaviour that has total feedback. Whatever you say or vocalize you can also hear. You receive feedback that is directly related to what you intended to say. According to Hockett, this is necessary for human thinking. It does not take much imagination to consider how this direct feedback might have evolved into the internalization of speech, which is necessary for many complex thinking behaviours.

Figure 5.1

Language is also productive. With human language, we can express an infinite number of things and ideas. There is no limit to what we can say, what we can express, or for that matter, what we can think. But this can be achieved within a finite system. We can say things that have never been said before, yet the English language has only 26 letters. There are about 24 consonant phonemes in English, and depending on the dialect and accent there are roughly 20 vowel phonemes. Even when allowing for every variation among different speakers and accents, it is clear that this is a very limited set of units. However, the combination of these units allows for almost anything to be expressed. The phonemes combine into words, phrases, and sentences according to the rules of the language's grammar, to produce an extremely productive system. Contrast this with the kind of communication that non-human species engage in. Birds, bees, and canines are very communicative, but the range of content is limited severely by instinct and design.

Another characteristic of human language is that it is somewhat arbitrary. There does not need to be a correspondence between the sound of a word and the idea it expresses. There is a small set of exceptions – words like "flit" or "bonk" – which kind of sound like what they are describing, but this is a very limited subset. Importantly, our language does not need to have a direct correspondence with the world. This is not true for all communication systems. The bee dance that I discussed earlier is an example of non-arbitrary communication. The direction of the dance indicates the direction of the nectar source relative to the hive and the duration of the waggle indicates the distance. These attributes are directly related to the environment and constrained by the environment. This is sophisticated communication but not language as Hockett would describe it. For the most part, the sounds that we use to express an idea bear no relationship to the concrete aspects of the idea. They are, in fact, mental symbols that can link together our perceptual input and concepts.
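To get a concrete feel for the productivity point above, here is a minimal, purely illustrative Python sketch. It uses the rough phoneme counts given in the text, and it assumes a simple consonant-vowel syllable template purely for the sake of the arithmetic; real English phonotactics are far more constrained, so treat the numbers as an upper-bound caricature rather than a linguistic claim.

```python
# Illustrative only: a toy estimate of how a small, finite inventory of
# phonemes explodes combinatorially. The counts follow the rough figures
# in the text (about 24 consonants and 20 vowels); the CV (consonant-vowel)
# syllable template is an assumption made for this sketch.

CONSONANTS = 24
VOWELS = 20

syllables = CONSONANTS * VOWELS            # possible CV syllables
print(f"CV syllables from the inventory: {syllables}")

for length in range(1, 6):
    # strings of 1-5 CV syllables, ignoring phonotactic restrictions
    forms = syllables ** length
    print(f"{length}-syllable forms: {forms:,}")
```

Even under this crude approximation, strings of five syllables already yield tens of trillions of distinct forms, which is the sense in which a finite set of units supports an effectively unlimited range of expression.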
Understanding Language as Cognition

Language is a remarkably complex set of behaviours. At its core, the challenge of understanding language as a cognitive behaviour is trying to understand how humans are able to produce language such that an idea or thought can be converted into speech sounds that can then be perceived by another person and converted back into an idea. Communicative language is essentially a "thought transmission system". One person uses language to transmit an idea to another person. The linguistic duality between ideas and how they are expressed is often described as a relationship between the surface structure of communication and the deep structure. Surface structure refers to the words that are used, spoken sounds, phrases, word order, grammar, written letters, etc. The surface structure is what we produce when we speak and what we perceive when we hear. Deep structure, on the other hand, refers to the underlying meaning and semantics of a linguistic entity. These are the thoughts or ideas that you wish to convey via some surface structure. These are the thoughts or ideas that you try to perceive via that surface structure.

One of the challenges in understanding this relationship between surface and deep structure is that very often a direct correspondence seems elusive. For example, sometimes different kinds of surface structure give rise to the same deep structure. You can say This class is boring or This is a boring class and the underlying deep structure will be approximately the same despite the slight differences in surface structure. Human language is flexible enough to allow for many ways to say the same thing. The bigger problem comes when the same surface structure can refer to different deep structures. For example, you can say Visiting professors can be interesting. In this case, one deep structure that follows from this statement is that when your class is taught by a visitor – a visiting professor – it is sure to be interesting because visiting professors can be interesting. Another deep structure that comes from exactly the same statement is that attending a party or function at a professor's house is sure to be an interesting event because visiting professors can be interesting. The surrounding context makes interpretation easier, but it also suggests a challenge when trying to map surface structure to deep structure. The challenge is how to resolve the ambiguity.

Ambiguity in language

Language is full of ambiguity, and understanding how our cognitive system resolves that ambiguity is an incredible challenge. I once saw a headline from the Associated Press regarding a story about commercial potato farmers. It read McDonald's Fries the Holy Grail for Potato Farmers. It may look funny, but most of us are quickly able to understand the deep structure here. They are not frying the Holy Grail. Rather, the headline writer uses the term "Holy Grail" as a metaphor for something that is often elusive, but ultimately an incredible prize. In order to understand this sentence, we need to read it, construct an interpretation, decide if that interpretation is correct, activate concepts about the Holy Grail, activate our knowledge of the metaphoric use of that statement, and finally construct a new interpretation of this sentence. This usually happens in a few seconds, and it happens almost immediately when dealing with spoken language – an impressive feat of cognition. Often, when the surface structure leads to the wrong deep structure, the result is referred to as a garden path sentence. A simple sketch of how context can resolve this kind of ambiguity appears below.
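The following is a minimal, purely illustrative Python sketch of context-based disambiguation for the Visiting professors can be interesting example: each candidate deep structure is tagged with a few context words, and the interpretation whose cues best overlap the surrounding context wins. The cue words and the scoring rule are invented for the illustration; this is not a psycholinguistic model, only a way of making the mapping problem concrete.

```python
# Purely illustrative: two candidate "deep structures" for one surface
# string, each tagged with hypothetical context cues. The sentence is
# disambiguated by counting overlap between those cues and the words
# that surround it.

SENTENCE = "visiting professors can be interesting"

INTERPRETATIONS = {
    "professors who visit (and teach) are interesting people":
        {"lecture", "class", "seminar", "speaker", "teach"},
    "going to visit professors is an interesting activity":
        {"party", "house", "dinner", "drop", "went"},
}

def disambiguate(context_words):
    """Return the interpretation whose cue words best match the context."""
    scores = {
        reading: len(cues & context_words)
        for reading, cues in INTERPRETATIONS.items()
    }
    return max(scores, key=scores.get), scores

context = {"the", "seminar", "was", "packed", "because", "the", "speaker", "was", "new"}
reading, scores = disambiguate(context)
print(scores)
print("chosen reading:", reading)
```

Real comprehension is of course far more sophisticated, but the sketch captures the point of this section: the same surface string maps onto more than one deep structure, and the surrounding material is what tips the balance.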
The garden path metaphor itself comes from the notion of taking a walk in a formal garden along a path that leads either to a dead-end or an unexpected or surprise ending. And that's how garden path sentences work. The most well-known example is the sentence The horse raced past the barn fell (Bever, 1970). When most people read this sentence, it simply does not make sense. Or rather, it makes sense right up to the word barn. As soon as you read the word fell your understanding of the sentence plummets. The explanation is that as we hear a sentence, we construct a mental model of the idea. Various theories have referred to this as serial sentence parsing (Frazier & Rayner, 1982) or constraint satisfaction (MacDonald et al., 1994). The serial model, also known as the garden path model, assumes that we use our knowledge of grammar to construct a "sentence tree" mental model of the sentence as we hear the words. If a word does not fit into the conceptual tree, then we may need to construct a new model. The constraint satisfaction theory suggests that we use our knowledge of what words and ideas co-occur and follow each other in order to arrive at a comprehension. If the more probable interpretation is not the correct one, as in a garden path sentence, then this model also predicts some confusion. In both theories, the central issue is that the representations are constructed as we hear them.

As soon as you hear The horse raced you construct a mental model of a horse that was racing. You also generate an expectation or inference that something might come after. When you hear past you generate a prediction that the horse raced past a thing, which turns out to be the barn. It is a complete idea. When you hear the word fell it does not fit with the semantics or the syntactic structure that you created. However, this sentence is grammatically correct, and it does have a proper interpretation. It works within a specific context. Suppose that you are going to evaluate some horses. You ask the person at the stable to race the horses to see how well they run. The horse that was raced past the house did fine, but the horse raced past the barn fell. In this context, the garden path sentence makes sense. It is still an ill-conceived sentence, but it is comprehensible in this case.

Linguistic inferences

Very often we have to rely on inferences, context, and our own semantic memory in order to deal with ambiguity and understand the deep structure. The same inferential process also comes into play when interpreting the deeper meaning behind seemingly unambiguous sentences. We generate inferences to aid our understanding and these can also direct our thinking. For example, in the United States a very popular news outlet is the Fox News Network. When the network launched in the mid-1990s, its original slogan was Fair and Balanced News. There is nothing wrong with wanting to be fair and balanced: it's an admirable attribute in a news organization. But think for a minute about this statement. One possible inference is that if Fox News is "fair and balanced", then perhaps its competitors are unfair and unbalanced. It may not explicitly say that, but it may not hurt Fox News's reputation if you make that inference on your own. The slogan, like many slogans, is simple on the surface but it's designed to encourage inferences like this.

In 2003, I was a postdoctoral fellow attending interviews at many universities, seeking a faculty position.
I was interviewed at many places in the United States, which made sense because I was born and raised in the United States, I went to school in the United States, and I was a postdoc in the United States. But like most prospective academics, I considered positions in many geographic areas, including Canada. One interview took place in March 2003 at the University of Western Ontario (which is where I work now). That also happened to be the month when the United States launched what it called the Shock and Awe campaign in Iraq, which was the opening of the US-led military action against Saddam Hussein's government. Not only was this a very big news event, but I also followed it more closely than I might have otherwise done because the campaign began on the very day that I left the United States and flew to Canada for my interview. The war began while I was in flight, on the way to Canada. On the one hand, I felt a little awkward being outside the country and being interviewed at a Canadian institution when my own country's government had just launched an attack that was still controversial: notably, Canada's government at the time did not support the US. On the other hand, it gave me a chance to see news coverage of this event from a non-US perspective. Remember that in 2003, although many people were getting news from the internet, it was still relatively uncommon to read or see coverage from media in other parts of the world. Most of us received our news via newspapers and television.

One of the things that struck me as surprising was the terminology used by most newscasters in the United States versus the terminology used by newscasters in Canada. In the United States, media referred to this as the War in Iraq. In Canada, the newscasters were referring to it as the War on Iraq. That single letter change from "in" to "on" makes an incredible difference. In the former case, it forces an inference that the US is fighting a war against an enemy that just happens to be in Iraq. The war is not against the country of Iraq or ordinary Iraqis, and indeed you might be helping Iraqis fight a war that they wanted to fight anyway. I think that is exactly how media outlets in the United States wanted this to be perceived. It was promoted by the US as part of the larger War on Terror. The Canadian government did not support this war, and the news outlets referred to it as the War on Iraq, suggesting that the United States had essentially declared war on another sovereign country. One could argue that neither term was exactly correct, but the only point I wish to make here is that how the war was being discussed by the news media was going to have an effect on how it was perceived. It was not at all uncommon (at the time) for many Americans to feel that this was a well-justified military exercise that ultimately was about fighting international terrorists who happened to be based in Iraq. In other words, the US did not declare war on Iraq; the war was just being fought there. How we describe things, and how we talk about them, can absolutely influence how others think about them.

Memory

The interaction between thought and language is complex, and many studies have shown that language use affects the structure and nature of mental representations and the output of many thinking behaviours. Let's consider a classic and well-known example.
The eyewitness testimony research conducted by Elizabeth Loftus in the 1970s shows that language use can affect the content of episodic memory (Loftus & Palmer, 1974). In a series of studies, Loftus investigated how memory can be manipulated and affected by the questions that are being asked. In the most well-known example, subjects were shown videos of car accidents and were then asked about these accidents in follow-up questions. The general procedure might be close to what happens when the police are interviewing eyewitnesses about an event. The accident that subjects were shown was relatively straightforward. It depicted two cars intersecting on a city street. After watching the video, subjects were asked to estimate how fast they thought the cars were going when they intersected. But they were not asked with the term intersected. Subjects instead were asked with one of five words: collided, smashed, bumped, hit, or contacted. In other words, all of the subjects saw exactly the same video and were asked about it with the same question, except for the use of a different verb. When subjects were asked to estimate the speed, not surprisingly those who were asked about how fast the cars were going when they smashed into each other estimated higher speeds than those who were asked to say how fast the cars were going when they bumped or hit each other. That one-word change seemed to have an effect on subjects' memory.

All of Loftus's subjects saw the same video and the language manipulation did not occur until afterwards. So the initial encoding of memory should have been the same. But when the memories were retrieved and subjects were asked to make an estimate, the language that was used in the question changed their estimate. The language affected their thinking and their memories. Additional questions revealed that the memories may have been forever tainted by that initial question. A week later, subjects were asked if they remembered any broken glass. The key thing to remember in this particular study is that there was no broken glass in the actual video. However, subjects who were asked to estimate speed with the term smashed were significantly more likely to falsely remember broken glass. Of course, Loftus argued that these kinds of memory errors present a problem for eyewitness testimony because questioning can affect the nature of the memory. It is not a stretch at all to imagine that a detective or police officer, even without meaning to, would permanently affect the status of the memory depending on how the eyewitness was questioned. In other words, in this example, language is clearly having an effect on the mental representations that are being used to answer the questions later on.

Analogy and metaphor

Language can also be used effectively via analogy and metaphor to guide understanding. A good analogy relates concepts that may have a similar deep structure even though they may be different on the surface. Generally, the listener or receiver of the analogy has to attend to the deeper structure to realize these similarities. In the simplest form, if you know that Y is good, and you were told that X is analogous to Y, you infer that there is also something good about X. We see examples all the time. I make analogies when I lecture. You probably make analogies when you explain things to people. Let's look at an example from a movie. Most people have seen the movie Shrek from the early 2000s, either as a whole movie or as video clips or memes online.
In one scene, Shrek is trying to explain to Donkey why ogres are complex and difficult to understand. He says "ogres are like onions" and then later explains that "We both have layers". Now I realize that there are few things more annoying than someone explaining why a joke is funny, but I am going to do this anyway to make a point about analogies. If you've not seen this scene, you can easily find it on YouTube. When Shrek says "ogres are like onions", Donkey initially misunderstands and focusses on the surface similarity and the perceptual qualities of onions. Donkey wonders if ogres are like onions because they smell bad or they make people cry. He transfers the wrong properties to Shrek, but it's funny because they are also properties of ogres. Only later does he understand the analogy that Shrek is trying to make: that ogres and onions have layers, and that the outside might be different from the inside. The joke works because it lets Shrek make his deeper analogy at the same time that Donkey makes humorous, surface-level analogies.

More concretely, consider an often-cited example: "An atom is like the solar system" (Gentner, 1983). At a certain age, early in primary school, most children have some knowledge about the solar system. They are aware that the sun is large and sits at the centre, that planets revolve around the sun, and that possibly some physical force allows this orbit to exist. Presumably, they know less about atomic structure, but knowing that an atom is like the solar system encourages the understanding that each atom has a nucleus and is surrounded by electrons. Also, some physical force allows this structure to be maintained. These two things are very different on the surface, but there are functional and deep similarities that can be explained and understood via metaphor and analogy. In this case, the analogy allows some of the properties of one domain (the solar system) to extend to another domain (the atom). Just as the sun keeps the planets in orbit via the force of gravity, so too does the nucleus keep the electrons in check with the Coulomb force.

Conceptual metaphor

The linguist George Lakoff has suggested that conceptual metaphors play a big role in how a society thinks of itself, and in politics (Lakoff, 1987; Lakoff & Johnson, 1980). I gave the example earlier of the war in Iraq versus the war on Iraq, each of which creates a different metaphor. One is an aggressive action against or on a country; the other is an aggressive act taking place in a country. Lakoff argues that these conceptual metaphors constrain and influence the thinking process. He gives the example of an argument. One conceptual metaphor for arguments is that an argument is like a war. If you think of arguments in this way, you might say things like I shot down his arguments, or Watch this guy totally destroy climate change denial in two minutes. These statements are likely to arise from a conceptual metaphor of arguments as some kind of analogy for warfare. Lakoff also suggests another metaphor for argument – that it is a game of chance. In this case, people might say things like You win some, you lose some. Lakoff suggests that there is an interaction between a given conceptual metaphor and produced statements and utterances, and that this interaction relates to how we understand the world. There are other examples as well. We generally think of money as a limited resource and a valuable commodity. By analogy, we often think of time in the same way.
As such, many of the statements that we make about time reflect this relationship. We might say You're wasting my time or I need to budget my time better or This gadget is a real timesaver. According to Lakoff, we say the things we do because we have these underlying conceptual metaphors, and these metaphors are part of our culture.

5.1 Theory in the Real World

Lakoff's theory has been influential since it was introduced in the 1980s, but it has recently taken on renewed relevance since the 2016 election of Donald Trump in the US and also because of the general support for populism in many other countries. Lakoff has been thinking and writing about language and how it affects behaviour for decades, and his most recent work discusses some of the ways that popular media, news media, and statements by politicians can shape how we think. Being aware of these things is important because we don't want to be misled or duped, but our minds sometimes make it easy. Using some examples from President Trump, Lakoff points out how we can be misled without realizing it. And although he's using President Trump as the primary example, these things can be observed in many politicians. Trump has, however, made this a central part of his governing and campaigning style.

One clear example is simple repetition. President Trump repeats terms and slogans so that they will become a part of your concept. He was famous for saying/tweeting: "We're going to win. We're going to win so much. We're going to win at trade, we're going to win at the border. We're going to win so much, you're going to be so sick and tired of winning, you're going to come to me and go, 'Please, please, we can't win any more'." There are seven repetitions of the word win in that speech, and he's repeated statements like that many times since. We also hear and see repetitions of statements like FAKE NEWS and NO COLLUSION, etc. Lakoff argues that the simple repetition is the whole goal. Even if you don't believe the president, you are still taking in these words and concepts. And they are often amplified by people commenting and retweeting.

The president is adept at controlling the conversation by framing people and ideas. He seems to do this in two ways. One is with his use of nicknames – Crooked Hillary, for example, to refer to former presidential candidate Hillary Clinton. "Crooked" taps into a cognitive metaphor for untruthfulness in which we think that TRUTH is STRAIGHT. Calling her "crooked" and repeating it might seem silly, but it still has the desired effect of reinforcing the concept that she is neither truthful nor trustworthy. Even the president's slogan Make America Great Again is heavily loaded with linguistic reference, implying that it was great in the past, that it's not great now, and that the president's actions will make it great again in the way it was great before.

Lakoff suggests that we should all be aware of how these things influence our thinking. We may not agree with the president, but according to Lakoff these repetitions and the use of framing and metaphors will create the associations anyway. The more times you hear them the stronger the memory becomes. And this is not just the case with President Trump. Lakoff's message applies to other leaders, politicians and media. If you're reading this in the UK, the Netherlands, India, or South Africa, these examples can be extended, and they will likely apply as well. For better or worse, language influences how we think about things.
It causes us to strengthen some representations and create new memories. It activates schemas and concepts. It causes us to make inferences and draw conclusions. And it makes us able to be coerced and misled. The best defence against being misled is knowing why this happens and how to recognize it.

Sometimes metaphors can produce real, measurable differences in behaviour. A great study by Kempton (1986) investigated the real-world consequences of a metaphor. He used the example of residential thermostats, which are temperature-sensitive controllers for home heating and cooling. He reasoned that home heating systems are relatively simple and are familiar to most people living in a climate with a distinct summer and winter. Even if you don't own your own home right now, if you live in the US, Canada, much of the UK or EU, you probably grew up in a home with a thermostat that could be used to adjust the temperature. If you live in an apartment, or a university residence hall, you may also have a thermostat that allows you to control the temperature of the room. Kempton pointed out that many people must have some understanding of how the thermostat works because they typically have to adjust them several times a day or at least several times a week. (This excludes the current versions of thermostats which are highly programmable, or the so-called "smart" thermostats which eventually learn to adjust home temperature automatically.)

Kempton argued that there are broadly defined metaphors or folk theories for how the thermostat works. He referred to one of these folk theories as the "feedback theory", which suggests that the thermostat senses the temperature and turns the furnace on or off to maintain that set temperature. He referred to the other folk theory as the "valve theory", which holds that the thermostat actually controls the amount of heat coming out – like a valve. A higher setting releases more heat. Only one of these theories is correct, the feedback theory, but Kempton observed that both of these theories were present, sometimes within the same household, and that these theories had an effect on actual thermostat adjustment behaviour. People who seemed to rely on the valve theory tended to make more adjustments in general, and also more extreme adjustments, than people who relied on the feedback theory. More adjustments translate to higher energy use. In other words, a misunderstanding of how the thermostat works, having the wrong metaphor or analogy for how it works, can produce measurable differences in behaviour and energy use. The sketch below contrasts the two folk theories.
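Below is a minimal Python sketch contrasting the two folk theories as they are described here. It is only an illustration, not Kempton's model: the heating rates, time step, and starting temperature are invented for the example. Under the (correct) feedback theory the furnace is simply on or off, so a higher setting does not warm the room any faster; under the valve theory the dial is assumed to scale the heat output directly.

```python
# Illustrative only: toy simulations of the two folk theories of a
# thermostat. All numbers (rates, times, temperatures) are invented.

def feedback_model(setting, start_temp=15.0, minutes=120, heat_rate=0.2):
    """Feedback theory: the furnace is either ON (fixed output) or OFF,
    switching to hold the set temperature."""
    temp, history = start_temp, []
    for _ in range(minutes):
        furnace_on = temp < setting                   # simple on/off control
        temp += heat_rate if furnace_on else -0.05    # small heat loss when off
        history.append(temp)
    return history

def valve_model(setting, start_temp=15.0, minutes=120):
    """Valve theory: the dial is (wrongly) assumed to scale heat output,
    so a higher setting pours out more heat per minute."""
    temp, history = start_temp, []
    for _ in range(minutes):
        temp += 0.01 * setting - 0.05                 # output proportional to the dial
        history.append(temp)
    return history

def minutes_to_reach(history, target):
    return next((i + 1 for i, t in enumerate(history) if t >= target), None)

for setting in (20, 30):
    fb = minutes_to_reach(feedback_model(setting), 20)
    vl = minutes_to_reach(valve_model(setting), 20)
    print(f"setting {setting}: feedback model reaches 20 C in {fb} min, "
          f"valve model predicts {vl} min")
```

In this toy version, cranking the dial to 30 does nothing to speed up reaching a comfortable 20 degrees under the feedback account, but appears to help under the valve account, which is consistent with Kempton's observation that valve-theory households make more, and more extreme, adjustments.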
Universal cognitive metaphors

Others have suggested that across many languages and cultures there seem to exist universal cognitive metaphors. In many cases, these reflect a conceptual similarity between a physical thing and a psychological concept. For example, there are many metaphors that relate to the idea of happiness being "up". People can be said to be upbeat or feeling down if they are not happy, music can be up tempo, a smile is up, a frown is down. All of these idioms and statements come from this same metaphor. Other examples reflect the idea that consciousness is "up". You wake up, you go down for a nap, etc. Another common metaphor is that control is like being above something. You can be on top of the situation, you are in charge of people who work under you, the Rolling Stones recorded a popular song called Under My Thumb. These cognitive metaphors are common in English, but they are common in many other languages as well. This suggests that there is a universality to these metaphors, and a commonality among cultures between language and thought.

One possible explanation for the universality of these metaphors is that they are all tied to a physical state of being. The notion of consciousness being "up" relates easily to the notion of standing up when you are awake. If you overpower someone in a physical confrontation, you may find yourself literally on top of them. You may have to hold the other person down. This physical connection between being on top and being in power helps to explain why these kinds of metaphors are universal and not tied to a specific language.

Linguistic Relativity and Linguistic Determinism

The discussions above illustrate many of the ways in which language influences how you remember things, how you think about things, and how you make decisions. In short, it is clear that language and linguistic context have an effect on thinking. One version of this general claim is referred to as linguistic relativity, which suggests that our native language influences how we think and behave, and that there will be differences among groups of people as a function of their native language. That is, thinking is relative. This relativity depends in part on the native language a person learned to speak. The strongest form of this claim is often referred to as linguistic determinism and is also sometimes referred to as the Sapir–Whorf hypothesis, after Edward Sapir and his student Benjamin Whorf. This strong version of the hypothesis argues that language determines thought and can even place constraints on what a person can perceive. In general, both the strong and the weak versions of this theory are widely attributed to Whorf (1956), though he referred to his theory as linguistic relativity.

Before he began the study of linguistics, Whorf was a chemical engineer and worked as a fire prevention engineer. A possibly apocryphal story suggests that some of his ideas and interest in linguistics arose during his time as a fire prevention engineer and inspector. According to this story, he noticed employees smoking near canisters of gasoline, apparently because the canisters were labelled as empty. An empty canister of gasoline can be very dangerous because of the fumes, but the workers labelled and conceptualized them as being empty. That is, they were linguistically empty, but not actually empty at all. Whorf began to believe that one's native language determines what you can think about, and even your ability to perceive things. This story may or may not be accurate, but it still makes an interesting point about the difference between how one describes something linguistically and what that thing actually is. In other words, "empty" may not really be empty.

Whorf's bold claim

A famous quote reads:
We dissect nature along lines laid down by our native language. The categories and types that we isolate from the world of phenomena we do not find there because they stare every observer in the face; on the contrary, the world is presented in a kaleidoscopic flux of impressions which has to be organized by our minds – and this means largely by the linguistic systems of our minds. We cut nature up, organize it into concepts, and ascribe significances as we do, largely because we are parties to an agreement to organize it in this way – an agreement that holds throughout our speech community and is codified in the patterns of our language […] all observers are not led by the same physical evidence to the same picture of the universe, unless their linguistic backgrounds are similar, or can in some way be calibrated. (Whorf, 1956: 213–214; my emphasis)

Whorf appears to be challenging Plato's notion of cutting nature at the joints (which was discussed in Chapter 4), meaning a natural way to divide the natural world and concepts. Rather, Whorf suggests that concepts and categories are determined almost exclusively by one's native language. This is often thought of as the strongest form of linguistic relativity. In this case, the theory implies that one's native language necessarily determines thinking, cognition, and perception.

Colour cognition

This claim that language constrains or determines perception and cognition was a bold one, and in the middle of the twentieth century it was very provocative. Anthropologists, psychologists, and linguists began to look for and examine ways to test this idea. One of the earliest was a study by Berlin and Kay (1969). They looked at the distribution of colour terms across many different languages. They reasoned that if language constrains thought, then native languages may constrain the types of colours that can be perceived and used. To be included in their analysis, a colour term had to be monolexemic, which means that the term has only one core meaning. For example, the word "red" is monolexemic because it has only a single core meaning: "red". The term "reddish" is not monolexemic because there are two core units of meaning: "red" and the modifier "-ish". A basic colour term cannot be included in the description of any other colour terms; for example, indigo is not a basic term as it is a kind of blue colour. Basic colour terms cannot be restricted to a narrow class of objects; for example, the colour blonde works for hair, wood and beer, but not for many other things. Basic colour terms must have a domain-general utility in the language. A basic colour term refers to a property that can be extended to objects in many different classes.

What Berlin and Kay (1969) found is shown in Figure 5.2. All languages contain terms for dark and light. Red is also fairly common: languages with only three colour terms always have a word for black, white, and red. Red is a very salient colour for humans as it is the colour of hot things and blood. As languages evolve, more terms may be added, but they still keep the terms from earlier stages.

Figure 5.2

Berlin and Kay's work does not really argue completely against the linguistic determinism hypothesis, and their initial claims have been softened and criticized (Saunders & van Brakel, 1997). But their work has provided a very interesting way to test it. If there are languages with only two or three words for colours, and if the linguistic relativity theory is correct, then speakers of those languages should have difficulty categorizing colours that share the same colour name. Eleanor Rosch (Heider, 1972) carried out a test like this with an indigenous group in Papua New Guinea. The Dani people have only two words to denote colours, and thus to linguistically define colour categories.
One category is called mili and refers to cool, dark shades, such as the English colours blue, green, and black. The second category is mola, which refers to warmer or lighter colours, such as the English colours red, yellow, and white. In several experiments, Rosch asked her subjects to engage in colour learning tasks with colour cards. These cards, known as "colour chips", were taken from the Munsell colour system, which is a system of describing colour on three dimensions of hue, value (lightness), and chroma (colour purity). The Munsell system has been used since the 1930s as a standardized colour language for scientists, designers, and artists. The colour chips are small cards with a uniform colour on one side, usually with a matte finish. These look a lot like what you might find at a store that sells paint.

One of the tasks Rosch used was a paired associate learning task. In a paired associate task, participants are asked to learn a list of things, and each thing is paired with something they already know. So if you were asked to learn a list of new words in a paired associate task, each word would be paired with a word you already know. The word you already know serves as a memory cue. In Rosch's task, the things to be learned were the Munsell colour chips. Some of these colour chips were what are referred to as focal colours. In other words, these colour chips were at the perceptual centre of their category. They were selected as the best example of a colour category in a prior study with English speakers. When asked to pick the "best example", Rosch found widespread agreement for colours with the highest saturation. Thus, the focal colour for red was the single chip that would be identified by most speakers of English as being the best example of red. Other chips might also be called red but were not identified as the centre of the category or the best example. And still other chips might be more ambiguous. They might be named red some of the time, and at other times might appear to be another colour. You can pick out focal colours yourself. If you go to select a new colour for text in your word-processing programme, you can see a wide arrangement of colours, but one seems to stick out as the best example of red, the best example of blue, the best example of green, etc. In other words, we would all probably agree which exact shade is the best example of the colour green. This would be the focal colour for green.

In one of Rosch's experiments, subjects were shown a chip and taught a new name. This was done for 16 colour–word pairs. Rosch reasoned that English speakers would have no difficulty learning a paired associate for a focal colour because it would already activate the prototype for an existing colour category. They should perform less well on paired associate learning for non-focal colours because they would not have a linguistic label to hang on that colour. Speakers of the Dani language should behave similarly, except they should show no advantage for most of the focal colours. That is because, if linguistic determinism is operating, the so-called focal colours would not be special in any way, because speakers of the Dani language do not have the same categories. As far as linguistic determinism is concerned, they should not have the same focal colours as English speakers because they have different colour categories. Our focal colours are the centre of our colour categories.
Being shown a focal red should not activate an existing linguistic category for speakers of the Dani language, and so they should show little difference between learning the paired associates for focal colours and learning the paired associates for non-focal colours. This is not what Rosch found, however. Speakers of the Dani language showed the same advantage for learning focal colours over non-focal colours that English speakers showed. This suggests that even though their language has only two words to denote colour categories, they can perceive the same differences in colours as English speakers can. Thus, this appears to be evidence against a strong interpretation of linguistic relativity. The Dani language was not constraining the perception of its speakers. In many ways, this should not be surprising because colour vision is carried out computationally at the biological level. Regardless of linguistically defined categories, we all still have the same visual system with a retina filled with photoreceptors that are sensitive to different wavelengths.

Naming common objects

More recent work has continued to cast doubt on the linguistic determinism theory. Barbara Malt's research (Malt et al., 1999) looked at artefacts and manufactured objects, and the linguistic differences between English and Spanish. Participants in the experiment were shown many different common objects, such as bottles, containers, jugs, and jars. For speakers of North American English, a "jug" is typically used to contain liquid, is about four litres in volume, and has a handle. A "bottle" is typically smaller, has a longer neck, and no handle. A "jar" is typically made out of glass and has a wide mouth. A "container" is usually not made of glass, but of plastic. Containers come in round and square shapes, and are usually used to contain non-liquid products. Speakers of English may vary in terms of the exact category boundaries, but most will agree on what to call a bottle, what to call a jug, etc. Whereas speakers of North American English refer to jugs separately from jars, speakers of Spanish typically label all of these things with a single term. In other words, a glass bottle, a jug, and a jar might all be labelled with the term "frasco".

If linguistic determinism held true for manufactured objects, Spanish speakers should show less ability to classify them into different categories based on surface similarity. In other words, if you speak a language that has only one term for all of these objects, you should minimize attention to the individuating features and instead tend to classify them as members of the same group. However, Malt's results did not support this prediction. English-speaking and Spanish-speaking subjects did not differ much from each other when classifying these containers via overall similarity. That is, Spanish speakers might have the same label for all of the different objects, but when asked to sort them into groups based on similarity, they sorted them in roughly the same way as English-speaking subjects. The linguistic label did not interfere with their ability to perceive and process surface features. In short, these results do not support the strong version of the linguistic determinism theory.

Count versus mass nouns

Despite the failings of the linguistic determinism theory, we have already shown that language can affect memory and cognition. Language also affects perception and interpretation in some cases. For example, consider the distinction between objects and substances.
In English, as in many languages, we have nouns to refer to objects and nouns to refer to collections of things or substances. So-called count nouns refer to entities, objects, and kinds. We can say "one horse, two horses, five cats, and 13 cakes". On the other hand, mass nouns typically denote entities that are not considered individually – in other words, a substance rather than an object. We might say "a pile of leaves", "a dash of salt", or "a lot of mud". Even though the substance denoted by the mass noun might be made of individual objects, we are not referring to the individual objects with the mass noun; rather, we are referring to the collection of them as a thing.

A study by Soja et al. (1991) looked at when we acquire the ability to tell the difference between objects and substances. Do children learn to do this through exposure to their native language? The researchers examined English-speaking two-year-olds. The children were shown objects that were given an arbitrary name. They were then asked to pick out similar objects to the one they had just been shown and given a name for. For example, children might be shown a solid object or a non-solid object and told "This is my blicket" or "Do you see this blicket?" The children were then asked to extend the concept by being asked "Show me some more". In extending these words to new displays, two-year-olds showed a distinction between object and substance. When the sample was a hard-edged solid object, they extended the new word to all objects of the same shape, even when made of a different material. When the sample was a non-solid substance, they extended the word to other-shaped puddles of that same substance but not to shape matches made of different materials. This suggests that the distinction may be acquired via language, because both the object and the term were new. Even two-year-old children are able to generalize according to linguistic information.

Linguistic differences in time perception

A final example of how language affects the thinking process is demonstrated in a study by Lera Boroditsky (2001). She noted that across different languages and cultures there are differences in the metaphors that people use to talk about time. This is related to Lakoff's ideas on conceptual metaphors (discussed earlier). English speakers often talk about time as if it is horizontal. That is, a horizontal metaphor would result in statements like "pushing back the deadline" or "moving a meeting forward". Mandarin speakers, on the other hand, often talk about time as if it is on a vertical axis. That is, they may use the Mandarin equivalents of up and down to refer to the order of events, weeks, and months. It should be noted that this is not entirely uncommon in English, especially when considering time on a vertically oriented calendar. In fact, when I look at the Google calendar on my smartphone, it is arranged on a vertical axis with the beginning of the day at the top and the end of the day at the bottom. Although I still use terms like "I've been falling behind on this project", I am also pretty used to thinking about time in the vertical dimension. We also have English vertical time metaphors, such as doing something "at the top of the day". Exceptions aside, these metaphors seem to be linguistically and culturally entrenched in the idioms and statements that are produced.
In order to test whether the conceptual metaphor and the language affect subjects' ability to understand temporal statements, subjects were first shown a prime to orient them to the horizontal or vertical dimension. They were then asked to either confirm or disconfirm temporal propositions. Figure 5.3 shows an example of a prime. In the first, an English example of a horizontal prime is shown: "The black ball is ahead of the white ball". The second shows an English example of a vertical prime: "The black ball is on top of the white ball". Boroditsky reasoned that if a prime activated a vertical metaphor, and you spoke a language that encouraged thinking about time in a vertical dimension, you should see a processing facilitation. That is, you would be faster at judging the temporal proposition. If you saw a prime that activated the vertical metaphor, but you spoke a language that encouraged thinking about time in a horizontal dimension, then you should see a cost and would be slower at judging the temporal proposition.

Figure 5.3 This is an example of a horizontal visual prime (on the left) and a vertical visual prime (on the right). If subjects are shown one or the other prime before making a temporal inference, it can facilitate or interfere with the inference depending on how the speaker conceptualizes time.

This is what Boroditsky found. After seeing a vertically oriented prime, Mandarin speakers were faster to confirm or disconfirm temporal propositions compared to when they had seen the horizontal prime. She found the reverse effect for English speakers. This suggests that language differences may predict aspects of temporal reasoning by speakers. Subsequent studies showed that this default orientation can be overridden. For example, Boroditsky trained English-speaking subjects to think about time vertically, giving them examples of vertical metaphors. In this case, after the training, the English speakers exhibited the vertical rather than the former horizontal priming effect. Although this study shows a clear impact of language on thought, it is not strong evidence for linguistic determinism because the native language does not seem to determine how time is perceived. Instead, local effects of linguistic context appear to be doing most of the work.
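As a way of making the design concrete, here is a small, purely hypothetical Python sketch of how trials of this kind could be organized and summarized: each trial records the prime orientation and a response time for the true/false judgement, and mean response times are then compared by prime type. The trial records and numbers are invented for illustration; they are not Boroditsky's stimuli or data.

```python
# Hypothetical illustration of the prime-then-judge design: these records
# are invented, not data from Boroditsky (2001).
from statistics import mean

# (prime_orientation, temporal_statement, response_time_ms)
trials = [
    ("vertical",   "March comes earlier than April", 642),
    ("vertical",   "June comes later than May",      655),
    ("horizontal", "March comes earlier than April", 701),
    ("horizontal", "June comes later than May",      688),
]

def mean_rt(trials, orientation):
    return mean(rt for prime, _stmt, rt in trials if prime == orientation)

for orientation in ("vertical", "horizontal"):
    print(orientation, "prime:", round(mean_rt(trials, orientation)), "ms")

# For a Mandarin speaker, the prediction is faster judgements after the
# vertical prime; for an English speaker, faster after the horizontal prime.
```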
Summary

Although many different species communicate with each other, only humans have developed an expansive, productive, and flexible natural language. And because language provides the primary point of access to our own thoughts, language and thinking seem completely intertwined. In Chapter 3, I argued that memory is flexible and malleable. This flexibility can occasionally be a liability as memories are not always accurate. Memories are a direct reflection of the linguistic processes used during encoding and retrieval. This point was demonstrated in the discussion of eyewitness testimony and how the language used during questioning can change the content of the memory itself. In other words, memory for events is created by and affected by the language we use when describing the event to ourselves and to others. In Chapter 4 on concepts and categories, I suggested that concepts might be represented by definitions, lists of features, or centralized prototypes. Although each of these theories makes different claims about what is represented, all of them assume that a category and a concept can have a label. The label is linguistic.

Although the present chapter makes it clear that our concepts are not exclusively defined by a language, categories' verbal labels still provide an important access point. We access categorical information in many ways by using the linguistic label. In Section 2, we will examine the role of language in reasoning. In deductive logic, language use must be precise in order to distinguish a valid argument from an invalid one. We will also look at the role of language in mediating between the faster, instinct-based behaviours produced by System 1 thinking and the behavioural outcomes produced by the slower, more deliberative System 2 thinking. Linguistic ability helps to mediate between these two systems. Furthermore, System 2 is generally thought to be language-based. In Chapter 9 in Section 3, we will see that language use can influence how decisions are made by providing a context or frame. The same decision can be framed as beneficial or as a potential loss. In other words, linguistic content and semantics can have a sizeable impact on the behavioural outcome of decisions.
