Language in Mind: An Introduction to Psycholinguistics by Julie Sedivy
Language in Mind
An Introduction to Psycholinguistics
Second Edition

JULIE SEDIVY
University of Calgary

About the Cover and Chapter Opener Images

Bruno Mallart is one of the most talented European artists, his work having appeared in some of the world’s premier publications: The New York Times, The Wall Street Journal, and the New Scientist, to name a few. A freelance illustrator since 1986, Mallart first worked for several children’s book publishers and advertising agencies, using a classical realistic watercolor and ink style. Some years later he began working in a more imaginative way, inventing a mix of drawing, painting, and collage. His work speaks of a surrealistic and absurd world and engages the viewer’s imagination and sense of fun. Despite the recurring use of the brain in his art, Mallart’s background is not scientific—though his parents were both neurobiologists. He uses the brain as a symbol for abstract concepts such as intelligence, thinking, feeling, ideas, and knowledge. Attracted to all that is mechanical, Mallart frequently includes in his art machine parts such as gears and wheels that imply movement and rhythm. These features together, in their abstract representation, beautifully illustrate the topics discussed in Language in Mind, Second Edition. To see more of Bruno Mallart’s art, please go to his website: www.brunomallart.com.

Language in Mind, Second Edition

Oxford University Press is a department of the University of Oxford. It furthers the University’s objective of excellence in research, scholarship, and education by publishing worldwide. Oxford is a registered trade mark of Oxford University Press in the UK and certain other countries.

Published in the United States of America by Oxford University Press
198 Madison Avenue, New York, NY 10016, United States of America

© 2020 Oxford University Press

Sinauer Associates is an imprint of Oxford University Press.
For titles covered by Section 112 of the US Higher Education Opportunity Act, please visit www.oup.com/us/he for the latest information about pricing and alternate formats.

All rights reserved. No part of this publication may be reproduced, stored in a retrieval system, or transmitted, in any form or by any means, without the prior permission in writing of Oxford University Press, or as expressly permitted by law, by license, or under terms agreed with the appropriate reproduction rights organization. Inquiries concerning reproduction outside the scope of the above should be sent to the Rights Department, Oxford University Press, at the address above. You must not circulate this work in any other form and you must impose this same condition on any acquirer.

Address editorial correspondence to:
Sinauer Associates
23 Plumtree Road
Sunderland, MA 01375 USA

Address orders, sales, license, permissions, and translation inquiries to:
Oxford University Press USA
2001 Evans Road
Cary, NC 27513 USA
Orders: 1-800-445-9714

Library of Congress Cataloging-in-Publication Data
Names: Sedivy, Julie, author.
Title: Language in mind : an introduction to psycholinguistics / Julie Sedivy, University of Calgary.
Description: Second edition. | New York : Oxford University Press, | Includes bibliographical references and index.
Identifiers: LCCN 2018044934 | ISBN 9781605357058 (hardcover) | ISBN 9781605358369 (ebook)
Subjects: LCSH: Psycholinguistics. | Cognition.
Classification: LCC BF455.S3134 2020 | DDC 401/.9--dc23
LC record available at https://lccn.loc.gov/2018044934

Printed in the United States of America

For My Students

Brief Contents

1 Science, Language, and the Science of Language
2 Origins of Human Language
3 Language and the Brain
4 Learning Sound Patterns
5 Learning Words
6 Learning the Structure of Sentences
7 Speech Perception
8 Word Recognition
9 Understanding Sentence Structure and Meaning
10 Speaking: From Planning to Articulation
11 Discourse and Inference
12 The Social Side of Language
13 Language Diversity

Contents

CHAPTER 1 Science, Language, and the Science of Language
BOX 1.1 Wrong or insightful? Isaac Asimov on testing students’ knowledge
1.1 What Do Scientists Know about Language?
1.2 Why Bother?

CHAPTER 2 Origins of Human Language
2.1 Why Us?
BOX 2.1 Hockett’s design features of human language
METHOD 2.1 Minding the gap between behavior and knowledge
2.2 The Social Underpinnings of Language
RESEARCHERS AT WORK 2.1 Social scaffolding for language
METHOD 2.2 Exploring what primates can’t (or won’t) do
2.3 The Structure of Language
BOX 2.2 The recursive power of syntax
LANGUAGE AT LARGE 2.1 Engineering the perfect language
2.4 The Evolution of Speech
BOX 2.3 Practice makes perfect: The “babbling” stage of human infancy
BOX 2.4 What can songbirds tell us about speaking?
2.5 How Humans Invent Languages
LANGUAGE AT LARGE 2.2 From disability to diversity: Language studies and Deaf culture
2.6 Language and Genes
BOX 2.5 Linguistic and non-linguistic impairments in Williams and Down syndromes
2.7 Survival of the Fittest Language?
BOX 2.6 Evolution of a prayer
DIGGING DEEPER Language evolution in the lab

CHAPTER 3 Language and the Brain
3.1 Evidence from Damage to the Brain
BOX 3.1 Phineas Gage and his brain
LANGUAGE AT LARGE 3.1 One hundred names for love: Aphasia strikes a literary couple
METHOD 3.1 The need for language diversity in aphasia research
3.2 Mapping the Healthy Human Brain
METHOD 3.2 Comparing apples and oranges in fMRI
BOX 3.2 The functional neuroanatomy of language
LANGUAGE AT LARGE 3.2 Brain bunk: Separating science from pseudoscience
BOX 3.3 Are Broca and Wernicke dead?
3.3 The Brain in Real-Time Action
LANGUAGE AT LARGE 3.3 Using EEG to assess patients in a vegetative state
BOX 3.4 A musical P600 effect
RESEARCHERS AT WORK 3.1 Using ERPs to detect cross-language activation
DIGGING DEEPER Language and music

CHAPTER 4 Learning Sound Patterns
4.1 Where Are the Words?
METHOD 4.1 The head-turn preference paradigm
BOX 4.1 Phonotactic constraints across languages
4.2 Infant Statisticians
BOX 4.2 ERPs reveal statistical skills in newborns
4.3 What Are the Sounds?
LANGUAGE AT LARGE 4.1 The articulatory phonetics of beatboxing
BOX 4.3 Vowels
METHOD 4.2 High-amplitude sucking
4.4 Learning How Sounds Pattern
BOX 4.4 Allophones in complementary distribution: Some crosslinguistic examples
4.5 Some Patterns Are Easier to Learn than Others
RESEARCHERS AT WORK 4.1 Investigating potential learning biases
DIGGING DEEPER How does learning change with age and experience?

CHAPTER 5 Learning Words
5.1 Words and Their Interface to Sound
5.2 Reference and Concepts
BOX 5.1 Some sources of non-arbitrariness in spoken languages
LANGUAGE AT LARGE 5.1 How different languages cut up the concept pie
5.3 Understanding Speakers’ Intentions
RESEARCHERS AT WORK 5.1 Assessing the accuracy of adult speakers
METHOD 5.1 Revisiting the switch task
5.4 Parts of Speech
5.5 The Role of Language Input
BOX 5.2 Learning from bilingual input
5.6 Complex Words
LANGUAGE AT LARGE 5.2 McLanguage and the perils of branding by prefix
BOX 5.3 The very complex morphology of Czech
BOX 5.4 Separate brain networks for words and rules?
DIGGING DEEPER The chicken-and-egg problem of language and thought

CHAPTER 6 Learning the Structure of Sentences
6.1 The Nature of Syntactic Knowledge
BOX 6.1 Stages of syntactic development
LANGUAGE AT LARGE 6.1 Constituent structure and poetic effect
BOX 6.2 Rules versus constructions
BOX 6.3 Varieties of structural complexity
6.2 Learning Grammatical Categories
RESEARCHERS AT WORK 6.1 The usefulness of frequent frames in Spanish and English
6.3 How Abstract Is Early Syntax?
BOX 6.4 Quirky verb alterations
BOX 6.5 Syntax and the immature brain
6.4 Complex Syntax and Constraints on Learning
BOX 6.6 Specific language impairment and complex syntax
METHOD 6.1 The CHILDES database
6.5 What Do Children Do with Input?
LANGUAGE AT LARGE 6.2 Language universals, alien tongues, and learnability
METHOD 6.2 What can we learn from computer simulations of syntactic learning?
DIGGING DEEPER Domain-general and domain-specific theories of language learning

CHAPTER 7 Speech Perception
7.1 Coping with the Variability of Sounds
BOX 7.1 The articulatory properties of English consonants
BOX 7.2 Variability in the pronunciation of signed languages
BOX 7.3 Categorical perception in chinchillas
METHOD 7.1 What can we learn from conflicting results?
7.2 Integrating Multiple Cues
BOX 7.4 Does music training enhance speech perception?
7.3 Adapting to a Variety of Talkers
LANGUAGE AT LARGE 7.1 To dub or not to dub?
BOX 7.5 Accents and attitudes
RESEARCHERS AT WORK 7.1 Adjusting to specific talkers
7.4 The Motor Theory of Speech Perception
LANGUAGE AT LARGE 7.2 How does ventriloquism work?
BOX 7.6 What happens to speech perception as you age?
DIGGING DEEPER The connection between speech perception and dyslexia

CHAPTER 8 Word Recognition
8.1 A Connected Lexicon
BOX 8.1 Controlling for factors that affect the speed of word recognition
METHOD 8.1 Using the lexical decision task
BOX 8.2 Words: All in the mind, or in the body too?
8.2 Ambiguity
BOX 8.3 Why do languages tolerate ambiguity?
RESEARCHERS AT WORK 8.1 Evidence for the activation of “sunken meanings”
LANGUAGE AT LARGE 8.1 The persuasive power of word associations
8.3 Recognizing Spoken Words in Real Time
BOX 8.4 Do bilingual people keep their languages separate?
BOX 8.5 Word recognition in signed languages
8.4 Reading Written Words
BOX 8.6 Do different writing systems engage the brain differently?
LANGUAGE AT LARGE 8.2 Should English spelling be reformed?
DIGGING DEEPER The great modular-versus-interactive debate

CHAPTER 9 Understanding Sentence Structure and Meaning
9.1 Incremental Processing and the Problem of Ambiguity
BOX 9.1 Key grammatical terms and concepts in English
LANGUAGE AT LARGE 9.1 Crash blossoms run amok in newspaper headlines
METHOD 9.1 Using reading times to detect misanalysis
9.2 Models of Ambiguity Resolution
BOX 9.2 Two common psychological heuristics
BOX 9.3 Not all reduced relatives lead to processing implosions
9.3 Variables That Predict the Difficulty of Ambiguous Sentences
RESEARCHERS AT WORK 9.1 Subliminal priming of a verb’s syntactic frame
BOX 9.4 Doesn’t intonation disambiguate spoken language?
9.4 Making Predictions
9.5 When Memory Fails
9.6 Variable Minds
BOX 9.5 The language experience of bookworms versus socialites
BOX 9.6 How does aging affect sentence comprehension?
LANGUAGE AT LARGE 9.2 A psycholinguist walks into a bar…
DIGGING DEEPER The great debate over the “bilingual advantage”

CHAPTER 10 Speaking: From Planning to Articulation
10.1 The Space between Thinking and Speaking
BOX 10.1 What spoken language really sounds like
LANGUAGE AT LARGE 10.1 The sounds of silence: Conversational gaps across cultures
10.2 Ordered Stages in Language Production
BOX 10.2 Common types of speech errors
BOX 10.3 Learning to fail at speaking
10.3 Formulating Messages
RESEARCHERS AT WORK 10.1 Message planning in real time
LANGUAGE AT LARGE 10.2 “Clean” speech is not better speech
10.4 Structuring Sentences
METHOD 10.1 Finding patterns in real-world language
LANGUAGE AT LARGE 10.3 Language detectives track the unique “prints” of language users
10.5 Putting the Sounds in Words
METHOD 10.2 The SLIP technique
BOX 10.4 Was Freud completely wrong about speech errors?
BOX 10.5 Patterns in speech errors
DIGGING DEEPER Sentence production in other languages

CHAPTER 11 Discourse and Inference
11.1 From Linguistic Form to Mental Models of the World
RESEARCHERS AT WORK 11.1 Probing for the contents of mental models
BOX 11.1 Individual differences in visual imagery during reading
METHOD 11.1 Converging techniques for studying mental models
LANGUAGE AT LARGE 11.1 What does it mean to be literate?
11.2 Pronoun Problems
BOX 11.2 Pronoun systems across languages
11.3 Pronouns in Real Time
BOX 11.3 Pronoun types and structural constraints
11.4 Drawing Inferences and Making Connections
LANGUAGE AT LARGE 11.2 The Kuleshov effect: How inferences bring life to film
BOX 11.4 Using brain waves to study the time course of discourse processing
11.5 Understanding Metaphor
LANGUAGE AT LARGE 11.3 The use and abuse of metaphor
DIGGING DEEPER Shallow processors or builders of rich meaning?

CHAPTER 12 The Social Side of Language
12.1 Tiny Mind Readers or Young Egocentrics?
RESEARCHERS AT WORK 12.1 Learning through social interaction
BOX 12.1 Social gating is for the birds
METHOD 12.1 Referential communication tasks
BOX 12.2 Does language promote mind reading?
12.2 Conversational Inferences: Deciphering What the Speaker Meant
LANGUAGE AT LARGE 12.1 On lying and implying in advertising
BOX 12.3 Examples of scalar implicature
BOX 12.4 Using conversational inference to resolve ambiguity
LANGUAGE AT LARGE 12.2 Being polite, indirectly
12.3 Audience Design
12.4 Dialogue
LANGUAGE AT LARGE 12.3 Why are so many professors bad at audience design?
DIGGING DEEPER Autism research and its role in mind-reading debates

CHAPTER 13 Language Diversity
LANGUAGE AT LARGE 13.1 The great language extinction
13.1 What Do Languages Have in Common?
BOX 13.1 Language change through language contact
13.2 Explaining Similarities across Languages
RESEARCHERS AT WORK 13.1 Universals and learning biases
METHOD 13.1 How well do artificial language learning experiments reflect real learning?
BOX 13.2 Do genes contribute to language diversity?
BOX 13.3 Can social pressure make languages less efficient?
13.3 Words, Concepts, and Culture
BOX 13.4 Variations in color vocabulary
BOX 13.5 ERP evidence for language effects on perception
13.4 Language Structure and the Connection between Culture and Mind
METHOD 13.2 Language intrusion and the variable Whorf effect
BOX 13.6 Mark Twain on the awful memory-taxing syntax of German
13.5 One Mind, Multiple Languages
LANGUAGE AT LARGE 13.2 Can your language make you broke and fat?
DIGGING DEEPER Are all languages equally complex?

Glossary
Literature Cited
Author Index
Subject Index

Preface

Note to Instructors

As psycholinguists, we get to study and teach some of the most riveting material in the scientific world. And as instructors, we want our students to appreciate what makes this material so absorbing, and what it can reveal about fundamental aspects of ourselves and how we interact with each other.
That desire provided the impetus for this textbook. As I see it, a textbook should be a starting point—an opening conversation that provokes curiosity, and a map for what to explore next.

This book should be accessible to students with no prior background in linguistics or psycholinguistics. For some psychology students, it may accompany the only course about language they’ll ever take. I hope they’ll acquire an ability to be intelligently analytical about the linguistic waters in which they swim daily, an appreciation for some of the questions that preoccupy researchers, and enough background to follow significant new developments in the field. Some students will wind up exploring the literature at close range, perhaps even contributing to it. These students need an introductory textbook that lays out important debates, integrates insights from its various subdisciplines, and points to the many threads of research that have yet to be unraveled.

Throughout this book, I’ve tried to encourage students to connect theories and findings to observations about everyday language. I’ve been less concerned with giving students a detailed snapshot of the newest “greatest hits” in research than with providing a helpful conceptual framework. My goal has been to make it as easy as possible for students to read the primary literature on their own. Many students find it very difficult to transition from reading textbooks to digesting journal articles; the new Researchers at Work boxes are designed to help students by serving as a model for how to pull the key information out of a journal article. I’ve tried to emphasize not just what psycholinguists know (or think they know), but how they’ve come to know it. Experimental methods are described at length, and numerous figures and tables throughout the book lay out the procedural details, example stimuli, and results from some of the experiments discussed in the chapters.
To help students actively synthesize the material, I’ve added a new series of Questions to Contemplate after each section. These may prompt students to organize their thoughts and notes, and instructors can nudge students into this conceptual work by assigning some as take-home essay questions.

This edition continues to offer a mix of foundational and newer research. More examples of crosslinguistic research, including work with signed languages, have been included. I’ve also put greater emphasis on language development over the lifespan, with more detailed discussions of the role of language experience and the effects of age on language. And I’ve encouraged students to think about some current research controversies, such as the disputed connection between bilingualism and enhanced cognitive skills. Needless to say, there are many potential pages of material that I regretfully left out; my hope is that the book as a whole provides enough conceptual grounding that students can pursue additional topics while having a sense of the overall intellectual context into which they fit.

I’ve also tried to give students a balanced view of the diverging perspectives and opinions within the field (even though, naturally, I subscribe to my own favorite theories), and a realistic sense of the limits to our current knowledge. And if, along the way, students happen to develop the notion that this stuff is really, really cool—well, I would not mind that one bit.

Thanks!

I hope that everyone understands that on the cover of this book, where it says “Julie Sedivy,” this is shorthand for “Julie Sedivy and throngs of smart, insightful people who cared enough about this book to spend some portion of their brief lives bringing it into the world.” I’m indebted to the numerous scholars who generously took time from their chronically overworked lives to read and comment on parts of this book. Their involvement has improved the book enormously.
(Naturally, I’m to blame for any remaining shortcomings of the book, which the reader is warmly encouraged to point out to me so I can fix them in any subsequent editions.)

Heartfelt thanks to the following reviewers, who helped me develop the original edition of this book: Erin Ament, Janet Andrews, Stephanie Archer, Jennifer Arnold, Julie Boland, Craig Chambers, Morten Christiansen, Suzanne Curtin, Delphine Dahan, Thomas Farmer, Vic Ferreira, Alex Fine, W. Tecumseh Fitch, Carol Fowler, Silvia Gennari, LouAnn Gerken, Richard Gerrig, Ted Gibson, Matt Goldrick, Zenzi Griffin, Greg Hickok, Carla Hudson Kam, Kiwako Ito, T. Florian Jaeger, Michael Kaschak, Heidi Lorimor, Max Louwers, Maryellen MacDonald, Jim Magnuson, Utako Minai, Emily Myers, Janet Nicol, Lisa Pearl, Hannah Rohde, Chehalis Strapp, Margaret Thomas, John Trueswell, Katherine White, Eiling Yee.

A special thanks to Jennifer Arnold and Jan Andrews for taking this book out for a spin in the classroom in its earlier versions, and to their students and my own at the University of Calgary for providing valuable comments.
The following reviewers generously offered their comments on the second edition, prompting much adding, deleting, reorganizing, and useful agonizing on my part:

Janet Andrews, Vassar College
Iris Berent, Northeastern University
Jonathan Brennan, University of Michigan
Craig Chambers, University of Toronto
Jidong Chen, California State University, Fresno
Judith Degen, Stanford University
Anouschka Folz, Bangor University
Zenzi Griffin, The University of Texas at Austin
Adele Goldberg, Princeton University
Karin Humphreys, McMaster University
Timothy Justus, Pitzer College
Michael Kaschak, Florida State University
Lisa Levinson, Oakland University
Sophia Malamud, Brandeis University
Maryellen MacDonald, University of Wisconsin
Katherine Midgley, San Diego State University
Jared Novick, University of Maryland, College Park
Eleonora Rossi, California State Polytechnic University, Pomona
Gregory Scontras, University of California at Irvine
Ralf Thiede, University of North Carolina at Charlotte
Adam Ussishkin, University of Arizona

In today’s publishing climate, it’s common to question whether book publishers contribute much in the way of “added value.” The editorial team at Sinauer Associates (Oxford University Press) has shown a single-minded devotion to producing the best book we possibly could, provided skills and expertise that are far, far outside of my domain, and lavished attention on the smallest details of the book.
Thanks to: Sydney Carroll, who shared the vision for this book, and kept its fires stoked; Carol Wigg, who, as production editor, knows everything; Alison Hornbeck, who stepped into Carol’s capable shoes and shepherded the book over the finish line; Elizabeth Budd, who spotted the hanging threads and varnished up the prose; Beth Roberge Friedrichs, who designed exactly the kind of book I would have wanted to design, if only I had the talent; Elizabeth Morales, Joan Gemme, and Chris Small for their intelligent attention to the visual elements of the book; and Grant Hackett, for indexing under time pressure.

Deep appreciation to all the students who’ve come through my classrooms and offices at Brown University and the University of Calgary: for your curiosity, candid questions, and mild or strident objections; for wondering what this stuff was good for; for rising to the occasion; and for occasionally emailing me years later to tell me what you learned in one of my classes.

Every author inevitably closes by giving thanks to partners and family members. There’s a reason for that. A real reason, I mean, not a ceremonial one. Very few projects of this scale can be accomplished without the loving encouragement and bottomless accommodation of the people closest to you. To my endlessly curious daughter Katharine Sedivy-Haley: thanks for the discussions about science, Isaac Asimov, and teaching and learning. Thanks to my son, Ben Haley, whose insights about his own experiences as a student provided the bass line for this book. And to my husband Ian Graham: thanks for understanding completely why it was I was writing this book, even though it deprived me of many, many hours of your sweet company. This weekend, love, I’m coming out to the mountains with you.
JULIE SEDIVY

Media and Supplements
to accompany Language in Mind: An Introduction to Psycholinguistics, Second Edition

eBook
Language in Mind, Second Edition is available as an eBook, in several different formats. Please visit the Oxford University Press website at www.oup.com for more information.

For the Student
Companion Website (oup.com/us/sedivy2e)
The Language in Mind Companion Website provides students with a range of activities, study tools, and coverage of additional topics, all free of charge and requiring no access code. The site includes the following resources:
Web Activities
“Language at Large” modules
Flashcards & Key Terms
Chapter Outlines
Web Links
Further Readings
Glossary
(See the inside front cover for additional details.)

For the Instructor
(Instructor resources are available to adopting instructors online. Registration is required. Please contact your Oxford University Press representative to request access.)

Instructor’s Resource Library
The Language in Mind, Second Edition Instructor’s Resource Library includes all of the textbook’s figures and tables, making it easy for instructors to incorporate visual resources into their lecture presentations and other course materials. Figures and tables are provided in both JPEG (high- and low-resolution versions) and PowerPoint formats, all optimized for in-class use. The Test Bank, revised and updated for the second edition, includes multiple-choice and short-answer questions that cover the full range of content in every chapter. All questions are referenced to Bloom’s Taxonomy, making it easier to select the right balance of questions when building assessments.

CHAPTER 1
Science, Language, and the Science of Language

Before you read any further, stand up, hold this book at about waist height, and drop it. Just do it. (Well, if you’re reading this on an electronic device, maybe you should reach for the nearest unbreakable object and drop it instead.)
Now that you’ve retrieved your book and found your place in it once more, your first assignment is to explain why it fell down when you dropped it. Sure, sure, it’s gravity—Isaac Newton and falling apples, etc. Your next assignment is to answer this question: “How do you know it’s gravity that makes things fall down?” What’s the evidence that makes you confident that gravity is a better explanation than other possibilities—for example, the idea that the Earth is a kind of magnet that attracts objects of a wide range of materials?

Chances are you find it much easier to produce the right answer than to explain why it’s the right answer. It’s possible, too, that throughout your science education you were more often evaluated on your ability to remember the right answer than on being able to recreate the scientific process that led people there. And I have to admit there’s a certain efficiency to this approach: there’s a wheel, learn what it is, use it, don’t reinvent it.

The trouble with the “learn it, use it” approach is that science hardly ever has “the” right answers. Science is full of ideas, some of which stand an extremely good chance of being right, and some of which are long shots but the best we’ve got at the moment. The status of these ideas shifts around a fair bit (which partly explains why textbooks have to be revised every couple of years). If you have a good sense of the body of evidence that backs up an idea (or can identify the gaps in the evidence), it becomes much easier to tell where a certain idea falls on the spectrum of likelihood that it’s right.

BOX 1.1 Wrong or insightful? Isaac Asimov on testing students’ knowledge

“Young children learn spelling and arithmetic, for instance, and here we tumble into apparent absolutes. How do you spell ‘sugar’? Answer: s-u-g-a-r. That is right. Anything else is wrong. How much is 2 + 2? The answer is 4. That is right. Anything else is wrong.
Having exact answers, and having absolute rights and wrongs, minimizes the necessity of thinking, and that pleases both students and teachers. For that reason, students and teachers alike prefer short-answer tests to essay tests; multiple-choice over blank short-answer tests; and true-false tests over multiple-choice. But short-answer tests are, to my way of thinking, useless as a measure of the student’s understanding of a subject. They are merely a test of the efficiency of his ability to memorize.

You can see what I mean as soon as you admit that right and wrong are relative. How do you spell ‘sugar’? Suppose Alice spells it p-q-z-z-f and Genevieve spells it s-h-u-g-e-r. Both are wrong, but is there any doubt that Alice is wronger than Genevieve? For that matter, I think it is possible to argue that Genevieve’s spelling is superior to the “right” one. Or suppose you spell ‘sugar’: s-u-c-r-o-s-e, or C12H22O11. Strictly speaking, you are wrong each time, but you’re displaying a certain knowledge of the subject beyond conventional spelling.

Suppose then the test question was: how many different ways can you spell ‘sugar’? Justify each. Naturally, the student would have to do a lot of thinking and, in the end, exhibit how much or how little he knows. The teacher would also have to do a lot of thinking in the attempt to evaluate how much or how little the student knows. Both, I imagine, would be outraged.”

From Isaac Asimov (1988). The relativity of wrong. In The relativity of wrong: Essays on science. New York, NY: Doubleday. Used with permission.

This was a point made by scientist and author Isaac Asimov in his well-known essay “The Relativity of Wrong” (see Box 1.1). In this 1988 essay, Asimov challenged an English student who wrote to accuse him of scientific arrogance. The letter-writer pointed out that, throughout history, scientists have believed that they understood the universe, only to be proven wrong later.
Hence, concluded Asimov’s correspondent, the only reliable thing one could say about scientific knowledge is that it’s bound to be wrong. To this, Asimov countered that what matters isn’t knowing whether an idea is right or wrong, but having a sense of which ideas might be more wrong than others.

He used the flat-Earth theory as an example of how scientific theories develop. In ancient times, the notion that the Earth was flat wasn’t a stupid or illogical one—it was the idea that happened to be most consistent with the available body of knowledge. Eventually, people like Aristotle observed things that didn’t quite mesh with the flat-Earth theory. They noticed that certain stars disappear from view when you travel north, and certain others disappear if you travel south. They saw that Earth’s shadow during a lunar eclipse is always round and that the sun casts shadows of different lengths at different latitudes. In short, the available body of evidence had expanded. The flat-Earth theory was no longer the best fit to the observations, causing it to be abandoned in favor of the notion that the Earth is a sphere. As it turned out, when even more evidence was considered, this theory too had to be abandoned: the Earth is not exactly a sphere, but an oblate spheroid, a sphere that’s been squished toward the center at the North and South Poles.

As Asimov put it, “when people thought the Earth was flat, they were wrong. When people thought the Earth was spherical, they were wrong. But if you think that thinking that the Earth is spherical is just as wrong as thinking the Earth is flat, then your view is wronger than both of them put together.” Without the distinction that one is more wrong than the other, for example, you could be left with the belief that, for all we know, in 50 years, scientists will “discover” that the oblate spheroid theory was wrong after all, and the Earth is cubical, or in the shape of a doughnut.
(In actual fact, the oblate spheroid theory is wrong. The Earth is very, very slightly pear-shaped, with the South Pole being squished toward the center just a bit more than the North Pole. Still, not a cube.)

Asimov’s point about scientific progression and the graded “rightness” of ideas seems fairly obvious in the context of a well-known example like the flat-Earth theory. But unfortunately, the way in which people often talk about science can blot out the subtleties inherent in the scientific process. In many important discussions, people do behave as if they think of scientific ideas as being right or wrong in an absolute sense. For example, you’ve probably heard people express frustration upon reading a study that contradicts earlier health tips they’ve heard; a common reaction to this frustration is to vow to ignore any advice based on scientific studies, on the grounds that scientists are constantly “changing their minds.” And when people talk about evolution as “just a theory” (and hence not something we need to “believe”) or object that the science of climate change “isn’t settled,” they’re failing to think about the degree to which these scientific ideas approach “rightness.” Naturally, being able to identify whether an idea is very likely to be wrong or very likely to be right calls for a much more sophisticated body of scientific knowledge than simply having memorized what the supposedly right answer is. But ultimately, the ability to evaluate the rightness of an idea leaves you with a great deal more power than does merely accepting an idea’s rightness.

I have a niece who, at the age of three, was definitely on to something. Like many preschoolers, her usual response to being exposed to new information was to ask a question. But in her case, the question was almost always the same.
Whether you told her that eating her carrots would make her healthy or that the sun is many, many miles away, she would demand, “How do you know?” She made a great scientific companion—in her presence, you couldn’t help but realize where your understanding of the world was at its shallowest. (Conversations with her still have a way of sending me off on an extended Google search.) One can only hope that by the time she hits college or university, she hasn’t abandoned that question in favor of another one, commonly heard from students: Which theory is the right one?

1.1 What Do Scientists Know about Language?

In studying the language sciences, it’s especially useful to approach the field with the “how do you know?” mindset rather than one that asks which theory is right. The field is an exceptionally young one, and the truth is that its collection of facts and conclusions that can be taken to be nearly unshakable is really very small. (The same is true of most of the sciences of the mind and brain in general.) In fact, scientific disagreements can run so deep that language researchers are often at odds about fundamentals—not only might they disagree on which theory best fits the evidence, they may argue about what kind of cloth a theory should be cut from, about very basic aspects of how evidence should be gathered in the first place, or even about the range of evidence that a particular theory should be responsible for covering.

It’s a little bit as if we were still in an age when no one really knew what made books or rocks fall to the ground—when gravity was a new and exciting idea, but was only one among many. It needed to be tested against other theories, and we were still trying to figure out what the best techniques might be to gather data that would decide among the competing explanations. New experimental methods and new theoretical approaches crop up every year.
All this means that language science is at a fairly unstable point in its brief history, and that seismic shifts in ideas regularly reshape its intellectual landscape. But this is what makes the field so alluring to many of the researchers in it—the potential to play a key role in reshaping how people think scientifically about language is very, very real (see Box 1.2). A sizable amount of what we “know” about language stands a good chance of being wrong. Many of the findings and conclusions in this book may well be overturned within a few years. Don’t be surprised if at some point your instructor bursts out in vehement disagreement with some of the material presented here, or with the way in which I’ve framed an idea. In an intellectual climate like this, it becomes all the more important to take a “how do you know?” stance. Getting in the habit of asking this question will give you a much better sense of which ideas are likely to endure, as well as how to think about new ideas that pop up in the landscape.

BOX 1.2 Language science is a team sport

The topic of this textbook is psycholinguistics, a field that uses experimental methods to study the psychological machinery that drives language learning, language comprehension, and language production. But to construct theories that stand a decent chance of being right, psycholinguists need to pay attention to the work of language scientists who work from a variety of angles and use a number of different methods.

Theoretical linguists provide detailed descriptions and analyses of the structure of language. They pay close attention to the patterns found in languages, examining the intricate constraints that languages place on how sounds, words, or sentences can be assembled. Many linguists pore over data from different languages and try to come up with generalizations about the ways in which human languages are organized.
Computational linguists write and implement computer programs to explore the data structure of human language or to simulate how humans might learn and use language. This approach is extremely useful for uncovering patterns that require sifting through enormous amounts of data. They also help evaluate whether general ideas about human mental operations can be successfully implemented in practice, shedding light on which theories may be more realistic than others.

Neurolinguists and cognitive neuroscientists study the brain and how this complex organ carries out the mental operations that are required for learning or using language. They investigate the role of specific regions and networks in the brain, correlate specific brain responses with psychological operations, and assess the consequences of damage to the brain, all with the aim of understanding how the brain’s “hardware” is able to run the “software” involved in learning or using language.

Biolinguists look deeply into our biological makeup to understand why our species seems to be the only one to use language to communicate. They are preoccupied with studying genetic variation, whether between humans and other species, or between specific human populations or individuals. They try to trace the long explanatory line that links the workings of genes with the structure of the brain and, ultimately, the mental operations needed to be competent at language.

Language typologists are like naturalists, collecting data samples from many different modern languages, and historical linguists are like archeologists, reconstructing extinct ancestors and establishing the connections and relationships among existing languages. Both take a broad view of language that may help us understand some of the forces at play in shaping language within an individual’s mind and brain.
For example, typologists might discover deep similarities that hold across many languages, and historical linguists might identify trends that predict the direction in which languages are most likely to change. Both of these may reflect limits on the human mental capacity for language—limits that ultimately shape language and what we are able to do with it. Collaboration across these fields is enormously important because it expands the body of evidence that a theory is accountable for. It also equips language scientists with new methodological tools and theoretical insights that they can apply to stubborn problems in their own professional corner. In the ideal world, ideas and evidence would flow easily from one field to another, so that researchers working in one field could immediately benefit from advances made in other fields. But in practice, collaboration is hard work. It requires learning enough about a new field to be able to properly understand these advances. And it requires overcoming the inevitable social barriers that make it hard for researchers to accept new knowledge from an outside group of scientists who may have very different methodological habits and even a different work culture. Like all humans, scientists tend to be more trusting of evidence produced by familiar colleagues who are similar to themselves and can be somewhat defensive of their own ways of doing things. If these tendencies aren’t kept in check, researchers in these separate disciplines risk becoming like the proverbial blind men examining an elephant (see Figure 1.1), with the various fields reaching their own mutually incompatible conclusions because they are examining only a subset of the available evidence. The field of psycholinguistics would not be where it is today without many productive collaborations across disciplines. In this textbook, you’ll come across plenty of examples of contributions made by language scientists in all of the fields mentioned here. 
psycholinguistics  The study of the psychological factors involved in the perception, production, and acquisition of language.

Figure 1.1  In the ancient Indian parable of the blind men and the elephant, each man touches only one part of the elephant and, as a result, each describes the animal in a completely different manner. The man holding the elephant’s trunk says it’s like a snake, the man touching its side describes it as a wall, the one touching the leg describes it as a tree trunk, and so on. (Illustration from Martha Adelaide Holton & Charles Madison Curry, Holton-Curry readers, 1914.)

The question also brings you into the heart of some of the most fascinating aspects of the scientific process. Scientific truths don’t lie around in the open, waiting for researchers to stub their toes on them. Often the path from evidence to explanation is excruciatingly indirect, requiring an intricate circuitous chain of assumptions. Sometimes it calls for precise and technologically sophisticated methods of measurement. This is why wrong ideas often persist for long periods of time, and it’s also why scientists can expend endless amounts of energy in arguing about whether a certain method is valid or appropriate, or what exactly can and can’t be concluded from that method. In language research, many of the Eureka! moments represent not discoveries, but useful insights into how to begin answering a certain question.

Language is a peculiar subject matter. The study of chemistry or physics, for example, is about phenomena that have an independent existence outside of ourselves. But language is an object that springs from our very own minds. We can have conscious thoughts about how we use or learn language, and this can give us the illusion that the best way to understand language is through these deliberate observations.
But how do you intuit your way to answering questions like these:

When we understand language, are we using the same kind of thinking as we do when we listen to music or solve mathematical equations?

Is your understanding of the word blue exactly the same as my understanding of it?

What does a baby know about language before he or she can speak?

Why is it that sometimes, in the process of retrieving a word from memory, you can draw a complete blank, only to have the word pop into your mind half an hour later?

What does it mean if you accidentally call your current partner by the name of your former one? (You and your partner might disagree on what this means.)

What exactly makes some sentences in this book confusing, while others are easy to understand?

To get at the answers to any of these questions, you have to be able to probe beneath conscious intuition. This requires acrobatic feats of imagination—not only in imagining possible alternative explanations, but also in devising ways to go about testing them. In this book, I’ve tried to put the spotlight not just on the conclusions that language researchers have drawn, but also on the methods they’ve used to get there. As in all sciences, methods range from crude to clever to stunningly elegant, and to pass by them with just a cursory glance would be to miss some of the greatest challenges and pleasures of the study of language.

1.2 Why Bother?

At this point, you might be thinking, “Fine, if so little is truly known about how language works in the mind, sign me up for some other course, and I’ll check back when language researchers have worked things out.” But before you go, let me suggest a couple of reasons why it might be worth your while to study psycholinguistics.
Here’s one reason: Despite the fact that much of the current scientific knowledge of language is riddled with degrees of uncertainty and could well turn out to be wrong, it’s not nearly as likely to be wrong as the many pronouncements that people often make about language without really knowing much, if anything, about it (see Table 1.1). The very fact that we can have intuitions about language—never mind that many of these are easily contradicted by closer, more systematic observation—appears to mislead people into believing that these intuitions are scientific truths. Aside from those who have formally studied the language sciences or have spent a great deal of time thinking analytically about language, almost no one knows the basics of how language works or has the slightest idea what might be involved in learning and using it. It’s simply not something that is part of our collective common knowledge at this point in time.

TABLE 1.1 Some things people say about language (that are almost certainly wrong)

You can learn language by watching television.
People whose language has no word for a concept have trouble thinking about that concept.
English is the hardest language to learn.
Texting is making kids illiterate.
Some languages are more logical/expressive/romantic than others.
People speak in foreign accents because their mouth muscles aren’t used to making the right sounds.
Some languages are spoken more quickly than others.
Saying “um” or “er” is a sign of an inarticulate speaker.
Failure to enunciate all your speech sounds is due to laziness.
Sentences written in the passive voice are a sign of poor writing.
Swearing profusely is a sign of a poor vocabulary.
Deaf people should learn to speak and lip-read in spoken language before they learn sign language, or it will interfere with learning a real language.
Speech errors reveal your innermost thoughts.
You can’t learn language by watching television.
Try this: Ask your mother, or your brother, or your boyfriend, or your girlfriend, “How can you understand what I’m saying right now?” Many people happily go through their entire lives without ever asking or answering a question like this, but if pressed, they might answer something like, “I recognize the words you’re using.” Fine, but how do they even know which bunches of the sounds you’re emitting go together to form words, since there are no silences between individual words? And, once they’ve figured that out, how do they recognize the words? What do “word memories” look like, and is there a certain order in which people sort through their mental dictionaries to find a match to the sounds you’re emitting? Moreover, understanding language involves more than just recognizing the words, or people would have no trouble with the phrase “words I you’re the using recognize.” Obviously, they’re responding to the order of words as well. So, what is the right order of words in a sentence— not just for this one, but more generally? How do people know what the right order is, and how did they learn to tell whether a sentence they’ve never heard before in their lives has its words strung together in the “proper” order? Most people have a decent sense of how they digest their food, but the common knowledge of many educated people today does not contain the right equipment to begin answering a question as basic as “How can you understand what I’m saying?” Imagine how it must have been, before awareness of gravity became common knowledge, for people to be asked, “Why does a rock fall to the ground?” A typical answer might have been, “It just does”; most people probably never thought to ask themselves why. Many might have stared and stammered, much as they do now when asked about how language works. By studying the psychology of language, you’re entering a world of new questions and new ways of thinking—a world that isn’t visible to most people. 
You’ll be privy to discussions of ideas before they’ve become the officially received “right” answers that “everyone” knows. You might find this all so stimulating that you wind up being a language researcher yourself—but the vast majority of readers of this textbook won’t. Which brings me to the second reason to study psycholinguistics. There are few subjects you can study that will have such a broad and deep impact on your daily life as the study of language. Even if you’re unlikely to become a professional language researcher, you’re extremely likely to use language in your daily life. Inevitably, you’ll find yourself asking questions like these:

How can I write this report so that it’s easier to understand?
What kind of language should I use in order to be persuasive?
If I sit my kid in front of the TV for an hour a day, will this help her to learn language?
Why do my students seem incapable of using apostrophes correctly?
How can I make my poem more interesting?
Should I bother trying to learn a second language in my 30s?
Why is this automated voice system so infuriating?

Even a basic understanding of how language works in the mind will provide you with the tools to approach these and many, many other questions in an intelligent way. Unfortunately, those of us who are deeply immersed in studying language don’t always take the time to talk about how our accumulated body of knowledge might provide a payoff for daily users of language. This would be a poor textbook indeed if it didn’t help you answer the questions about language that will crop up throughout your life. Ultimately, whether or not you become a professional psycholinguist, you should feel well equipped to be an amateur language scientist. And to do that, you need much more than “answers” to questions that researchers have thought to ask. One of the goals of this book is to give you the conceptual framework to address the questions that you will think to ask.
Throughout this book, you’ll find activities and exercises that are designed to immerse you in the scientific process of understanding language. The more deeply you engage in these, the more you’ll internalize a way of thinking about language that will be very useful to you when faced with new questions and evidence. And throughout the book, you’ll find discussions that link what can sometimes be very abstract ideas about language to real linguistic phenomena out in the world. Many more such connections can be made, and the more you learn about how language works, the more you’ll be able to generate new insights and questions about the language you see and hear all around you.

Questions to Contemplate

1. What are the drawbacks of simply memorizing the most current theories and findings in psycholinguistics?

2. What kind of language scientists would you assemble in a team of researchers if you wanted to (a) find out whether language disorders are more prevalent in English-speaking populations than in Mandarin-speaking populations, and if so, why; or (b) determine whether parrots can really learn and understand language, or whether they simply mimic it? What would be the role of each specialist in the research project?

GO TO oup.com/us/sedivy2e for web activities, further readings, research updates, and other features

2 Origins of Human Language

As far as we know, no other species on Earth has language; only humans talk. Sure, many animals communicate with each other in subtle and intricate ways. But we’re the only ones who gossip, take seminars, interview celebrities, convene board meetings, recite poems, negotiate treaties, conduct marriage ceremonies, hold criminal trials—all activities where just about the only thing going on is talking. Fine, we also do many other things that our fellow Earth-creatures don’t. We play chess and soccer, sing the blues, go paragliding, design bridges, paint portraits, drive cars, and plant gardens, to name just a few.
What makes language so special? Here’s the thing: language is deeply distinct from these other activities for the simple reason that all humans do it. There is no known society of Homo sapiens, past or present, in which people don’t talk to each other, though there are many societies where no one plays chess or designs bridges. And all individuals within any given human society talk, though again, many people don’t play chess or design bridges, for reasons of choice or aptitude. So, language is one of the few things about us that appears to be a true defining trait of what it means to be human—so much so that it seems it must be part of our very DNA. But what, exactly, do our genes contribute?

One view is that language is an innate instinct, something that we are inherently programmed to do, much as birds grow wings, elephants grow trunks, and female humans grow breasts. In its strongest version (for example, as argued by Steven Pinker in his 1994 book The Language Instinct), this nativist view says that not only do our genes endow us with a general capacity for language, they also lay out some of the general structures of language, the building blocks that go into it, the mental process of acquiring it, and so on. This view of language as a genetically driven instinct captures why it is that language is not only common to all humans but also is unique to humans—no “language genes,” no talking.

nativist view  The view that not only are humans genetically programmed to have a general capacity for language, particular aspects of language ability are also genetically specified.

But many language researchers see it differently. The anti-nativist view is that language is not an innate instinct but a magnificent by-product of our impressive cognitive abilities. Humans alone learn language—not because we inherit a preprogrammed language template, but because we are the superlearners of the animal kingdom.
What separates us from other animals is that our brains have evolved to become the equivalent of swift, powerful supercomputers compared with our fellow creatures, who are stuck with more rudimentary technology. Current computers can do qualitatively different things that older models could never aspire to accomplish. This supercomputer theory is one explanation for why we have language while squirrels and chimpanzees don’t.

anti-nativist view  The view that the ability of humans to learn language is not the result of a genetically programmed “language template” but is an aspect (or by-product) of our extensive cognitive abilities, including general abilities of learning and memory.

But what about the fact that language is universal among humans, unlike chess or trombone-playing (accomplishments that, though uniquely human, are hardly universal)? Daniel Everett (2012), a linguist who takes a firm anti-nativist position, puts it this way in his book Language: The Cultural Tool: Maybe language is more like a tool invented by human beings than an innate behavior such as the dance of honeybees or the songs of nightingales. What makes language universal is that it’s an incredibly useful tool for solving certain problems that all humans have—foremost among them being how to efficiently transmit information to each other.

Everett compares language to arrows. Arrows are nearly universal among hunter-gatherer societies, but few people would say that humans have genes that compel them to make arrows specifically, or to make them in a particular way. More likely, making arrows is just part of our general tool-making, problem-solving competence. Bows and arrows can be found in so many different societies because, at some point, people who didn’t grow their own protein had to figure out a way to catch protein that ran faster than they did.
Because it was well within the bounds of human intelligence to solve this problem, humans inevitably did—just as, Everett argues, humans inevitably came to speak with each other as a way of achieving certain pressing goals.

The question of how we came to have language is a huge and fascinating one. If you’re hoping that the mystery will be solved by the end of this chapter, you’ll be sorely disappointed. It’s a question that has no agreed-upon answer among language scientists, and, as you’ll see, there’s a range of subtle and complex views among scientists beyond the two extreme positions I’ve just presented. In truth, the various fields that make up the language sciences are not yet even in a position to be able to resolve the debate. To get there, we first need to answer questions like:

What is language?
What do all human languages have in common?
What’s involved in learning it?
What physical and mental machinery is needed to successfully speak, be understood, and understand someone else who’s speaking?
What’s the role of genes in shaping any of these behaviors?

Without doing a lot of detailed legwork to get a handle on all of these smaller pieces of the puzzle, any attempts to answer the larger question about the origins of language can only amount to something like a happy hour discussion—heated and entertaining, but ultimately not that convincing one way or the other. In fact, in 1866, the Linguistic Society of Paris decreed that no papers about the origins of language were allowed to be presented at its conferences. It might seem ludicrous that an academic society would banish an entire topic from discussion.
But the decision was essentially a way of saying, “We’ll get nowhere talking about language origins until we learn more about language itself, so go learn something about language.” A hundred and fifty years later, we now know quite a bit more about language, and by the end of this book, you’ll have a sense of the broad outlines of this body of knowledge. For now, we’re in a position to lay out at least a bit of what might be involved in answering the question of why people speak.

2.1 Why Us?

Let’s start by asking what it is about our language use that’s different from what animals do when they communicate. Is it different down to its fundamental core, or is it just a more sophisticated version of what animals are capable of? An interesting starting point might be the “dance language” of honeybees.

The language of bees

The dance language of honeybees was identified and described by Karl von Frisch (see von Frisch, 1967). When a worker bee finds a good source of flower nectar at some distance from her hive, she returns home to communicate its whereabouts to her fellow workers by performing a patterned waggle dance (see Figure 2.1). During this dance, she repetitively traces a specific path while shaking her body. The elements of this dance communicate at least three things:

1. The direction in which the nectar source is located. If the bee moves up toward the top of the hive, this indicates that the nectar source can be found by heading straight toward the sun. The angle of deviation away from a straight vertical path shows the direction relative to the sun.

2. The distance to the source. The longer the bee dances along the path from an initial starting point before returning to retrace the path again, the farther away the source is.

3. The quality of the source. If the bee has hit the nectar jackpot, she shakes with great vigor, whereas a lesser source of nectar elicits a more lethargic body wiggle.
Different bee species have different variations on this dance (for example, they might vary in how long they dance along a directional path to indicate a distance of 200 meters). It seems that bees have innate knowledge of their own particular dance “dialect,” and bees introduced into a hive populated by another species will dance in the manner of their genetic ancestors, not in the style of the adopted hive (though there’s some intriguing evidence that bees can learn to interpret foreign dialects of other bees; see Fu et al., 2008).

In some striking ways, the honeybee dance is similar to what we do in human language, which is presumably why von Frisch used the term language to describe it. The dance uses body movements to represent something in the real world, just as a map or a set of directions does. Human language also critically relies on symbolic representation to get off the ground—for us, it’s usually sequences of sounds made in the mouth (for example, “eat fruit”), rather than waggling body movements, that serve as the symbolic units that map onto things, actions, and events in the world. And, in both human languages and bee dances, a smaller number of communicative elements can be independently varied and combined to create a large number of messages. Just as bees can combine different intensities of wiggling with different angles and durations of the dance path, we can piece together different phrases to similar effect: “Go 3 miles northwest and you’ll find a pretty good Chinese restaurant”; or “There are some amazing raspberry bushes about 30 feet to your left.”

Figure 2.1  The waggle dance of honeybees is used by a returning worker bee to communicate the location and quality of a food source. The worker dances on the surface of the comb to convey information about the direction and distance of the food source, as shown in the examples here. (A) The nectar source is approximately 1.5 km from the hive flying at the indicated angle to the sun.
(B) The nectar source is closer and the dance is shorter; in this case, the flowers will be found by flying away from the sun. The energy in the bee’s waggles (orange curves along the line of the dance) is in proportion to the perceived quality of the find.

Honeybee communicative behavior shows that a complex behavior capable of transmitting information about the real world can be encoded in the genes and innately specified, presumably through an evolutionary process. Like us, honeybees are highly cooperative and benefit from being able to communicate with each other. But bees are hardly among our closest genetic relatives, so it’s worth asking just how similar their communicative behavior is to ours. Along with the parallels I’ve just mentioned, there are also major differences. Most importantly, bee communication operates within much more rigid parameters than human language. The elements in the dance, while symbolic in some sense, are still closely bound to the information that’s being communicated. The angle of the dance path describes the angle of the food source to the sun; the duration of the dance describes the distance to the food source. But in human language, there’s usually a purely arbitrary or accidental relationship between the communicative elements (that is, words and phrases) and the things they describe; the word fruit, for example, is not any more inherently fruit-like than the word leg. In this sense, what bees do is less like using words and more like drawing maps with their bodies. A map does involve symbolic representation, but the forms it uses are constrained by the information it conveys. In a map, there’s always some transparent, non-arbitrary way in which the spatial relations in the symbolic image relate to the real world.
No one makes maps in which, for example, all objects colored red—regardless of where they’re placed in the image—are actually found in the northeast quadrant of the real-world space being described, while the color yellow is used to signal objects in the southwest quadrant, regardless of where they appear in the image. Another severe limitation of bee dances is that bees only “talk” about one thing: where to find food (or water) sources. Human language, on the other hand, can be recruited to talk about an almost infinite variety of topics for a wide range of purposes, from giving directions, to making requests, to expressing sympathy, to issuing a promise, and so on. Finally, human language involves a complexity of structure that’s just not there in the bees’ dance language.

To help frame the discussion about how much overlap there is between animal communication systems and human language, the well-known linguist Charles Hockett listed a set of “design features” that he argued are common to all human languages. The full list of Hockett’s design features is given in Box 2.1; you may find it useful to refer back to this list as the course progresses. Even though some of the features are open to challenge, they provide a useful starting point for fleshing out what human language looks like.

Hockett’s design features  A set of characteristics proposed by linguist Charles Hockett to be universally shared by all human languages. Some but not all of the features are also found in various animal communication systems.

BOX 2.1 Hockett’s design features of human language

1. Vocal–auditory channel  Language is produced in the vocal tract and transmitted as sound. Sound is perceived through the auditory channel.

2. Broadcast transmission and directional reception  Language can be heard from many directions, but it is perceived as coming from one particular location.

3. Rapid fading  The sound produced by speech fades quickly.

4. Interchangeability  A user of a language can send and receive the same message.

5. Total feedback  Senders of a message can hear and internalize the message they’ve sent.

6. Specialization  The production of the sounds of language serves no purpose other than to communicate.

7. Semanticity  There are fixed associations between units of language and aspects of the world.

8. Arbitrariness  The meaningful associations between language and the world are arbitrary.

9. Discreteness  The units of language are separate and distinct from one another rather than being part of a continuous whole.

10. Displacement  Language can be used to communicate about things that are not present in time and/or space.

11. Productivity  Language can be used to say things that have never been said before and yet are understandable to the receiver.

12. Traditional transmission  The specific language that’s adopted by the user has to be learned by exposure to other users of that language; its precise details are not available through genetic transmission.

13. Duality of patterning  Many meaningful units (words) are made by the combining of a small number of elements (sounds) into various sequences. For example, pat, tap, and apt use the same sound elements combined in different ways to make different word units. In this way, tens of thousands of words can be created from several dozen sounds.

14. Prevarication  Language can deliberately be used to make false statements.

15. Reflexiveness  Language can be used to refer to or describe itself.

16. Learnability  Users of one language can learn to use a different language.

Adapted from Hockett, 1960, Sci. Am. 203, 88; and Hockett & Altmann, 1968, in Sebeok (ed.), Animal Communication: Techniques of Study and Results of Research.

Primate vocalizations

If we look at primates—much closer to us genetically than bees—a survey of their vocal communication shows a pretty limited repertoire.
Monkeys and apes do make meaningful vocal sounds, but they don’t make very many, and the ones they use seem to be limited to very specific purposes. Strikingly absent are many of the features described by Hockett that allow for inventiveness, or the capacity to reuse elements in an open-ended way to communicate a varied assortment of messages. For example, vervet monkeys produce a set of alarm calls to warn each other of nearby predators, with three distinct calls used to signal whether the predator is a leopard, an eagle, or a snake (Seyfarth et al., 1980). Vervets within earshot of these calls behave differently depending on the specific call: they run into trees if they hear the leopard call, look up if they hear the eagle call, and peer around in the grass when they hear the snake alarm. These calls do exhibit Hockett’s feature of semanticity, as well as an arbitrariness in the relationship between the signals and the meaning they transmit. But they clearly lack Hockett’s feature of displacement, since the calls are only used to warn about a clear and present danger and not, for example, to suggest to a fellow vervet that an eagle might be hidden in that tree branch up there, or to remind a fellow vervet that this was the place where we saw a snake the other day. There’s also no evidence of duality of patterning, in which each call would be made by combining similar units together in different ways. And vervets certainly don’t show any signs of productivity in their language, in which the calls are adapted to communicate new messages that have never been heard before but that can be easily understood by the hearer vervets. In fact, vervets don’t even seem to have the capacity to learn to make the various alarm calls; the sounds of the alarm calls are fixed from birth and are instinctively linked to certain categories of predators, though baby vervets do have to learn, for example, that the eagle alarm shouldn’t be made in response to a pigeon overhead. 
So, they come by these calls not through the process of cultural transmission, which is how humans learn words (no French child is born knowing that chien is the sound you make when you see a dog), but by being genetically wired to make specific sounds that are associated with specific meanings.

This last point has some very interesting implications. Throughout the animal world, it seems that the exact shape of a communicative message often has a strong genetic component. If we want to say that humans are genetically wired for language, then that genetic programming is going to have to be much more fluid and adaptable than that of other animals, allowing humans to learn a variety of languages through exposure. Instead of being programmed for a specific language, we’re born with the capacity to learn any language. This very fact might look like overwhelming support for the anti-nativist view, which says that language is simply an outgrowth of our general ability to learn complex things. But not necessarily. The position of nativists is more subtle than simply arguing that we’re born with knowledge of a specific language. Rather, the claim is that there are common structural ingredients to all human languages, and that it’s these basic building blocks of language that we’re all born with, whether we use them to learn French or Sanskrit. More on this later.

WEB ACTIVITY 2.1  Considering animal communication

In this activity, you’ll be asked to consider a variety of situations that showcase the communicative behavior of animals. How do Hockett’s design features of language apply to these behaviors?

https://oup-arc.com/access/content/sedivy-2e-student-resources/sedivy2e-chapter-2-web-activity-1

One striking aspect of primate vocalizations is the fact that monkeys and apes show much greater flexibility and capacity for learning when it comes to interpreting signals than in producing them.
(A thorough discussion of this asymmetry can be found in a 2010 paper by primatologists Robert Seyfarth and Dorothy Cheney.) Oddly enough, even though vervets are born knowing which sounds to make in the presence of various predators, they don’t seem to be born with a solid understanding of the meanings of these alarms, at least as far as we can tell from their responses to the calls. It takes young vervets several months before they start showing the adult-like responses of looking up, searching in the grass, and so on. Early on, they respond to the alarm calls simply by running to their mothers, or reacting in some other way that doesn’t show that they know that an eagle call, for example, is used to warn specifically about bird-like predators. Over time, though, their ability to extend their understanding of new calls to new situations exceeds their adaptability in producing calls. For instance, vervets can learn to understand the meanings of alarm calls of other species, as well as the calls of their predators—again, even though they never learn to produce the calls of other species. Seyfarth and Cheney suggest that the information that primates can pull out from the communicative signals they hear can be very subtle. An especially intriguing example comes from an experiment involving the call behavior of baboons. Baboons, as it happens, have a very strict status hierarchy within their groups, and it’s not unusual for a higher-status baboon to try to intimidate a lower-status baboon by issuing a threat-grunt, to which the lower-ranking animal usually responds with a scream. The vocalizations of individual baboons are distinctive enough that they’re easily recognized by all members of the group. For the purpose of the study, the researchers created a set of auditory stimuli in which they cut and spliced together prerecorded threat-grunts and screams from various baboons within the group. 
The sounds were reassembled so that sometimes the threat-call of a baboon was followed by a scream from a baboon higher up in the status hierarchy. The eavesdropping baboons reacted to this pairing of sounds with surprise, which seems to show that the baboons had inferred from the unusual sequence of sounds that a lower-status animal was trying to intimidate a higher-status animal—and understood that this was a bizarre state of affairs.

It may seem strange that animals’ ability to understand something about the world based on a communicative sound is so much more impressive than their ability to convey something about the world by creating a sound. But this asymmetry seems rampant within the animal kingdom. Many dog owners are intimately familiar with this fact. It’s not hard to get your dog to recognize and respond to dozens of verbal commands; it’s getting your dog to talk back to you that’s difficult. Any account of the evolution of language will have to grapple with the fact that speaking and understanding are not necessarily just the mirror image of each other.

Can language be taught to apes?

As you’ve seen, when left to themselves in the wild, non-human primates don’t indulge in much language-like vocalization. This would suggest that the linguistic capabilities of humans and other primates are markedly different. Still, a non-nativist might object and argue that looking at what monkeys and apes do among themselves, without the benefit of any exposure to real language, doesn’t really provide a realistic picture of what they can learn about language. After all, when we evaluate human infants’ capacity for language, we don’t normally separate them from competent language users—in other words, adults—and see what they come up with on their own. Suppose language really is more like a tool than an instinct, with each generation of humans benefiting from the knowledge of the previous generation.
In that case, to see whether primates are truly capable of attaining language, we need to see what they can learn when they’re allowed to have many rich interactions with individuals who have already solved the problem of language. This line of thinking has led to a number of studies that have looked at how apes communicate, not with other non-linguistic apes, but with their more verbose human relatives. In these studies, research scientists and their assistants have raised young apes (i.e., chimpanzees, bonobos, orangutans, and gorillas) among humans in a language-rich environment. Some of the studies have included intensive formal teaching sessions, with a heavy emphasis on rewarding and shaping communicative behavior, while other researchers have raised the apes much as one would a human child, letting them learn language through observation and interaction. Such studies often raise tricky methodological challenges, as discussed in Method 2.1. For example, what kind of evidence is needed to conclude that apes know the meaning of a word in the sense that humans understand that word? Nevertheless, a number of interesting findings have come from this body of work (a brief summary can be found in a review article by Kathleen Gibson, 2012). First, environment matters: there’s no doubt that the communicative behavior of apes raised in human environments starts to look a lot more human-like than that of apes in the wild. For example, a number of apes of several different species have mastered hundreds of words or arbitrary symbols. They spontaneously use these symbols to communicate a variety of functions—not just to request objects or food that they want, but also to comment on the world around them. They also refer to objects that are not physically present at the time, showing evidence of Hockett’s feature of displacement, which was conspicuously absent from the wild vervets’ alarm calls. 
They can even use their symbolic skills to lie—for instance, one chimp was found to regularly blame the messes she made on others. Perhaps even more impressively, all of the species studied have shown at least some suggestion of another of Hockett’s features, productivity—that is, of using the symbols they know in new combinations to communicate ideas for which they don’t already have symbols. For example, Koko, a gorilla, created the combination “finger bracelet” to refer to a ring; Washoe, a chimpanzee, called a Brazil nut a “rock berry.” Sequences of verbs and nouns often come to be used by apes in somewhat systematic sequences, suggesting that the order of combination isn’t random.

productivity  The ability to use known symbols or linguistic units in new combinations to communicate ideas.

As in the wild, trained apes show that they can master comprehension skills much more readily than they achieve the production of language-like units. In particular, it quickly became obvious that trying to teach apes to use vocal sounds to represent meanings wasn’t getting anywhere. Apes, it turns out, have extremely limited control over their vocalizations and simply can’t articulate different-sounding words. But the trained apes were able to build up a sizable vocabulary when signed language was substituted for spoken language, or when researchers adopted custom-made artificial “languages” using visual symbols arranged in systematic structures. This raises the very interesting question of how closely the evolution of language is specifically tied to the evolution of speech, an issue we will probe in more detail in Section 2.5. But even with non-vocal languages, the apes were able to handle much more complexity in their understanding of language than in their production of it.
They rarely produced more than two or three symbols strung together, but several apes were able to understand commands like “make the doggie bite the snake,” and they could distinguish that from “make the snake bite the doggie.” They could also follow commands that involved moving objects to or from specific locations. Sarah, a chimpanzee, could reportedly even understand “if/then” statements.

Looking at this collection of results, it becomes apparent that with the benefit of human teachers, ape communication takes a great leap toward human language—human-reared apes don’t just acquire more words or symbols than they do in the wild, they also show that they can master a number of Hockett’s design features that are completely absent from their naturalistic behavior. This is very revealing, because it helps to answer the question of when some of these features of human language—or rather, the capability for these features—might have evolved.

METHOD 2.1  Minding the gap between behavior and knowledge

If a chimpanzee produces the sign for banana and appears satisfied when you retrieve one from the kitchen and give it to her, does this mean the chimp knows that the sign is a symbol for the idea of banana and is using it as you and I would use the word? It’s certainly tempting to think so, but the careful researcher will make sure not to overinterpret the data and jump to conclusions about sophisticated cognitive abilities when the same behavior could potentially be explained by much less impressive abilities. Since chimpanzees look and act so much like us in so many ways, it’s tempting to conclude that when they behave like us, it’s because they think like us. But suppose that instead of interacting with a chimp, you were observing a pigeon that had learned to peck on keys of different colors to get food rewards. How willing would you be to conclude that the pigeon is treating the colored keys as symbols that are equivalent to words? My guess is, not very.
Instead, it seems easy enough to explain the pigeon’s behavior by saying that it’s learned to associate the action of pecking a particular key with getting a certain reward. This is a far cry from saying that the bird is using its action as a symbol that imparts a certain thought into the mind of the human seeing it, intending that this implanted thought might encourage the human to hand over food. In linking behavior to cognition, researchers need to be able to suspend the presumption of a certain kind of intelligence and to treat chimps and pigeons in exactly the same way. In both cases, they need to ask: What evidence do we need to have in order to be convinced that the animal is using a word in the same way that a human does? And how do we rule out other explanations of the behavior that are based on much simpler cognitive machinery than that of humans? Sue Savage-Rumbaugh is one of the leading scientists in the study of primate language capabilities. In 1980, she and her colleagues wrote a paper cautioning researchers against making overly enthusiastic claims when studying the linguistic capabilities of apes. They worried about several possible methodological flaws. One of these was a tendency to overattribute human-like cognition to simple behaviors, as discussed in the preceding paragraph. They argued that in order to have evidence that an ape is using a sign or symbol referentially, you need to be able to show that the animal not only produces the sign in order to achieve a specific result, but also shows evidence of understanding it—for example, that it can pick out the right object for a word in a complex situation that involves choosing from among many possibilities. You also need to be able to show that the ape can produce the word in a wide variety of situations, not just to bring about a specific result. Moreover, you need to look at all the instances in which the ape uses a particular sign. 
It’s not enough to see that sometimes the chimp uses the sign in a sensible way; if the same chimp also uses the sign in situations where it seems inappropriate or not meaningful, it lessens the confidence that the chimp truly knows the meaning of that sign.

Savage-Rumbaugh and her colleagues also worried about the possibility that researchers might unknowingly provide cues that nudge the ape to produce a sign that seems sensible given the context. Imagine a possible scenario in which a chimpanzee can produce a set of signs that might be appropriate in a certain context—for example, signs that result in getting food. If the chimp doesn’t really know the meanings of any of these individual signs, it might start sloppily producing some approximate hand movements while watching the researcher’s face. The researcher might inadvertently communicate approval or disapproval, thereby steering the chimp’s signing behavior in the right direction. In order to make a fair assessment of what a chimp actually knows, the researcher has to set up rigorous testing situations in which the possibility of such inadvertent cues has been eliminated.

Savage-Rumbaugh and her colleagues ultimately concluded that apes are able to learn to use symbols in a language-like way. But their remarks about good methodological practices apply to any aspect of language research with primates—including the young human variety, as we’ll see in upcoming chapters.

Figure 2.2  The evolutionary history of hominids. The term hominids refers to the group consisting of all modern and extinct great apes (including humans and their more immediate ancestors). This evolutionary tree illustrates the common ancestral history and approximate times of divergence of hominins (including modern humans and the now-extinct Neanderthals) from the other great apes. Note that a number of extinct hominin species are not represented here.
Biologists estimate that humans, chimpanzees, and bonobos shared a common ancestor between 5 and 7 million years ago. The last common ancestor with gorillas probably occurred between 8 and 10 million years ago, and the shared ancestor with orangutans even earlier than that (see Figure 2.2). Evidence about when the features of human language evolved helps to answer questions about whether they evolved specifically because these features support language. Among nativists, the most common view is that humans have some innate capabilities for language that evolved as adaptations. Evolutionary adaptations are genetically transmitted traits that give their bearers an advantage—specifically, an adaptive trait helps individuals with that trait to stay alive long enough to reproduce and/or to have many offspring. The gene for the advantageous trait spreads throughout a population, as over time members of the species with that trait will out-survive and out-reproduce the members without that trait.

But not all adaptations that help us to use language necessarily came about because they gave our ancestors a communicative edge over their peers. Think about it like this: humans have hands that are capable of playing the piano, given instruction and practice. But that doesn’t mean that our hands evolved as they did because playing the piano gave our ancestors an advantage over non-piano-playing humans. Presumably, our nimble fingers came about as a result of various adaptations, but the advantages these adaptations provided had nothing to do with playing the piano. Rather, they were the result of the general benefits of having dexterous hands that could easily manipulate a variety of objects. Once in possession of enhanced manual agility, however, humans discovered that hands can be put to many wonderful uses that don’t necessarily help us survive into adulthood or breed successfully.
evolutionary adaptation  A genetically transmitted trait that gives its bearers an advantage—specifically, it helps those with the trait to stay alive long enough to reproduce and/or to have many offspring.

The piano-playing analogy may help to make sense of the question, “If language-related capabilities evolved long before humans diverged from other apes, then why do only humans make use of them in their natural environments?” That is, if apes are capable of amassing bulky vocabularies and using them creatively, why are they such linguistic underachievers in the wild? The contrast between their communicative potential and their lack of spontaneous language in the wild suggests that certain cognitive skills that are required to master language—at least, those skills that are within the mental grasp of apes—didn’t necessarily evolve for language. Left to their own devices, apes don’t appear to use these skills for the purpose of communicating with each other. But when the cultural environment calls for it, these skills can be recruited in the service of language—much as in the right cultural context, humans can use their hands to play the piano.

This state of affairs poses a challenge to the “language-as-instinct” view. Nevertheless, it’s entirely possible that the skills that support language fall into two categories: (1) those that are necessary to get language off the ground but aren’t really specific to language; and (2) traits that evolved particularly because they make language more powerful and efficient. It may be that we share the skills in the first category with our primate relatives, but that only humans began to use those skills for the purpose of communication. Once this happened, there may have been selective pressure on other traits that provided an additional boost to the expressive capacity of language—and it’s possible that these later skills are both language-specific and uniquely human.
It seems, then, that when we talk about language evolution, it doesn’t make sense to treat language as an all-or-nothing phenomenon. Language may well involve a number of very different cognitive skills, with different evolutionary trajectories and different relationships to other, non-linguistic abilities. Throughout this book, you’ll get a much more intimate sense of the different cognitive skills that go into human language knowledge and use. As a first step, this chapter starts by breaking things down into three very general categories of language-related abilities: the ability to understand communicative intent, a grasp of linguistic structure, and the ability to control voice and/or gesture.

2.1 Questions to Contemplate

1. Why is it easier to make the case for the genetic determination of vervet calls or honeybee dances than it is for human language?
2. Which of Hockett’s design features of language would you be most surprised to see a chimpanzee master and demonstrate?

2.2 The Social Underpinnings of Language

Imagine this scene from long ago: an early hominid is sitting at the mouth of his cave with his female companion when a loud roar tears through the night air. He nods soberly and says, “Leopard.” This is a word that he’s just invented to refer to that animal. In fact, it’s the first word that’s passed between them, as our male character is one of language’s very earliest adopters. It’s a breakthrough: from here on, the couple can use the word to report leopard sightings to each other, or to warn their children about the dangerous predator. But none of this can happen unless the female can clue in to the fact that the sounds in leopard were intentionally formed to communicate an idea—they were not due to a sneeze or a cough, or some random set of sounds.
What’s more, she has to be able to connect these intentional and communicative sounds with what’s going on around her, and make a reasonable guess about what her companion is most likely to be trying to convey. From your perspective as a modern human, all of this may seem pretty obvious, requiring no special abilities. But it’s far from straightforward, as revealed by some very surprising tests that chimpanzees fail at miserably, despite their substantial intellectual gifts. For example, try this next time you meet a chimp: Show the animal a piece of food, and then put it in one of two opaque containers. Shuffle the two containers around so as to make it hard to tell where it’s hidden. Now stop, and point to the container with the food. The chimpanzee will likely choose randomly between the two containers, totally oblivious to the very helpful clue you’ve been kind enough to provide. This is exactly what Michael Tomasello (2006) and his colleagues found when they used a similar test with chimpanzees. Their primate subjects ignored the conspicuous hint, even though the experimenters went out of their way to establish that the “helper” who pointed had indeed proven herself to be helpful on earlier occasions by tilting the containers so that the chimp could see which container had the food (information that the chimps had no trouble seizing upon). Understanding the communicative urge Chimpanzees’ failure to follow a pointing cue is startling because chimps are very smart, perfectly capable of making subtle inferences in similar situations. For example, if an experimenter puts food in one of two containers and then shakes one of them but the shaken container produces no rattling sound, the chimpanzee knows to choose the other one (Call, 2004). Or, consider this variation: Brian Hare and Michael Tomasello (2004) set up a competitive situation between chimpanzees and a human experimenter, with both human and chimp trying to retrieve food from buckets. 
If the human extended her arm toward a bucket but couldn’t touch it because she had to stick her hand through a hole that didn’t allow her to reach far enough, the chimpanzees were able to infer that this was the bucket that must contain the food, and reached for it. Why can chimpanzees make this inference, which involves figuring out the human’s intended—but thwarted—goal, but not be able to understand pointing? Tomasello and his colleagues argued that, although chimpanzees can often understand the intentions and goals—and even the knowledge states—of other primates, what they can’t do is understand that pointing involves an intention to communicate. In other words, they don’t get that the pointing behavior is something that’s done not just for the purpose of satisfying the pointer’s goal, but to help the chimpanzee satisfy its goal.

To some researchers, it’s exactly this ability to understand communicative intentions that represents the “magic moment” in the evolution of language, when our ancestors’ evolutionary paths veered off from those of other great apes, and their cognitive skills and motivational drives came to be refined, either specifically for the purpose of communication, or more generally to support complex social coordination. Some language scientists have argued that a rich communication system is built on a foundation of advanced skills in social cognition, and that among humans these skills evolved in a super-accelerated way, far outpacing other gains we made in overall intelligence and working memory capacity. To test this claim, Esther Herrmann and her colleagues (2007) compared the cognitive abilities of adult chimpanzees, adult orangutans, and human toddlers aged two and a half. All of these primates were given a battery of tests evaluating two kinds of cognitive skills: those needed for understanding the physical world, and those for understanding the social world.
For example, a test item in the physical world category might involve discriminating between a smaller and a larger quantity of some desirable reward, or locating the reward after it had been moved, or using a stick to retrieve an out-of-reach reward. The socially oriented test items looked for accomplishments like solving a problem by imitating someone else’s solution, following a person’s eye gaze to find a reward, or using or interpreting communicative gestures to locate a reward. The researchers found that in demonstrating their mastery over the physical world, the human toddlers and adult chimpanzees were about even with each other, and slightly ahead of the adult orangutans. But when it came to the social test items, the young humans left their fellow primates in the dust (with chimps and orangutans showing similar performance).

There’s quite a bit of additional evidence showing that even very young humans behave in ways that are different from how other primates act in similar situations. For example, when there’s nothing obvious in it for themselves, apes don’t seem to be inclined to communicate with other apes for the purpose of helping the others achieve a goal. But little humans almost feel compelled to. In one study by Ulf Liszkowski and colleagues (2008), 12-month-olds who hadn’t yet begun to talk watched while an adult sat at a table stapling papers without involving the child in any way. At one point, the adult left the room, then another person came in, moved the stapler from the table to a nearby shelf, and left. A little later, the first adult came back, looked around, and made quizzical gestures to the child. In response, most of the children pointed to the stapler in its new location. According to Michael Tomasello (2006), apes never point with each other, and when they do “point” to communicate with humans (usually without extending the index finger), it’s because they want the human to fetch or hand them something that’s out of their reach.
Skills for a complex social world

So, humans are inclined to share information with one another, whereas other primates seem not to have discovered the vast benefits of doing so. What’s preventing our evolutionary cousins from cooperating in this way? One possibility is that they’re simply less motivated to engage in complex social behavior than we humans are. Among mammals, we as a species are very unusual in the amount of importance we place on social behavior. For example, chimpanzees are considerably less altruistic than humans when it comes to sharing food, and they don’t seem to care as much about norms of reciprocity or fairness. When children are in a situation where one child is dividing up treats to share and extends an offer that is much smaller than the share he’s claimed for himself, the other child is apt to reject the offer, preferring to give it up in order to make the point that the meager amount is an affront to fairness. A chimp will take what it can get (Tomasello, 2009).

In fact, when you think about the daily life of most humans in comparison to a day in the life of a chimpanzee, it becomes apparent that our human experiences are shaped very profoundly by a layer of social reality, while a chimpanzee may be more grounded in the physical realities of its environment. In his book Why We Cooperate (2009), Michael Tomasello points out how different the human experience of shopping is from the chimpanzee’s experience of foraging for food:

Let us suppose a scenario as follows. We enter a store, pick up a few items, stand in line at the checkout, hand the clerk a credit card to pay, take our items, and leave. This could be described in chimpanzee terms fairly simply as going somewhere, fetching objects, and returning from whence one came. But humans understand shopping, more or less explicitly, on a whole other level, on the level of institu