Nudge: The Final Edition
Richard H. Thaler and Cass R. Sunstein
Penguin Books, 2021

Summary
Nudge: The Final Edition is a book by Richard Thaler and Cass Sunstein, originally published in 2008 and updated in 2021. It explores how choice architecture affects decision-making, offering strategies for influencing people's choices without restricting their freedom. It analyzes biases and blunders and suggests practical approaches for nudging people toward better outcomes.
Acclaim for the original edition of Nudge

“Nudge has changed the world. You may not realize it, but as a result of its findings you’re likely to live longer, retire richer, and maybe even save other people’s lives.” —The Times (London)

“Probably the most influential popular science book ever written.” —BBC Radio 4

“This gem of a book... is a must read.” —Daniel Kahneman, Nobel Prize–winning author of Thinking, Fast and Slow

“Engaging and insightful... The conceptual argument is powerful, and most of the authors’ suggestions are common sense at its best.... For that we should all applaud loudly.” —The New York Times Book Review

“This book is terrific. It will change the way you think, not only about the world around you and some of its bigger problems, but also about yourself.” —Michael Lewis, author of Moneyball and Liar’s Poker

“One of the few books... that fundamentally changed the way I think about the world.” —Steven D. Levitt, coauthor of Freakonomics

“Utterly brilliant... Nudge won’t nudge you—it will knock you off your feet.” —Daniel Gilbert, author of Stumbling on Happiness

“Nudge is as important a book as any I’ve read in perhaps twenty years. It is a book that people interested in any aspect of public policy should read. It is a book that people interested in politics should read. It is a book that people interested in ideas about human freedom should read. It is a book that people interested in promoting human welfare should read. If you’re not interested in any of these topics, you can read something else.” —Barry Schwartz, The American Prospect

“Engaging, informative, and thoroughly delightful.” —Don Norman, author of The Design of Everyday Things and The Design of Future Things

“A wonderful book: more fun than any important book has a right to be—and yet it is truly both.” —Roger Lowenstein, author of When Genius Failed

“Save the planet, save yourself. Do-gooders, policymakers, this one’s for you.” —Newsweek

“Great fun to read... Sunstein and Thaler are very persuasive.” —Slate

“Nudge helps us understand our weaknesses, and suggests savvy ways to counter them.” —The New York Observer

“Always stimulating... An entertaining book that also deeply informs.” —Barron’s

“Entertaining, engaging, and well written... Highly recommended.” —Choice

“This Poor Richard’s Almanack for the twenty-first century... shares both the sagacity and the witty and accessible style of its eighteenth-century predecessor.” —Law and Politics Book Review

“There are superb insights in Nudge.” —Financial Times

PENGUIN BOOKS

NUDGE

RICHARD H. THALER was awarded the 2017 Nobel Memorial Prize in Economic Sciences for his contributions to the field of behavioral economics. He is the Charles R. Walgreen Distinguished Service Professor of Behavioral Science and Economics at the University of Chicago Booth School of Business. He is a member of the National Academy of Sciences and the American Academy of Arts and Sciences, and in 2015 he was the president of the American Economic Association. He has been published in numerous prominent journals and is the author of Misbehaving: The Making of Behavioral Economics.

CASS R. SUNSTEIN is the Robert Walmsley University Professor at Harvard Law School, where he is the founder and director of the Program on Behavioral Economics and Public Policy.
From 2009 to 2012 he served in the Obama administration as administrator of the White House Office of Information and Regulatory Affairs, and from 2020 to 2021 he served as chair of the Technical Advisory Group for Behavioural Insights and Health at the World Health Organization. In 2021 he joined the Biden administration as senior counselor and regulatory policy officer in the Department of Homeland Security. He has testified before congressional committees, been involved in constitution-making and law reform activities in many nations, and written numerous articles and books, including Too Much Information and Noise (with Daniel Kahneman and Olivier Sibony). He is the recipient of the 2018 Holberg Prize, awarded annually to a scholar who has made outstanding contributions to research in the arts, humanities, social sciences, law, or theology.

PENGUIN BOOKS
An imprint of Penguin Random House LLC
penguinrandomhouse.com

First published in the United States of America by Yale University Press 2008
First published in Penguin Books 2009
This updated edition published in Penguin Books 2021

Copyright © 2008, 2009, 2021 by Richard H. Thaler and Cass R. Sunstein

Penguin supports copyright. Copyright fuels creativity, encourages diverse voices, promotes free speech, and creates a vibrant culture. Thank you for buying an authorized edition of this book and for complying with copyright laws by not reproducing, scanning, or distributing any part of it in any form without permission. You are supporting writers and allowing Penguin to continue to publish books for every reader.

“The Times They Are A-Changin’ ”: Words and music by Dylan. © Universal Tunes. Used by permission. All rights reserved.

LIBRARY OF CONGRESS CATALOGING-IN-PUBLICATION DATA
Names: Thaler, Richard H., 1945– author. | Sunstein, Cass R., author.
Title: Nudge : the final edition / Richard H. Thaler and Cass R. Sunstein.
Description: Final edition. | [New York] : Penguin Books, an imprint of Penguin Random House LLC, 2021. | “First published in the United States of America by Yale University Press, 2008”—Title page. | Includes bibliographical references and index.
Identifiers: LCCN 2021008635 (print) | LCCN 2021008636 (ebook) | ISBN 9780143137009 (trade paperback) | ISBN 9780525508526 (ebook)
Subjects: LCSH: Economics—Psychological aspects. | Choice (Psychology)—Economic aspects. | Decision making—Psychological aspects. | Consumer behavior.
Classification: LCC HB74.P8 T53 2021 (print) | LCC HB74.P8 (ebook) | DDC 330.01/9—dc23
LC record available at https://lccn.loc.gov/2021008635
LC ebook record available at https://lccn.loc.gov/2021008636

Designed by Sabrina Bowers, adapted for ebook by Estelle Malmed
Cover design by Matt Vee

For France, who (still) makes everything in life better —RHT
For Samantha, who knows what matters —CRS

Contents
Preface to the Final Edition
Introduction
PART I: Humans and Econs
1. Biases and Blunders
2. Resisting Temptation
3. Following the Herd
PART II: The Tools of the Choice Architect
4. When Do We Need a Nudge?
5. Choice Architecture
6. But Wait, There’s More
7. Smart Disclosure
8. #Sludge
PART III: Money
9. Save More Tomorrow
10. Do Nudges Last Forever? Perhaps in Sweden
11. Borrow More Today: Mortgages and Credit Cards
12. Insurance: Don’t Sweat the Small Stuff
PART IV: Society
13. Organ Donations: The Default Solution Illusion
14. Saving the Planet
PART V: The Complaints Department
15. Much Ado About Nudging
Epilogue
Acknowledgments
Notes
Index

Preface to the Final Edition

The original version of Nudge was published in the spring of 2008. While we were writing it, Thaler got his first iPhone and Sunstein his first BlackBerry. In his first term as a United States senator, our former University of Chicago colleague Barack Obama had decided to challenge Hillary Clinton for the Democratic nomination for president. Senator Joe Biden was also doing that, without a whole lot of success. Real estate developer and reality television star Donald Trump was proclaiming that Clinton was “fantastic” and would “make a great president.”1 A financial crisis was emerging. Taylor Swift was nineteen years old (and had not yet won a Grammy), and Greta Thunberg was just five. To say the least, a few things have happened in the intervening years. But Nudge continues to attract interest, and we have not been much inclined to tinker with it. Why a revision now?

As we discuss in the book, status quo bias is a strong force. Very much in keeping with the book’s spirit, we were induced to emerge from our slumber by a seemingly small matter. The contracts for the American and British paperback editions had expired, and new ones had to be agreed upon. Editors asked whether we might want to add a new chapter or possibly make other changes. Our immediate reaction was to say no. After all, Thaler is famously lazy and Sunstein could have written an entirely new book in the time it would take to get the slow-fingered Thaler to agree to anything. Besides, we were proud of the book, and why mess with a good thing?

But then we started thumbing through copies we managed to find in our home offices, where we found ourselves during the year of COVID-19. The first chapter mentions the then-snazzy but now-obsolete iPod. Jeez, that seems a bit dated. And an entire chapter is devoted to what we still think was an excellent solution to the problem of making it possible for same-sex couples to marry. Since then, many countries somehow managed to solve that very problem in a way we had not imagined was politically possible. They just passed laws making such marriages legal. So, yeah, maybe some parts of the book could use a bit of tidying up.

So, it came to pass that in the summer of 2020, a summer like no other in our lifetimes, we decided to poke around the manuscript and see if we wanted to make some changes. It helped that Thaler managed to find a set of Microsoft Word files that had been used for what we called the international edition, and those files were (barely) usable. Without those files, this edition would not exist, because we would never have wanted to start over from scratch.

We admit to then falling into a bit of a trap. We are supposedly experts on biases in human decision making, but that definitely does not mean we are immune to them! Just the opposite. We are not sure that this particular trap has a name, but it is familiar to everyone. Let’s call it the “while we are at it” bias. Home improvement projects are often settings where this bias is observed. A family decides that after twenty years of neglect, the kitchen really needs to be upgraded. The initial to-do list includes new appliances and cabinets, but of course, the floor will be ruined during the construction, so we’d better replace that, and gosh, if we just pushed that wall out a bit, we could add a new window, which looks out on the patio, but oh dear, who wants to look at that patio... In the military this is called mission creep.
Here we plead guilty to book revision creep. The revision that we planned to knock off during the summer was not given to the publisher until late November. However, to continue the home remodeling analogy, in spite of our slow pace, what we have here is definitely not a gut rehab. The book feels very much like the old one. All the walls remain, and we have not expanded the footprint. But we have gotten rid of a bunch of old pieces of electronics that have been collecting dust and replaced them with newer gadgets.

More specifically, the first four chapters of the book have not changed much. They set out the basic framework of our approach, including the term libertarian paternalism, which only its authors love. Examples and references are updated, but the songs remain the same. If it were a record album, we would call this section “remastered,” whatever that means. If you have read the original edition, you can probably skim those chapters pretty quickly. After that, however, even previous readers will find many new themes, and perhaps some surprises.

Two important topics are given new chapters early on. The first is what we call Smart Disclosure. The idea is that governments should consider the radical thought of moving at least into the twentieth century in the way they disclose important information. Sure, listing ingredients on the side of food packages is useful, especially for those with very good eyesight, but shouldn’t Sunstein be able to search online for foods that contain shellfish, given that they can make him very sick? The Internet is not exactly a cutting-edge technology. Widespread use of Smart Disclosure would make it possible to create online decision-making tools that we call choice engines, which can make many tasks as easy as it has become to find the best route to get to a new restaurant.

We have also added a new chapter on what we call sludge, which is nasty stuff that makes it more difficult to make wise choices. (Sludge is everywhere; you’ll see.) The use of Smart Disclosure is one way to reduce sludge. So is sending everyone a tax return that has already been completed and can be filed with one click. So is reducing the length of those forms you have to fill out to get licenses, permits, visas, health care, or financial aid, or to get reimbursed for a trip you take for your employer. Every organization should create a seek-and-destroy mission for unnecessary sludge.

The rest of the book also has numerous substantive changes and what we hope is fresh thinking. We introduce several choice architecture concepts, in addition to “sludge,” that are new to this edition. These include personalized defaults; make it fun; and curation. These concepts play a large role in the chapters about financial decision making. We have increased the space we devote to climate change and the environment. We highlight both the limits of choice architecture (preview: we can’t solve the problem just with nudges) and the many ways in which nudges can help us succeed on a project that demands the deployment of every possible tool. And, oh, we do have a few things to say about the COVID-19 pandemic.

Some topics that we originally covered get a fresh look. The passage of years has created the chance to evaluate how policies work over time. A good example is the Swedish launch (in 2000) of a national retirement savings program, which allowed investors to create their own portfolios. In the original edition, we discussed the initial design of that plan.
Now, two decades after the launch, we can provide some insights about how long nudges last. (Preview: some of them can last almost forever.) We have also rewritten the chapter on organ donation, because everyone thought we supported a policy we actually oppose. We did state our policy in what we thought was plain language in the first version of the book, and we tried to make it a bit clearer in the paperback editions. But still our message wasn’t getting through, so we are trying once again. In case this is as far as you get in the book, please take note: we do not support the policy called “presumed consent.” Feel free to skip ahead to see why. We really do believe in freedom of choice.

Other topics getting a fresh look are devoted to helping consumers make better choices with their money. People have amassed staggering amounts of credit card debt, and then fail to take some simple steps to reduce the costs of maintaining those large balances. Consumers also make demonstrably bad choices in picking mortgages, insurance, and health care plans. You may well be one of the people who could save a lot of money in these domains. But more importantly, we hope that our discussion of these issues will provoke others to make behaviorally informed policy changes in an assortment of domains that we have not explored. We emphasize that the concepts and approaches discussed here are fully applicable to the private sector. Firms should explicitly recognize that their employees and customers and competitors are human beings, and design policies and strategies accordingly. We will offer many specific ideas for how to do this.

It is important to stress what we have not done. We make no attempt to bring readers up to date on the remarkable nudge-related activity, reform, and research that have come about in recent years. Governments all over the world have been nudging, often for good, and the private sector has also been exceptionally inventive. Academic research has grown by leaps and bounds. To explore these developments would take an entirely new book, and in fact many such books have been written, some even by Sunstein. Indeed, Sunstein has coedited a four-volume collection of papers on this topic. (Sunstein thinks editing a four-volume collection of papers on the topic of nudging is fun; Thaler would rather be counting backward from ten million.) We have some things to say about objections to nudges, and in fact we devote a whole chapter to that topic, but we do not respond systematically to critics. What we hope to offer is a book that will feel fresher, more fun, and less dusty to those reading it for the first time, or even to those returning for another look, as we have spent the past months doing ourselves.

Finally, a word about our decision to call this version of the book the Final Edition. One of the earliest topics to be studied by behavioral economists was self-control problems. Why do people continue to do things they think are dumb (both in foresight and in hindsight)? These include acts such as running up credit card bills, getting more than a bit chubby, and continuing to smoke. One strategy people use to deal with such problems is to adopt commitment strategies, in which some tempting (but ill-advised) options are made unavailable. For example, some people with a gambling problem volunteer to put their name on a list of people who will not be allowed into a casino. Using this title is our commitment strategy to prevent us from ever tinkering with this book again.
We have loved working on it, and we might even have gotten addicted to it, but we pledge, right here and right now, that there will be no “post-final” edition of Nudge. And one of us actually believes that pledge.

RICHARD H. THALER
CASS R. SUNSTEIN
JANUARY 2021

Introduction

The Cafeteria

Imagine that a friend of yours, Carolyn, is the director of food services for a large city school system. She is in charge of hundreds of schools, and hundreds of thousands of kids eat in her cafeterias every day. Carolyn has formal training in nutrition (a master’s degree from the state university), and she is a creative type who likes to think about things in nontraditional ways. One evening, over a good bottle of wine, she and her friend Adam, a statistically oriented management consultant who has worked with supermarket chains, hatched an interesting idea. Without changing any menus, they would run some experiments in her schools to determine whether the way the food is displayed and arranged might influence the choices kids make.

Carolyn gave the directors of dozens of school cafeterias specific instructions on how to display the food choices. In some schools the desserts were placed first, in others last, and in still others in a separate line. The locations of various food items varied from one school to another. In some schools the french fries were at eye level, but in other schools it was the carrot sticks that were made more salient.

From his experience in designing supermarket floor plans, Adam suspected that the results would be significant. He was right. Simply by rearranging the cafeteria, Carolyn was able to noticeably increase or decrease the consumption of many food items. From this experience she learned a big lesson: small changes in context can greatly influence schoolchildren, just as they can greatly influence adults. The influence can be exercised for better or for worse. For example, Carolyn knows that she can increase consumption of healthy foods and decrease consumption of unhealthy ones.

With hundreds of schools to work with, and a team of graduate-student volunteers recruited to collect and analyze the data, Carolyn now understands that she has considerable power to influence what kids eat. She is pondering what to do with her newfound power. Here are some suggestions she has received from her usually sincere but occasionally mischievous friends and coworkers:

1. Arrange the food to make the students best off, all things considered.
2. Choose the food order at random.
3. Try to arrange the food to get the kids to pick the same foods they would choose on their own.
4. Maximize the sales of the items from the suppliers who are willing to offer the largest bribes.
5. Maximize profits, period.

Option 1 has obvious appeal, yet it does seem a bit intrusive, even paternalistic. But the alternatives are worse! Option 2, arranging the food at random, could be considered fair-minded and principled, and it is in one sense neutral. But a random order makes no sense in a cafeteria. On efficiency grounds, the salad dressing should be placed next to the salad, not with the desserts. Also, if the orders are randomized across schools, then the children at some schools will have less healthy diets than those at other schools. Is this desirable? Should Carolyn choose that kind of neutrality, if she can easily make most students better off, in part by improving their health?
Option 3 might seem to be an honorable attempt to avoid intrusion: try to mimic what the children would choose for themselves. Maybe that is really the neutral choice, and maybe Carolyn should neutrally follow people’s wishes (at least where she is dealing with older students). But a little thought reveals that this is a difficult option to implement. Carolyn’s experiment with Adam proves that what kids choose depends on the order in which the items are displayed. What, then, are the “true preferences” of the children? What does it mean to say that Carolyn should try to figure out what the students would choose “on their own”? In a cafeteria, it is impossible to avoid some way of organizing food. And many of the same considerations would apply if she were serving adults rather than children. Option 4 might appeal to a corrupt person in Carolyn’s job, and manipulating the order of the food items would put yet another weapon in the arsenal of available methods to exploit power. But Carolyn is honorable and honest, so she does not give this option any thought. (Not everyone would be this principled, alas.) Like Options 2 and 3, Option 5 has some appeal, especially if Carolyn thinks the best cafeteria is the one that makes the most money. But should she really try to maximize profits if the result is to make children less healthy, especially when she works for the school district? Carolyn is what we call a choice architect. A choice architect has the responsibility for organizing the context in which people make decisions. Although Carolyn is a figment of our imagination, many real people turn out to be choice architects, most without realizing it. Some of them even run cafeterias. If you are a doctor and describe the alternative treatments available to a patient, you are a choice architect. If you create the forms or the website that new employees use to choose among various employee benefits, you are a choice architect. If you design the ballot voters use to choose candidates, you are a choice architect. If you organize a drugstore or a grocery, you are a choice architect (and you confront many of the questions that Carolyn did). If you are a parent describing possible educational options to your son or daughter, you are a choice architect. If you are a salesperson, you are a choice architect (but you already knew that). There are many parallels between choice architecture and more traditional forms of architecture. A crucial parallel is that there is no such thing as a “neutral” design. Consider the job of designing a new office building. The architect is given some requirements. There must be room for a lobby, 120 offices, thirteen conference rooms of various sizes, a room large enough to have everyone meet together, and so forth. The building must sit on a specified site. Hundreds of other constraints will be imposed—some legal, some aesthetic, some practical. In the end, the architect must come up with an actual building with doors, stairs, windows, and hallways. As good architects know, seemingly arbitrary decisions, such as where to locate the bathrooms, will have subtle influences on how the people who use the building interact. Every trip to the bathroom creates an opportunity to run into colleagues (for better or for worse). A good building is not merely attractive; it also “works.” As we shall see, small and apparently insignificant details can have major impacts on people’s behavior. A good rule of thumb is to assume that everything matters. 
In many cases, the power of these small details comes from focusing people’s attention in a particular direction. A wonderful example of this principle comes from, of all places, the men’s toilets at Schiphol Airport in Amsterdam. At one point, the authorities etched the image of a black housefly into each urinal. It seems that men often do not pay much attention to where they aim, which can create a bit of a mess, but if they see a target, attention and therefore accuracy are much increased. According to the man who came up with the idea, it works wonders. “It improves the aim,” says Aad Kieboom. “If a man sees a fly, he aims at it.” Kieboom, an economist, directed Schiphol’s building expansion. He reports that the etchings reduced “spillage” by 80 percent, a number we are unable to verify. However, we can report that after this example appeared in the first edition of this book, we began seeing those flies in other airports around the world. And yes, we are aware of the availability heuristic, to be discussed later.

The insight that everything matters can be both paralyzing and empowering. Good architects realize that although they can’t build the perfect building, they can make some design choices that will have beneficial effects. The location of the coffee machines, for example, may influence workplace interaction. Policymakers can often do the equivalent of painting a fly—for example, by telling people clearly and conspicuously, on their credit card bills, that they might be subject to late fees and overuse fees. If you paint lines on the sidewalk where people wait to enter a supermarket during a pandemic, you can promote social distancing. And just as a building architect must eventually produce the plans for an actual building, a choice architect like Carolyn must choose a particular arrangement of the food options at lunch, and by so doing she can influence what people eat. She can nudge.*

Libertarian Paternalism

If, all things considered, you think that Carolyn should take the opportunity to nudge the kids toward food that is better for them, Option 1, then we welcome you to our movement: libertarian paternalism. We are keenly aware that this term is not one that many readers will find immediately endearing. Both words are somewhat off-putting, weighed down by stereotypes from popular culture and politics that make them unappealing to many. Even worse, the concepts seem to be contradictory! Why combine two reviled and contradictory concepts? We argue that if the terms are properly understood, both concepts show a lot of good sense—and they are far more attractive together than alone. The problem with the terms is that they have been captured by dogmatists.

The libertarian aspect of our strategies lies in the straightforward insistence that much of the time, and so long as they are not harming others, people should be free to do what they like—and to opt out of arrangements they deem undesirable if that is what they want to do. To borrow a phrase from the late Milton Friedman, libertarian paternalists urge that people should be “free to choose.” We strive to design policies that maintain or increase freedom of choice. When we use the term libertarian to modify the word paternalism, we simply mean liberty-preserving. And when we say liberty-preserving, we really mean it. Libertarian paternalists want to make it easy for people to go their own way; they do not want to burden those who want to exercise their freedom.
(We emphasize that when people are inflicting harm on others, freedom of choice is not the best idea—but even in such cases, nudges can play an important role. We’ll get to that. We also acknowledge that if people are making really terrible choices and harming their future selves, nudges might not be enough. We’ll get to that, too.) The paternalistic aspect lies in the claim that it is legitimate for choice architects to try to influence people’s behavior in order to make their lives longer, healthier, and better. In other words, we argue for self-conscious efforts, by institutions in the private sector and by government, to steer people’s choices in directions that will improve their lives. We are aware that many people, including many philosophers, have devoted a lot of effort to defining the term paternalism, and to exploring what might be right or wrong with it. The paternalistic policies that we favor aim to influence choices in a way that will make choosers better off, as judged by the choosers themselves. This is a paternalism of means, not of ends; those policies help people reach their own preferred destination. We know from decades of behavioral science research that people often make poor decisions in laboratory experiments. People also make plenty of mistakes in real life, which reinforces the view well stated by the Beatles: “we get by with a little help from our friends.” Our goal, in short, is to help people make the choices that they would have made if they had paid full attention and possessed complete information, unlimited cognitive ability, and complete self-control. (That doesn’t mean people shouldn’t sometimes stay out late, overeat, and have fun. As they say, “enjoy life now; this is not a rehearsal.”) Libertarian paternalism is a relatively weak, soft, and nonintrusive type of paternalism, because choices are not blocked, fenced off, or significantly burdened. If people want to smoke cigarettes, eat a lot of candy, choose an unsuitable health care plan, or fail to save for retirement, libertarian paternalists will not force them to do otherwise—or even make things hard for them. Still, the approach we recommend does count as paternalistic, because in important contexts, private and public choice architects should not merely track or implement people’s anticipated choices. Rather, they should attempt to move people in directions that will make their lives better. They should nudge. A nudge, as we will use the term, is any aspect of the choice architecture that alters people’s behavior in a predictable way without forbidding any options or significantly changing their economic incentives. To count as a mere nudge, the intervention must be easy and cheap to avoid. Nudges are not taxes, fines, subsidies, bans, or mandates. Putting the fruit at eye level counts as a nudge. Banning junk food does not. Many of the policies we recommend can be and have been implemented by the private sector (with or without a nudge from the government). Employers, for example, are important choice architects in many of the examples we discuss in this book. In areas involving health care and retirement plans, we think that employers can give employees far more helpful nudges (for example, through sensible default rules, clear presentation of information, and helpful hints). Private companies that want to make money and to do good can benefit by creating environmentally friendly nudges, helping to reduce air pollution and the emission of greenhouse gases. 
But, of course, companies can also use the concepts we discuss to increase sales in unsavory ways. They might impose sludge. We strive to reduce the sludge produced in both the public and private sectors. See Chapter 8.

Econs and Humans: Why Nudges Can Help

Those who reject paternalism often claim that human beings do a terrific job of making choices, or if not terrific, certainly better than anyone else would do (especially if that someone else works for the government). Whether or not they have ever studied economics, many people seem at least implicitly committed to the idea of Homo economicus, or economic man—the notion that each of us thinks and chooses unfailingly well, and thus fits within the usual depiction of human beings that is offered by economists. If you look at economics textbooks, you will learn that Homo economicus can think like Albert Einstein, store as much memory as Google does in the cloud, and exercise the willpower of Mahatma Gandhi. Really. But the folks we know are not like that. Real people have trouble with long division if they don’t have a calculator, sometimes forget their spouse’s birthday, and have a hangover on New Year’s Day. They are not Homo economicus; they are Homo sapiens. To keep our Latin usage to a minimum, we will hereafter refer to these imaginary and real species as Econs and Humans.

Consider the issue of obesity. Rates of adult obesity in the United States are over 40 percent,1 and more than 70 percent of American adults are considered either obese or overweight.2 Worldwide, there are some 1 billion overweight adults, 300 million of whom are obese. Rates of obesity range from below 6 percent in Japan, South Korea, and some African nations to more than 75 percent in American Samoa.3 According to the World Health Organization, obesity rates have risen threefold since 1980 in some areas of North America, the United Kingdom, Eastern Europe, the Middle East, the Pacific Islands, Australia, and China. There is overwhelming evidence that obesity increases the risk of heart disease and diabetes, frequently leading to premature death. It would be quite fantastic to suggest that everyone is choosing their best possible diet, or a diet that is preferable to what might be produced with a few nudges.

Of course, sensible people care about the taste of food, not simply about health, and eating is a source of pleasure in and of itself. We do not claim that everyone who is overweight is necessarily failing to act rationally, but we do reject the proposition that all or almost all people are choosing their diet optimally. What is true for diets is true for other risk-related behavior, including smoking and drinking, which produce hundreds of thousands of premature deaths each year in the United States alone. With respect to diet, smoking, and drinking, people’s current choices cannot always be said to be the best means of promoting their own well-being (to put it lightly). Indeed, many smokers, drinkers, and overeaters are willing to pay third parties to help them make better decisions.

These findings complement those of the emerging science of choice, consisting of an extensive body of research over the past half-century. Much of the initial research in this field was conducted with laboratory experiments, but a substantial and rapidly growing amount comes from studies of real-world behavior, including archival studies of choices made in natural settings and randomized controlled trials.
This research has raised serious questions about the soundness and wisdom of many judgments and decisions that people make. To qualify as Econs, people are not required to make perfect forecasts (that would require omniscience), but they are required to make unbiased forecasts. That is, forecasts can be wrong, but they can’t repeatedly err in a predictable direction. Unlike Econs, Humans make predictable mistakes. Take, for example, the planning fallacy—the systematic tendency toward unrealistic optimism about the time it takes to complete projects. It will come as no surprise to anyone who has ever hired a contractor to learn that everything takes longer than you think, even if you know about the planning fallacy.* Thousands of studies confirm that human forecasts are flawed and biased.

Human decision making is not so great either. Again, to take just one example, consider what is called the status quo bias, a fancy name for inertia. For a host of reasons, which we shall explore, people have a strong tendency to go along with the status quo or default option. When you get a new smartphone, for example, you have a series of choices to make, from the background on the screen to the ringtone to the number of times the phone rings before the caller is sent to voice mail. The manufacturer has picked one option as the default for each of these choices. Research shows that whatever the default choices are, many people stick with them, even when the stakes are much higher than choosing the sound your phone makes when it rings.

We provide many examples of the use of default options, and we will see that defaults are often quite powerful. If private companies or public officials favor one set of outcomes, they can greatly influence people by choosing it as the default. You can often increase participation rates by 25 percent, and sometimes by a lot more than that, simply by shifting from an opt-in to an opt-out design. As we will show, setting default options, and adopting other similar, seemingly trivial menu-changing strategies, can have huge effects on outcomes, from increasing savings to combating climate change to improving health care to reducing poverty. At the same time, we show that there are important situations in which people exercise their freedom and reject defaults. When they feel strongly about something, for example, they might overcome the strength of inertia and the power of suggestion (defaults are often perceived to be hints that they are the recommended option). Changing the default can be an effective nudge, but it is decidedly not the answer to every problem.

The usually large effects of well-chosen default options provide just one illustration of the gentle power of nudges. In accordance with our definition, nudges include interventions that significantly alter the behavior of Humans, even though they would be ignored by Econs. Econs respond primarily to incentives. If the government taxes candy, Econs will buy less candy, but they are not influenced by such “irrelevant” factors as the order in which options are displayed. Humans respond to incentives too, but they are also influenced by nudges.* By properly deploying both incentives and nudges, we can improve our ability to improve people’s lives, and help solve many of society’s major problems. And we can do so while still insisting on everyone’s freedom to choose.

A False Assumption and Two Misconceptions

Many people who favor freedom of choice reject any kind of paternalism.
They want the government to let citizens choose for themselves. The standard policy advice that stems from this way of thinking is to give people as many choices as possible, and then let them choose the one they like best (with as little government intervention or nudging as possible). The beauty of this way of thinking is that it offers a simple solution to many complex problems: Just Maximize Choices—full stop! This policy has been pushed in many domains, from education to health care to retirement savings programs. In some circles, Just Maximize Choices has become a policy mantra. Sometimes the only alternative to this mantra is thought to be a government mandate that is derided as one-size-fits-all. Those who favor Just Maximize Choices don’t realize there is plenty of room between their preferred policy and a single mandate. They oppose paternalism, or think they do, and they are skeptical about nudges. We believe that their skepticism is based on a false assumption and two misconceptions. The false assumption is that almost all people, almost all the time, make choices that are in their best interest or at the very least are better than the choices that would be made by someone else. We claim that this assumption is false—indeed, obviously false. In fact, we do not think that anyone actually believes it on reflection. Suppose a chess novice were to play against an experienced player. Predictably, the novice would lose precisely because he made inferior choices— choices that could easily be improved by some helpful hints. In many areas, ordinary consumers are novices, interacting in a world inhabited by experienced professionals trying to sell them things. More broadly, how well people choose is an empirical question, one whose answer is likely to vary across domains. Generally, people make good choices in contexts in which they have lots of experience, good information, and prompt feedback—say, choosing among familiar ice cream flavors. People know whether they like chocolate, vanilla, coffee, or something else. They do less well in contexts in which they are inexperienced and poorly informed, and in which feedback is slow or infrequent—say, in saving for retirement or in choosing among medical treatments or investment options. If you are given fifty different insurance policies from which to choose, with multiple and varying features, you might benefit from a little help. So long as people are not choosing perfectly, some changes in the choice architecture could make their lives go better (as judged by them, not by some bureaucrat). As we will try to show, it is not only possible to design choice architecture to make people better off; in many cases, it is easy to do so. The first misconception is that it is possible to avoid influencing people’s choices. In countless situations, some organization or agent must make a choice that will affect the behavior of some other people. There is, in those situations, no way of avoiding nudging in some direction, and these nudges will affect what people choose. Choice architecture is inevitable. As illustrated by the example of Carolyn’s cafeterias, people’s choices are pervasively influenced by the design elements selected by choice architects. No website, and no grocery store, lacks a design. 
It is true, of course, that some nudges are unintentional; employers may decide (say) whether to pay employees monthly or biweekly without intending to create any kind of nudge, but they might be surprised to discover that people save more if they get paid biweekly, because twice a year they get three paychecks in one month, and many bills come monthly. It is also true that private and public institutions can strive for one or another kind of neutrality—by, for example, choosing randomly, or by trying to figure out what most people want. But unintentional nudges can have major effects, and in some contexts, these forms of neutrality are unattractive; we shall encounter many examples. It is true as well that choice architects can insist on active choosing—by, for example, saying that if you want to work for the government, you have to specify the health care plan you prefer. But active choosing is itself a form of choice architecture, and it is not one that everyone will prefer, especially when options are numerous and decisions are difficult. In a French restaurant where customers are presented with a cart loaded with what seems like hundreds of varieties of cheese, it can be a blessing to have the option of asking the server to suggest a selection. People do not always like to be told to choose, and if they are forced to do that, they might not be at all happy. Some people will gladly accept these points for private institutions but strenuously object to government efforts to influence choice with the goal of improving people’s lives. They worry that governments cannot be trusted to be competent or benign. They fear that elected officials and bureaucrats will be ignorant, will place their own interests first, or will pay excessive attention to the narrow goals of self-interested private groups. We share these concerns. In particular, we emphatically agree that for government, the risks of mistake, bias, and overreaching are real and sometimes serious. That is why we generally favor nudges over commands, requirements, and prohibitions (except when people are harming others). But governments, no less than cafeterias (which governments frequently run), have to provide starting points of one or another kind. This is not avoidable. As we shall emphasize, they do so every day through the policies they establish, in ways that inevitably affect some choices and outcomes. In this respect, the anti-nudge position is a logical impossibility—a literal nonstarter. The second misconception is that paternalism always involves coercion. In the cafeteria example, the choice of the order in which to present food items does not force a particular diet on anyone, yet Carolyn, and others in her position, might select some arrangement of food on grounds that are paternalistic in the sense that we use the term. Would anyone object to putting the fruit and salad before the desserts at an elementary school cafeteria if the result were to induce kids to eat more apples and fewer brownies? Is this question fundamentally different if the customers are teenagers, or even adults? Is a GPS device an intrusion on freedom, even if it is paternalistic, in the sense that it tries to tell you how to get to your preferred destination? When no coercion is involved, we think that some types of paternalism should be acceptable even to those who most embrace freedom of choice. 
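An aside on the unintentional pay-frequency nudge mentioned above: with biweekly pay there are 26 paychecks but only 12 months, so two months each year must contain three paydays. Here is a minimal Python sketch of that arithmetic; the 2021 start date and the Friday payday are our illustrative assumptions, not details from the book.

    import datetime
    from collections import Counter

    # 26 biweekly paydays spread over 12 monthly billing cycles: two months
    # end up containing three paydays. The start date is purely illustrative.
    first_payday = datetime.date(2021, 1, 8)  # a Friday; hypothetical
    paydays = [first_payday + datetime.timedelta(weeks=2 * i) for i in range(26)]
    paydays_per_month = Counter(d.strftime("%B") for d in paydays)
    print([month for month, n in paydays_per_month.items() if n == 3])
    # -> ['April', 'October'] for this particular start date

In those two months, a household that budgets around monthly bills sees an “extra” paycheck, which is one plausible mechanism behind the savings effect described above.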
In domains as varied as savings, health, consumer protection, organ donation, climate change, and insurance, we will offer specific suggestions in keeping with our general approach. And by insisting that choices remain unrestricted, we think that the risks of inept or even corrupt designs are reduced. Freedom to choose is the best safeguard against bad choice architecture.

Choice Architecture in Action

Choice architects can make major improvements to the lives of others by designing user-friendly environments. Many of the most successful companies have succeeded in the marketplace for exactly that reason. Sometimes the choice architecture is highly visible, and consumers and employees appreciate the value it provides. Apple’s iPhone became an enormous economic success in part because of its elegant style, but mostly because users found it easy to get the device to do what they want. Sometimes the choice architecture is neglected and could benefit from some careful attention.

Consider an illustration from the American workplace. (If you live elsewhere, please take pity on our plight.) Most large employers offer a range of benefits, including such things as life and health insurance and retirement savings plans. Once a year in late fall, there is an open enrollment period when employees are allowed to revise the selections that they made the previous year. Employees are required to make their choices online. They typically receive, by mail, a package of materials explaining the choices they have and instructions on how to log on to make these choices. They also receive various reminders.

Because employees are human, some neglect to log on, so it is crucial to decide what the default options are for these busy, absentminded, and perhaps even overwhelmed employees. Usually, the default is one of two options: employees can be given the same option they chose the previous year, or their choice can be set back to “zero.” Call these the “status quo” and “back-to-zero” options. How should the choice architect choose between these defaults?

Libertarian paternalists would like to set the default by asking what thoughtful and well-informed employees would actually want. Although this principle may not always lead to a clear choice, it is certainly better than choosing the default at random, or making either status quo or back to zero the default for everything. For example, it is a good guess that most employees would not want to cancel their heavily subsidized health insurance. So, for health insurance the status quo default (same plan as last year) seems strongly preferable to the back-to-zero default (which would mean going without health insurance).

Compare this to an employee’s flexible spending account, a peculiarly cruel “benefit” that we believe exists only in the United States. An employee can contribute money into this account each month that can then be used to pay for certain expenditures (such as uninsured medical or childcare expenses). The cruel feature is that money put into this account has to be spent by March 31 of the following year or it is lost, and the predicted expenditures might vary greatly from one year to the next (for example, medical expenses might go up in a year in which a family welcomes a newborn, or childcare expenses might go down when a child enters school). In this case, the back-to-zero default probably makes more sense than the status quo. This problem is not hypothetical.
Some time ago, Thaler had a meeting with three of the top administrative officers of his employer, the University of Chicago, to discuss similar issues, and the meeting happened to take place on the final day of the open enrollment period. He mentioned this coincidence and teasingly asked whether the administrators had remembered to log on and adjust their benefits package. One sheepishly said that he was planning on doing it later that day and was glad for the reminder. Another admitted to having forgotten, and the third said that he was hoping his wife had remembered to do it! The group then turned to the topic of the meeting, namely what the default should be for an option with the uninviting name “supplementary salary reduction program” (it’s better than it sounds; it’s actually a tax-sheltered savings program). At that time the default was the back-to-zero option, and Thaler had arranged the meeting hoping to convince the administrators to change the default to “same as last year.” After their own absentminded behavior was made salient to them, the administrators quickly agreed to the change. We are confident that many university employees will have more comfortable retirements as a result.

This example illustrates some basic principles of good choice architecture. Choosers are human, so designers should make life as easy as possible. Send reminders (but not too many!) and then try to minimize the costs imposed on those who, despite your (and their) best efforts, space out. As we will see, these principles (and many more) can be applied in both the private and public sectors, and there is much room for going beyond what is now being done. Large companies and governments, please take note. (Also universities and small companies.)

A New Path

We shall have a great deal to say about nudges from private institutions. But many of the most important applications of libertarian paternalism are for governments, and we will offer a number of recommendations for public policy and law. Our hope when we originally wrote this book was that those recommendations might appeal to both sides of the political divide. Indeed, we believed that the policies suggested by libertarian paternalism could be embraced by conservatives and liberals. We are pleased to report that, far more than we could have anticipated, that belief has been vindicated.

In the United Kingdom, former Prime Minister David Cameron, the leader of the Conservative Party, embraced nudging and created the world’s first team devoted solely to this effort, officially called the Behavioural Insights Team, but often called the Nudge Unit.* In the United States, former President Barack Obama, a Democrat and a liberal, also embraced the basic idea, directed his agencies to adopt numerous nudges, and created a nudge unit of his own (originally known as the Social and Behavioral Sciences Team, and now called the Office of Evaluation Sciences). The United States Agency for International Development has an assortment of programs that use behavioral science and insights.

In the years since the book was originally published, governments around the world, spanning the political landscape, have incorporated these and related ideas in an effort to make their programs more efficient and effective. There are behavioral insights teams or nudge units of various kinds in numerous nations, including Australia, New Zealand, Germany, Canada, Finland, Singapore, the Netherlands, France, Japan, India, Qatar, and Saudi Arabia.
A great deal of relevant work is being done by the World Bank, the United Nations, and the European Commission. In 2020, the World Health Organization created a Behavioral Insights Initiative focusing on numerous public health issues, including pandemics, vaccination uptake, and risk-taking by young people.

Although the world seems to be becoming increasingly polarized, we continue to believe that libertarian paternalism can be a promising foundation for bipartisanship and for simple problem-solving. Better governance often requires less in the way of government coercion and more in the way of freedom to choose. Mandates and prohibitions have their place (and behavioral science can help to identify them), but when incentives and nudges replace requirements and bans, government will be both smaller and more modest. So, to be clear: this book is not a call for more bureaucracy, or even for an increased role of government. We just strive for better governance. In short, libertarian paternalism is neither left nor right. For all their differences, we hope that people with very different political convictions might be willing to converge in support of gentle nudges.

HUMANS AND ECONS

1. Biases and Blunders

Have a look, if you would, at the two tables shown in the figure below:

[Figure 1.1. Two tables (Adapted from Shepard)]

Suppose that you are thinking about which one would work better as a coffee table in your living room. What would you say are the shapes of the two tables? Take a guess at the ratio of the length to the width of each. Just eyeball it. If you are like most people, you think that the table on the left is much longer and narrower than the one on the right. Typical guesses are that the ratio of the length to the width is 3:1 for the left table and 1.5:1 for the right table. Now take out a ruler and measure each table. You will find that the shapes of the two tabletops are identical. Measure them until you are convinced, because this is a case where seeing is not believing. (When Thaler showed this example to Sunstein at their usual lunch haunt, Sunstein grabbed his chopstick to check.)

What should we conclude from this example? If you see the left table as longer and thinner than the right one, you are certifiably human. There is nothing wrong with you (well, at least not that we can detect from this test). Still, your judgment in this task was biased, and predictably so. No one thinks that the right table is narrower! Not only were you wrong; you were probably confident that you were right. If you like, you can put this visual to good use when you encounter others who are equally human and who are disposed to gamble away their money, say, at a bar.

[Figure 1.2. Tabletops (Adapted from Shepard)]

Now consider Figure 1.2. Do these two shapes look the same or different? Again, if you are human and have decent vision, you probably see these shapes as being identical, as they are. But these two shapes are just the tabletops from Figure 1.1, removed from their legs and reoriented. Both the legs and the orientation facilitate the illusion that the tabletops are different in Figure 1.1, so removing these distracters restores the visual system to its usual, amazingly accurate state.*

These two figures capture the key insight that behavioral economists have borrowed from psychologists. Normally the human mind works remarkably well. We can recognize people we have not seen in years, understand the complexities of our native language, and run down a flight of stairs without falling.
Some of us can speak twelve languages, improve the fanciest computers, or create the theory of relativity. However, even Albert Einstein, Bill Gates, and Steve Jobs would probably be fooled by those tables. That does not mean something is wrong with us as humans, but it does mean that our understanding of human behavior can be improved by appreciating how and when people systematically go wrong. Knowing something about the visual system allowed Roger Shepard, a psychologist and artist, to draw those deceptive tables.1 In the spirit of those tables, this chapter will spell out some of the most important ways that human judgment and decision making diverge from the predictions of models based on optimization.

Before we get started, though, we want to stress that we are not saying that people are irrational. We avoid using that unhelpful and unkind term, and we certainly don’t think that people are dumb. Rather, the problem is that we are fallible and life is hard. If every time we went food shopping, we tried to solve the problem of choosing the very best possible combination of items to buy, we would never get out of the store. Instead, we take sensible shortcuts, and we try to get home before we start eating the things in our cart. We are human.

Rules of Thumb

To deal with life, we use rules of thumb. They are handy and useful. Their variety is nicely illustrated by Tom Parker’s fascinating 1983 book, Rules of Thumb. Parker wrote the book by asking friends to send him examples. These include: “One ostrich egg will serve 24 people for brunch.” “Ten people will raise the temperature of an average size room by one degree per hour.” And one to which we will return: “No more than 25 percent of the guests at a university dinner party can come from the economics department without spoiling the conversation.”

Although rules of thumb can be very helpful, their use can also lead to systematic biases. This insight, first stated decades ago by two of our heroes, the psychologists Daniel Kahneman and Amos Tversky, changed how psychologists (and eventually economists, lawyers, policymakers, and many others) think about thinking. Their early work identified three common rules of thumb or heuristics—anchoring, availability, and representativeness—and the biases that are associated with each. Their research program became known as the “heuristics and biases” approach to the study of human judgment. This approach has been an inspiration for behavioral economics in general, and especially for this book.

Anchoring

Suppose we were asked to guess the population of Milwaukee, a city about a two-hour drive north of Chicago, where we both lived when we wrote the first edition of this book. Neither of us knows much about Milwaukee, but we believe it is the biggest city in Wisconsin. How should we go about guessing? Well, a good thing to do is to start with something we do know, such as the population of Chicago, which is roughly three million. And we know that Milwaukee is a big enough city to have professional baseball and basketball teams, but clearly not as big as Chicago, so, hmmm, maybe it is one-third the size, say one million. Now consider someone from Green Bay, Wisconsin, who is asked the same question. She also doesn’t know the answer, but she does know that Green Bay has about one hundred thousand people and that Milwaukee is larger, so she guesses, say, three times larger—three hundred thousand. This process is called “anchoring and adjustment.” You start with some anchor, a number you know, and adjust in the direction you think is appropriate. So far, so good. The bias occurs because the adjustments are typically insufficient. Experiments repeatedly show that in problems similar to our example, people from Chicago are likely to make a high guess (based on their high anchor), while those from Green Bay guess low (based on their low anchor). As it happens, Milwaukee has about 590,000 people.
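To see how insufficient adjustment leaves guesses biased toward the anchor, here is a toy Python sketch; the 0.5 adjustment factor is purely our illustrative assumption, not a number from the book or the literature.

    # Toy model: start from an anchor and adjust toward the truth, but only
    # part of the way. Guesses then land between the anchor and the answer.
    def anchored_guess(anchor, true_value, adjustment=0.5):
        return anchor + adjustment * (true_value - anchor)

    MILWAUKEE = 590_000
    print(anchored_guess(3_000_000, MILWAUKEE))  # Chicagoan: 1,795,000 (too high)
    print(anchored_guess(100_000, MILWAUKEE))    # Green Bay resident: 345,000 (too low)

Under any adjustment factor short of 1, the high anchor produces an overestimate and the low anchor an underestimate, which is the pattern the experiments described above keep finding.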
This process is called “anchoring and adjustment.” You start with some anchor, a number you know, and adjust in the direction you think is appropriate. So far, so good. The bias occurs because the adjustments are typically insufficient. Experiments repeatedly show that in problems similar to our example, people from Chicago are likely to make a high guess (based on their high anchor), while those from Green Bay guess low (based on their low anchor). As it happens, Milwaukee has about 590,000 people.

Even obviously irrelevant anchors creep into the decision-making process. Try this one yourself. Think about the last three digits of your phone number. Write the number down if you can. Now, when do you think Attila the Hun sacked Europe? Was it before or after that year? What is your best guess? Even if you do not know much about European history, you do know enough to know that whenever Attila did whatever he did, the date has nothing to do with your phone number. Still, when we conduct this experiment with our students, we get answers that are more than three hundred years later from students who start with high anchors rather than low ones. (The right answer is 452.)

Anchors can even influence how you think your life is going. In one experiment, college students were asked two questions: (a) How happy are you? (b) How often are you dating? When the two questions were asked in this order, the correlation between the two questions was quite low (.11). But when the question order was reversed, so that the dating question was asked first, the correlation jumped to .62. Apparently, when prompted by the dating question, the students use what might be called the “dating heuristic” to answer the question about how happy they are. “Gee, I can’t remember when I last had a date! I must be miserable.” Similar results can be obtained from married couples if the dating question is replaced by a lovemaking question.2

In the language of this book, anchors serve as nudges. One example comes from tipping behavior in taxicabs. Taxi drivers were initially reluctant to adopt the technology to accept credit cards in their cabs, because the credit card companies take a cut of roughly 3 percent. But those who did install the technology were pleasantly surprised to learn that their tips increased! This was partly due to some anchoring. When customers elected to use their card to pay, they would often be confronted with tip options that looked something like this:

15%
20%
25%
Choose your own amount.

Notice this screen is nudging people toward higher tips by offering precalculated amounts that start at these percentages. (And when in doubt, people often choose the middle option—in this case 20 percent, which is higher than the 15 percent many customers previously chose without this intervention.) Also, the option to choose your own tip is a bit of an illusion. The screen appears only when the trip is over; the customer is ready to leave, others may be waiting to get into the cab, and entering one’s own amount requires some calculations and a couple extra steps. By contrast, just clicking one of the buttons is easy!

Nevertheless, it is tricky to figure out what the best defaults would be from the perspective of the driver. This is shown in a careful study by behavioral economist Kareem Haggag. Haggag was able to compare the tips from two cab companies, one of which offered 15, 20, and 25 percent tip suggestions, whereas the other had defaults of 20, 25, and 30 percent.
On balance, the screen with the relatively higher default tips significantly increased drivers’ earnings, because they increased the average tip. But interestingly, they also provoked an increase in the number of riders who offered no tip at all. Some people were evidently put off by the aggressive defaults, and they refused to give anything.3 This is connected with the behavioral phenomenon of reactance: when people feel ordered around, they might get mad and do the opposite of what is being ordered (or even suggested). Still, the evidence shows that, within reason, the more you ask for, the more you tend to get. Haggag’s headline is that because of the higher on-screen default tips, taxi drivers ended up with a decent increase in their annual earnings.

Lawyers who sue companies sometimes win astronomical amounts, in part because they have successfully induced juries to anchor on multimillion-dollar figures (such as a company’s annual earnings). Clever negotiators often get amazing deals for their clients by producing an opening offer that makes their adversary thrilled to pay half that very high amount. But keep that notion of reactance in mind. If you get greedy, you might end up with nothing.

Availability

A quick quiz: In the United States, are more gun deaths caused by homicides or suicides? In answering questions of this kind, most people use what is called the availability heuristic. They assess the likelihood of risks by asking how readily examples come to mind. Because homicides are much more heavily reported in the news media, they are more available than suicides, and so people tend to believe, wrongly, that guns cause more deaths from homicide than from suicide. (There are about twice as many gun-inflicted suicides as homicides.) An important lesson can be found here: people often buy a gun thinking they want to protect their family, but it is much more likely that they will increase the chance that a family member dies by suicide.

Accessibility and salience are closely related to availability, and they are important as well. If you have personally experienced a serious earthquake, you’re more likely to believe that a flood or an earthquake is likely than if you read about it in a weekly magazine. Thus, vivid and easily imagined causes of death (for example, tornadoes) often receive inflated estimates of probability, and less-vivid causes (for example, asthma attacks) receive low estimates, even if they occur with a far greater frequency (here a factor of twenty). So, too, recent events have a greater impact on our behavior, and on our fears, than earlier ones.

The availability heuristic helps to explain much risk-related behavior, including both public and private decisions to take precautions. Whether people buy insurance for natural disasters is greatly affected by recent experiences.4 In the aftermath of a flood, purchases of new flood insurance policies rise sharply—but purchases decline steadily from that point, as vivid memories recede. And people who know someone who has experienced a flood are more likely to buy flood insurance for themselves, regardless of the flood risk they actually face.5

Biased assessments of risk can perversely influence how we prepare for and respond to crises, business choices, and the political process. When technology stocks have done very well, people might well buy technology stocks, even if by that point they’ve become a bad investment.
People might overestimate some risks, such as a nuclear power accident, because of well-publicized incidents such as Chernobyl and Fukushima. They might underestimate others, such as strokes, because they do not get much attention in the media. Such misperceptions can affect policy, because some governments will allocate their resources in a way that fits with people’s fears rather than in response to the most likely dangers.

When availability bias is at work, both private and public decisions may be improved if judgments can be nudged back in the direction of true probabilities. A good way to get people to take more precautions about a potential hazard is to remind them of a related incident in which things went wrong; a good way to increase people’s confidence is to remind them of a similar situation in which everything worked out for the best.

Representativeness

The third of the original three heuristics bears an unwieldy name: representativeness. Think of it as the similarity heuristic. The idea is that when asked to judge how likely it is that A belongs to category B, people answer by asking themselves how similar A is to their image or stereotype of B (that is, how “representative” A is of B). Like the other two heuristics we have discussed, this one is used because it often works. Stereotypes are sometimes right!

Again, biases can creep in when similarity and frequency diverge. The most famous demonstration of such biases involves the case of a hypothetical woman named Linda. In an experiment, subjects were told the following: “Linda is thirty-one years old, single, outspoken, and very bright. She majored in philosophy. As a student, she was deeply concerned with issues of discrimination and social justice and also participated in antinuclear demonstrations.” Then people were asked to rank, in order of the probability of their occurrence, eight possible futures for Linda. The two crucial answers were “bank teller” and “bank teller and active in the feminist movement.” Most people said that Linda was less likely to be a bank teller than to be a bank teller and active in the feminist movement.6

This is an obvious logical mistake. It is, of course, not logically possible for the combination of two events to be more likely than one of those events alone. It just has to be the case that Linda is more likely to be a bank teller than a feminist bank teller, because all feminist bank tellers are bank tellers. The error stems from the use of the representativeness heuristic: Linda’s description seems to match “bank teller and active in the feminist movement” far better than “bank teller.” As Stephen Jay Gould once observed, “I know [the right answer], yet a little homunculus in my head continues to jump up and down, shouting at me ‘but she can’t just be a bank teller; read the description!’”7 Like the availability heuristic, the representativeness heuristic often works well, but it can lead to major errors.

Optimism and Overconfidence

Before the start of Thaler’s class in managerial decision making, students fill out an anonymous survey on the course website. One of the questions is “In which decile do you expect to fall in the distribution of grades in this class?” Students can check the top 10 percent, the second 10 percent, and so forth. Since these are MBA students, they are presumably well aware that in any distribution, half the population will be in the top 50 percent and half in the bottom. And only 10 percent of the class can, in fact, end up in the top decile.
Nevertheless, the results of this survey reveal a high degree of unrealistic optimism about performance in the class. Typically less than 5 percent of the class expects their performance to be below the median (the 50th percentile) and more than half the class expects to perform in one of the top two deciles. Invariably, the largest group of students put themselves in the second decile. We think this is most likely explained by modesty. They really think they will end up in the top decile but are too modest to say so. MBA students are not the only ones overconfident about their abilities. The “above-average” effect is pervasive. In some studies, 90 percent of drivers say they are above average behind the wheel. And nearly everyone thinks they have an above-average sense of humor, including some people who are rarely seen smiling. (That is because they know what is funny!) This applies to professors, too. One study found that about 94 percent of professors at a large university believed they were better than the average professor, and there is every reason to think that such overconfidence applies to professors in general.8 (Yes, we admit to this particular failing.) People are unrealistically optimistic even when the stakes are high. In the United States, about 40 to 50 percent of marriages end in divorce, and this is a statistic most people have heard. (The precise number is hard to nail down.) But around the time of the ceremony, almost all couples have been found to believe that there is approximately a zero percent chance that their marriage will end in divorce—even those who have already been divorced!9 (Second marriage, Samuel Johnson once quipped, “is the triumph of hope over experience.”) A similar point applies to entrepreneurs starting new businesses, in which the failure rate is at least 50 percent. In one survey of people starting new businesses (typically small businesses, such as contracting firms, restaurants, or salons), respondents were asked two questions: (a) What do you think is the chance of success for a typical business like yours? (b) What is your chance of success? The most common answers to these questions were 50 percent and 90 percent, respectively, and many said 100 percent in response to the second question.10 Unrealistic optimism can explain a lot of individual risk-taking, especially in the domain of risks to life and health. Asked to envision their future, students typically say that they are far less likely than their classmates to be fired from a job, to have a heart attack or get cancer, to be divorced after a few years of marriage, or to have a drinking problem. Older people underestimate the likelihood that they will be in a car accident or suffer major diseases. Smokers are aware of the statistical risks and often even exaggerate them, but most believe that they are less likely to be diagnosed with lung cancer and heart disease than most nonsmokers. Lotteries are successful partly because of unrealistic optimism.11 Unrealistic optimism is a pervasive feature of human life; it characterizes most people in most social categories. When they overestimate their personal immunity to harm, people may fail to take sensible preventive steps. During the pandemic of 2020 and 2021, some people failed to take precautions, including mask-wearing, because of optimism about their personal risks. If people are running risks because of unrealistic optimism, they might be able to benefit from a nudge. 
In fact, we have already mentioned one possibility: if people are reminded of a bad event, they may not continue to be so optimistic.

Gains and Losses

People hate losses. In more technical language, people are “loss averse.” Roughly speaking, the prospect of losing something makes you twice as miserable as the prospect of gaining the same thing makes you happy. How do we know this? Consider a simple experiment.12 Half the students in a class are given a coffee mug with the insignia of their home university embossed on it. The students who do not get a mug are asked to examine their neighbors’ mugs. Then mug owners are invited to sell their mugs and nonowners are invited to buy them. They do so by answering this question: “At each of the following prices, indicate whether you would be willing to (give up your mug/buy a mug).” The results show that those with mugs demand roughly twice as much to give them up as others are willing to pay to get one. Thousands of mugs have been used in dozens of replications of this experiment, but the results are nearly always the same. Once you have a mug, you don’t want to give it up. But if you don’t have one, you don’t feel an urgent need to buy one. What this means is that people do not assign specific values to objects; it often matters whether they are selling or buying.

It is also possible to measure loss aversion with gambles. Suppose I ask you whether you want to make a bet. Heads you win $X, tails you lose $100. How much does X have to be for you to take the bet? For many people, the answer to this question is somewhere around $200. This implies that the prospect of winning $200 just offsets the prospect of losing $100. In other words, the implied loss-aversion ratio is about $200/$100 = 2: losses are felt roughly twice as strongly as equivalent gains.

Loss aversion helps produce inertia, meaning a strong desire to stick with your current holdings. If you are reluctant to give up what you have because you do not want to incur losses, then you will turn down trades you might have otherwise made. In another experiment, half the students in a class received coffee mugs (of course) and half got large chocolate bars. The mugs and the chocolate cost about the same, and in pretests students were as likely to choose one as the other. Yet when offered the opportunity to switch from a mug to a candy bar or vice versa, only one in ten switched.

Loss aversion has a lot of relevance to public policy. If you want to discourage the use of plastic bags, should you give people a small amount of money for bringing their own reusable bag, or should you ask them to pay the same small amount for a plastic bag? The evidence suggests that the former approach has no effect at all, but that the latter works; it significantly decreases use of plastic bags. People don’t want to lose money, even if the amount is trivial.13 (Environmentalists, please remember this point.)

Status Quo Bias

For lots of reasons, people have a general tendency to stick with their current situation. One reason is loss aversion; giving up what we have is painful. But the phenomenon has multiple causes. William Samuelson and Richard Zeckhauser have dubbed this behavior status quo bias, and it has been demonstrated in numerous situations.14 Most teachers know that students tend to sit in the same seats in class, even without a seating chart. But status quo bias can occur even when the stakes are much larger, and it can cost people a lot of money. For example, in retirement savings plans most participants pick an asset allocation when they join the plan and then forget about it.
A study conducted in the late 1980s looked at the decisions of participants in a pension plan that covered many college professors in the United States. The median number of changes in the asset allocation over a lifetime was, believe it or not, zero.15 In other words, over the course of their careers, more than half of the participants made exactly no changes to the way their contributions were being allocated. Perhaps even more telling, many married participants who were single when they joined the plan still had their mothers listed as their beneficiaries! As we will see, inertia in investing behavior is alive and well in Sweden. (See Chapter 10.)

Status quo bias is easily exploited. A true story: Many years ago, American Express wrote Sunstein a cheerful letter telling him that he could receive, for free, three-month subscriptions to five magazines of his choice.* What a great deal! Free subscriptions seem like a bargain, even if the magazines rarely got read, so Sunstein happily made his choices. What he didn’t realize was that unless he took some action to cancel his subscription, he would automatically keep receiving the magazines after the three-month period, automatically paying for them at the normal rate. For more than a decade, he continued to subscribe to magazines that he hardly ever read and that he mostly despised. They tended to pile up around the house. He kept intending to cancel those subscriptions, but somehow never got around to it. It was not until he started working on the original edition of this book that he canceled them.

One of the causes of status quo bias is a lack of attention. Many people adopt what we call the “yeah, whatever” heuristic. A good illustration is the carryover effect that occurs when people start binge-watching a television series. On most streaming networks if you do nothing when you reach the end of one episode, the next one just starts showing. At that point many viewers (implicitly) say, “yeah, whatever,” and keep watching. Many an intended short evening has dragged long into the night as a result, especially on shows with cliffhanger endings. Nor is Sunstein the only victim of automatic renewal of magazine subscriptions, which has now been extended to virtually every online service. Those who are in charge of circulation know that when renewal is automatic, and particularly when people have to make a phone call to cancel, the likelihood of renewal is much higher than it is when people have to indicate that they actually want to continue to receive the magazine. (We will return to this point in Chapter 8 in connection with sludge.)

The combination of loss aversion and mindless choosing is one reason why if an option is designated as the default, it will usually (but not always!) attract a large market share. Default options thus act as powerful nudges. For this and other reasons, setting the best possible defaults is a theme we explore often in this book.

Framing

Suppose that you are suffering from serious heart disease and your doctor proposes a grueling operation. You’re understandably curious about the odds of surviving this ordeal. The doctor says, “Of one hundred patients who have this operation, ninety are alive after five years.” What will you do? That statement might feel pretty comforting, making you confident about having the operation. But suppose the doctor frames his answer in a somewhat different way.
Suppose he says, “Of one hundred patients who have this operation, ten are dead after five years.” If you’re like most people, the doctor’s statement will sound pretty alarming and you might not have the operation. Instinctively, you might think: “A significant number of people are dead, and I might be one of them!” In numerous experiments, people react very differently to the information that “ninety of one hundred are alive” than to “ten of one hundred are dead”—even though the content of the two statements is exactly the same. Even experts are subject to framing effects. When doctors are told that “ninety of one hundred are alive,” they are more likely to recommend the operation than if told that “ten of one hundred are dead.”16

Framing matters in many domains. When credit cards started to become popular forms of payment in the 1970s, some retail merchants wanted to charge different prices to their cash and credit card customers. To prevent this, credit card companies adopted rules that prohibited their retailers from doing so. When a bill was introduced in Congress to forbid credit card companies to impose such rules and the bill seemed likely to pass, the credit card lobby turned its attention to language. Its preference was that if a company charged different prices to cash and credit customers, the credit price should be considered the “normal” (default) price and the cash price a discount—rather than the alternative of making the cash price the usual price and adding a surcharge for credit card customers. The credit card companies had a good, intuitive understanding of what psychologists would come to call framing. The idea is that choices depend, in part, on the way in which problems are described.

The point matters a great deal for public policy. Energy conservation is now rightly receiving a lot of attention, so consider the following information campaigns: (a) If you use energy conservation methods, you will save $350 per year; (b) If you do not use energy conservation methods, you will lose $350 per year. There is evidence that information campaign (b), framed in terms of loss, might be more effective than information campaign (a). If the government wants to encourage energy conservation, option (b) looks like a stronger nudge.

Much like status quo bias, framing effects are exacerbated by the Human tendency, at times, to be somewhat mindless, passive decision makers. Few of us bother to check whether reframing the decisions we face would produce a different answer. One reason why we don’t check for consistency may be that we wouldn’t know what to make of a contradiction. This implies that frames can be powerful nudges, and must be selected with care and caution.

How We Think: Two Systems

It goes without saying that the biases we have described in this chapter do not apply to everyone the same way. Yes, most people are overconfident and optimistic—but not everyone. In fact we have a good friend who has the opposite traits—he is never confident and is always worried about something, or many things. That friend happens to be Daniel Kahneman, with whom we have both had the privilege of being coauthors. A draft of a paper or book chapter that seemed good last week suddenly can look terrible to him this week. He is constantly rethinking everything. And this is particularly true of his own work. This trait led him to take an unusual step when he won the Nobel Memorial Prize in Economic Sciences in 2002. Laureates are asked to deliver a lecture during their week in Stockholm.
Most choose to discuss the work that earned the prize in a manner accessible to a lay audience. Kahneman did that, but in his own, unique way. He presented an entirely new way of looking at his joint work with Amos Tversky (who would have shared the prize had he been alive), using a concept from cognitive psychology that had not played even the slightest role in creating the research. Only Kahneman would take the already frantic two months between the announcement of the prize and the ceremony to rethink his life’s work completely. This rethinking was later refined and expanded in his bestselling book Thinking, Fast and Slow. The title of the book cleverly states the main idea, to which we devote the rest of this chapter.

It is useful to imagine the workings of the brain as consisting of two components or systems. One is fast and intuitive; the other is slow and reflective. Kahneman adopts the terminology of the psychology literature on which he draws, and calls these two components System 1 and System 2. One of us had trouble remembering which one is the fast one (it is 1), so we prefer to use names that remind the reader what they are. We call them the Automatic System and the Reflective System.

Using this framework can help us understand a puzzle about human thought. How can we be so ingenious at some tasks and so clueless at others? Beethoven wrote his incredible Ninth Symphony after he had become deaf, an amazing feat, but one would hardly be surprised to learn that he often misplaced his house keys. Was he a genius or an imbecile? The answer is some of both. The psychologists and neuroscientists on whose work Kahneman relied converged on a description of the brain’s functioning that helps us make sense of these seeming contradictions. The approach involves a distinction between two kinds of thinking.17

Two Cognitive Systems

AUTOMATIC SYSTEM      REFLECTIVE SYSTEM
Uncontrolled          Controlled
Effortless            Effortful
Associative           Deductive
Fast                  Slow
Unconscious           Self-aware
Skilled               Rule-following

Here is a story that illustrates how the two systems work. Sunstein has a son named Declan, who was, at age nine, unable to resist toy stores. Whenever the two of them passed such a store, Declan would clamor to go inside and buy something, even though he would predictably be bored with the new toy in a day or two. Sunstein, of course, dealt with this dilemma by giving Declan a short primer on the two systems. It was Declan’s System 1 that urgently wanted to go into toy stores, although his System 2 fully knew he had enough toys. For a few weeks, the explanation appeared to work, and Declan could pass by toy stores without uttering a word. But one day, he looked seriously at his father and asked, “Daddy, do I even have a System Two?”

As Declan now knows, the Automatic System is rapid and instinctive, and acts without reliance on what we usually associate with the word thinking. When you duck because a ball is thrown at you unexpectedly, or get nervous when your airplane hits turbulence, or smile when you see a cute puppy, it is your Automatic System at work. Though the neuroscience is complicated here, brain scientists are able to say that the activities of the Automatic System are associated with the oldest parts of the brain, the parts we share with lizards (as well as puppies).18 The Reflective System is more deliberate and self-conscious.
We use this system when we are asked, “How much is 411 times 317?” Most people are also likely to use the Reflective System when deciding which route to take for a trip to an unfamiliar place and whether to go to law school or business school. When we are writing this book we are (mostly) using our Reflective Systems, but sometimes ideas pop into our heads when we are in the shower or taking a walk and not thinking at all about the book, and these probably are coming from our Automatic Systems. (Many voters, by the way, seem to rely heavily on their Automatic System.19 A candidate who makes a bad first impression, or who tries to win votes through complex arguments and statistical demonstrations, may well run into trouble.)*20

Most people in the world have an Automatic System reaction to a temperature given in Celsius but have to use their Reflective System to process a temperature given in Fahrenheit; for Americans, the opposite is true. People speak their native languages using their Automatic Systems and tend to struggle to speak another language using their Reflective Systems. Being truly bilingual means that you speak two languages using the Automatic System. Accomplished chess players have pretty fancy intuitions; their Automatic Systems allow them to size up complex situations rapidly and respond with both amazing accuracy and exceptional speed.

One way to think about all this is that the Automatic System is your gut reaction and the Reflective System is your conscious thought. Gut feelings can be quite accurate, but we often make mistakes because we rely too much on our Automatic System. The Automatic System says, “The airplane is shaking, I’m going to die,” while the Reflective System responds, “Plane crashes are extremely rare!” The Automatic System says, “That big dog is going to hurt me,” and the Reflective System replies, “Most dogs are quite sweet.” The Automatic System starts out with no idea how to kick a ball accurately or shoot a basketball into a faraway hoop. Note, however, that countless hours of practice enable accomplished athletes to avoid reflection and to rely on their Automatic Systems—so much so that good athletes know the hazards of thinking too much and might well do better to “trust the gut,” or “just do it.” The Automatic System can be trained with lots of repetition—but such training takes a great deal of time and effort. One reason why teenagers are such risky drivers is that their Automatic Systems have not had much practice, and using the Reflective System is much slower. Sunstein is hopeful that Declan will develop a fully functional Reflective System before he is old enough to get a driver’s license.

To see how intuitive thinking works, try the following little test. For each of the three questions, begin by writing down the first answer that comes to your mind. Then pause to reflect.

1. A bat and ball cost $1.10 in total. The bat costs $1.00 more than the ball. How much does the ball cost?
2. You are one of three runners in a race. At the end, you overtake the runner who was in second place. In what place did you finish?
3. Mary’s mother had four children. The youngest three are named Spring, Summer, and Autumn. What is the eldest child’s name?

What were your initial answers? Most people say 10 cents, first place, and Winter. But all these answers are wrong. If you think for a minute, you will see why. If the ball costs 10 cents and the bat costs one dollar more than the ball, meaning $1.10, then together they cost $1.20, not $1.10.
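In case it helps to see the check spelled out, the bat-and-ball problem reduces to one line of algebra. The following worked equation is our illustration (letting x stand for the price of the ball in dollars; the symbol is ours, not part of the original problem):

\[
x + (x + 1.00) = 1.10 \quad\Longrightarrow\quad 2x = 0.10 \quad\Longrightarrow\quad x = 0.05
\]

So the ball costs 5 cents and the bat $1.05, which is indeed exactly one dollar more.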
No one who bothers to check whether his initial answer of 10 cents could possibly be right would give that as an answer, but research by Shane Frederick (who calls this series of questions the Cognitive Reflection Test) finds that these are the most popular answers even among bright college students.21 The correct answers are 5 cents, second place, and Mary, but you knew that, or at least your Reflective System did if you bothered to consult it.

Econs never make an important decision without checking with their Reflective Systems (if they have time). But Humans sometimes go with the answer the lizard inside is giving without pausing to think. If you are a television fan, think of Mr. Spock of Star Trek fame as someone whose Reflective System is always in control. (Captain Kirk: “You’d make a splendid computer, Mr. Spock.” Mr. Spock: “That is very kind of you, Captain!”) In contrast, Homer Simpson seems to have forgotten where he put his Reflective System. (Homer once replied to a gun store clerk who informed him of a mandatory five-day waiting period before buying a weapon, “Five days? But I’m mad now!”)

One of our major goals in this book is to see how the world might be made easier, or safer, for the Homers among us (and the Homer lurking somewhere in each of us). If people can rely more on their Automatic Systems without getting into terrible trouble, their lives should be easier, better, and longer. Put another way, let’s design policies for Homer economicus.

So What?

Our goal in this chapter has been to offer a brief glimpse at human fallibility. The picture that emerges is one of busy people trying to cope in a complex world in which they cannot afford to think deeply and at length about every choice they have to make. People adopt sensible rules of thumb that usually work well but sometimes lead them astray, especially in challenging or unfamiliar situations. Because they are busy and have limited attention, they tend to accept questions as posed rather than trying to determine whether their answers would vary under alternative formulations. The bottom line, from our point of view, is that people are, shall we say, nudge-able. Their choices, even in life’s most important decisions, are influenced in ways that would not be anticipated in a standard economic framework. Here is one final example to illustrate.

One of the most scenic urban thoroughfares in the world is Chicago’s Lake Shore Drive, which hugs the Lake Michigan coastline that is the city’s eastern boundary. The drive offers stunning views of Chicago’s magnificent skyline. There is one stretch of this road that puts drivers through a series of S curves. These curves are dangerous. For a long time, many drivers failed to take heed of the reduced speed limit (25 mph) and wiped out. In response, the city adopted a distinctive way of encouraging drivers to slow down.

Figure 1.3. Lake Shore Drive, Chicago (Courtesy of the city of Chicago)

At the beginning of the dangerous curve, drivers encounter a sign painted on the road warning of the lower speed limit, and then a series of white stripes painted onto the road. The stripes do not provide much if any tactile information (they are not speed bumps) but rather just send a visual signal to drivers. When the stripes first appear, they are evenly spaced, but as drivers reach the most dangerous portion of the curve, the stripes get closer together, giving the sensation that driving speed is increasing (see Figure 1.3). One’s natural instinct is to slow down.
When we drive on this familiar stretch of road, we find that those lines are speaking to us, gently urging us to touch the brake before the apex of the curve. We have been nudged.

2 Resisting Temptation

Back when he was a graduate student, Thaler was hosting dinner for some guests (other then-young economists) and put out a large bowl of cashew nuts to nibble on with drinks. Within a few minutes it became clear that the bowl of nuts was going to be consumed in its entirety and that the guests might lack sufficient appetite to enjoy all the food that was to follow. Leaping into action, Thaler grabbed the bowl of nuts, and (while sneaking a few more nuts for himself) removed it to the kitchen, where it was put out of sight.

When he returned, the guests thanked him for removing the nuts. The conversation immediately turned to the theoretical question of how they could possibly be happy about the fact that there was no longer a bowl of nuts in front of them. (You can now see the wisdom of the rule of thumb mentioned in Chapter 1 about a cap on the proportion of economists among attendees at a dinner party.) In economics (and in ordinary life), a basic principle is that you can never be made worse off by having more options, because you can always turn them down. Before Thaler removed the nuts, the group had the choice of whether to eat the nuts or not—now they didn’t. In the land of Econs, it is against the law to be happy about this!

To help us understand this example, consider how the preferences of the group seemed to evolve over time. At 7:15, just before Thaler removed the nuts, the dinner guests had three options: A: eat a few nuts; B: eat all the nuts; and C: eat no more nuts. Their first choice would be to eat just a few more nuts, followed by eating no more nuts. The worst option was finishing the bowl, since that would ruin dinner. This means that their preferences were A > C > B. But by 7:30, had the nuts remained on the table, the group would have finished the bowl, thereby choosing Option B, which they had ranked last. Why would the group change its mind in the space of just fifteen minutes? Or do we really even want to say that the group has changed its mind?

In the language of economics, the group is said to display behavior that is dynamically inconsistent. Initially people prefer Option A to Option B, but they later choose B over A. We can see dynamic inconsistency in many places. On Saturday morning, people might say that they prefer to go for a run later in the day, but once the afternoon comes, they are on the couch at home watching the football game or binge-watching the entire season of a new show. How can such behavior be understood? Two factors are relevant: temptation and mindlessness.