Zero to One: The Future of Progress

Peter Thiel

Summary

Zero to One explores how to build companies that create new things, through the lens of the author's experience as a co-founder and investor. The book argues that progress comes from creating something new (going from 0 to 1) rather than copying models that already work (going from 1 to n), and that technology, understood as any new and better way of doing things, is what makes such progress possible.

Full Transcript

Preface: Zero to One

EVERY MOMENT IN BUSINESS happens only once. The next Bill Gates will not build an operating system. The next Larry Page or Sergey Brin won’t make a search engine. And the next Mark Zuckerberg won’t create a social network. If you are copying these guys, you aren’t learning from them.

Of course, it’s easier to copy a model than to make something new. Doing what we already know how to do takes the world from 1 to n, adding more of something familiar. But every time we create something new, we go from 0 to 1. The act of creation is singular, as is the moment of creation, and the result is something fresh and strange.

Unless they invest in the difficult task of creating new things, American companies will fail in the future no matter how big their profits remain today. What happens when we’ve gained everything to be had from fine-tuning the old lines of business that we’ve inherited? Unlikely as it sounds, the answer threatens to be far worse than the crisis of 2008. Today’s “best practices” lead to dead ends; the best paths are new and untried.

In a world of gigantic administrative bureaucracies both public and private, searching for a new path might seem like hoping for a miracle. Actually, if American business is going to succeed, we are going to need hundreds, or even thousands, of miracles. This would be depressing but for one crucial fact: humans are distinguished from other species by our ability to work miracles. We call these miracles technology.

Technology is miraculous because it allows us to do more with less, ratcheting up our fundamental capabilities to a higher level. Other animals are instinctively driven to build things like dams or honeycombs, but we are the only ones that can invent new things and better ways of making them. Humans don’t decide what to build by making choices from some cosmic catalog of options given in advance; instead, by creating new technologies, we rewrite the plan of the world. These are the kind of elementary truths we teach to second graders, but they are easy to forget in a world where so much of what we do is repeat what has been done before.

Zero to One is about how to build companies that create new things. It draws on everything I’ve learned directly as a co-founder of PayPal and Palantir and then an investor in hundreds of startups, including Facebook and SpaceX. But while I have noticed many patterns, and I relate them here, this book offers no formula for success. The paradox of teaching entrepreneurship is that such a formula necessarily cannot exist; because every innovation is new and unique, no authority can prescribe in concrete terms how to be innovative. Indeed, the single most powerful pattern I have noticed is that successful people find value in unexpected places, and they do this by thinking about business from first principles instead of formulas.

This book stems from a course about startups that I taught at Stanford in 2012. College students can become extremely skilled at a few specialties, but many never learn what to do with those skills in the wider world. My primary goal in teaching the class was to help my students see beyond the tracks laid down by academic specialties to the broader future that is theirs to create. One of those students, Blake Masters, took detailed class notes, which circulated far beyond the campus, and in Zero to One I have worked with him to revise the notes for a wider audience. There’s no reason why the future should happen only at Stanford, or in college, or in Silicon Valley.
1 THE CHALLENGE OF THE FUTURE

WHENEVER I INTERVIEW someone for a job, I like to ask this question: “What important truth do very few people agree with you on?”

This question sounds easy because it’s straightforward. Actually, it’s very hard to answer. It’s intellectually difficult because the knowledge that everyone is taught in school is by definition agreed upon. And it’s psychologically difficult because anyone trying to answer must say something she knows to be unpopular. Brilliant thinking is rare, but courage is in even shorter supply than genius.

Most commonly, I hear answers like the following:

“Our educational system is broken and urgently needs to be fixed.”

“America is exceptional.”

“There is no God.”

Those are bad answers. The first and the second statements might be true, but many people already agree with them. The third statement simply takes one side in a familiar debate. A good answer takes the following form: “Most people believe in x, but the truth is the opposite of x.” I’ll give my own answer later in this chapter.

What does this contrarian question have to do with the future? In the most minimal sense, the future is simply the set of all moments yet to come. But what makes the future distinctive and important isn’t that it hasn’t happened yet, but rather that it will be a time when the world looks different from today. In this sense, if nothing about our society changes for the next 100 years, then the future is over 100 years away. If things change radically in the next decade, then the future is nearly at hand. No one can predict the future exactly, but we know two things: it’s going to be different, and it must be rooted in today’s world. Most answers to the contrarian question are different ways of seeing the present; good answers are as close as we can come to looking into the future.

ZERO TO ONE: THE FUTURE OF PROGRESS

When we think about the future, we hope for a future of progress. That progress can take one of two forms. Horizontal or extensive progress means copying things that work—going from 1 to n. Horizontal progress is easy to imagine because we already know what it looks like. Vertical or intensive progress means doing new things—going from 0 to 1. Vertical progress is harder to imagine because it requires doing something nobody else has ever done. If you take one typewriter and build 100, you have made horizontal progress. If you have a typewriter and build a word processor, you have made vertical progress.

At the macro level, the single word for horizontal progress is globalization—taking things that work somewhere and making them work everywhere. China is the paradigmatic example of globalization; its 20-year plan is to become like the United States is today. The Chinese have been straightforwardly copying everything that has worked in the developed world: 19th-century railroads, 20th-century air conditioning, and even entire cities. They might skip a few steps along the way—going straight to wireless without installing landlines, for instance—but they’re copying all the same.

The single word for vertical, 0 to 1 progress is technology. The rapid progress of information technology in recent decades has made Silicon Valley the capital of “technology” in general. But there is no reason why technology should be limited to computers. Properly understood, any new and better way of doing things is technology.

Because globalization and technology are different modes of progress, it’s possible to have both, either, or neither at the same time. For example, 1815 to 1914 was a period of both rapid technological development and rapid globalization. Between the First World War and Kissinger’s trip to reopen relations with China in 1971, there was rapid technological development but not much globalization. Since 1971, we have seen rapid globalization along with limited technological development, mostly confined to IT.

This age of globalization has made it easy to imagine that the decades ahead will bring more convergence and more sameness. Even our everyday language suggests we believe in a kind of technological end of history: the division of the world into the so-called developed and developing nations implies that the “developed” world has already achieved the achievable, and that poorer nations just need to catch up.

But I don’t think that’s true. My own answer to the contrarian question is that most people think the future of the world will be defined by globalization, but the truth is that technology matters more. Without technological change, if China doubles its energy production over the next two decades, it will also double its air pollution. If every one of India’s hundreds of millions of households were to live the way Americans already do—using only today’s tools—the result would be environmentally catastrophic. Spreading old ways to create wealth around the world will result in devastation, not riches. In a world of scarce resources, globalization without new technology is unsustainable.

New technology has never been an automatic feature of history. Our ancestors lived in static, zero-sum societies where success meant seizing things from others. They created new sources of wealth only rarely, and in the long run they could never create enough to save the average person from an extremely hard life. Then, after 10,000 years of fitful advance from primitive agriculture to medieval windmills and 16th-century astrolabes, the modern world suddenly experienced relentless technological progress from the advent of the steam engine in the 1760s all the way up to about 1970. As a result, we have inherited a richer society than any previous generation would have been able to imagine.

Any generation excepting our parents’ and grandparents’, that is: in the late 1960s, they expected this progress to continue. They looked forward to a four-day workweek, energy too cheap to meter, and vacations on the moon. But it didn’t happen. The smartphones that distract us from our surroundings also distract us from the fact that our surroundings are strangely old: only computers and communications have improved dramatically since midcentury. That doesn’t mean our parents were wrong to imagine a better future—they were only wrong to expect it as something automatic. Today our challenge is to both imagine and create the new technologies that can make the 21st century more peaceful and prosperous than the 20th.

STARTUP THINKING

New technology tends to come from new ventures—startups. From the Founding Fathers in politics to the Royal Society in science to Fairchild Semiconductor’s “traitorous eight” in business, small groups of people bound together by a sense of mission have changed the world for the better. The easiest explanation for this is negative: it’s hard to develop new things in big organizations, and it’s even harder to do it by yourself. Bureaucratic hierarchies move slowly, and entrenched interests shy away from risk.
In the most dysfunctional organizations, signaling that work is being done becomes a better strategy for career advancement than actually doing work (if this describes your company, you should quit now). At the other extreme, a lone genius might create a classic work of art or literature, but he could never create an entire industry. Startups operate on the principle that you need to work with other people to get stuff done, but you also need to stay small enough so that you actually can.

Positively defined, a startup is the largest group of people you can convince of a plan to build a different future. A new company’s most important strength is new thinking: even more important than nimbleness, small size affords space to think. This book is about the questions you must ask and answer to succeed in the business of doing new things: what follows is not a manual or a record of knowledge but an exercise in thinking. Because that is what a startup has to do: question received ideas and rethink business from scratch.

2 PARTY LIKE IT’S 1999

OUR CONTRARIAN QUESTION—What important truth do very few people agree with you on?—is difficult to answer directly. It may be easier to start with a preliminary: what does everybody agree on? “Madness is rare in individuals—but in groups, parties, nations, and ages it is the rule,” Nietzsche wrote (before he went mad). If you can identify a delusional popular belief, you can find what lies hidden behind it: the contrarian truth.

Consider an elementary proposition: companies exist to make money, not to lose it. This should be obvious to any thinking person. But it wasn’t so obvious to many in the late 1990s, when no loss was too big to be described as an investment in an even bigger, brighter future. The conventional wisdom of the “New Economy” accepted page views as a more authoritative, forward-looking financial metric than something as pedestrian as profit.

Conventional beliefs only ever come to appear arbitrary and wrong in retrospect; whenever one collapses, we call the old belief a bubble. But the distortions caused by bubbles don’t disappear when they pop. The internet craze of the ’90s was the biggest bubble since the crash of 1929, and the lessons learned afterward define and distort almost all thinking about technology today. The first step to thinking clearly is to question what we think we know about the past.

A QUICK HISTORY OF THE ’90S

The 1990s have a good image. We tend to remember them as a prosperous, optimistic decade that happened to end with the internet boom and bust. But many of those years were not as cheerful as our nostalgia holds. We’ve long since forgotten the global context for the 18 months of dot-com mania at decade’s end.

The ’90s started with a burst of euphoria when the Berlin Wall came down in November ’89. It was short-lived. By mid-1990, the United States was in recession. Technically the downturn ended in March ’91, but recovery was slow and unemployment continued to rise until July ’92. Manufacturing never fully rebounded. The shift to a service economy was protracted and painful.

1992 through the end of 1994 was a time of general malaise. Images of dead American soldiers in Mogadishu looped on cable news. Anxiety about globalization and U.S. competitiveness intensified as jobs flowed to Mexico. This pessimistic undercurrent drove then-president Bush 41 out of office and won Ross Perot nearly 20% of the popular vote in ’92—the best showing for a third-party candidate since Theodore Roosevelt in 1912. And whatever the cultural fascination with Nirvana, grunge, and heroin reflected, it wasn’t hope or confidence.

Silicon Valley felt sluggish, too. Japan seemed to be winning the semiconductor war. The internet had yet to take off, partly because its commercial use was restricted until late 1992 and partly due to the lack of user-friendly web browsers. It’s telling that when I arrived at Stanford in 1985, economics, not computer science, was the most popular major. To most people on campus, the tech sector seemed idiosyncratic or even provincial.

The internet changed all this. The Mosaic browser was officially released in November 1993, giving regular people a way to get online. Mosaic became Netscape, which released its Navigator browser in late 1994. Navigator’s adoption grew so quickly—from about 20% of the browser market in January 1995 to almost 80% less than 12 months later—that Netscape was able to IPO in August ’95 even though it wasn’t yet profitable. Within five months, Netscape stock had shot up from $28 to $174 per share. Other tech companies were booming, too. Yahoo! went public in April ’96 with an $848 million valuation. Amazon followed suit in May ’97 at $438 million. By spring of ’98, each company’s stock had more than quadrupled. Skeptics questioned earnings and revenue multiples higher than those for any non-internet company. It was easy to conclude that the market had gone crazy.

This conclusion was understandable but misplaced. In December ’96—more than three years before the bubble actually burst—Fed chairman Alan Greenspan warned that “irrational exuberance” might have “unduly escalated asset values.” Tech investors were exuberant, but it’s not clear that they were so irrational. It is too easy to forget that things weren’t going very well in the rest of the world at the time.

The East Asian financial crises hit in July 1997. Crony capitalism and massive foreign debt brought the Thai, Indonesian, and South Korean economies to their knees. The ruble crisis followed in August ’98 when Russia, hamstrung by chronic fiscal deficits, devalued its currency and defaulted on its debt. American investors grew nervous about a nation with 10,000 nukes and no money; the Dow Jones Industrial Average plunged more than 10% in a matter of days.

People were right to worry. The ruble crisis set off a chain reaction that brought down Long-Term Capital Management, a highly leveraged U.S. hedge fund. LTCM managed to lose $4.6 billion in the latter half of 1998, and still had over $100 billion in liabilities when the Fed intervened with a massive bailout and slashed interest rates in order to prevent systemic disaster. Europe wasn’t doing that much better. The euro launched in January 1999 to great skepticism and apathy. It rose to $1.19 on its first day of trading but sank to $0.83 within two years. In mid-2000, G7 central bankers had to prop it up with a multibillion-dollar intervention.

So the backdrop for the short-lived dot-com mania that started in September 1998 was a world in which nothing else seemed to be working. The Old Economy couldn’t handle the challenges of globalization. Something needed to work—and work in a big way—if the future was going to be better at all. By indirect proof, the New Economy of the internet was the only way forward.

MANIA: SEPTEMBER 1998–MARCH 2000

Dot-com mania was intense but short—18 months of insanity from September 1998 to March 2000. It was a Silicon Valley gold rush: there was money everywhere, and no shortage of exuberant, often sketchy people to chase it. Every week, dozens of new startups competed to throw the most lavish launch party. (Landing parties were much rarer.) Paper millionaires would rack up thousand-dollar dinner bills and try to pay with shares of their startup’s stock—sometimes it even worked. Legions of people decamped from their well-paying jobs to found or join startups. One 40-something grad student that I knew was running six different companies in 1999. (Usually, it’s considered weird to be a 40-year-old graduate student. Usually, it’s considered insane to start a half-dozen companies at once. But in the late ’90s, people could believe that was a winning combination.) Everybody should have known that the mania was unsustainable; the most “successful” companies seemed to embrace a sort of anti-business model where they lost money as they grew. But it’s hard to blame people for dancing when the music was playing; irrationality was rational given that appending “.com” to your name could double your value overnight.

PAYPAL MANIA

When I was running PayPal in late 1999, I was scared out of my wits—not because I didn’t believe in our company, but because it seemed like everyone else in the Valley was ready to believe anything at all. Everywhere I looked, people were starting and flipping companies with alarming casualness. One acquaintance told me how he had planned an IPO from his living room before he’d even incorporated his company—and he didn’t think that was weird. In this kind of environment, acting sanely began to seem eccentric.

At least PayPal had a suitably grand mission—the kind that post-bubble skeptics would later describe as grandiose: we wanted to create a new internet currency to replace the U.S. dollar. Our first product let people beam money from one PalmPilot to another. However, nobody had any use for that product except the journalists who voted it one of the 10 worst business ideas of 1999. PalmPilots were still too exotic then, but email was already commonplace, so we decided to create a way to send and receive payments over email.

By the fall of ’99, our email payment product worked well—anyone could log in to our website and easily transfer money. But we didn’t have enough customers, growth was slow, and expenses mounted. For PayPal to work, we needed to attract a critical mass of at least a million users. Advertising was too ineffective to justify the cost. Prospective deals with big banks kept falling through. So we decided to pay people to sign up.

We gave new customers $10 for joining, and we gave them $10 more every time they referred a friend. This got us hundreds of thousands of new customers and an exponential growth rate. Of course, this customer acquisition strategy was unsustainable on its own—when you pay people to be your customers, exponential growth means an exponentially growing cost structure.
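To make the unit economics concrete: on the numbers above, each referred signup cost PayPal about $20 (a $10 bonus to the new user plus a $10 bounty to the referrer). In a rough model, where the branching factor k is an assumed illustration rather than a figure from the book, if every cohort of new users successfully refers k > 1 others, then after t rounds

\[ u_t \approx u_0 k^t, \qquad \text{acquisition spend in round } t \approx \$20 \, u_t, \]

so the spend compounds at exactly the same rate as the user base: exponential growth in users means exponential growth in costs.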
Crazy costs were typical at that time in the Valley. But we thought our huge costs were sane: given a large user base, PayPal had a clear path to profitability by taking a small fee on customers’ transactions. We knew we’d need more funding to reach that goal. We also knew that the boom was going to end. Since we didn’t expect investors’ faith in our mission to survive the coming crash, we moved fast to raise funds while we could. On February 16, 2000, the Wall Street Journal ran a story lauding our viral growth and suggesting that PayPal was worth $500 million. When we raised $100 million the next month, our lead investor took the Journal’s back-of-the-envelope valuation as authoritative. (Other investors were in even more of a hurry. A South Korean firm wired us $5 million without first negotiating a deal or signing any documents. When I tried to return the money, they wouldn’t tell me where to send it.) That March 2000 financing round bought us the time we needed to make PayPal a success. Just as we closed the deal, the bubble popped.

LESSONS LEARNED

’Cause they say 2,000 zero zero party over, oops! Out of time!
So tonight I’m gonna party like it’s 1999!
—PRINCE

The NASDAQ reached 5,048 at its peak in the middle of March 2000 and then crashed to 3,321 in the middle of April. By the time it bottomed out at 1,114 in October 2002, the country had long since interpreted the market’s collapse as a kind of divine judgment against the technological optimism of the ’90s. The era of cornucopian hope was relabeled as an era of crazed greed and declared to be definitively over. Everyone learned to treat the future as fundamentally indefinite, and to dismiss as an extremist anyone with plans big enough to be measured in years instead of quarters.

Globalization replaced technology as the hope for the future. Since the ’90s migration “from bricks to clicks” didn’t work as hoped, investors went back to bricks (housing) and BRICs (globalization). The result was another bubble, this time in real estate.

The entrepreneurs who stuck with Silicon Valley learned four big lessons from the dot-com crash that still guide business thinking today:

1. Make incremental advances. Grand visions inflated the bubble, so they should not be indulged. Anyone who claims to be able to do something great is suspect, and anyone who wants to change the world should be more humble. Small, incremental steps are the only safe path forward.

2. Stay lean and flexible. All companies must be “lean,” which is code for “unplanned.” You should not know what your business will do; planning is arrogant and inflexible. Instead you should try things out, “iterate,” and treat entrepreneurship as agnostic experimentation.

3. Improve on the competition. Don’t try to create a new market prematurely. The only way to know you have a real business is to start with an already existing customer, so you should build your company by improving on recognizable products already offered by successful competitors.

4. Focus on product, not sales. If your product requires advertising or salespeople to sell it, it’s not good enough: technology is primarily about product development, not distribution. Bubble-era advertising was obviously wasteful, so the only sustainable growth is viral growth.

These lessons have become dogma in the startup world; those who would ignore them are presumed to invite the justified doom visited upon technology in the great crash of 2000. And yet the opposite principles are probably more correct:

1. It is better to risk boldness than triviality.

2. A bad plan is better than no plan.

3. Competitive markets destroy profits.

4. Sales matters just as much as product.

It’s true that there was a bubble in technology. The late ’90s was a time of hubris: people believed in going from 0 to 1. Too few startups were actually getting there, and many never went beyond talking about it. But people understood that we had no choice but to find ways to do more with less. The market high of March 2000 was obviously a peak of insanity; less obvious but more important, it was also a peak of clarity. People looked far into the future, saw how much valuable new technology we would need to get there safely, and judged themselves capable of creating it.

We still need new technology, and we may even need some 1999-style hubris and exuberance to get it. To build the next generation of companies, we must abandon the dogmas created after the crash. That doesn’t mean the opposite ideas are automatically true: you can’t escape the madness of crowds by dogmatically rejecting them. Instead ask yourself: how much of what you know about business is shaped by mistaken reactions to past mistakes? The most contrarian thing of all is not to oppose the crowd but to think for yourself.

3 ALL HAPPY COMPANIES ARE DIFFERENT

THE BUSINESS VERSION of our contrarian question is: what valuable company is nobody building? This question is harder than it looks, because your company could create a lot of value without becoming very valuable itself. Creating value is not enough—you also need to capture some of the value you create.

This means that even very big businesses can be bad businesses. For example, U.S. airline companies serve millions of passengers and create hundreds of billions of dollars of value each year. But in 2012, when the average airfare each way was $178, the airlines made only 37 cents per passenger trip. Compare them to Google, which creates less value but captures far more. Google brought in $50 billion in 2012 (versus $160 billion for the airlines), but it kept 21% of those revenues as profits—more than 100 times the airline industry’s profit margin that year.
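Spelling out the arithmetic behind that comparison, using the figures as quoted above and treating the $178 fare as the airlines' per-trip revenue:

\[ \frac{\$0.37}{\$178} \approx 0.2\% \text{ airline margin}, \qquad \frac{21\%}{0.2\%} \approx 100. \]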
Google makes so much money that it’s now worth three times more than every U.S. airline combined. The airlines compete with each other, but Google stands alone.

Economists use two simplified models to explain the difference: perfect competition and monopoly. “Perfect competition” is considered both the ideal and the default state in Economics 101. So-called perfectly competitive markets achieve equilibrium when producer supply meets consumer demand. Every firm in a competitive market is undifferentiated and sells the same homogeneous products. Since no firm has any market power, they must all sell at whatever price the market determines. If there is money to be made, new firms will enter the market, increase supply, drive prices down, and thereby eliminate the profits that attracted them in the first place. If too many firms enter the market, they’ll suffer losses, some will fold, and prices will rise back to sustainable levels. Under perfect competition, in the long run no company makes an economic profit.

The opposite of perfect competition is monopoly. Whereas a competitive firm must sell at the market price, a monopoly owns its market, so it can set its own prices. Since it has no competition, it produces at the quantity and price combination that maximizes its profits. To an economist, every monopoly looks the same, whether it deviously eliminates rivals, secures a license from the state, or innovates its way to the top. In this book, we’re not interested in illegal bullies or government favorites: by “monopoly,” we mean the kind of company that’s so good at what it does that no other firm can offer a close substitute. Google is a good example of a company that went from 0 to 1: it hasn’t competed in search since the early 2000s, when it definitively distanced itself from Microsoft and Yahoo!

Americans mythologize competition and credit it with saving us from socialist bread lines. Actually, capitalism and competition are opposites. Capitalism is premised on the accumulation of capital, but under perfect competition all profits get competed away. The lesson for entrepreneurs is clear: if you want to create and capture lasting value, don’t build an undifferentiated commodity business.

LIES PEOPLE TELL

How much of the world is actually monopolistic? How much is truly competitive? It’s hard to say, because our common conversation about these matters is so confused. To the outside observer, all businesses can seem reasonably alike, so it’s easy to perceive only small differences between them. But the reality is much more binary than that. There’s an enormous difference between perfect competition and monopoly, and most businesses are much closer to one extreme than we commonly realize. The confusion comes from a universal bias for describing market conditions in self-serving ways: both monopolists and competitors are incentivized to bend the truth.

Monopoly Lies

Monopolists lie to protect themselves. They know that bragging about their great monopoly invites being audited, scrutinized, and attacked. Since they very much want their monopoly profits to continue unmolested, they tend to do whatever they can to conceal their monopoly—usually by exaggerating the power of their (nonexistent) competition.

Think about how Google talks about its business. It certainly doesn’t claim to be a monopoly. But is it one? Well, it depends: a monopoly in what? Let’s say that Google is primarily a search engine. As of May 2014, it owns about 68% of the search market. (Its closest competitors, Microsoft and Yahoo!, have about 19% and 10%, respectively.) If that doesn’t seem dominant enough, consider the fact that the word “google” is now an official entry in the Oxford English Dictionary—as a verb. Don’t hold your breath waiting for that to happen to Bing.

But suppose we say that Google is primarily an advertising company. That changes things. The U.S. search engine advertising market is $17 billion annually. Online advertising is $37 billion annually. The entire U.S. advertising market is $150 billion. And global advertising is a $495 billion market. So even if Google completely monopolized U.S. search engine advertising, it would own just 3.4% of the global advertising market. From this angle, Google looks like a small player in a competitive world.

What if we frame Google as a multifaceted technology company instead? This seems reasonable enough; in addition to its search engine, Google makes dozens of other software products, not to mention robotic cars, Android phones, and wearable computers. But 95% of Google’s revenue comes from search advertising; its other products generated just $2.35 billion in 2012, and its consumer tech products a mere fraction of that. Since consumer tech is a $964 billion market globally, Google owns less than 0.24% of it—a far cry from relevance, let alone monopoly. Framing itself as just another tech company allows Google to escape all sorts of unwanted attention.

Competitive Lies

Non-monopolists tell the opposite lie: “we’re in a league of our own.” Entrepreneurs are always biased to understate the scale of competition, but that is the biggest mistake a startup can make. The fatal temptation is to describe your market extremely narrowly so that you dominate it by definition.

Suppose you want to start a restaurant that serves British food in Palo Alto. “No one else is doing it,” you might reason. “We’ll own the entire market.” But that’s only true if the relevant market is the market for British food specifically. What if the actual market is the Palo Alto restaurant market in general? And what if all the restaurants in nearby towns are part of the relevant market as well?

These are hard questions, but the bigger problem is that you have an incentive not to ask them at all. When you hear that most new restaurants fail within one or two years, your instinct will be to come up with a story about how yours is different. You’ll spend time trying to convince people that you are exceptional instead of seriously considering whether that’s true. It would be better to pause and consider whether there are people in Palo Alto who would rather eat British food above all else. It’s very possible they don’t exist.

In 2001, my co-workers at PayPal and I would often get lunch on Castro Street in Mountain View. We had our pick of restaurants, starting with obvious categories like Indian, sushi, and burgers. There were more options once we settled on a type: North Indian or South Indian, cheaper or fancier, and so on. In contrast to the competitive local restaurant market, PayPal was at that time the only email-based payments company in the world. We employed fewer people than the restaurants on Castro Street did, but our business was much more valuable than all of those restaurants combined. Starting a new South Indian restaurant is a really hard way to make money. If you lose sight of competitive reality and focus on trivial differentiating factors—maybe you think your naan is superior because of your great-grandmother’s recipe—your business is unlikely to survive.

Creative industries work this way, too. No screenwriter wants to admit that her new movie script simply rehashes what has already been done before. Rather, the pitch is: “This film will combine various exciting elements in entirely new ways.” It could even be true. Suppose her idea is to have Jay-Z star in a cross between Hackers and Jaws: rap star joins elite group of hackers to catch the shark that killed his friend. That has definitely never been done before. But, like the lack of British restaurants in Palo Alto, maybe that’s a good thing.

Non-monopolists exaggerate their distinction by defining their market as the intersection of various smaller markets:

British food ∩ restaurant ∩ Palo Alto

Rap star ∩ hackers ∩ sharks

Monopolists, by contrast, disguise their monopoly by framing their market as the union of several large markets:

search engine ∪ mobile phones ∪ wearable computers ∪ self-driving cars

What does a monopolist’s union story look like in practice? Consider a statement from Google chairman Eric Schmidt’s testimony at a 2011 congressional hearing:

We face an extremely competitive landscape in which consumers have a multitude of options to access information.

Or, translated from PR-speak to plain English: Google is a small fish in a big pond. We could be swallowed whole at any time. We are not the monopoly that the government is looking for.

RUTHLESS PEOPLE

The problem with a competitive business goes beyond lack of profits. Imagine you’re running one of those restaurants in Mountain View. You’re not that different from dozens of your competitors, so you’ve got to fight hard to survive. If you offer affordable food with low margins, you can probably pay employees only minimum wage. And you’ll need to squeeze out every efficiency: that’s why small restaurants put Grandma to work at the register and make the kids wash dishes in the back. Restaurants aren’t much better even at the very highest rungs, where reviews and ratings like Michelin’s star system enforce a culture of intense competition that can drive chefs crazy. (French chef and winner of three Michelin stars Bernard Loiseau was quoted as saying, “If I lose a star, I will commit suicide.” Michelin maintained his rating, but Loiseau killed himself anyway in 2003 when a competing French dining guide downgraded his restaurant.) The competitive ecosystem pushes people toward ruthlessness or death.

A monopoly like Google is different. Since it doesn’t have to worry about competing with anyone, it has wider latitude to care about its workers, its products, and its impact on the wider world. Google’s motto—“Don’t be evil”—is in part a branding ploy, but it’s also characteristic of a kind of business that’s successful enough to take ethics seriously without jeopardizing its own existence. In business, money is either an important thing or it is everything. Monopolists can afford to think about things other than making money; non-monopolists can’t. In perfect competition, a business is so focused on today’s margins that it can’t possibly plan for a long-term future. Only one thing can allow a business to transcend the daily brute struggle for survival: monopoly profits.

MONOPOLY CAPITALISM

So, a monopoly is good for everyone on the inside, but what about everyone on the outside? Do outsized profits come at the expense of the rest of society? Actually, yes: profits come out of customers’ wallets, and monopolies deserve their bad reputation—but only in a world where nothing changes.

In a static world, a monopolist is just a rent collector. If you corner the market for something, you can jack up the price; others will have no choice but to buy from you. Think of the famous board game: deeds are shuffled around from player to player, but the board never changes. There’s no way to win by inventing a better kind of real estate development. The relative values of the properties are fixed for all time, so all you can do is try to buy them up.

But the world we live in is dynamic: it’s possible to invent new and better things. Creative monopolists give customers more choices by adding entirely new categories of abundance to the world. Creative monopolies aren’t just good for the rest of society; they’re powerful engines for making it better. Even the government knows this: that’s why one of its departments works hard to create monopolies (by granting patents to new inventions) even though another part hunts them down (by prosecuting antitrust cases). It’s possible to question whether anyone should really be awarded a legally enforceable monopoly simply for having been the first to think of something like a mobile software design. But it’s clear that Apple’s monopoly profits from designing, producing, and marketing the iPhone were the reward for creating greater abundance, not artificial scarcity: customers were happy to finally have the choice of paying high prices to get a smartphone that actually works.

The dynamism of new monopolies itself explains why old monopolies don’t strangle innovation. With Apple’s iOS at the forefront, the rise of mobile computing has dramatically reduced Microsoft’s decades-long operating system dominance.
Before that, IBM’s hardware monopoly of the ’60s and ’70s was overtaken by Microsoft’s software monopoly. AT&T had a monopoly on telephone service for most of the 20th century, but now anyone can get a cheap cell phone plan from any number of providers. If the tendency of monopoly businesses were to hold back progress, they would be dangerous and we’d be right to oppose them. But the history of progress is a history of better monopoly businesses replacing incumbents. Monopolies drive progress because the promise of years or even decades of monopoly profits provides a powerful incentive to innovate. Then monopolies can keep innovating because profits enable them to make the long-term plans and to finance the ambitious research projects that firms locked in competition can’t dream of.

So why are economists obsessed with competition as an ideal state? It’s a relic of history. Economists copied their mathematics from the work of 19th-century physicists: they see individuals and businesses as interchangeable atoms, not as unique creators. Their theories describe an equilibrium state of perfect competition because that’s what’s easy to model, not because it represents the best of business. But it’s worth recalling that the long-run equilibrium predicted by 19th-century physics was a state in which all energy is evenly distributed and everything comes to rest—also known as the heat death of the universe. Whatever your views on thermodynamics, it’s a powerful metaphor: in business, equilibrium means stasis, and stasis means death. If your industry is in a competitive equilibrium, the death of your business won’t matter to the world; some other undifferentiated competitor will always be ready to take your place.

Perfect equilibrium may describe the void that is most of the universe. It may even characterize many businesses. But every new creation takes place far from equilibrium. In the real world outside economic theory, every business is successful exactly to the extent that it does something others cannot. Monopoly is therefore not a pathology or an exception. Monopoly is the condition of every successful business.

Tolstoy opens Anna Karenina by observing: “All happy families are alike; each unhappy family is unhappy in its own way.” Business is the opposite. All happy companies are different: each one earns a monopoly by solving a unique problem. All failed companies are the same: they failed to escape competition.

4 THE IDEOLOGY OF COMPETITION

CREATIVE MONOPOLY means new products that benefit everybody and sustainable profits for the creator. Competition means no profits for anybody, no meaningful differentiation, and a struggle for survival. So why do people believe that competition is healthy? The answer is that competition is not just an economic concept or a simple inconvenience that individuals and companies must deal with in the marketplace. More than anything else, competition is an ideology—the ideology—that pervades our society and distorts our thinking. We preach competition, internalize its necessity, and enact its commandments; and as a result, we trap ourselves within it—even though the more we compete, the less we gain.

This is a simple truth, but we’ve all been trained to ignore it. Our educational system both drives and reflects our obsession with competition. Grades themselves allow precise measurement of each student’s competitiveness; pupils with the highest marks receive status and credentials. We teach every young person the same subjects in mostly the same ways, irrespective of individual talents and preferences. Students who don’t learn best by sitting still at a desk are made to feel somehow inferior, while children who excel on conventional measures like tests and assignments end up defining their identities in terms of this weirdly contrived academic parallel reality.

And it gets worse as students ascend to higher levels of the tournament. Elite students climb confidently until they reach a level of competition sufficiently intense to beat their dreams out of them. Higher education is the place where people who had big plans in high school get stuck in fierce rivalries with equally smart peers over conventional careers like management consulting and investment banking. For the privilege of being turned into conformists, students (or their families) pay hundreds of thousands of dollars in skyrocketing tuition that continues to outpace inflation. Why are we doing this to ourselves?

I wish I had asked myself that question when I was younger. My path was so tracked that in my 8th-grade yearbook, one of my friends predicted—accurately—that four years later I would enter Stanford as a sophomore. And after a conventionally successful undergraduate career, I enrolled at Stanford Law School, where I competed even harder for the standard badges of success.

The highest prize in a law student’s world is unambiguous: out of tens of thousands of graduates each year, only a few dozen get a Supreme Court clerkship. After clerking on a federal appeals court for a year, I was invited to interview for clerkships with Justices Kennedy and Scalia. My meetings with the Justices went well. I was so close to winning this last competition. If only I got the clerkship, I thought, I would be set for life. But I didn’t. At the time, I was devastated.

In 2004, after I had built and sold PayPal, I ran into an old friend from law school who had helped me prepare my failed clerkship applications. We hadn’t spoken in nearly a decade. His first question wasn’t “How are you doing?” or “Can you believe it’s been so long?” Instead, he grinned and asked: “So, Peter, aren’t you glad you didn’t get that clerkship?” With the benefit of hindsight, we both knew that winning that ultimate competition would have changed my life for the worse. Had I actually clerked on the Supreme Court, I probably would have spent my entire career taking depositions or drafting other people’s business deals instead of creating anything new. It’s hard to say how much would be different, but the opportunity costs were enormous. All Rhodes Scholars had a great future in their past.

WAR AND PEACE

Professors downplay the cutthroat culture of academia, but managers never tire of comparing business to war. MBA students carry around copies of Clausewitz and Sun Tzu. War metaphors invade our everyday business language: we use headhunters to build up a sales force that will enable us to take a captive market and make a killing. But really it’s competition, not business, that is like war: allegedly necessary, supposedly valiant, but ultimately destructive.

Why do people compete with each other? Marx and Shakespeare provide two models for understanding almost every kind of conflict. According to Marx, people fight because they are different. The proletariat fights the bourgeoisie because they have completely different ideas and goals (generated, for Marx, by their very different material circumstances). The greater the differences, the greater the conflict. To Shakespeare, by contrast, all combatants look more or less alike. It’s not at all clear why they should be fighting, since they have nothing to fight about. Consider the opening line from Romeo and Juliet: “Two households, both alike in dignity.” The two houses are alike, yet they hate each other. They grow even more similar as the feud escalates. Eventually, they lose sight of why they started fighting in the first place.

In the world of business, at least, Shakespeare proves the superior guide. Inside a firm, people become obsessed with their competitors for career advancement. Then the firms themselves become obsessed with their competitors in the marketplace. Amid all the human drama, people lose sight of what matters and focus on their rivals instead.

Let’s test the Shakespearean model in the real world. Imagine a production called Gates and Schmidt, based on Romeo and Juliet. Montague is Microsoft. Capulet is Google. Two great families, run by alpha nerds, sure to clash on account of their sameness. As with all good tragedy, the conflict seems inevitable only in retrospect. In fact it was entirely avoidable. These families came from very different places. The House of Montague built operating systems and office applications. The House of Capulet wrote a search engine. What was there to fight about? Lots, apparently. As a startup, each clan had been content to leave the other alone and prosper independently. But as they grew, they began to focus on each other. Montagues obsessed about Capulets obsessed about Montagues. The result? Windows vs. Chrome OS, Bing vs. Google Search, Explorer vs. Chrome, Office vs. Docs, and Surface vs. Nexus. Just as war cost the Montagues and Capulets their children, it cost Microsoft and Google their dominance: Apple came along and overtook them all. In January 2013, Apple’s market capitalization was $500 billion, while Google and Microsoft combined were worth $467 billion. Just three years before, Microsoft and Google were each more valuable than Apple. War is costly business.

Rivalry causes us to overemphasize old opportunities and slavishly copy what has worked in the past. Consider the recent proliferation of mobile credit card readers. In October 2010, a startup called Square released a small, white, square-shaped product that let anyone with an iPhone swipe and accept credit cards. It was the first good payment processing solution for mobile handsets. Imitators promptly sprang into action. A Canadian company called NetSecure launched its own card reader in a half-moon shape. Intuit brought a cylindrical reader to the geometric battle. In March 2012, eBay’s PayPal unit launched its own copycat card reader. It was shaped like a triangle—a clear jab at Square, as three sides are simpler than four. One gets the sense that this Shakespearean saga won’t end until the apes run out of shapes.

The hazards of imitative competition may partially explain why individuals with an Asperger’s-like social ineptitude seem to be at an advantage in Silicon Valley today. If you’re less sensitive to social cues, you’re less likely to do the same things as everyone else around you. If you’re interested in making things or programming computers, you’ll be less afraid to pursue those activities single-mindedly and thereby become incredibly good at them. Then when you apply your skills, you’re a little less likely than others to give up your own convictions: this can save you from getting caught up in crowds competing for obvious prizes.
Competition can make people hallucinate opportunities where none exist. The crazy ’90s version of this was the fierce battle for the online pet store market. It was Pets.com vs. PetStore.com vs. Petopia.com vs. what seemed like dozens of others. Each company was obsessed with defeating its rivals, precisely because there were no substantive differences to focus on. Amid all the tactical questions—Who could price chewy dog toys most aggressively? Who could create the best Super Bowl ads?—these companies totally lost sight of the wider question of whether the online pet supply market was the right space to be in. Winning is better than losing, but everybody loses when the war isn’t one worth fighting. When Pets.com folded after the dot-com crash, $300 million of investment capital disappeared with it.

Other times, rivalry is just weird and distracting. Consider the Shakespearean conflict between Larry Ellison, co-founder and CEO of Oracle, and Tom Siebel, a top salesman at Oracle and Ellison’s protégé before he went on to found Siebel Systems in 1993. Ellison was livid at what he thought was Siebel’s betrayal. Siebel hated being in the shadow of his former boss. The two men were basically identical—hard-charging Chicagoans who loved to sell and hated to lose—so their hatred ran deep. Ellison and Siebel spent the second half of the ’90s trying to sabotage each other. At one point, Ellison sent truckloads of ice cream sandwiches to Siebel’s headquarters to try to convince Siebel employees to jump ship. The copy on the wrappers? “Summer is near. Oracle is here. To brighten your day and your career.”

Strangely, Oracle intentionally accumulated enemies. Ellison’s theory was that it’s always good to have an enemy, so long as it was large enough to appear threatening (and thus motivational to employees) but not so large as to actually threaten the company. So Ellison was probably thrilled when in 1996 a small database company called Informix put up a billboard near Oracle’s Redwood Shores headquarters that read: CAUTION: DINOSAUR CROSSING. Another Informix billboard on northbound Highway 101 read: YOU’VE JUST PASSED REDWOOD SHORES. SO DID WE. Oracle shot back with a billboard that implied that Informix’s software was slower than snails. Then Informix CEO Phil White decided to make things personal. When White learned that Larry Ellison enjoyed Japanese samurai culture, he commissioned a new billboard depicting the Oracle logo along with a broken samurai sword. The ad wasn’t even really aimed at Oracle as an entity, let alone the consuming public; it was a personal attack on Ellison. But perhaps White spent a little too much time worrying about the competition: while he was busy creating billboards, Informix imploded in a massive accounting scandal and White soon found himself in federal prison for securities fraud.

If you can’t beat a rival, it may be better to merge. I started Confinity with my co-founder Max Levchin in 1998. When we released the PayPal product in late 1999, Elon Musk’s X.com was right on our heels: our companies’ offices were four blocks apart on University Avenue in Palo Alto, and X’s product mirrored ours feature-for-feature. By late 1999, we were in all-out war. Many of us at PayPal logged 100-hour workweeks. No doubt that was counterproductive, but the focus wasn’t on objective productivity; the focus was defeating X.com. One of our engineers actually designed a bomb for this purpose; when he presented the schematic at a team meeting, calmer heads prevailed and the proposal was attributed to extreme sleep deprivation. But in February 2000, Elon and I were more scared about the rapidly inflating tech bubble than we were about each other: a financial crash would ruin us both before we could finish our fight. So in early March we met on neutral ground—a café almost exactly equidistant from our offices—and negotiated a 50-50 merger. De-escalating the rivalry post-merger wasn’t easy, but as far as problems go, it was a good one to have. As a unified team, we were able to ride out the dot-com crash and then build a successful business.

Sometimes you do have to fight. Where that’s true, you should fight and win. There is no middle ground: either don’t throw any punches, or strike hard and end it quickly. This advice can be hard to follow because pride and honor can get in the way. Hence Hamlet:

Exposing what is mortal and unsure
To all that fortune, death, and danger dare,
Even for an eggshell. Rightly to be great
Is not to stir without great argument,
But greatly to find quarrel in a straw
When honor’s at the stake.

For Hamlet, greatness means willingness to fight for reasons as thin as an eggshell: anyone would fight for things that matter; true heroes take their personal honor so seriously they will fight for things that don’t matter. This twisted logic is part of human nature, but it’s disastrous in business. If you can recognize competition as a destructive force instead of a sign of value, you’re already more sane than most. The next chapter is about how to use a clear head to build a monopoly business.

5 LAST MOVER ADVANTAGE

ESCAPING COMPETITION will give you a monopoly, but even a monopoly is only a great business if it can endure in the future. Compare the value of the New York Times Company with Twitter. Each employs a few thousand people, and each gives millions of people a way to get news. But when Twitter went public in 2013, it was valued at $24 billion—more than 12 times the Times’s market capitalization—even though the Times earned $133 million in 2012 while Twitter lost money. What explains the huge premium for Twitter?

The answer is cash flow. This sounds bizarre at first, since the Times was profitable while Twitter wasn’t. But a great business is defined by its ability to generate cash flows in the future. Investors expect Twitter will be able to capture monopoly profits over the next decade, while newspapers’ monopoly days are over. Simply stated, the value of a business today is the sum of all the money it will make in the future. (To properly value a business, you also have to discount those future cash flows to their present worth, since a given amount of money today is worth more than the same amount in the future.)
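A minimal sketch of that discounting logic in Python follows; the cash flow schedules and the 15% discount rate are invented for illustration, not figures from the book:

```python
# Present value: discount each future year's cash flow back to today and sum.
def present_value(cash_flows, rate):
    return sum(cf / (1 + rate) ** t
               for t, cf in enumerate(cash_flows, start=1))

old_economy = [100] * 10                       # flat cash flows, value arrives early
startup = [-50, -20, 10, 50, 150, 400, 900]    # losses now, compounding growth later

for name, flows in [("old economy", old_economy), ("startup", startup)]:
    print(name, round(present_value(flows, 0.15)))
```

On assumptions like these, most of the startup's present value sits in its later years, which is exactly the pattern described next.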
They often lose money for the first few years: it takes time to build valuable things, and that means delayed revenue. Most of a tech company’s value will come at least 10 to 15 years in the future. In March 2001, PayPal had yet to make a profit but our revenues were growing 100% year-over-year. When I projected our future cash flows, I found that 75% of the company’s present value would come from profits generated in 2011 and beyond—hard to believe for a company that had been in business for only 27 months. But even that turned out to be an underestimation. Today, PayPal continues to grow at about 15% annually, and the discount rate is lower than a decade ago. It now appears that most of the company’s value will come from 2020 and beyond. LinkedIn is another good example of a company whose value exists in the far future. As of early 2014, its market capitalization was $24.5 billion—very high for a company with less than $1 billion in revenue and only $21.6 million in net income for 2012. You might look at these numbers and conclude that investors have gone insane. But this valuation makes sense when you consider LinkedIn’s projected future cash flows. The overwhelming importance of future profits is counterintuitive even in Silicon Valley. For a company to be valuable it must grow and endure, but many entrepreneurs focus only on short-term growth. They have an excuse: growth is easy to measure, but durability isn’t. Those who succumb to measurement mania obsess about weekly active user statistics, monthly revenue targets, and quarterly earnings reports. However, you can hit those numbers and still overlook deeper, harder- to-measure problems that threaten the durability of your business. For example, rapid short-term growth at both Zynga and Groupon distracted managers and investors from long-term challenges. Zynga scored early wins with games like Farmville and claimed to have a “psychometric engine” to rigorously gauge the appeal of new releases. But they ended up with the same problem as every Hollywood studio: how can you reliably produce a constant stream of popular entertainment for a fickle audience? (Nobody knows.) Groupon posted fast growth as hundreds of thousands of local businesses tried their product. But persuading those businesses to become repeat customers was harder than they thought. If you focus on near-term growth above all else, you miss the most important question you should be asking: will this business still be around a decade from now? Numbers alone won’t tell you the answer; instead you must think critically about the qualitative characteristics of your business. CHARACTERISTICS OF MONOPOLY What does a company with large cash flows far into the future look like? Every monopoly is unique, but they usually share some combination of the following characteristics: proprietary technology, network effects, economies of scale, and branding. This isn’t a list of boxes to check as you build your business—there’s no shortcut to monopoly. However, analyzing your business according to these characteristics can help you think about how to make it durable. 1. Proprietary Technology Proprietary technology is the most substantive advantage a company can have because it makes your product difficult or impossible to replicate. Google’s search algorithms, for example, return results better than anyone else’s. Proprietary technologies for extremely short page load times and highly accurate query autocompletion add to the core search product’s robustness and defensibility. 
It would be very hard for anyone to do to Google what Google did to all the other search engine companies in the early 2000s. As a good rule of thumb, proprietary technology must be at least 10 times better than its closest substitute in some important dimension to lead to a real monopolistic advantage. Anything less than an order of magnitude better will probably be perceived as a marginal improvement and will be hard to sell, especially in an already crowded market.

The clearest way to make a 10x improvement is to invent something completely new. If you build something valuable where there was nothing before, the increase in value is theoretically infinite. A drug to safely eliminate the need for sleep, or a cure for baldness, for example, would certainly support a monopoly business.

Or you can radically improve an existing solution: once you’re 10x better, you escape competition. PayPal, for instance, made buying and selling on eBay at least 10 times better. Instead of mailing a check that would take 7 to 10 days to arrive, PayPal let buyers pay as soon as an auction ended. Sellers received their proceeds right away, and unlike with a check, they knew the funds were good.

Amazon made its first 10x improvement in a particularly visible way: they offered at least 10 times as many books as any other bookstore. When it launched in 1995, Amazon could claim to be “Earth’s largest bookstore” because, unlike a retail bookstore that might stock 100,000 books, Amazon didn’t need to physically store any inventory—it simply requested the title from its supplier whenever a customer made an order. This quantum improvement was so effective that a very unhappy Barnes & Noble filed a lawsuit three days before Amazon’s IPO, claiming that Amazon was unfairly calling itself a “bookstore” when really it was a “book broker.”

You can also make a 10x improvement through superior integrated design. Before 2010, tablet computing was so poor that for all practical purposes the market didn’t even exist. “Microsoft Windows XP Tablet PC Edition” products first shipped in 2002, and Nokia released its own “Internet Tablet” in 2005, but they were a pain to use. Then Apple released the iPad. Design improvements are hard to measure, but it seems clear that Apple improved on anything that had come before by at least an order of magnitude: tablets went from unusable to useful.

2. Network Effects

Network effects make a product more useful as more people use it. For example, if all your friends are on Facebook, it makes sense for you to join Facebook, too. Unilaterally choosing a different social network would only make you an eccentric.

Network effects can be powerful, but you’ll never reap them unless your product is valuable to its very first users when the network is necessarily small. For example, in 1960 a quixotic company called Xanadu set out to build a two-way communication network between all computers—a sort of early, synchronous version of the World Wide Web. After more than three decades of futile effort, Xanadu folded just as the web was becoming commonplace. Their technology probably would have worked at scale, but it could have worked only at scale: it required every computer to join the network at the same time, and that was never going to happen.

Paradoxically, then, network effects businesses must start with especially small markets. Facebook started with just Harvard students—Mark Zuckerberg’s first product was designed to get all his classmates signed up, not to attract all people of Earth.
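A toy model can make the paradox concrete. The sketch below is my own illustration, not Thiel’s: it assumes a population split into disjoint social circles and a made-up rule that a circle “tips” (everyone in it joins) once 20% of its members are already on the network. Under those assumptions, concentrating a handful of seed users in one circle wins the whole niche, while scattering the same number of seeds across many circles wins nothing.

```python
# Toy threshold model of network-effect adoption. Assumptions (mine, for
# illustration): the population is split into disjoint circles of 50
# people, and a circle "tips" (everyone joins) once 20% of its members
# are already on the network.

THRESHOLD = 0.2
CIRCLE_SIZE = 50
NUM_CIRCLES = 100

def total_adopters(seeds_per_circle):
    """Each circle either tips or stays stuck at its seed count."""
    total = 0
    for seeds in seeds_per_circle:
        if seeds / CIRCLE_SIZE >= THRESHOLD:
            total += CIRCLE_SIZE  # useful to its first users: circle tips
        else:
            total += seeds        # too sparse to be useful: growth stalls
    return total

population = CIRCLE_SIZE * NUM_CIRCLES

# The same ten seed users, deployed two ways:
concentrated = [10] + [0] * (NUM_CIRCLES - 1)    # all ten in one circle: 20%
scattered = [1] * 10 + [0] * (NUM_CIRCLES - 10)  # one in each of ten: 2%

print(f"concentrated: {total_adopters(concentrated)} of {population} join")
print(f"scattered:    {total_adopters(scattered)} of {population} join")
```

With these assumptions the concentrated seeding captures its entire circle (50 adopters) while the scattered seeding never grows past its 10 seeds. That is the sense in which a network product must be valuable to a small, dense first market before it can matter anywhere else.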
This is why successful network businesses rarely get started by MBA types: the initial markets are so small that they often don’t even appear to be business opportunities at all.

3. Economies of Scale

A monopoly business gets stronger as it gets bigger: the fixed costs of creating a product (engineering, management, office space) can be spread out over ever greater quantities of sales. Software startups can enjoy especially dramatic economies of scale because the marginal cost of producing another copy of the product is close to zero.

Many businesses gain only limited advantages as they grow to large scale. Service businesses especially are difficult to make monopolies. If you own a yoga studio, for example, you’ll only be able to serve a certain number of customers. You can hire more instructors and expand to more locations, but your margins will remain fairly low and you’ll never reach a point where a core group of talented people can provide something of value to millions of separate clients, as software engineers are able to do.

A good startup should have the potential for great scale built into its first design. Twitter already has more than 250 million users today. It doesn’t need to add too many customized features in order to acquire more, and there’s no inherent reason why it should ever stop growing.

4. Branding

A company has a monopoly on its own brand by definition, so creating a strong brand is a powerful way to claim a monopoly. Today’s strongest tech brand is Apple: the attractive looks and carefully chosen materials of products like the iPhone and MacBook, the Apple Stores’ sleek minimalist design and close control over the consumer experience, the omnipresent advertising campaigns, the price positioning as a maker of premium goods, and the lingering nimbus of Steve Jobs’s personal charisma all contribute to a perception that Apple offers products so good as to constitute a category of their own.

Many have tried to learn from Apple’s success: paid advertising, branded stores, luxurious materials, playful keynote speeches, high prices, and even minimalist design are all susceptible to imitation. But these techniques for polishing the surface don’t work without a strong underlying substance. Apple has a complex suite of proprietary technologies, both in hardware (like superior touchscreen materials) and software (like touchscreen interfaces purpose-designed for specific materials). It manufactures products at a scale large enough to dominate pricing for the materials it buys. And it enjoys strong network effects from its content ecosystem: thousands of developers write software for Apple devices because that’s where hundreds of millions of users are, and those users stay on the platform because it’s where the apps are. These other monopolistic advantages are less obvious than Apple’s sparkling brand, but they are the fundamentals that let the branding effectively reinforce Apple’s monopoly.

Beginning with brand rather than substance is dangerous. Ever since Marissa Mayer became CEO of Yahoo! in mid-2012, she has worked to revive the once-popular internet giant by making it cool again. In a single tweet, Yahoo! summarized Mayer’s plan as a chain reaction of “people then products then traffic then revenue.” The people are supposed to come for the coolness: Yahoo! demonstrated design awareness by overhauling its logo, it asserted youthful relevance by acquiring hot startups like Tumblr, and it has gained media attention for Mayer’s own star power.
But the big question is what products Yahoo! will actually create. When Steve Jobs returned to Apple, he didn’t just make Apple a cool place to work; he slashed product lines to focus on the handful of opportunities for 10x improvements. No technology company can be built on branding alone.

BUILDING A MONOPOLY

Brand, scale, network effects, and technology in some combination define a monopoly; but to get them to work, you need to choose your market carefully and expand deliberately.

Start Small and Monopolize

Every startup is small at the start. Every monopoly dominates a large share of its market. Therefore, every startup should start with a very small market. Always err on the side of starting too small. The reason is simple: it’s easier to dominate a small market than a large one. If you think your initial market might be too big, it almost certainly is.

Small doesn’t mean nonexistent. We made this mistake early on at PayPal. Our first product let people beam money to each other via PalmPilots. It was interesting technology and no one else was doing it. However, the world’s millions of PalmPilot users weren’t concentrated in a particular place, they had little in common, and they used their devices only episodically. Nobody needed our product, so we had no customers. With that lesson learned, we set our sights on eBay auctions, where we found our first success. In late 1999, eBay had a few thousand high-volume “PowerSellers,” and after only three months of dedicated effort, we were serving 25% of them. It was much easier to reach a few thousand people who really needed our product than to try to compete for the attention of millions of scattered individuals.

The perfect target market for a startup is a small group of particular people concentrated together and served by few or no competitors. Any big market is a bad choice, and a big market already served by competing companies is even worse. This is why it’s always a red flag when entrepreneurs talk about getting 1% of a $100 billion market. In practice, a large market will either lack a good starting point or it will be open to competition, so it’s hard to ever reach that 1%. And even if you do succeed in gaining a small foothold, you’ll have to be satisfied with keeping the lights on: cutthroat competition means your profits will be zero.

Scaling Up

Once you create and dominate a niche market, then you should gradually expand into related and slightly broader markets. Amazon shows how it can be done. Jeff Bezos’s founding vision was to dominate all of online retail, but he very deliberately started with books. There were millions of books to catalog, but they all had roughly the same shape, they were easy to ship, and some of the most rarely sold books—those least profitable for any retail store to keep in stock—also drew the most enthusiastic customers. Amazon became the dominant solution for anyone located far from a bookstore or seeking something unusual. Amazon then had two options: expand the number of people who read books, or expand to adjacent markets. They chose the latter, starting with the most similar markets: CDs, videos, and software. Amazon continued to add categories gradually until it had become the world’s general store. The name itself brilliantly encapsulated the company’s scaling strategy. The biodiversity of the Amazon rain forest reflected Amazon’s first goal of cataloging every book in the world, and now it stands for every kind of thing in the world, period.
eBay also started by dominating small niche markets. When it launched its auction marketplace in 1995, it didn’t need the whole world to adopt it at once; the product worked well for intense interest groups, like Beanie Baby obsessives. Once it monopolized the Beanie Baby trade, eBay didn’t jump straight to listing sports cars or industrial surplus: it continued to cater to small-time hobbyists until it became the most reliable marketplace for people trading online no matter what the item.

Sometimes there are hidden obstacles to scaling—a lesson that eBay has learned in recent years. Like all marketplaces, the auction marketplace lent itself to natural monopoly because buyers go where the sellers are and vice versa. But eBay found that the auction model works best for individually distinctive products like coins and stamps. It works less well for commodity products: people don’t want to bid on pencils or Kleenex, so it’s more convenient just to buy them from Amazon. eBay is still a valuable monopoly; it’s just smaller than people in 2004 expected it to be.

Sequencing markets correctly is underrated, and it takes discipline to expand gradually. The most successful companies make the core progression—to first dominate a specific niche and then scale to adjacent markets—a part of their founding narrative.

Don’t Disrupt

Silicon Valley has become obsessed with “disruption.” Originally, “disruption” was a term of art to describe how a firm can use new technology to introduce a low-end product at low prices, improve the product over time, and eventually overtake even the premium products offered by incumbent companies using older technology. This is roughly what happened when the advent of PCs disrupted the market for mainframe computers: at first PCs seemed irrelevant, then they became dominant. Today mobile devices may be doing the same thing to PCs.

However, disruption has recently transmogrified into a self-congratulatory buzzword for anything posing as trendy and new. This seemingly trivial fad matters because it distorts an entrepreneur’s self-understanding in an inherently competitive way. The concept was coined to describe threats to incumbent companies, so startups’ obsession with disruption means they see themselves through older firms’ eyes. If you think of yourself as an insurgent battling dark forces, it’s easy to become unduly fixated on the obstacles in your path. But if you truly want to make something new, the act of creation is far more important than the old industries that might not like what you create. Indeed, if your company can be summed up by its opposition to already existing firms, it can’t be completely new and it’s probably not going to become a monopoly.

Disruption also attracts attention: disruptors are people who look for trouble and find it. Disruptive kids get sent to the principal’s office. Disruptive companies often pick fights they can’t win. Think of Napster: the name itself meant trouble. What kinds of things can one “nap”? Music … Kids … and perhaps not much else. Shawn Fanning and Sean Parker, Napster’s then-teenage founders, credibly threatened to disrupt the powerful music recording industry in 1999. The next year, they made the cover of Time magazine. A year and a half after that, they ended up in bankruptcy court.

PayPal could be seen as disruptive, but we didn’t try to directly challenge any large competitor.
It’s true that we took some business away from Visa when we popularized internet payments: you might use PayPal to buy something online instead of using your Visa card to buy it in a store. But since we expanded the market for payments overall, we gave Visa far more business than we took. The overall dynamic was net positive, unlike Napster’s negative-sum struggle with the U.S. recording industry. As you craft a plan to expand to adjacent markets, don’t disrupt: avoid competition as much as possible.

THE LAST WILL BE FIRST

You’ve probably heard about “first mover advantage”: if you’re the first entrant into a market, you can capture significant market share while competitors scramble to get started. But moving first is a tactic, not a goal. What really matters is generating cash flows in the future, so being the first mover doesn’t do you any good if someone else comes along and unseats you. It’s much better to be the last mover—that is, to make the last great development in a specific market and enjoy years or even decades of monopoly profits. The way to do that is to dominate a small niche and scale up from there, toward your ambitious long-term vision. In this one particular at least, business is like chess. Grandmaster José Raúl Capablanca put it well: to succeed, “you must study the endgame before everything else.”

6 YOU ARE NOT A LOTTERY TICKET

THE MOST CONTENTIOUS question in business is whether success comes from luck or skill.

What do successful people say? Malcolm Gladwell, a successful author who writes about successful people, declares in Outliers that success results from a “patchwork of lucky breaks and arbitrary advantages.” Warren Buffett famously considers himself a “member of the lucky sperm club” and a winner of the “ovarian lottery.” Jeff Bezos attributes Amazon’s success to an “incredible planetary alignment” and jokes that it was “half luck, half good timing, and the rest brains.” Bill Gates even goes so far as to claim that he “was lucky to be born with certain skills,” though it’s not clear whether that’s actually possible.

Perhaps these guys are being strategically humble. However, the phenomenon of serial entrepreneurship would seem to call into question our tendency to explain success as the product of chance. Hundreds of people have started multiple multimillion-dollar businesses. A few, like Steve Jobs, Jack Dorsey, and Elon Musk, have created several multibillion-dollar companies. If success were mostly a matter of luck, these kinds of serial entrepreneurs probably wouldn’t exist.

In January 2013, Jack Dorsey, founder of Twitter and Square, tweeted to his 2 million followers: “Success is never accidental.” Most of the replies were unambiguously negative. Referencing the tweet in The Atlantic, reporter Alexis Madrigal wrote that his instinct was to reply: “ ‘Success is never accidental,’ said all multimillionaire white men.” It’s true that already successful people have an easier time doing new things, whether due to their networks, wealth, or experience. But perhaps we’ve become too quick to dismiss anyone who claims to have succeeded according to plan.

Is there a way to settle this debate objectively? Unfortunately not, because companies are not experiments. To get a scientific answer about Facebook, for example, we’d have to rewind to 2004, create 1,000 copies of the world, and start Facebook in each copy to see how many times it would succeed. But that experiment is impossible.
Every company starts in unique circumstances, and every company starts only once. Statistics doesn’t work when the sample size is one.

From the Renaissance and the Enlightenment to the mid-20th century, luck was something to be mastered, dominated, and controlled; everyone agreed that you should do what you could, not focus on what you couldn’t. Ralph Waldo Emerson captured this ethos when he wrote: “Shallow men believe in luck, believe in circumstances.… Strong men believe in cause and effect.” In 1912, after he became the first explorer to reach the South Pole, Roald Amundsen wrote: “Victory awaits him who has everything in order—luck, people call it.” No one pretended that misfortune didn’t exist, but prior generations believed in making their own luck by working hard.

If you believe your life is mainly a matter of chance, why read this book? Learning about startups is worthless if you’re just reading stories about people who won the lottery. Slot Machines for Dummies can purport to tell you which kind of rabbit’s foot to rub or how to tell which machines are “hot,” but it can’t tell you how to win.

Did Bill Gates simply win the intelligence lottery? Was Sheryl Sandberg born with a silver spoon, or did she “lean in”? When we debate historical questions like these, luck is in the past tense. Far more important are questions about the future: is it a matter of chance or design?

CAN YOU CONTROL YOUR FUTURE?

You can expect the future to take a definite form or you can treat it as hazily uncertain. If you treat the future as something definite, it makes sense to understand it in advance and to work to shape it. But if you expect an indefinite future ruled by randomness, you’ll give up on trying to master it.

Indefinite attitudes to the future explain what’s most dysfunctional in our world today. Process trumps substance: when people lack concrete plans to carry out, they use formal rules to assemble a portfolio of various options. This describes Americans today. In middle school, we’re encouraged to start hoarding “extracurricular activities.” In high school, ambitious students compete even harder to appear omnicompetent. By the time a student gets to college, he’s spent a decade curating a bewilderingly diverse résumé to prepare for a completely unknowable future. Come what may, he’s ready—for nothing in particular.

A definite view, by contrast, favors firm convictions. Instead of pursuing many-sided mediocrity and calling it “well-roundedness,” a definite person determines the one best thing to do and then does it. Instead of working tirelessly to make herself indistinguishable, she strives to be great at something substantive—to be a monopoly of one. This is not what young people do today, because everyone around them has long since lost faith in a definite world. No one gets into Stanford by excelling at just one thing, unless that thing happens to involve throwing or catching a leather ball.

You can also expect the future to be either better or worse than the present. Optimists welcome the future; pessimists fear it. Combining these possibilities yields four views:

Indefinite Pessimism

Every culture has a myth of decline from some golden age, and almost all peoples throughout history have been pessimists. Even today pessimism still dominates huge parts of the world. An indefinite pessimist looks out onto a bleak future, but he has no idea what to do about it. This describes Europe since the early 1970s, when the continent succumbed to undirected bureaucratic drift.
Today the whole Eurozone is in slow-motion crisis, and nobody is in charge. The European Central Bank doesn’t stand for anything but improvisation: the U.S. Treasury prints “In God We Trust” on the dollar; the ECB might as well print “Kick the Can Down the Road” on the euro. Europeans just react to events as they happen and hope things don’t get worse. The indefinite pessimist can’t know whether the inevitable decline will be fast or slow, catastrophic or gradual. All he can do is wait for it to happen, so he might as well eat, drink, and be merry in the meantime: hence Europe’s famous vacation mania.

Definite Pessimism

A definite pessimist believes the future can be known, but since it will be bleak, he must prepare for it. Perhaps surprisingly, China is probably the most definitely pessimistic place in the world today. When Americans see the Chinese economy grow ferociously fast (10% per year since 2000), we imagine a confident country mastering its future. But that’s because Americans are still optimists, and we project our optimism onto China. From China’s viewpoint, economic growth cannot come fast enough. Every other country is afraid that China is going to take over the world; China is the only country afraid that it won’t.

China can grow so fast only because its starting base is so low. The easiest way for China to grow is to relentlessly copy what has already worked in the West. And that’s exactly what it’s doing: executing definite plans by burning ever more coal to build ever more factories and skyscrapers. But with a huge population pushing resource prices higher, there’s no way Chinese living standards can ever actually catch up to those of the richest countries, and the Chinese know it.

This is why the Chinese leadership is obsessed with the way in which things threaten to get worse. Every senior Chinese leader experienced famine as a child, so when the Politburo looks to the future, disaster is not an abstraction. The Chinese public, too, knows that winter is coming. Outsiders are fascinated by the great fortunes being made inside China, but they pay less attention to the wealthy Chinese trying hard to get their money out of the country. Poorer Chinese just save everything they can and hope it will be enough. Every class of people in China takes the future deadly seriously.

Definite Optimism

To a definite optimist, the future will be better than the present if he plans and works to make it better. From the 17th century through the 1950s and ’60s, definite optimists led the Western world. Scientists, engineers, doctors, and businessmen made the world richer, healthier, and more long-lived than previously imaginable. As Karl Marx and Friedrich Engels saw clearly, the 19th-century business class created

    more massive and more colossal productive forces than all preceding generations together. Subjection of Nature’s forces to man, machinery, application of chemistry to industry and agriculture, steam-navigation, railways, electric telegraphs, clearing of whole continents for cultivation, canalisation of rivers, whole populations conjured out of the ground—what earlier century had even a presentiment that such productive forces slumbered in the lap of social labor?

Each generation’s inventors and visionaries surpassed their predecessors. In 1843, the London public was invited to make its first crossing underneath the River Thames by a newly dug tunnel. In 1869, the Suez Canal saved Eurasian shipping traffic from rounding the Cape of Good Hope.
In 1914 the Panama Canal cut short the route from Atlantic to Pacific. Even the Great Depression failed to impede relentless progress in the United States, which has always been home to the world’s most far-seeing definite optimists. The Empire State Building was started in 1929 and finished in 1931. The Golden Gate Bridge was started in 1933 and completed in 1937. The Manhattan Project was started in 1941 and had already produced the world’s first nuclear bomb by 1945. Americans continued to remake the face of the world in peacetime: the Interstate Highway System began construction in 1956, and the first 20,000 miles of road were open for driving by 1965. Definite planning even went beyond the surface of this planet: NASA’s Apollo Program began in 1961 and put 12 men on the moon before it finished in 1972.

Bold plans were not reserved just for political leaders or government scientists. In the late 1940s, a Californian named John Reber set out to reinvent the physical geography of the whole San Francisco Bay Area. Reber was a schoolteacher, an amateur theater producer, and a self-taught engineer. Undaunted by his lack of credentials, he publicly proposed to build two huge dams in the Bay, construct massive freshwater lakes for drinking water and irrigation, and reclaim 20,000 acres of land for development. Even though he had no personal authority, people took the Reber Plan seriously. It was endorsed by newspaper editorial boards across California. The U.S. Congress held hearings on its feasibility. The Army Corps of Engineers even constructed a 1.5-acre scale model of the Bay in a cavernous Sausalito warehouse to simulate it. These tests revealed technical shortcomings, so the plan wasn’t executed.

But would anybody today take such a vision seriously in the first place? In the 1950s, people welcomed big plans and asked whether they would work. Today a grand plan coming from a schoolteacher would be dismissed as crankery, and a long-range vision coming from anyone more powerful would be derided as hubris. You can still visit the Bay Model in that Sausalito warehouse, but today it’s just a tourist attraction: big plans for the future have become archaic curiosities. In the 1950s, Americans thought big plans for the future were too important to be left to experts.

Indefinite Optimism

After a brief pessimistic phase in the 1970s, indefinite optimism has dominated American thinking ever since 1982, when a long bull market began and finance eclipsed engineering as the way to approach the future. To an indefinite optimist, the future will be better, but he doesn’t know how exactly, so he won’t make any specific plans. He expects to profit from the future but sees no reason to design it concretely.

Instead of working for years to build a new product, indefinite optimists rearrange already-invented ones. Bankers make money by rearranging the capital structures of already existing companies. Lawyers resolve disputes over old things or help other people structure their affairs. And private equity investors and management consultants don’t start new businesses; they squeeze extra efficiency from old ones with incessant procedural optimizations. It’s no surprise that these fields all attract disproportionate numbers of high-achieving Ivy League optionality chasers; what could be a more appropriate reward for two decades of résumé-building than a seemingly elite, process-oriented career that promises to “keep options open”? Recent graduates’ parents often cheer them on the established path.
The strange history of the Baby Boom produced a generation of indefinite optimists so used to effortless progress that they feel entitled to it. Whether you were born in 1945 or 1950 or 1955, things got better every year for the first 18 years of your life, and it had nothing to do with you. Technological advance seemed to accelerate automatically, so the Boomers grew up with great expectations but few specific plans for how to fulfill them. Then, when technological progress stalled in the 1970s, increasing income inequality came to the rescue of the most elite Boomers. Every year of adulthood continued to get automatically better and better for the rich and successful. The rest of their generation was left behind, but the wealthy Boomers who shape public opinion today see little reason to question their naïve optimism. Since tracked careers worked for them, they can’t imagine that they won’t work for their kids, too.

Malcolm Gladwell says you can’t understand Bill Gates’s success without understanding his fortunate personal context: he grew up in a good family, went to a private school equipped with a computer lab, and counted Paul Allen as a childhood friend. But perhaps you can’t understand Malcolm Gladwell without understanding his historical context as a Boomer (born in 1963). When Baby Boomers grow up and write books to explain why one or another individual is successful, they point to the power of a particular individual’s context as determined by chance. But they miss the even bigger social context for their own preferred explanations: a whole generation learned from childhood to overrate the power of chance and underrate the importance of planning. Gladwell at first appears to be making a contrarian critique of the myth of the self-made businessman, but actually his own account encapsulates the conventional view of a generation.

OUR INDEFINITELY OPTIMISTIC WORLD

Indefinite Finance

While a definitely optimistic future would need engineers to design underwater cities and settlements in space, an indefinitely optimistic future calls for more bankers and lawyers. Finance epitomizes indefinite thinking because it’s the only way to make money when you have no idea how to create wealth. If they don’t go to law school, bright college graduates head to Wall Street precisely because they have no real plan for their careers. And once they arrive at Goldman, they find that even inside finance, everything is indefinite. It’s still optimistic—you wouldn’t play in the markets if you expected to lose—but the fundamental tenet is that the market is random; you can’t know anything specific or substantive; diversification becomes supremely important.

The indefiniteness of finance can be bizarre. Think about what happens when successful entrepreneurs sell their company. What do they do with the money? In a financialized world, it unfolds like this:

- The founders don’t know what to do with it, so they give it to a large bank.
- The bankers don’t know what to do with it, so they diversify by spreading it across a portfolio of institutional investors.
- Institutional investors don’t know what to do with their managed capital, so they diversify by amassing a portfolio of stocks.
- Companies try to increase their share price by generating free cash flows. If they do, they issue dividends or buy back shares and the cycle repeats.

At no point does anyone in the chain know what to do with money in the real economy.
But in an indefinite world, people actually prefer unlimited optionality; money is more valuable than anything you could possibly do with it. Only in a definite future is money a means to an end, not the end itself.

Indefinite Politics

Politicians have always been officially accountable to the public at election time, but today they are attuned to what the public thinks at every moment. Modern polling enables politicians to tailor their image to match preexisting public opinion exactly, so for the most part, they do. Nate Silver’s election predictions are remarkably accurate, but even more remarkable is how big a story they become every four years. We are more fascinated today by statistical predictions of what the country will be thinking in a few weeks’ time than by visionary predictions of what the country will look like 10 or 20 years from now.

And it’s not just the electoral process—the very character of government has become indefinite, too. The government used to be able to coordinate complex solutions to problems like atomic weaponry and lunar exploration. But today, after 40 years of indefinite creep, the government mainly just provides insurance; our solutions to big problems are Medicare, Social Security, and a dizzying array of other transfer payment programs. It’s no surprise that entitlement spending has eclipsed discretionary spending every year since 1975. To increase discretionary spending we’d need definite plans to solve specific problems. But according to the indefinite logic of entitlement spending, we can make things better just by sending out more checks.

Indefinite Philosophy

You can see the shift to an indefinite attitude not just in politics but in the political philosophers whose ideas underpin both left and right. The philosophy of the ancient world was pessimistic: Plato, Aristotle, Epicurus, and Lucretius all accepted strict limits on human potential. The only question was how best to cope with our tragic fate. Modern philosophers have been mostly optimistic. From Herbert Spencer on the right and Hegel in the center to Marx on the left, the 19th century shared a belief in progress. (Remember Marx and Engels’s encomium to the technological triumphs of capitalism, quoted earlier.) These thinkers expected material advances to fundamentally change human life for the better: they were definite optimists.

In the late 20th century, indefinite philosophies came to the fore. The two dominant political thinkers, John Rawls and Robert Nozick, are usually seen as stark opposites: on the egalitarian left, Rawls was concerned with questions of fairness and distribution; on the libertarian right, Nozick focused on maximizing individual freedom. They both believed that people could get along with each other peacefully, so unlike the ancients, they were optimistic. But unlike Spencer or Marx, Rawls and Nozick were indefinite optimists: they didn’t have any specific vision of the future.

Their indefiniteness took different forms. Rawls begins A Theory of Justice with the famous “veil of ignorance”: fair political reasoning is supposed to be impossible for anyone with knowledge of the world as it concretely exists. Instead of trying to change our actual world of unique people and real technologies, Rawls fantasized about an “inherently stable” society with lots of fairness but little dynamism. Nozick opposed Rawls’s “patterned” concept of justice. To Nozick, any voluntary exchange must be allowed, and no social pattern could be noble enough to justify maintenance by coercion.
He didn’t have any more concrete ideas about the good society than Rawls: both of them focused on process. Today, we exaggerate the differences between left-liberal egalitarianism and libertarian individualism because almost everyone shares their common indefinite attitude. In philosophy, politics, and business, too, arguing over process has become a way to endlessly defer making concrete plans for a better future.

Indefinite Life

Our ancestors sought to understand and extend the human lifespan. In the 16th century, conquistadors searched the jungles of Florida for a Fountain of Youth. Francis Bacon wrote that “the prolongation of life” should be considered its own branch of medicine—and the noblest. In the 1660s, Robert Boyle placed life extension (along with “the Recovery of Youth”) atop his famous wish list for the future of science. Whether through geographic exploration or laboratory research, the best minds of the Renaissance thought of death as something to defeat. (Some resisters were killed in action: Bacon caught pneumonia and died in 1626 while experimenting to see if he could extend a chicken’s life by freezing it in the snow.)

We haven’t yet uncovered the secrets of life, but insurers and statisticians in the 19th century successfully revealed a secret about death that still governs our thinking today: they discovered how to reduce it to a mathematical probability. “Life tables” tell us our chances of dying in any given year, something previous generations didn’t know. However, in exchange for better insurance contracts, we seem to have given up the search for secrets about longevity. Systematic knowledge of the current range of human lifespans has made that range seem natural. Today our society is permeated by the twin ideas that death is both inevitable and random.

Meanwhile, probabilistic attitudes have come to shape the agenda of biology itself. In 1928, Scottish scientist Alexander Fleming found that a mysterious antibacterial fungus had grown on a petri dish he’d forgotten to prepare: he had discovered penicillin.
