
Deep Work: Rules for Focused Success in a Distracted World




In accordance with the U.S. Copyright Act of 1976, the scanning, uploading, and electronic sharing of any part of this book without the permission of the publisher constitute unlawful piracy and theft of the author’s intellectual property. If you would like to use material from the book (other than for review purposes), prior written permission must be obtained by contacting the publisher at [email protected]. Thank you for your support of the author’s rights.

Introduction

In the Swiss canton of St. Gallen, near the northern banks of Lake Zurich, is a village named Bollingen. In 1922, the psychiatrist Carl Jung chose this spot to begin building a retreat. He began with a basic two-story stone house he called the Tower. After returning from a trip to India, where he observed the practice of adding meditation rooms to homes, he expanded the complex to include a private office. “In my retiring room I am by myself,” Jung said of the space. “I keep the key with me all the time; no one else is allowed in there except with my permission.”

In his book Daily Rituals, journalist Mason Currey sorted through various sources on Jung to re-create the psychiatrist’s work habits at the Tower. Jung would rise at seven a.m., Currey reports, and after a big breakfast he would spend two hours of undistracted writing time in his private office. His afternoons would often consist of meditation or long walks in the surrounding countryside. There was no electricity at the Tower, so as day gave way to night, light came from oil lamps and heat from the fireplace. Jung would retire to bed by ten p.m. “The feeling of repose and renewal that I had in this tower was intense from the start,” he said.

Though it’s tempting to think of Bollingen Tower as a vacation home, if we put it into the context of Jung’s career at this point it’s clear that the lakeside retreat was not built as an escape from work. In 1922, when Jung bought the property, he could not afford to take a vacation. Only one year earlier, in 1921, he had published Psychological Types, a seminal book that solidified many differences that had been long developing between Jung’s thinking and the ideas of his onetime friend and mentor, Sigmund Freud. To disagree with Freud in the 1920s was a bold move. To back up his book, Jung needed to stay sharp and produce a stream of smart articles and books further supporting and establishing analytical psychology, the eventual name for his new school of thought.

Jung’s lectures and counseling practice kept him busy in Zurich—this is clear. But he wasn’t satisfied with busyness alone. He wanted to change the way we understood the unconscious, and this goal required deeper, more careful thought than he could manage amid his hectic city lifestyle. Jung retreated to Bollingen, not to escape his professional life, but instead to advance it.

Carl Jung went on to become one of the most influential thinkers of the twentieth century. There are, of course, many reasons for his eventual success. In this book, however, I’m interested in his commitment to the following skill, which almost certainly played a key role in his accomplishments:

Deep Work: Professional activities performed in a state of distraction-free concentration that push your cognitive capabilities to their limit. These efforts create new value, improve your skill, and are hard to replicate.

Deep work is necessary to wring every last drop of value out of your current intellectual capacity.
We now know from decades of research in both psychology and neuroscience that the state of mental strain that accompanies deep work is also necessary to improve your abilities. Deep work, in other words, was exactly the type of effort needed to stand out in a cognitively demanding field like academic psychiatry in the early twentieth century. The term “deep work” is my own and is not something Carl Jung would have used, but his actions during this period were those of someone who understood the underlying concept. Jung built a tower out of stone in the woods to promote deep work in his professional life—a task that required time, energy, and money. It also took him away from more immediate pursuits. As Mason Currey writes, Jung’s regular journeys to Bollingen reduced the time he spent on his clinical work, noting, “Although he had many patients who relied on him, Jung was not shy about taking time off.” Deep work, though a burden to prioritize, was crucial for his goal of changing the world. Indeed, if you study the lives of other influential figures from both distant and recent history, you’ll find that a commitment to deep work is a common theme. The sixteenth-century essayist Michel de Montaigne, for example, prefigured Jung by working in a private library he built in the southern tower guarding the stone walls of his French château, while Mark Twain wrote much of The Adventures of Tom Sawyer in a shed on the property of the Quarry Farm in New York, where he was spending the summer. Twain’s study was so isolated from the main house that his family took to blowing a horn to attract his attention for meals. Moving forward in history, consider the screenwriter and director Woody Allen. In the forty-four-year period between 1969 and 2013, Woody Allen wrote and directed forty-four films that received twenty-three Academy Award nominations—an absurd rate of artistic productivity. Throughout this period, Allen never owned a computer, instead completing all his writing, free from electronic distraction, on a German Olympia SM3 manual typewriter. Allen is joined in his rejection of computers by Peter Higgs, a theoretical physicist who performs his work in such disconnected isolation that journalists couldn’t find him after it was announced he had won the Nobel Prize. J.K. Rowling, on the other hand, does use a computer, but was famously absent from social media during the writing of her Harry Potter novels—even though this period coincided with the rise of the technology and its popularity among media figures. Rowling’s staff finally started a Twitter account in her name in the fall of 2009, as she was working on The Casual Vacancy, and for the first year and a half her only tweet read: “This is the real me, but you won’t be hearing from me often I am afraid, as pen and paper is my priority at the moment.” Deep work, of course, is not limited to the historical or technophobic. Microsoft CEO Bill Gates famously conducted “Think Weeks” twice a year, during which he would isolate himself (often in a lakeside cottage) to do nothing but read and think big thoughts. It was during a 1995 Think Week that Gates wrote his famous “Internet Tidal Wave” memo that turned Microsoft’s attention to an upstart company called Netscape Communications. 
And in an ironic twist, Neal Stephenson, the acclaimed cyberpunk author who helped form our popular conception of the Internet age, is near impossible to reach electronically—his website offers no e-mail address and features an essay about why he is purposefully bad at using social media. Here’s how he once explained the omission: “If I organize my life in such a way that I get lots of long, consecutive, uninterrupted time-chunks, I can write novels. [If I instead get interrupted a lot] what replaces it? Instead of a novel that will be around for a long time… there is a bunch of e-mail messages that I have sent out to individual persons.” The ubiquity of deep work among influential individuals is important to emphasize because it stands in sharp contrast to the behavior of most modern knowledge workers —a group that’s rapidly forgetting the value of going deep. The reason knowledge workers are losing their familiarity with deep work is well established: network tools. This is a broad category that captures communication services like e-mail and SMS, social media networks like Twitter and Facebook, and the shiny tangle of infotainment sites like BuzzFeed and Reddit. In aggregate, the rise of these tools, combined with ubiquitous access to them through smartphones and networked office computers, has fragmented most knowledge workers’ attention into slivers. A 2012 McKinsey study found that the average knowledge worker now spends more than 60 percent of the workweek engaged in electronic communication and Internet searching, with close to 30 percent of a worker’s time dedicated to reading and answering e-mail alone. This state of fragmented attention cannot accommodate deep work, which requires long periods of uninterrupted thinking. At the same time, however, modern knowledge workers are not loafing. In fact, they report that they are as busy as ever. What explains the discrepancy? A lot can be explained by another type of effort, which provides a counterpart to the idea of deep work: Shallow Work: Noncognitively demanding, logistical-style tasks, often performed while distracted. These efforts tend to not create much new value in the world and are easy to replicate. In an age of network tools, in other words, knowledge workers increasingly replace deep work with the shallow alternative—constantly sending and receiving e- mail messages like human network routers, with frequent breaks for quick hits of distraction. Larger efforts that would be well served by deep thinking, such as forming a new business strategy or writing an important grant application, get fragmented into distracted dashes that produce muted quality. To make matters worse for depth, there’s increasing evidence that this shift toward the shallow is not a choice that can be easily reversed. Spend enough time in a state of frenetic shallowness and you permanently reduce your capacity to perform deep work. “What the Net seems to be doing is chipping away my capacity for concentration and contemplation,” admitted journalist Nicholas Carr, in an oft-cited 2008 Atlantic article. “[And] I’m not the only one.” Carr expanded this argument into a book, The Shallows, which became a finalist for the Pulitzer Prize. To write The Shallows, appropriately enough, Carr had to move to a cabin and forcibly disconnect. The idea that network tools are pushing our work from the deep toward the shallow is not new. The Shallows was just the first in a series of recent books to examine the Internet’s effect on our brains and work habits. 
These subsequent titles include William Powers’s Hamlet’s BlackBerry, John Freeman’s The Tyranny of E-mail, and Alex Soojung-Kin Pang’s The Distraction Addiction—all of which agree, more or less, that network tools are distracting us from work that requires unbroken concentration, while simultaneously degrading our capacity to remain focused. Given this existing body of evidence, I will not spend more time in this book trying to establish this point. We can, I hope, stipulate that network tools negatively impact deep work. I’ll also sidestep any grand arguments about the long-term societal consequence of this shift, as such arguments tend to open impassible rifts. On one side of the debate are techno-skeptics like Jaron Lanier and John Freeman, who suspect that many of these tools, at least in their current state, damage society, while on the other side techno-optimists like Clive Thompson argue that they’re changing society, for sure, but in ways that’ll make us better off. Google, for example, might reduce our memory, but we no longer need good memories, as in the moment we can now search for anything we need to know. I have no stance in this philosophical debate. My interest in this matter instead veers toward a thesis of much more pragmatic and individualized interest: Our work culture’s shift toward the shallow (whether you think it’s philosophically good or bad) is exposing a massive economic and personal opportunity for the few who recognize the potential of resisting this trend and prioritizing depth—an opportunity that, not too long ago, was leveraged by a bored young consultant from Virginia named Jason Benn. There are many ways to discover that you’re not valuable in our economy. For Jason Benn the lesson was made clear when he realized, not long after taking a job as a financial consultant, that the vast majority of his work responsibilities could be automated by a “kludged together” Excel script. The firm that hired Benn produced reports for banks involved in complex deals. (“It was about as interesting as it sounds,” Benn joked in one of our interviews.) The report creation process required hours of manual manipulation of data in a series of Excel spreadsheets. When he first arrived, it took Benn up to six hours per report to finish this stage (the most efficient veterans at the firm could complete this task in around half the time). This didn’t sit well with Benn. “The way it was taught to me, the process seemed clunky and manually intensive,” Benn recalls. He knew that Excel has a feature called macros that allows users to automate common tasks. Benn read articles on the topic and soon put together a new worksheet, wired up with a series of these macros that could take the six-hour process of manual data manipulation and replace it, essentially, with a button click. A report- writing process that originally took him a full workday could now be reduced to less than an hour. Benn is a smart guy. He graduated from an elite college (the University of Virginia) with a degree in economics, and like many in his situation he had ambitions for his career. It didn’t take him long to realize that these ambitions would be thwarted so long as his main professional skills could be captured in an Excel macro. He decided, therefore, he needed to increase his value to the world. After a period of research, Benn reached a conclusion: He would, he declared to his family, quit his job as a human spreadsheet and become a computer programmer. 
As is often the case with such grand plans, however, there was a hitch: Jason Benn had no idea how to write code. As a computer scientist I can confirm an obvious point: Programming computers is hard. Most new developers dedicate a four-year college education to learning the ropes before their first job—and even then, competition for the best spots is fierce. Jason Benn didn’t have this time. After his Excel epiphany, he quit his job at the financial firm and moved home to prepare for his next step. His parents were happy he had a plan, but they weren’t happy about the idea that this return home might be long- term. Benn needed to learn a hard skill, and needed to do so fast. It’s here that Benn ran into the same problem that holds back many knowledge workers from navigating into more explosive career trajectories. Learning something complex like computer programming requires intense uninterrupted concentration on cognitively demanding concepts—the type of concentration that drove Carl Jung to the woods surrounding Lake Zurich. This task, in other words, is an act of deep work. Most knowledge workers, however, as I argued earlier in this introduction, have lost their ability to perform deep work. Benn was no exception to this trend. “I was always getting on the Internet and checking my e-mail; I couldn’t stop myself; it was a compulsion,” Benn said, describing himself during the period leading up to his quitting his finance job. To emphasize his difficulty with depth, Benn told me about a project that a supervisor at the finance firm once brought to him. “They wanted me to write a business plan,” he explained. Benn didn’t know how to write a business plan, so he decided he would find and read five different existing plans—comparing and contrasting them to understand what was needed. This was a good idea, but Benn had a problem: “I couldn’t stay focused.” There were days during this period, he now admits, when he spent almost every minute (“98 percent of my time”) surfing the Web. The business plan project—a chance to distinguish himself early in his career—fell to the wayside. By the time he quit, Benn was well aware of his difficulties with deep work, so when he dedicated himself to learning how to code, he knew he had to simultaneously teach his mind how to go deep. His method was drastic but effective. “I locked myself in a room with no computer: just textbooks, notecards, and a highlighter.” He would highlight the computer programming textbooks, transfer the ideas to notecards, and then practice them out loud. These periods free from electronic distraction were hard at first, but Benn gave himself no other option: He had to learn this material, and he made sure there was nothing in that room to distract him. Over time, however, he got better at concentrating, eventually getting to a point where he was regularly clocking five or more disconnected hours per day in the room, focused without distraction on learning this hard new skill. “I probably read something like eighteen books on the topic by the time I was done,” he recalls. After two months locked away studying, Benn attended the notoriously difficult Dev Bootcamp: a hundred-hour-a-week crash course in Web application programming. (While researching the program, Benn found a student with a PhD from Princeton who had described Dev as “the hardest thing I’ve ever done in my life.”) Given both his preparation and his newly honed ability for deep work, Benn excelled. “Some people show up not prepared,” he said. “They can’t focus. 
They can’t learn quickly.” Only half the students who started the program with Benn ended up graduating on time. Benn not only graduated, but was also the top student in his class. The deep work paid off. Benn quickly landed a job as a developer at a San Francisco tech start-up with $25 million in venture funding and its pick of employees. When Benn quit his job as a financial consultant, only half a year earlier, he was making $40,000 a year. His new job as a computer developer paid $100,000—an amount that can continue to grow, essentially without limit in the Silicon Valley market, along with his skill level. When I last spoke with Benn, he was thriving in his new position. A newfound devotee of deep work, he rented an apartment across the street from his office, allowing him to show up early in the morning before anyone else arrived and work without distraction. “On good days, I can get in four hours of focus before the first meeting,” he told me. “Then maybe another three to four hours in the afternoon. And I do mean ‘focus’: no e-mail, no Hacker News [a website popular among tech types], just programming.” For someone who admitted to sometimes spending up to 98 percent of his day in his old job surfing the Web, Jason Benn’s transformation is nothing short of astonishing. Jason Benn’s story highlights a crucial lesson: Deep work is not some nostalgic affectation of writers and early-twentieth-century philosophers. It’s instead a skill that has great value today. There are two reasons for this value. The first has to do with learning. We have an information economy that’s dependent on complex systems that change rapidly. Some of the computer languages Benn learned, for example, didn’t exist ten years ago and will likely be outdated ten years from now. Similarly, someone coming up in the field of marketing in the 1990s probably had no idea that today they’d need to master digital analytics. To remain valuable in our economy, therefore, you must master the art of quickly learning complicated things. This task requires deep work. If you don’t cultivate this ability, you’re likely to fall behind as technology advances. The second reason that deep work is valuable is because the impacts of the digital network revolution cut both ways. If you can create something useful, its reachable audience (e.g., employers or customers) is essentially limitless—which greatly magnifies your reward. On the other hand, if what you’re producing is mediocre, then you’re in trouble, as it’s too easy for your audience to find a better alternative online. Whether you’re a computer programmer, writer, marketer, consultant, or entrepreneur, your situation has become similar to Jung trying to outwit Freud, or Jason Benn trying to hold his own in a hot start-up: To succeed you have to produce the absolute best stuff you’re capable of producing—a task that requires depth. The growing necessity of deep work is new. In an industrial economy, there was a small skilled labor and professional class for which deep work was crucial, but most workers could do just fine without ever cultivating an ability to concentrate without distraction. They were paid to crank widgets—and not much about their job would change in the decades they kept it. But as we shift to an information economy, more and more of our population are knowledge workers, and deep work is becoming a key currency—even if most haven’t yet recognized this reality. Deep work is not, in other words, an old-fashioned skill falling into irrelevance. 
It’s instead a crucial ability for anyone looking to move ahead in a globally competitive information economy that tends to chew up and spit out those who aren’t earning their keep. The real rewards are reserved not for those who are comfortable using Facebook (a shallow task, easily replicated), but instead for those who are comfortable building the innovative distributed systems that run the service (a decidedly deep task, hard to replicate). Deep work is so important that we might consider it, to use the phrasing of business writer Eric Barker, “the superpower of the 21st century.” We have now seen two strands of thought—one about the increasing scarcity of deep work and the other about its increasing value—which we can combine into the idea that provides the foundation for everything that follows in this book: The Deep Work Hypothesis: The ability to perform deep work is becoming increasingly rare at exactly the same time it is becoming increasingly valuable in our economy. As a consequence, the few who cultivate this skill, and then make it the core of their working life, will thrive. This book has two goals, pursued in two parts. The first, tackled in Part 1, is to convince you that the deep work hypothesis is true. The second, tackled in Part 2, is to teach you how to take advantage of this reality by training your brain and transforming your work habits to place deep work at the core of your professional life. Before diving into these details, however, I’ll take a moment to explain how I became such a devotee of depth. I’ve spent the past decade cultivating my own ability to concentrate on hard things. To understand the origins of this interest, it helps to know that I’m a theoretical computer scientist who performed my doctoral training in MIT’s famed Theory of Computation group—a professional setting where the ability to focus is considered a crucial occupational skill. During these years, I shared a graduate student office down the hall from a MacArthur “genius grant” winner—a professor who was hired at MIT before he was old enough to legally drink. It wasn’t uncommon to find this theoretician sitting in the common space, staring at markings on a whiteboard, with a group of visiting scholars arrayed around him, also sitting quietly and staring. This could go on for hours. I’d go to lunch; I’d come back—still staring. This particular professor is hard to reach. He’s not on Twitter and if he doesn’t know you, he’s unlikely to respond to your e-mail. Last year he published sixteen papers. This type of fierce concentration permeated the atmosphere during my student years. Not surprisingly, I soon developed a similar commitment to depth. To the chagrin of both my friends and the various publicists I’ve worked with on my books, I’ve never had a Facebook or Twitter account, or any other social media presence outside of a blog. I don’t Web surf and get most of my news from my home-delivered Washington Post and NPR. I’m also generally hard to reach: My author website doesn’t provide a personal e-mail address, and I didn’t own my first smartphone until 2012 (when my pregnant wife gave me an ultimatum—“you have to have a phone that works before our son is born”). On the other hand, my commitment to depth has rewarded me. In the ten-year period following my college graduation, I published four books, earned a PhD, wrote peer-reviewed academic papers at a high rate, and was hired as a tenure-track professor at Georgetown University. 
I maintained this voluminous production while rarely working past five or six p.m. during the workweek. This compressed schedule is possible because I’ve invested significant effort to minimize the shallow in my life while making sure I get the most out of the time this frees up. I build my days around a core of carefully chosen deep work, with the shallow activities I absolutely cannot avoid batched into smaller bursts at the peripheries of my schedule. Three to four hours a day, five days a week, of uninterrupted and carefully directed concentration, it turns out, can produce a lot of valuable output. My commitment to depth has also returned nonprofessional benefits. For the most part, I don’t touch a computer between the time when I get home from work and the next morning when the new workday begins (the main exception being blog posts, which I like to write after my kids go to bed). This ability to fully disconnect, as opposed to the more standard practice of sneaking in a few quick work e-mail checks, or giving in to frequent surveys of social media sites, allows me to be present with my wife and two sons in the evenings, and read a surprising number of books for a busy father of two. More generally, the lack of distraction in my life tones down that background hum of nervous mental energy that seems to increasingly pervade people’s daily lives. I’m comfortable being bored, and this can be a surprisingly rewarding skill—especially on a lazy D.C. summer night listening to a Nationals game slowly unfold on the radio. This book is best described as an attempt to formalize and explain my attraction to depth over shallowness, and to detail the types of strategies that have helped me act on this attraction. I’ve committed this thinking to words, in part, to help you follow my lead in rebuilding your life around deep work—but this isn’t the whole story. My other interest in distilling and clarifying these thoughts is to further develop my own practice. My recognition of the deep work hypothesis has helped me thrive, but I’m convinced that I haven’t yet reached my full value-producing potential. As you struggle and ultimately triumph with the ideas and rules in the chapters ahead, you can be assured that I’m following suit—ruthlessly culling the shallow and painstakingly cultivating the intensity of my depth. (You’ll learn how I fare in this book’s conclusion.) When Carl Jung wanted to revolutionize the field of psychiatry, he built a retreat in the woods. Jung’s Bollingen Tower became a place where he could maintain his ability to think deeply and then apply the skill to produce work of such stunning originality that it changed the world. In the pages ahead, I’ll try to convince you to join me in the effort to build our own personal Bollingen Towers; to cultivate an ability to produce real value in an increasingly distracted world; and to recognize a truth embraced by the most productive and important personalities of generations past: A deep life is a good life. PART 1 The Idea Chapter One Deep Work Is Valuable As Election Day loomed in 2012, traffic at the New York Times website spiked, as is normal during moments of national importance. But this time, something was different. A wildly disproportionate fraction of this traffic—more than 70 percent by some reports—was visiting a single location in the sprawling domain. 
It wasn’t a front-page breaking news story, and it wasn’t commentary from one of the paper’s Pulitzer Prize– winning columnists; it was instead a blog run by a baseball stats geek turned election forecaster named Nate Silver. Less than a year later, ESPN and ABC News lured Silver away from the Times (which tried to retain him by promising a staff of up to a dozen writers) in a major deal that would give Silver’s operation a role in everything from sports to weather to network news segments to, improbably enough, Academy Awards telecasts. Though there’s debate about the methodological rigor of Silver’s hand-tuned models, there are few who deny that in 2012 this thirty-five-year-old data whiz was a winner in our economy. Another winner is David Heinemeier Hansson, a computer programming star who created the Ruby on Rails website development framework, which currently provides the foundation for some of the Web’s most popular destinations, including Twitter and Hulu. Hansson is a partner in the influential development firm Basecamp (called 37signals until 2014). Hansson doesn’t talk publicly about the magnitude of his profit share from Basecamp or his other revenue sources, but we can assume they’re lucrative given that Hansson splits his time between Chicago, Malibu, and Marbella, Spain, where he dabbles in high-performance race-car driving. Our third and final example of a clear winner in our economy is John Doerr, a general partner in the famed Silicon Valley venture capital fund Kleiner Perkins Caufield & Byers. Doerr helped fund many of the key companies fueling the current technological revolution, including Twitter, Google, Amazon, Netscape, and Sun Microsystems. The return on these investments has been astronomical: Doerr’s net worth, as of this writing, is more than $3 billion. Why have Silver, Hansson, and Doerr done so well? There are two types of answers to this question. The first are micro in scope and focus on the personality traits and tactics that helped drive this trio’s rise. The second type of answers are more macro in that they focus less on the individuals and more on the type of work they represent. Though both approaches to this core question are important, the macro answers will prove most relevant to our discussion, as they better illuminate what our current economy rewards. To explore this macro perspective we turn to a pair of MIT economists, Erik Brynjolfsson and Andrew McAfee, who in their influential 2011 book, Race Against the Machine, provide a compelling case that among various forces at play, it’s the rise of digital technology in particular that’s transforming our labor markets in unexpected ways. “We are in the early throes of a Great Restructuring,” Brynjolfsson and McAfee explain early in their book. “Our technologies are racing ahead but many of our skills and organizations are lagging behind.” For many workers, this lag predicts bad news. As intelligent machines improve, and the gap between machine and human abilities shrinks, employers are becoming increasingly likely to hire “new machines” instead of “new people.” And when only a human will do, improvements in communications and collaboration technology are making remote work easier than ever before, motivating companies to outsource key roles to stars—leaving the local talent pool underemployed. This reality is not, however, universally grim. As Brynjolfsson and McAfee emphasize, this Great Restructuring is not driving down all jobs but is instead dividing them. 
Though an increasing number of people will lose in this new economy as their skill becomes automatable or easily outsourced, there are others who will not only survive, but thrive—becoming more valued (and therefore more rewarded) than before. Brynjolfsson and McAfee aren’t alone in proposing this bimodal trajectory for the economy. In 2013, for example, the George Mason economist Tyler Cowen published Average Is Over, a book that echoes this thesis of a digital division. But what makes Brynjolfsson and McAfee’s analysis particularly useful is that they proceed to identify three specific groups that will fall on the lucrative side of this divide and reap a disproportionate amount of the benefits of the Intelligent Machine Age. Not surprisingly, it’s to these three groups that Silver, Hansson, and Doerr happen to belong. Let’s touch on each of these groups in turn to better understand why they’re suddenly so valuable. The High-Skilled Workers Brynjolfsson and McAfee call the group personified by Nate Silver the “high-skilled” workers. Advances such as robotics and voice recognition are automating many low- skilled positions, but as these economists emphasize, “other technologies like data visualization, analytics, high speed communications, and rapid prototyping have augmented the contributions of more abstract and data-driven reasoning, increasing the values of these jobs.” In other words, those with the oracular ability to work with and tease valuable results out of increasingly complex machines will thrive. Tyler Cowen summarizes this reality more bluntly: “The key question will be: are you good at working with intelligent machines or not?” Nate Silver, of course, with his comfort in feeding data into large databases, then siphoning it out into his mysterious Monte Carlo simulations, is the epitome of the high-skilled worker. Intelligent machines are not an obstacle to Silver’s success, but instead provide its precondition. The Superstars The ace programmer David Heinemeier Hansson provides an example of the second group that Brynjolfsson and McAfee predict will thrive in our new economy: “superstars.” High-speed data networks and collaboration tools like e-mail and virtual meeting software have destroyed regionalism in many sectors of knowledge work. It no longer makes sense, for example, to hire a full-time programmer, put aside office space, and pay benefits, when you can instead pay one of the world’s best programmers, like Hansson, for just enough time to complete the project at hand. In this scenario, you’ll probably get a better result for less money, while Hansson can service many more clients per year, and will therefore also end up better off. The fact that Hansson might be working remotely from Marbella, Spain, while your office is in Des Moines, Iowa, doesn’t matter to your company, as advances in communication and collaboration technology make the process near seamless. (This reality does matter, however, to the less-skilled local programmers living in Des Moines and in need of a steady paycheck.) This same trend holds for the growing number of fields where technology makes productive remote work possible— consulting, marketing, writing, design, and so on. Once the talent market is made universally accessible, those at the peak of the market thrive while the rest suffer. In a seminal 1981 paper, the economist Sherwin Rosen worked out the mathematics behind these “winner-take-all” markets. 
One of his key insights was to explicitly model talent—labeled, innocuously, with the variable q in his formulas—as a factor with “imperfect substitution,” which Rosen explains as follows: “Hearing a succession of mediocre singers does not add up to a single outstanding performance.” In other words, talent is not a commodity you can buy in bulk and combine to reach the needed levels: There’s a premium to being the best. Therefore, if you’re in a marketplace where the consumer has access to all performers, and everyone’s q value is clear, the consumer will choose the very best. Even if the talent advantage of the best is small compared to the next rung down on the skill ladder, the superstars still win the bulk of the market. In the 1980s, when Rosen studied this effect, he focused on examples like movie stars and musicians, where there existed clear markets, such as music stores and movie theaters, where an audience has access to different performers and can accurately approximate their talent before making a purchasing decision. The rapid rise of communication and collaboration technologies has transformed many other formerly local markets into a similarly universal bazaar. The small company looking for a computer programmer or public relations consultant now has access to an international marketplace of talent in the same way that the advent of the record store allowed the small-town music fan to bypass local musicians to buy albums from the world’s best bands. The superstar effect, in other words, has a broader application today than Rosen could have predicted thirty years ago. An increasing number of individuals in our economy are now competing with the rock stars of their sectors. The Owners The final group that will thrive in our new economy—the group epitomized by John Doerr—consists of those with capital to invest in the new technologies that are driving the Great Restructuring. As we’ve understood since Marx, access to capital provides massive advantages. It’s also true, however, that some periods offer more advantages than others. As Brynjolfsson and McAfee point out, postwar Europe was an example of a bad time to be sitting on a pile of cash, as the combination of rapid inflation and aggressive taxation wiped out old fortunes with surprising speed (what we might call the “Downton Abbey Effect”). The Great Restructuring, unlike the postwar period, is a particularly good time to have access to capital. To understand why, first recall that bargaining theory, a key component in standard economic thinking, argues that when money is made through the combination of capital investment and labor, the rewards are returned, roughly speaking, proportional to the input. As digital technology reduces the need for labor in many industries, the proportion of the rewards returned to those who own the intelligent machines is growing. A venture capitalist in today’s economy can fund a company like Instagram, which was eventually sold for a billion dollars, while employing only thirteen people. When else in history could such a small amount of labor be involved in such a large amount of value? With so little input from labor, the proportion of this wealth that flows back to the machine owners—in this case, the venture investors—is without precedent. 
It’s no wonder that a venture capitalist I interviewed for my last book admitted to me with some concern, “Everyone wants my job.” Let’s pull together the threads spun so far: Current economic thinking, as I’ve surveyed, argues that the unprecedented growth and impact of technology are creating a massive restructuring of our economy. In this new economy, three groups will have a particular advantage: those who can work well and creatively with intelligent machines, those who are the best at what they do, and those with access to capital. To be clear, this Great Restructuring identified by economists like Brynjolfsson, McAfee, and Cowen is not the only economic trend of importance at the moment, and the three groups mentioned previously are not the only groups who will do well, but what’s important for this book’s argument is that these trends, even if not alone, are important, and these groups, even if they are not the only such groups, will thrive. If you can join any of these groups, therefore, you’ll do well. If you cannot, you might still do well, but your position is more precarious. The question we must now face is the obvious one: How does one join these winners? At the risk of quelling your rising enthusiasm, I should first confess that I have no secret for quickly amassing capital and becoming the next John Doerr. (If I had such secrets, it’s unlikely I’d share them in a book.) The other two winning groups, however, are accessible. How to access them is the goal we tackle next. How to Become a Winner in the New Economy I just identified two groups that are poised to thrive and that I claim are accessible: those who can work creatively with intelligent machines and those who are stars in their field. What’s the secret to landing in these lucrative sectors of the widening digital divide? I argue that the following two core abilities are crucial. Two Core Abilities for Thriving in the New Economy 1. The ability to quickly master hard things. 2. The ability to produce at an elite level, in terms of both quality and speed. Let’s begin with the first ability. To start, we must remember that we’ve been spoiled by the intuitive and drop-dead-simple user experience of many consumer- facing technologies, like Twitter and the iPhone. These examples, however, are consumer products, not serious tools: Most of the intelligent machines driving the Great Restructuring are significantly more complex to understand and master. Consider Nate Silver, our earlier example of someone who thrives by working well with complicated technology. If we dive deeper into his methodology, we discover that generating data-driven election forecasts is not as easy as typing “Who will win more votes?” into a search box. He instead maintains a large database of poll results (thousands of polls from more than 250 pollsters) that he feeds into Stata, a popular statistical analysis system produced by a company called StataCorp. These are not easy tools to master. Here, for example, is the type of command you need to understand to work with a modern database like Silver uses: CREATE VIEW cities AS SELECT name, population, altitude FROM capitals UNION SELECT name, population, altitude FROM non_capitals; Databases of this type are interrogated in a language called SQL. You send them commands like the one shown here to interact with their stored information. Understanding how to manipulate these databases is subtle. 
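To make that subtlety a bit more concrete, here is a minimal, self-contained sketch of how a view like the one above might be defined and then used. The table and column names are taken from the command quoted above; the sample rows and the closing query are invented purely for illustration and are not drawn from Silver's actual setup.

    -- Two ordinary tables (schemas assumed here for illustration)
    CREATE TABLE capitals     (name TEXT, population INTEGER, altitude INTEGER);
    CREATE TABLE non_capitals (name TEXT, population INTEGER, altitude INTEGER);

    -- Invented sample rows
    INSERT INTO capitals     VALUES ('Madrid', 3200000, 667);
    INSERT INTO non_capitals VALUES ('Geneva', 200000, 375);

    -- The command from the passage above: a virtual table spanning both real tables
    CREATE VIEW cities AS
        SELECT name, population, altitude FROM capitals
        UNION
        SELECT name, population, altitude FROM non_capitals;

    -- Once defined, the view can be queried like any ordinary table
    SELECT name, population FROM cities WHERE altitude > 500;

Even in this toy form, small decisions lurk in every line—for instance, whether a plain UNION (which discards duplicate rows) or UNION ALL is the right choice—which hints at the kind of judgment the surrounding text is describing.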
The example command, for example, creates a “view”: a virtual database table that pulls together data from multiple existing tables, and that can then be addressed by the SQL commands like a standard table. When to create views and how to do so well is a tricky question, one of many that you must understand and master to tease reasonable results out of real- world databases. Sticking with our Nate Silver case study, consider the other technology he relies on: Stata. This is a powerful tool, and definitely not something you can learn intuitively after some modest tinkering. Here, for example, is a description of the features added to the most recent version of this software: “Stata 13 adds many new features such as treatment effects, multilevel GLM, power and sample size, generalized SEM, forecasting, effect sizes, Project Manager, long strings and BLOBs, and much more.” Silver uses this complex software—with its generalized SEM and BLOBs—to build intricate models with interlocking parts: multiple regressions, conducted on custom parameters, which are then referenced as custom weights used in probabilistic expressions, and so on. The point of providing these details is to emphasize that intelligent machines are complicated and hard to master. * To join the group of those who can work well with these machines, therefore, requires that you hone your ability to master hard things. And because these technologies change rapidly, this process of mastering hard things never ends: You must be able to do it quickly, again and again. This ability to learn hard things quickly, of course, isn’t just necessary for working well with intelligent machines; it also plays a key role in the attempt to become a superstar in just about any field—even those that have little to do with technology. To become a world-class yoga instructor, for example, requires that you master an increasingly complex set of physical skills. To excel in a particular area of medicine, to give another example, requires that you be able to quickly master the latest research on relevant procedures. To summarize these observations more succinctly: If you can’t learn, you can’t thrive. Now consider the second core ability from the list shown earlier: producing at an elite level. If you want to become a superstar, mastering the relevant skills is necessary, but not sufficient. You must then transform that latent potential into tangible results that people value. Many developers, for example, can program computers well, but David Hansson, our example superstar from earlier, leveraged this ability to produce Ruby on Rails, the project that made his reputation. Ruby on Rails required Hansson to push his current skills to their limit and produce unambiguously valuable and concrete results. This ability to produce also applies to those looking to master intelligent machines. It wasn’t enough for Nate Silver to learn how to manipulate large data sets and run statistical analyses; he needed to then show that he could use this skill to tease information from these machines that a large audience cared about. Silver worked with many stats geeks during his days at Baseball Prospectus, but it was Silver alone who put in the effort to adapt these skills to the new and more lucrative territory of election forecasting. This provides another general observation for joining the ranks of winners in our economy: If you don’t produce, you won’t thrive—no matter how skilled or talented you are. 
Having established two abilities that are fundamental to getting ahead in our new, technology-disrupted world, we can now ask the obvious follow-up question: How does one cultivate these core abilities? It’s here that we arrive at a central thesis of this book: The two core abilities just described depend on your ability to perform deep work. If you haven’t mastered this foundational skill, you’ll struggle to learn hard things or produce at an elite level. The dependence of these abilities on deep work isn’t immediately obvious; it requires a closer look at the science of learning, concentration, and productivity. The sections ahead provide this closer look, and by doing so will help this connection between deep work and economic success shift for you from unexpected to unimpeachable. Deep Work Helps You Quickly Learn Hard Things “Let your mind become a lens, thanks to the converging rays of attention; let your soul be all intent on whatever it is that is established in your mind as a dominant, wholly absorbing idea.” This advice comes from Antonin-Dalmace Sertillanges, a Dominican friar and professor of moral philosophy, who during the early part of the twentieth century penned a slim but influential volume titled The Intellectual Life. Sertillanges wrote the book as a guide to “the development and deepening of the mind” for those called to make a living in the world of ideas. Throughout The Intellectual Life, Sertillanges recognizes the necessity of mastering complicated material and helps prepare the reader for this challenge. For this reason, his book proves useful in our quest to better understand how people quickly master hard (cognitive) skills. To understand Sertillanges’s advice, let’s return to the quote from earlier. In these words, which are echoed in many forms in The Intellectual Life, Sertillanges argues that to advance your understanding of your field you must tackle the relevant topics systematically, allowing your “converging rays of attention” to uncover the truth latent in each. In other words, he teaches: To learn requires intense concentration. This idea turns out to be ahead of its time. In reflecting on the life of the mind in the 1920s, Sertillanges uncovered a fact about mastering cognitively demanding tasks that would take academia another seven decades to formalize. This task of formalization began in earnest in the 1970s, when a branch of psychology, sometimes called performance psychology, began to systematically explore what separates experts (in many different fields) from everyone else. In the early 1990s, K. Anders Ericsson, a professor at Florida State University, pulled together these strands into a single coherent answer, consistent with the growing research literature, that he gave a punchy name: deliberate practice. Ericsson opens his seminal paper on the topic with a powerful claim: “We deny that these differences [between expert performers and normal adults] are immutable… Instead, we argue that the differences between expert performers and normal adults reflect a life-long period of deliberate effort to improve performance in a specific domain.” American culture, in particular, loves the storyline of the prodigy (“Do you know how easy this is for me!?” Matt Damon’s character famously cries in the movie Good Will Hunting as he makes quick work of proofs that stymie the world’s top mathematicians). The line of research promoted by Ericsson, and now widely accepted (with caveats*), de-stabilizes these myths. 
To master a cognitively demanding task requires this specific form of practice—there are few exceptions made for natural talent. (On this point too, Sertillanges seems to have been ahead of his time, arguing in The Intellectual Life, “Men of genius themselves were great only by bringing all their power to bear on the point on which they had decided to show their full measure.” Ericsson couldn’t have said it better.) This brings us to the question of what deliberate practice actually requires. Its core components are usually identified as follows: (1) your attention is focused tightly on a specific skill you’re trying to improve or an idea you’re trying to master; (2) you receive feedback so you can correct your approach to keep your attention exactly where it’s most productive. The first component is of particular importance to our discussion, as it emphasizes that deliberate practice cannot exist alongside distraction, and that it instead requires uninterrupted concentration. As Ericsson emphasizes, “Diffused attention is almost antithetical to the focused attention required by deliberate practice” (emphasis mine). As psychologists, Ericsson and the other researchers in his field are not interested in why deliberate practice works; they’re just identifying it as an effective behavior. In the intervening decades since Ericsson’s first major papers on the topic, however, neuroscientists have been exploring the physical mechanisms that drive people’s improvements on hard tasks. As the journalist Daniel Coyle surveys in his 2009 book, The Talent Code , these scientists increasingly believe the answer includes myelin—a layer of fatty tissue that grows around neurons, acting like an insulator that allows the cells to fire faster and cleaner. To understand the role of myelin in improvement, keep in mind that skills, be they intellectual or physical, eventually reduce down to brain circuits. This new science of performance argues that you get better at a skill as you develop more myelin around the relevant neurons, allowing the corresponding circuit to fire more effortlessly and effectively. To be great at something is to be well myelinated. This understanding is important because it provides a neurological foundation for why deliberate practice works. By focusing intensely on a specific skill, you’re forcing the specific relevant circuit to fire, again and again, in isolation. This repetitive use of a specific circuit triggers cells called oligodendrocytes to begin wrapping layers of myelin around the neurons in the circuits—effectively cementing the skill. The reason, therefore, why it’s important to focus intensely on the task at hand while avoiding distraction is because this is the only way to isolate the relevant neural circuit enough to trigger useful myelination. By contrast, if you’re trying to learn a complex new skill (say, SQL database management) in a state of low concentration (perhaps you also have your Facebook feed open), you’re firing too many circuits simultaneously and haphazardly to isolate the group of neurons you actually want to strengthen. In the century that has passed since Antonin-Dalmace Sertillanges first wrote about using the mind like a lens to focus rays of attention, we have advanced from this elevated metaphor to a decidedly less poetic explanation expressed in terms of oligodendrocyte cells. But this sequence of thinking about thinking points to an inescapable conclusion: To learn hard things quickly, you must focus intensely without distraction. 
To learn, in other words, is an act of deep work. If you’re comfortable going deep, you’ll be comfortable mastering the increasingly complex systems and skills needed to thrive in our economy. If you instead remain one of the many for whom depth is uncomfortable and distraction ubiquitous, you shouldn’t expect these systems and skills to come easily to you. Deep Work Helps You Produce at an Elite Level Adam Grant produces at an elite level. When I met Grant in 2013, he was the youngest professor to be awarded tenure at the Wharton School of Business at Penn. A year later, when I started writing this chapter (and was just beginning to think about my own tenure process), the claim was updated: He’s now the youngest full professor * at Wharton. The reason Grant advanced so quickly in his corner of academia is simple: He produces. In 2012, Grant published seven articles—all of them in major journals. This is an absurdly high rate for his field (in which professors tend to work alone or in small professional collaborations and do not have large teams of students and postdocs to support their research). In 2013, this count fell to five. This is still absurdly high, but below his recent standards. He can be excused for this dip, however, because this same year he published a book titled Give and Take , which popularized some of his research on relationships in business. To say that this book was successful is an understatement. It ended up featured on the cover of the New York Times Magazine and went on to become a massive bestseller. When Grant was awarded full professorship in 2014, he had already written more than sixty peer- reviewed publications in addition to his bestselling book. Soon after meeting Grant, my own academic career on my mind, I couldn’t help but ask him about his productivity. Fortunately for me, he was happy to share his thoughts on the subject. It turns out that Grant thinks a lot about the mechanics of producing at an elite level. He sent me, for example, a collection of PowerPoint slides from a workshop he attended with several other professors in his field. The event was focused on data-driven observations about how to produce academic work at an optimum rate. These slides included detailed pie charts of time allocation per season, a flowchart capturing relationship development with co-authors, and a suggested reading list with more than twenty titles. These business professors do not live the cliché of the absentminded academic lost in books and occasionally stumbling on a big idea. They see productivity as a scientific problem to systematically solve—a goal Adam Grant seems to have achieved. Though Grant’s productivity depends on many factors, there’s one idea in particular that seems central to his method: the batching of hard but important intellectual work into long, uninterrupted stretches. Grant performs this batching at multiple levels. Within the year, he stacks his teaching into the fall semester, during which he can turn all of his attention to teaching well and being available to his students. (This method seems to work, as Grant is currently the highest-rated teacher at Wharton and the winner of multiple teaching awards.) By batching his teaching in the fall, Grant can then turn his attention fully to research in the spring and summer, and tackle this work with less distraction. Grant also batches his attention on a smaller time scale. 
Within a semester dedicated to research, he alternates between periods where his door is open to students and colleagues, and periods where he isolates himself to focus completely and without distraction on a single research task. (He typically divides the writing of a scholarly paper into three discrete tasks: analyzing the data, writing a full draft, and editing the draft into something publishable.) During these periods, which can last up to three or four days, he’ll often put an out-of-office auto-responder on his e-mail so correspondents will know not to expect a response. “It sometimes confuses my colleagues,” he told me. “They say, ‘You’re not out of office, I see you in your office right now!’” But to Grant, it’s important to enforce strict isolation until he completes the task at hand. My guess is that Adam Grant doesn’t work substantially more hours than the average professor at an elite research institution (generally speaking, this is a group prone to workaholism), but he still manages to produce more than just about anyone else in his field. I argue that his approach to batching helps explain this paradox. In particular, by consolidating his work into intense and uninterrupted pulses, he’s leveraging the following law of productivity: High-Quality Work Produced = (Time Spent) x (Intensity of Focus) If you believe this formula, then Grant’s habits make sense: By maximizing his intensity when he works, he maximizes the results he produces per unit of time spent working. This is not the first time I’ve encountered this formulaic conception of productivity. It first came to my attention when I was researching my second book, How to Become a Straight-A Student, many years earlier. During that research process, I interviewed around fifty ultra-high-scoring college undergraduates from some of the country’s most competitive schools. Something I noticed in these interviews is that the very best students often studied less than the group of students right below them on the GPA rankings. One of the explanations for this phenomenon turned out to be the formula detailed earlier: The best students understood the role intensity plays in productivity and therefore went out of their way to maximize their concentration—radically reducing the time required to prepare for tests or write papers, without diminishing the quality of their results. The example of Adam Grant implies that this intensity formula applies beyond just undergraduate GPA and is also relevant to other cognitively demanding tasks. But why would this be? An interesting explanation comes from Sophie Leroy, a business professor at the University of Minnesota. In a 2009 paper, titled, intriguingly, “Why Is It So Hard to Do My Work?,” Leroy introduced an effect she called attention residue. In the introduction to this paper, she noted that other researchers have studied the effect of multitasking—trying to accomplish multiple tasks simultaneously—on performance, but that in the modern knowledge work office, once you got to a high enough level, it was more common to find people working on multiple projects sequentially: “Going from one meeting to the next, starting to work on one project and soon after having to transition to another is just part of life in organizations,” Leroy explains. The problem this research identifies with this work strategy is that when you switch from some Task A to another Task B, your attention doesn’t immediately follow—a residue of your attention remains stuck thinking about the original task. 
This residue gets especially thick if your work on Task A was unbounded and of low intensity before you switched, but even if you finish Task A before moving on, your attention remains divided for a while.

Leroy studied the effect of this attention residue on performance by forcing task switches in the laboratory. In one such experiment, for example, she started her subjects working on a set of word puzzles. In one of the trials, she would interrupt them and tell them that they needed to move on to a new and challenging task, in this case, reading résumés and making hypothetical hiring decisions. In other trials, she let the subjects finish the puzzles before giving them the next task. In between puzzling and hiring, she would deploy a quick lexical decision game to quantify the amount of residue left from the first task.* The results from this and her similar experiments were clear: “People experiencing attention residue after switching tasks are likely to demonstrate poor performance on that next task,” and the more intense the residue, the worse the performance.

The concept of attention residue helps explain why the intensity formula is true and therefore helps explain Grant’s productivity. By working on a single hard task for a long time without switching, Grant minimizes the negative impact of attention residue from his other obligations, allowing him to maximize performance on this one task. When Grant is working for days in isolation on a paper, in other words, he’s doing so at a higher level of effectiveness than the standard professor following a more distracted strategy in which the work is repeatedly interrupted by residue-slathering interruptions.

Even if you’re unable to fully replicate Grant’s extreme isolation (we’ll tackle different strategies for scheduling depth in Part 2), the attention residue concept is still telling because it implies that the common habit of working in a state of semi-distraction is potentially devastating to your performance. It might seem harmless to take a quick glance at your inbox every ten minutes or so. Indeed, many justify this behavior as better than the old practice of leaving an inbox open on the screen at all times (a straw-man habit that few follow anymore). But Leroy teaches us that this is not in fact much of an improvement. That quick check introduces a new target for your attention. Even worse, by seeing messages that you cannot deal with at the moment (which is almost always the case), you’ll be forced to turn back to the primary task with a secondary task left unfinished. The attention residue left by such unresolved switches dampens your performance.

When we step back from these individual observations, we see a clear argument form: To produce at your peak level you need to work for extended periods with full concentration on a single task free from distraction. Put another way, the type of work that optimizes your performance is deep work. If you’re not comfortable going deep for extended periods of time, it’ll be difficult to get your performance to the peak levels of quality and quantity increasingly necessary to thrive professionally. Unless your talent and skills absolutely dwarf those of your competition, the deep workers among them will outproduce you.

What About Jack Dorsey?

I’ve now made my argument for why deep work supports abilities that are becoming increasingly important in our economy.
Before we accept this conclusion, however, we must face a type of question that often arises when I discuss this topic: What about Jack Dorsey? Jack Dorsey helped found Twitter. After stepping down as CEO, he then launched the payment-processing company Square. To quote a Forbes profile: “He is a disrupter on a massive scale and a repeat offender.” He is also someone who does not spend a lot of time in a state of deep work. Dorsey doesn’t have the luxury of long periods of uninterrupted thinking because, at the time when the Forbes profile was written, he maintained management duties at both Twitter (where he remained chairman) and Square, leading to a tightly calibrated schedule that ensures that the companies have a predictable “weekly cadence” (and that also ensures that Dorsey’s time and attention are severely fractured). Dorsey reports, for example, that he ends the average day with thirty to forty sets of meeting notes that he reviews and filters at night. In the small spaces between all these meetings, he believes in serendipitous availability. “I do a lot of my work at stand-up tables, which anyone can come up to,” Dorsey said. “I get to hear all these conversations around the company.” This style of work is not deep. To use a term from our previous section, Dorsey’s attention residue is likely slathered on thick as he darts from one meeting to another, letting people interrupt him freely in the brief interludes in between. And yet, we cannot say that Dorsey’s work is shallow, because shallow work, as defined in the introduction, is low value and easily replicable, while what Jack Dorsey does is incredibly valuable and highly rewarded in our economy (as of this writing he was among the top one thousand richest people in the world, with a net worth over $1.1 billion). Jack Dorsey is important to our discussion because he’s an exemplar of a group we cannot ignore: individuals who thrive without depth. When I titled the motivating question of this section “What About Jack Dorsey?,” I was providing a specific example of a more general query: If deep work is so important, why are there distracted people who do well? To conclude this chapter, I want to address this question so it doesn’t nag at your attention as we dive deeper into the topic of depth in the pages ahead. To start, we must first note that Jack Dorsey is a high-level executive of a large company (two companies, in fact). Individuals with such positions play a major role in the category of those who thrive without depth, because the lifestyle of such executives is famously and unavoidably distracted. Here’s Kerry Trainor, CEO of Vimeo, trying to answer the question of how long he can go without e-mail: “I can go a good solid Saturday without, without… well, most of the daytime without it… I mean, I’ll check it, but I won’t necessarily respond.” At the same time, of course, these executives are better compensated and more important in the American economy today than in any other time in history. Jack Dorsey’s success without depth is common at this elite level of management. Once we’ve stipulated this reality, we must then step back to remind ourselves that it doesn’t undermine the general value of depth. Why? Because the necessity of distraction in these executives’ work lives is highly specific to their particular jobs. A good chief executive is essentially a hard-to-automate decision engine, not unlike IBM’s Jeopardy!-playing Watson system. 
They have built up a hard-won repository of experience and have honed and proved an instinct for their market. They’re then presented inputs throughout the day—in the form of e-mails, meetings, site visits, and the like—that they must process and act on. To ask a CEO to spend four hours thinking deeply about a single problem is a waste of what makes him or her valuable. It’s better to hire three smart subordinates to think deeply about the problem and then bring their solutions to the executive for a final decision.

This specificity is important because it tells us that if you’re a high-level executive at a major company, you probably don’t need the advice in the pages that follow. On the other hand, it also tells us that you cannot extrapolate the approach of these executives to other jobs. The fact that Dorsey encourages interruption or Kerry Trainor checks his e-mail constantly doesn’t mean that you’ll share their success if you follow suit: Their behaviors are characteristic of their specific roles as corporate officers.

This rule of specificity should be applied to similar counterexamples that come to mind while reading the rest of this book. There are, we must continually remember, certain corners of our economy where depth is not valued. In addition to executives, we can also include, for example, certain types of salesmen and lobbyists, for whom constant connection is their most valued currency. There are even those who manage to grind out distracted success in fields where depth would help.

But at the same time, don’t be too hasty to label your job as necessarily non-deep. Just because your current habits make deep work difficult doesn’t mean that this lack of depth is fundamental to doing your job well. In the next chapter, for example, I tell the story of a group of high-powered management consultants who were convinced that constant e-mail connectivity was necessary for them to service their clients. When a Harvard professor forced them to disconnect more regularly (as part of a research study), they found, to their surprise, that this connectivity didn’t matter nearly as much as they had assumed. The clients didn’t really need to reach them at all times and their performance as consultants improved once their attention became less fractured.

Similarly, several managers I know tried to convince me that they’re most valuable when they’re able to respond quickly to their teams’ problems, preventing project logjams. They see their role as enabling others’ productivity, not necessarily protecting their own. Follow-up discussions, however, soon uncovered that this goal didn’t really require attention-fracturing connectivity. Indeed, many software companies now deploy the Scrum project management methodology, which replaces a lot of this ad hoc messaging with regular, highly structured, and ruthlessly efficient status meetings (often held standing up to minimize the urge to bloviate). This approach frees up more managerial time for thinking deeply about the problems their teams are tackling, often improving the overall value of what they produce.

Put another way: Deep work is not the only skill valuable in our economy, and it’s possible to do well without fostering this ability, but the niches where this is advisable are increasingly rare. Unless you have strong evidence that distraction is important for your specific profession, you’re best served, for the reasons argued earlier in this chapter, by giving serious consideration to depth.
Chapter Two

Deep Work Is Rare

In 2012, Facebook unveiled the plans for a new headquarters designed by Frank Gehry. At the center of this new building is what CEO Mark Zuckerberg called “the largest open floor plan in the world”: More than three thousand employees will work on movable furniture spread over a ten-acre expanse. Facebook, of course, is not the only Silicon Valley heavyweight to embrace the open office concept. When Jack Dorsey, whom we met at the end of the last chapter, bought the old San Francisco Chronicle building to house Square, he configured the space so that his developers work in common spaces on long shared desks. “We encourage people to stay out in the open because we believe in serendipity—and people walking by each other teaching new things,” Dorsey explained.

Another big business trend in recent years is the rise of instant messaging. A Times article notes that this technology is no longer the “province of chatty teenagers” and is now helping companies benefit from “new productivity gains and improvements in customer response time.” A senior product manager at IBM boasts: “We send 2.5 million I.M.’s within I.B.M. each day.” One of the more successful recent entrants into the business IM space is Hall, a Silicon Valley start-up that helps employees move beyond just chat and engage in “real-time collaboration.” A San Francisco–based developer I know described to me what it was like to work in a company that uses Hall. The most “efficient” employees, he explained, set up their text editor to flash an alert on their screen when a new question or comment is posted to the company’s Hall account. They can then, with a sequence of practiced keystrokes, jump over to Hall, type in their thoughts, and then jump back to their coding with barely a pause. My friend seemed impressed when describing their speed.

A third trend is the push for content producers of all types to maintain a social media presence. The New York Times, a bastion of old-world media values, now encourages its employees to tweet—a hint taken by the more than eight hundred writers, editors, and photographers for the paper who now maintain a Twitter account. This is not outlier behavior; it’s instead the new normal. When the novelist Jonathan Franzen wrote a piece for the Guardian calling Twitter a “coercive development” in the literary world, he was widely ridiculed as out of touch. The online magazine Slate called Franzen’s complaints a “lonely war on the Internet” and fellow novelist Jennifer Weiner wrote a response in The New Republic in which she argued, “Franzen’s a category of one, a lonely voice issuing ex cathedra edicts that can only apply to himself.” The sarcastic hashtag #JonathanFranzenhates soon became a fad.

I mention these three business trends because they highlight a paradox. In the last chapter, I argued that deep work is more valuable than ever before in our shifting economy. If this is true, however, you would expect to see this skill promoted not just by ambitious individuals but also by organizations hoping to get the most out of their employees. As the examples provided emphasize, this is not happening. Many other ideas are being prioritized as more important than deep work in the business world, including, as we just encountered, serendipitous collaboration, rapid communication, and an active presence on social media.

It’s bad enough that so many trends are prioritized ahead of deep work, but to add insult to injury, many of these trends actively decrease one’s ability to go deep.
Open offices, for example, might create more opportunities for collaboration,* but they do so at the cost of “massive distraction,” to quote the results of experiments conducted for a British TV special titled The Secret Life of Office Buildings. “If you are just getting into some work and a phone goes off in the background, it ruins what you are concentrating on,” said the neuroscientist who ran the experiments for the show. “Even though you are not aware at the time, the brain responds to distractions.”

Similar issues apply to the rise of real-time messaging. E-mail inboxes, in theory, can distract you only when you choose to open them, whereas instant messenger systems are meant to be always active—magnifying the impact of interruption. Gloria Mark, a professor of informatics at the University of California, Irvine, is an expert on the science of attention fragmentation. In a well-cited study, Mark and her co-authors observed knowledge workers in real offices and found that an interruption, even if short, delays the total time required to complete a task by a significant fraction. “This was reported by subjects as being very detrimental,” she summarized with typical academic understatement.

Forcing content producers onto social media also has negative effects on the ability to go deep. Serious journalists, for example, need to focus on doing serious journalism—diving into complicated sources, pulling out connective threads, crafting persuasive prose—so to ask them to interrupt this deep thinking throughout the day to participate in the frothy back-and-forth of online tittering seems irrelevant (and somewhat demeaning) at best, and devastatingly distracting at worst. The respected New Yorker staff writer George Packer captured this fear well in an essay about why he does not tweet: “Twitter is crack for media addicts. It scares me, not because I’m morally superior to it, but because I don’t think I could handle it. I’m afraid I’d end up letting my son go hungry.” Tellingly, when he wrote that essay, Packer was busy writing his book The Unwinding, which came out soon after and promptly won the National Book Award—despite (or, perhaps, aided by) his lack of social media use.

To summarize, big trends in business today actively decrease people’s ability to perform deep work, even though the benefits promised by these trends (e.g., increased serendipity, faster responses to requests, and more exposure) are arguably dwarfed by the benefits that flow from a commitment to deep work (e.g., the ability to learn hard things fast and produce at an elite level). The goal of this chapter is to explain this paradox. The rareness of deep work, I’ll argue, is not due to some fundamental weakness of the habit. When we look closer at why we embrace distraction in the workplace we’ll find the reasons are more arbitrary than we might expect—based on flawed thinking combined with the ambiguity and confusion that often define knowledge work. My objective is to convince you that although our current embrace of distraction is a real phenomenon, it’s built on an unstable foundation and can be easily dismissed once you decide to cultivate a deep work ethic.

The Metric Black Hole

In the fall of 2012, Tom Cochran, the chief technology officer of Atlantic Media, became alarmed at how much time he seemed to spend on e-mail. So like any good techie, he decided to quantify this unease. Observing his own behavior, he measured that in a single week he received 511 e-mail messages and sent 284.
This averaged to around 160 e-mails per day over a five-day workweek. Calculating further, Cochran noted that even if he managed to spend only thirty seconds per message on average, this still added up to almost an hour and a half per day dedicated to moving information around like a human network router. This seemed like a lot of time spent on something that wasn’t a primary piece of his job description.

As Cochran recalls in a blog post he wrote about his experiment for the Harvard Business Review, these simple statistics got him thinking about the rest of his company. Just how much time were employees of Atlantic Media spending moving around information instead of focusing on the specialized tasks they were hired to perform? Determined to answer this question, Cochran gathered company-wide statistics on e-mails sent per day and the average number of words per e-mail. He then combined these numbers with the employees’ average typing speed, reading speed, and salary. The result: He discovered that Atlantic Media was spending well over a million dollars a year to pay people to process e-mails, with every message sent or received tapping the company for around ninety-five cents of labor costs. “A ‘free and frictionless’ method of communication,” Cochran summarized, “had soft costs equivalent to procuring a small company Learjet.”

Tom Cochran’s experiment yielded an interesting result about the literal cost of a seemingly harmless behavior. But the real importance of this story is the experiment itself, and in particular, its complexity. It turns out to be really difficult to answer a simple question such as: What’s the impact of our current e-mail habits on the bottom line? Cochran had to conduct a company-wide survey and gather statistics from the IT infrastructure. He also had to pull together salary data and information on typing and reading speed, and run the whole thing through a statistical model to spit out his final result. And even then, the outcome is fuzzy, as it’s not able to separate out, for example, how much value was produced by this frequent, expensive e-mail use to offset some of its cost.

This example generalizes to most behaviors that potentially impede or improve deep work. Even though we abstractly accept that distraction has costs and depth has value, these impacts, as Tom Cochran discovered, are difficult to measure. This isn’t a trait unique to habits related to distraction and depth: Generally speaking, as knowledge work makes more complex demands of the labor force, it becomes harder to measure the value of an individual’s efforts. The French economist Thomas Piketty made this point explicit in his study of the extreme growth of executive salaries. The enabling assumption driving his argument is that “it is objectively difficult to measure individual contributions to a firm’s output.” In the absence of such measures, irrational outcomes, such as executive salaries way out of proportion to the executive’s marginal productivity, can occur. Even though some details of Piketty’s theory are controversial, the underlying assumption that it’s increasingly difficult to measure individuals’ contributions is generally considered, to quote one of his critics, “undoubtedly true.”

We should not, therefore, expect the bottom-line impact of depth-destroying behaviors to be easily detected. As Tom Cochran discovered, such metrics fall into an opaque region resistant to easy measurement—a region I call the metric black hole.
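The personal half of Cochran’s estimate is simple enough to check by hand. Below is a minimal sketch of that back-of-the-envelope arithmetic in Python: the weekly message counts, the thirty-second handling estimate, and the ninety-five-cent per-message labor cost all come from the account above, while the last calculation simply asks how many messages a year it would take, at that unit cost, to pass the million-dollar mark.

```python
# Cochran's personal numbers, as reported above
received_per_week = 511   # messages received in his sample week
sent_per_week = 284       # messages sent in the same week
seconds_per_message = 30  # his deliberately conservative per-message estimate
workdays = 5

messages_per_day = (received_per_week + sent_per_week) / workdays
minutes_per_day = messages_per_day * seconds_per_message / 60
print(f"{messages_per_day:.0f} messages/day, about {minutes_per_day:.0f} minutes/day on e-mail")
# -> roughly 160 messages and about 80 minutes a day: the "almost an hour
#    and a half" figure in the text

# Company-wide: at ninety-five cents of labor per message, how many messages
# per year does it take to cross the million-dollar mark?
cost_per_message = 0.95
messages_for_one_million = 1_000_000 / cost_per_message
print(f"{messages_for_one_million:,.0f} messages/year puts the bill past $1M")
# -> about 1.05 million messages a year across the whole company
```

Note that none of this captures the value side of the ledger—how much benefit all that messaging produced—which is exactly the difficulty the metric black hole names.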
Of course, just because it’s hard to measure metrics related to deep work doesn’t automatically lead to the conclusion that businesses will dismiss it. We have many examples of behaviors for which it’s hard to measure their bottom-line impact but that nevertheless flourish in our business culture; think, for example, of the three trends that opened this chapter, or the outsize executive salaries that puzzled Thomas Piketty. But without clear metrics to support it, any business behavior is vulnerable to unstable whim and shifting forces, and in this volatile scrum deep work has fared particularly poorly.

The reality of this metric black hole is the backdrop for the arguments that follow in this chapter. In these upcoming sections, I’ll describe various mind-sets and biases that have pushed business away from deep work and toward more distracting alternatives. None of these behaviors would survive long if it was clear that they were hurting the bottom line, but the metric black hole prevents this clarity and allows the shift toward distraction we increasingly encounter in the professional world.

The Principle of Least Resistance

When it comes to distracting behaviors embraced in the workplace, we must give a position of dominance to the now ubiquitous culture of connectivity, where one is expected to read and respond to e-mails (and related communication) quickly. In researching this topic, Harvard Business School professor Leslie Perlow found that the professionals she surveyed spent around twenty to twenty-five hours a week outside the office monitoring e-mail—believing it important to answer any e-mail (internal or external) within an hour of its arrival.

You might argue—as many do—that this behavior is necessary in many fast-paced businesses. But here’s where things get interesting: Perlow tested this claim. In more detail, she convinced executives at the Boston Consulting Group, a high-pressure management consulting firm with an ingrained culture of connectivity, to let her fiddle with the work habits of one of their teams. She wanted to test a simple question: Does it really help your work to be constantly connected? To do so, she did something extreme: She forced each member of the team to take one day out of the workweek completely off—no connectivity to anyone inside or outside the company. “At first, the team resisted the experiment,” she recalled about one of the trials. “The partner in charge, who had been very supportive of the basic idea, was suddenly nervous about having to tell her client that each member of her team would be off one day a week.” The consultants were equally nervous and worried that they were “putting their careers in jeopardy.” But the team didn’t lose their clients and its members did not lose their jobs. Instead, the consultants found more enjoyment in their work, better communication among themselves, more learning (as we might have predicted, given the connection between depth and skill development highlighted in the last chapter), and perhaps most important, “a better product delivered to the client.”

This motivates an interesting question: Why do so many follow the lead of the Boston Consulting Group and foster a culture of connectivity even though it’s likely, as Perlow found in her study, that it hurts employees’ well-being and productivity, and probably doesn’t help the bottom line? I think the answer can be found in the following reality of workplace behavior.
The Principle of Least Resistance: In a business setting, without clear feedback on the impact of various behaviors to the bottom line, we will tend toward behaviors that are easiest in the moment.

To return to our question about why cultures of connectivity persist, the answer, according to our principle, is because it’s easier. There are at least two big reasons why this is true. The first concerns responsiveness to your needs. If you work in an environment where you can get an answer to a question or a specific piece of information immediately when the need arises, this makes your life easier—at least, in the moment. If you couldn’t count on this quick response time you’d instead have to do more advance planning for your work, be more organized, and be prepared to put things aside for a while and turn your attention elsewhere while waiting for what you requested. All of this would make the day-to-day of your working life harder (even if it produced more satisfaction and a better outcome in the long term). The rise of professional instant messaging, mentioned earlier in this chapter, can be seen as this mind-set pushed toward an extreme. If receiving an e-mail reply within an hour makes your day easier, then getting an answer via instant message in under a minute would improve this gain by an order of magnitude.

The second reason that a culture of connectivity makes life easier is that it creates an environment where it becomes acceptable to run your day out of your inbox—responding to the latest missive with alacrity while others pile up behind it, all the while feeling satisfyingly productive (more on this soon). If e-mail were to move to the periphery of your workday, you’d be required to deploy a more thoughtful approach to figuring out what you should be working on and for how long. This type of planning is hard. Consider, for example, David Allen’s Getting Things Done task-management methodology, which is a well-respected system for intelligently managing competing workplace obligations. This system proposes a fifteen-element flowchart for making a decision on what to do next! It’s significantly easier to simply chime in on the latest cc’d e-mail thread.

I’m picking on constant connectivity as a case study in this discussion, but it’s just one of many examples of business behaviors that are antithetical to depth, and likely reducing the bottom-line value produced by the company, that nonetheless thrive because, in the absence of metrics, most people fall back on what’s easiest.

To name another example, consider the common practice of setting up regularly occurring meetings for projects. These meetings tend to pile up and fracture schedules to the point where sustained focus during the day becomes impossible. Why do they persist? They’re easier. For many, these standing meetings become a simple (but blunt) form of personal organization. Instead of trying to manage their time and obligations themselves, they let the impending meeting each week force them to take some action on a given project and more generally provide a highly visible simulacrum of progress.

Also consider the frustratingly common practice of forwarding an e-mail to one or more colleagues, labeled with a short open-ended interrogative, such as: “Thoughts?” These e-mails take the sender only a handful of seconds to write but can command many minutes (if not hours, in some cases) of time and attention from their recipients to work toward a coherent response.
A little more care in crafting the message by the sender could reduce the overall time spent by all parties by a significant fraction. So why are these easily avoidable and time-sucking e-mails so common? From the sender’s perspective, they’re easier. It’s a way to clear something out of their inbox—at least, temporarily—with a minimum amount of energy invested.

The Principle of Least Resistance, protected from scrutiny by the metric black hole, supports work cultures that save us from the short-term discomfort of concentration and planning, at the expense of long-term satisfaction and the production of real value. By doing so, this principle drives us toward shallow work in an economy that increasingly rewards depth. It’s not, however, the only trend that leverages the metric black hole to reduce depth. We must also consider the always present and always vexing demand toward “productivity,” the topic we’ll turn our attention to next.

Busyness as a Proxy for Productivity

There are a lot of things difficult about being a professor at a research-oriented university. But one benefit that this profession enjoys is clarity. How well or how poorly you’re doing as an academic researcher can be boiled down to a simple question: Are you publishing important papers? The answer to this question can even be quantified as a single number, such as the h-index: a formula, named for its inventor, Jorge Hirsch, that processes your publication and citation counts into a single value that approximates your impact on your field. In computer science, for example, an h-index score above 40 is difficult to achieve and once reached is considered the mark of a strong long-term career. On the other hand, if your h-index is in single digits when your case goes up for tenure review, you’re probably in trouble. Google Scholar, a tool popular among academics for finding research papers, even calculates your h-index automatically so you can be reminded, multiple times per week, precisely where you stand. (In case you’re wondering, as of the morning when I’m writing this chapter, I’m a 21.)

This clarity simplifies decisions about what work habits a professor adopts or abandons. Here, for example, is the late Nobel Prize–winning physicist Richard Feynman explaining in an interview one of his less orthodox productivity strategies:

To do real good physics work, you do need absolute solid lengths of time… it needs a lot of concentration… if you have a job administrating anything, you don’t have the time. So I have invented another myth for myself: that I’m irresponsible. I’m actively irresponsible. I tell everyone I don’t do anything. If anyone asks me to be on a committee for admissions, “no,” I tell them: I’m irresponsible.

Feynman was adamant in avoiding administrative duties because he knew they would only decrease his ability to do the one thing that mattered most in his professional life: “to do real good physics work.” Feynman, we can assume, was probably bad at responding to e-mails and would likely switch universities if you had tried to move him into an open office or demand that he tweet. Clarity about what matters provides clarity about what does not.

I mention the example of professors because they’re somewhat exceptional among knowledge workers, most of whom don’t share this transparency regarding how well they’re doing their job.
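To see just how mechanical the professor’s clarity is, here is a minimal sketch of the standard h-index calculation described above: the largest number h such that h of your papers have each been cited at least h times. The function name and the citation counts are invented for illustration; they are not drawn from any particular scholar’s record.

```python
def h_index(citations: list[int]) -> int:
    """Largest h such that h papers have at least h citations each."""
    cited = sorted(citations, reverse=True)  # most-cited papers first
    h = 0
    for rank, count in enumerate(cited, start=1):
        if count >= rank:  # the paper at this rank still has enough citations
            h = rank
        else:
            break
    return h

# Hypothetical citation counts for ten papers:
print(h_index([120, 80, 45, 33, 20, 9, 9, 4, 1, 0]))  # -> 7
```

Most knowledge work roles offer no single number like this to point to, which is the uncertainty described next.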
Here’s the social critic Matthew Crawford’s description of this uncertainty: “Managers themselves inhabit a bewildering psychic landscape, and are made anxious by the vague imperatives they must answer to.”

Though Crawford was speaking specifically to the plight of the knowledge work middle manager, the “bewildering psychic landscape” he references applies to many positions in this sector. As Crawford describes in his 2009 ode to the trades, Shop Class as Soulcraft, he quit his job as a Washington, D.C., think tank director to open a motorcycle repair shop exactly to escape this bewilderment. The feeling of taking a broken machine, struggling with it, then eventually enjoying a tangible indication that he had succeeded (the bike driving out of the shop under its own power) provides a concrete sense of accomplishment he struggled to replicate when his day revolved vaguely around reports and communications strategies.

A similar reality creates problems for many knowledge workers. They want to prove that they’re productive members of the team and are earning their keep, but they’re not entirely clear what this goal constitutes. They have no rising h-index or rack of repaired motorcycles to point to as evidence of their worth. To overcome this gap, many seem to be turning back to the last time when productivity was more universally observable: the industrial age.

To understand this claim, recall that with the rise of assembly lines came the rise of the Efficiency Movement, identified with its founder, Frederick Taylor, who would famously stand with a stopwatch monitoring the efficiency of worker movements—looking for ways to increase the speed at which they accomplished their tasks. In Taylor’s era, productivity was unambiguous: widgets created per unit of time. It seems that in today’s business landscape, many knowledge workers, bereft of other ideas, are turning toward this old definition of productivity in trying to solidify their value in the otherwise bewildering landscape of their professional lives. (David Allen, for example, even uses the specific phrase “cranking widgets” to describe a productive work flow.) Knowledge workers, I’m arguing, are tending toward increasingly visible busyness because they lack a better way to demonstrate their value. Let’s give this tendency a name.

Busyness as Proxy for Productivity: In the absence of clear indicators of what it means to be productive and valuable in their jobs, many knowledge workers turn back toward an industrial indicator of productivity: doing lots of stuff in a visible manner.

This mind-set provides another explanation for the popularity of many depth-destroying behaviors. If you send and answer e-mails at all hours, if you schedule and attend meetings constantly, if you weigh in on instant message systems like Hall within seconds when someone poses a new question, or if you roam your open office bouncing ideas off all whom you encounter—all of these behaviors make you seem busy in a public manner. If you’re using busyness as a proxy for productivity, then these behaviors can seem crucial for convincing yourself and others that you’re doing your job well.

This mind-set is not necessarily irrational. For some, their jobs really do depend on such behavior. In 2013, for example, Yahoo’s new CEO Marissa Mayer banned employees from working at home. She made this decision after checking the server logs for the virtual private network that Yahoo employees use to remotely log in to company servers.
Mayer was upset because the employees working from home didn’t sign in enough throughout the day. She was, in some sense, punishing her employees for not spending more time checking e-mail (one of the primary reasons to log in to the servers). “If you’re not visibly busy,” she signaled, “I’ll assume you’re not productive.”

Viewed objectively, however, this concept is anachronistic. Knowledge work is not an assembly line, and extracting value from information is an activity that’s often at odds with busyness, not supported by it. Remember, for example, Adam Grant, the academic from our last chapter who became the youngest full professor at Wharton by repeatedly shutting himself off from the outside world to concentrate on writing. Such behavior is the opposite of being publicly busy. If Grant worked for Yahoo, Marissa Mayer might have fired him. But this deep strategy turned out to produce a massive amount of value.

We could, of course, eliminate this anachronistic commitment to busyness if we could easily demonstrate its negative impact on the bottom line, but the metric black hole enters the scene at this point and prevents such clarity. This potent mixture of job ambiguity and lack of metrics to measure the effectiveness of different strategies allows behavior that can seem ridiculous when viewed objectively to thrive in the increasingly bewildering psychic landscape of our daily work.

As we’ll see next, however, even those who have a clear understanding of what it means to succeed in their knowledge work job can still be lured away from depth. All it takes is an ideology seductive enough to convince you to discard common sense.

The Cult of the Internet

Consider Alissa Rubin. She’s the New York Times’ bureau chief in Paris. Before that she was the bureau chief in Kabul, Afghanistan, where she reported from the front lines on the postwar reconstruction. Around the time I was writing this chapter, she was publishing a series of hard-hitting articles that looked at the French government’s complicity in the Rwandan genocide. Rubin, in other words, is a serious journalist who is good at her craft. She also, at what I can only assume is the persistent urging of her employer, tweets.

Rubin’s Twitter profile reveals a steady and somewhat desultory string of missives, one every two to four days, as if Rubin receives a regular notice from the Times’ social media desk (a real thing) reminding her to appease her followers. With few exceptions, the tweets simply mention an article she recently read and liked.

Rubin is a reporter, not a media personality. Her value to her paper is her ability to cultivate important sources, pull together facts, and write articles that make a splash. It’s the Alissa Rubins of the world who provide the Times with its reputation, and it’s this reputation that provides the foundation for the paper’s commercial success in an age of ubiquitous and addictive click-bait. So why is Alissa Rubin urged to regularly interrupt this necessarily deep work to provide, for free, shallow content to a service run by an unrelated media company based out of Silicon Valley? And perhaps even more important, why does this behavior seem so normal to most people? If we can answer these questions, we’ll better understand the final trend I want to discuss relevant to the question of why deep work has become so paradoxically rare.

A foundation for our answer can be found in a warning provided by the late communication theorist and New York University professor Neil Postman.
Writing in the early 1990s, as the personal computer revolution first accelerated, Postman argued that our society was sliding into a troubling relationship with technology. We were, he noted, no longer discussing the trade-offs surrounding new technologies, balancing the new efficiencies against the new problems introduced. If it’s high-tech, we began to instead assume, then it’s good.
