State of the Cloud 2024 - Bessemer Venture Partners PDF
Christine Deakers, 2024
Summary
This report analyzes the five key trends shaping the future of the AI cloud economy. It highlights the rapid advancement of AI foundation models, the intensifying competition amongst various technology players, and the emergence of novel architectures in the field. The report assesses the long-term implications and predictions for the AI cloud landscape.
Full Transcript
State of the Cloud 2024
The Legacy Cloud is dead — long live AI Cloud!
Christine Deakers

In last year’s State of the Cloud report, six months after the launch of ChatGPT, we declared that language model powered AI was here to stay and would impact nearly every cloud roadmap. In hindsight, this was the understatement of the year. Even our optimistic take underestimated the accelerated technological change that followed in the past 12 months (a.k.a. 12 “AI years”).

In 2023, our enthusiasm for AI was underscored by our committed $1 billion in funding. A year later, we have total conviction in the promise of AI — we’ve backed multiple new investments in AI startups and the broad adoption of LLM technology in nearly every portfolio company roadmap. It’s rare to find a cloud company these days that isn’t, at some level, an AI company.

We’re not alone, of course, in our enthusiasm. New technology waves often whet VC appetites, but the speed of VC reaction to this wave is wild compared to historical precedents. While the BVP Nasdaq Emerging Cloud Index (EMCLOUD) remains down from ZIRP highs and trades at historical norms, the private sector has rebounded and arguably bubbled up again, largely on the back of AI Cloud.

And AI is all we can think about. Yes, there are still meaningful opportunities for the legacy cloud, with embedded payments and payroll continuing to open seams in new verticals. These verticals include underserved sectors like supply chain and logistics, trucking, govtech, and climate, which are seeing signs of life. And yes, there is drama watching cloud companies that were overvalued and overfunded in the Covid bubble crash come down to earth or pivot to austerity. But as we surveyed our 62 global investors, we weren’t surprised that 124 eyes were trained on what we collectively agree is the largest shift in technology in our lifetimes.
This year’s State of the Cloud report serves both as a eulogy for the legacy cloud and a celebration of the current AI Cloud moment. In the words ahead, we focus entirely on the top five trends deemed most relevant to the AI Cloud by our global team of investors and the implications for the thrilling years we predict are headed our way. For State of the Cloud 2024, we delve into five of the most powerful trends shaping the future of the AI Cloud economy and our predictions for what to expect by 2030.

The five trends defining AI Cloud in 2024

Trend 1: AI foundation models set the stage for Big Tech’s new battle-of-the-century

When we reflect on the platform shifts of yesteryear — from Browsers and Search to Mobile and Cloud — every technological change catalyzed competition to control the foundational layer. The age of AI is no different. Foundation models are the new “oil” that will fuel downstream AI applications and tooling.

In 2023, foundation model companies captured the lion's share of venture funding in AI, accounting for over 60% of total AI dollars. Players such as OpenAI, Anthropic, Mistral, and Cohere, among others, raised $23 billion at an aggregate market cap of $124 billion, underscoring their critical role in the AI ecosystem globally. Notably, this influx of capital was primarily driven by corporate VCs, who represented 90% of private GenAI fundraising in 2023 (up from 40% in 2022, according to Morgan Stanley). Big Tech companies such as Microsoft, Google, Amazon, NVIDIA, and Oracle now have significant stakes in foundation model companies, as these investments are strategically aligned to enhance the AI capabilities of these tech giants, driving consumption of their core cloud and compute services. This is in addition to Big Tech companies working on their own in-house foundation model initiatives, such as Google’s Gemini and Meta AI’s Llama.
With so much funding flowing into this fundamental layer, competition is intensifying at an unprecedented pace, driving an incredible amount of innovation in the ecosystem. Here are some key trends we’ve observed in 2023:

Base models improving rapidly: General-purpose LLMs are getting better and better every second, both in terms of base performance capabilities (such as accuracy and latency) and at the frontier, including multi-modal functionality. The launch of GPT-4o left all our jaws dropping — the new release demonstrated the capability to view and understand video and audio from an uploaded file, as well as generate short videos. The dizzying pace of model improvement raised obvious questions around investment strategy in models that seem to have a half-life measured in months.

Battle between open and closed source intensifies: As we touched upon in last year’s State of the Cloud, the open source vs. closed source debate remains a hot topic in 2024, as open-source leaders closely track closed-source model performance, especially with the recent launch of Llama 3. New questions have been raised around regulatory impact, whether closed-source players should open-source their older models as part of a new commercialization strategy, or whether this might be the first time in history where an open-source leader becomes the winner of this market.

Small model movement gets big: Additionally, 2023 also saw the rise of the small model movement, with HuggingFace CEO and Co-founder Clem Delangue declaring 2024 will be the year of SLMs. Compared to larger counterparts, examples like Mistral's 8x22B model, which launched this year, have shown that bigger isn’t always better in terms of performance, and that small models can have significant cost and latency benefits.
Emergence of novel architectures and special-purpose foundation models: In 2023, we also saw excitement around the emergence of novel model architectures beyond the transformer, such as state-space models and geometric deep learning, pushing the frontier on foundation models that can be less computationally intensive, able to handle longer context, or exhibit structured reasoning. We also saw an explosion of teams training special-purpose models for code generation, biology, video, image, speech, robotics, music, physics, brainwaves, etc., adding another vector of diversity into the model layer. We discuss this trend in our recent AI Infrastructure roadmap.

With so much happening at the foundational layer, it often seems like the ground is shifting beneath our feet every second. Despite the copious amount of funding that has been invested here, there isn’t necessarily a clear consensus on the winner right now.

Prediction: The battle of AI models will remain white-hot for the foreseeable future, since this is a critical “land grab” that determines which Big Tech companies reign supreme within the cloud and compute markets in the coming years.

There are three possible realities we expect to see in the foreseeable future on who will capture the most value in this model layer fight:

Reality 1: The model layer becomes commoditized. Will hundreds of millions of dollars of capital be squandered as VCs and Big Tech back the derby of AI leaders? Being the most well-capitalized does not guarantee that a model company will become the winner, as open-source models continue to closely challenge the leading market players. But a future where AI models are commoditized doesn’t necessarily imply the value of models will diminish. AI models as commodities will be akin to compute or oil as commodities — they’ll one day become the assets essential for global business operations.
In this reality, the ultimate value in the AI ecosystem will be captured by compute and cloud service providers, marketplaces, and applications — not by the models themselves. However, in a world where AI models are commoditized, as we’ve seen in the oil market, this could still give rise to one or two extremely valuable companies selling these “commodities.”

Reality 2: AI Model Giants split the pie. Similar to the Cloud Wars, a handful of notable new model companies, heavily supported by Big Tech strategics or corporate VCs, will own the foundational model ecosystem and become giants. Each of these winners will find a differentiated wedge to pair with technological differentiation, whether that’s via distribution, price/cost efficiencies, regulatory impact, etc. There could still be a long tail of different players (especially open-source), but value will accrue to the top handful of model players. It’s not only superior technology that will determine tomorrow’s AI Giants, but also their established distribution.

Reality 3: AI models become as diverse and popular as the potato chip market. Just like there are endless flavors of potato chips, the future of the AI model economy could very well look something like the snack aisle at your local grocery store. Many model companies can thrive, as there are enough differentiated use cases (e.g., modalities, performance, latency, cost, security, etc.) for different model companies to survive. Additionally, geography and regulation could play a role here if geopolitical considerations enter the realm of AI models, with a mix of regulations and sovereign concerns supporting the proliferation of diversity in this layer.

Prediction: While we’re far from consensus, a slight majority of our partnership predicts closed-source models will drive the bulk of LLM compute cycles, and AI Model Giants will eventually split the economic pie (Reality #2).
We expect to see Cloud Giants leverage their access to compute, chips, and capital to influence the battle in their favor. And the frontrunners are already in the race — Microsoft/OpenAI, AWS/Anthropic, Google/Gemini, with Meta/Llama as the Linux-equivalent OSS alternative, and Mistral as a European lead.

Deep Dive: A behind-the-scenes conversation on the AI model layer

Trend 2: AI turning all of us into 10X developers

The modern engineer has always been part builder and part student — completing a day job while constantly working to stay up-to-date on new languages, frameworks, infrastructure, etc. The AI quake has added a Ph.D. requirement to the job, as developers face a completely new set of toolchains and best practices for leveraging constantly evolving LLMs, including a new infrastructure suite for data management, curation, prompting, pre-training, and fine-tuning. Each year in the AI era requires coming up to speed on a decade’s worth of new developer knowledge. But AI may also offer the solution to this new level of complexity. 2023 saw widespread adoption of code copilots, and the first few months of 2024 saw early breakthroughs in agentic tooling that suggest that end-to-end automation of simple code tasks, and perhaps much more, may be arriving sooner than we might have expected.

Prediction: The role of the developer will be radically transformed, perhaps more than any other profession, by AI. By the end of the decade, significant developer capability will be available to every human with a computer. The resulting rate of software development will melt keyboards and dramatically reduce the age of the average technology startup founder.

Three main areas are driving the lightning-speed evolution of the AI developer economy:

1. The code copilot industry has been a hotbed of innovation and competition, with $3.9 billion of VC dollars invested in 2023 for GenAI technology and tools.
GitHub’s incumbent Copilot product, powered by OpenAI’s GPT-4 and Codex models, is well-penetrated with north of 14 million installs. A long tail of well-funded and scaling startup competitors, such as Tabnine, Magic.dev, Augment, Poolside, Cursor AI, OpenDevin, Cognition’s Devin, and Supermaven, are building and iterating with developers in the loop. Some, like Magic.dev, Poolside, Augment, and Supermaven, are pre-training their own large AI models with an emphasis on model properties such as context, latency, etc. Others, like Cursor, are model agnostic and are focused on the developer experience, interface, and workflows. This landscape is a good example of the capital intensity of model-layer AI companies; Magic.dev, Augment, Poolside, and Devin have each raised $150M+ in the last couple of years.

2. The “graduation motion” of copilots embedding agentic search and generation functionality will drive outsized value in the coming years. Devin, SWE-agent, and OpenDevin have demonstrated the potential of end-to-end agentic tools that interact with developer environments (i.e., file editor, bash shell) and the internet to complete coding tasks. Underpinning these agentic demos are rapid advancements in code-language reasoning, agentic trajectory planning (approaches vary across prompting, behavioral cloning / fine-tuning, and reinforcement learning), and various agent-computer interface (ACI) improvements (i.e., abstractions and infrastructure across the browser and operating system that enable seamless agentic tool querying and self-correction).

3. Code-language reasoning will remain an epicenter of AI activity, set up to benefit from both model-layer innovation (e.g., GPT-4, Claude 3 Opus) and novel reasoning/agentic paradigms (e.g., Cognition’s Devin, SWE-agent, OpenDevin). Model-layer improvements will flow down into code editing and completion quality, with value ultimately accruing to developers and software organizations.
Beyond code reasoning, systems that push the boundaries of latency and context size, and that expand the language domain / pre-training set, will also drive outsized value for developers. AI is driving innovation and upheaval alike, accelerating developer velocity, productivity, and leverage for software organizations. Forward-thinking software organizations are routinely surveying the landscape for emerging tools and vendors, and rapidly prioritizing and adopting high-value developer software. Developer budgets are flowing once again, and the willingness to pay is high for tools that have visible impact.

For developer entrepreneurs, this is an exciting time to be building; opportunities abound across copilots but also infrastructure, dev tooling, QA, IT configuration and provisioning, security operations monitoring, penetration testing, and on and on. Copilots are perhaps the most obvious opportunity at the moment, but that makes them likely the most competitive playing field. We have also seen an explosion of tools in more specific developer domains — from SecOps in security to SRE to QA and pen testing. These tools use LLMs to abstract away low-level complexity and automate highly time-consuming, painful engineering tasks, freeing up engineering resources for higher-order tasks. The integration of AI in DevOps processes will enhance CI/CD pipelines, automated testing, and deployment strategies, leading to faster and more reliable software delivery.

Code refactoring is another great example of AI’s impact on the developer workflow and ecosystem. Many modern engineering teams spend only a fraction of their FTE time writing net-new code. At large organizations in particular, a large fraction of SWE time is spent on the less “sexy” parts of the software engineering role: maintaining, securing, and testing code. Many of these tasks, such as code refactoring, require deep knowledge of the stack and are often unwieldy projects performed with dread by senior engineers.
AI has obvious potential to address these challenges; startups like Gitar, Grit, ModelCode, and others leverage code-gen models, static analysis, and AST parsers to interpret code structure and migrate code across languages, package libraries, and frameworks. Some of these efforts are focused on modern web frameworks, while others work with brittle legacy engineering stacks (i.e., COBOL, Perl, etc.) where fluent engineers are becoming obsolete over time. Many workflows adjacent to the core software engineering function are also highly time-intensive, repetitive, and ripe for automation.

Prediction: By 2030, a majority of corporate software developers will become something more akin to software reviewers. The cost of development will fall, and as experienced developers become more productive, their salaries will rise. AI will impact the scope and skills required for all job markets, but perhaps none more so than those of the developer. AI enhancements will not only vastly improve the productivity of this occupation, but also expand the boundaries of the developer universe. By the end of this decade, development capability will be an accessible skill to most of the global population.

Deep Dive: A behind-the-scenes conversation on AI Developer Tools

Trend 3: Multimodal models and AI agents will transform human relationships with software

The rise of multimodal models and AI Agents is leading the next wave of innovation in AI, and dramatically expanding AI’s potential applications to far broader use cases than early text-based models achieved. There’s a greenfield opportunity for AI entrepreneurs to innovate across new modalities, such as voice, image, and video, as well as agentic workflows. These new modalities give AI the equivalent of the human capabilities of vision, hearing, and speech, which unlocks opportunities for AI to play a role in augmenting the large share of human work that is dependent on these senses.
In the next 12 months, we expect voice AI applications in particular to see breakout growth. Over the longer term, we also see the promise of agent-first products changing the way businesses operate, as they set new expectations in terms of the complexity and breadth of tasks that AI can be entrusted to handle.

Prediction: Voice AI applications will unlock $10B of new software TAM over the next five years.

Recent progress is undeniable:

Voice

The first wave of voice AI companies was primarily leveraging advancements in Automatic Speech Recognition (ASR), such as Abridge, which offers the leading product for transcribing notes from doctor-patient conversations, and Rillavoice, which captures field sales representatives' conversations with customers to assist in sales training. We are now seeing a new wave of voice AI companies that are developing conversational voice products capable of handling tedious and repetitive workflows, empowering humans across sales, recruiting, customer success, and administrative use cases to focus on higher-value work. One example from our portfolio is Ada, which has taken advantage of recent voice breakthroughs to expand their chat-based customer support product to incorporate conversational voice.

Underpinning these developments are new voice architectures. We are seeing a shift from cascading architectures (ASR transcribes audio to text, which is passed into an LLM, then text is fed back into a Text-to-Speech model) to speech-native architectures, as exhibited by new models such as GPT-4o, that can process and reason on raw audio data without ever transcribing to text and respond in native audio. This transition will enable conversational voice products with much lower latency and much greater understanding of non-textual information like emotion, tone, and sentiment, most of which gets lost in cascading architectures.
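To make the cascading-architecture point concrete, here is a minimal sketch of the three-hop pipeline described above, with every stage stubbed out as a placeholder. The function names and stub behavior are illustrative assumptions, not any vendor's API; a real system would call an ASR service, an LLM, and a TTS engine, and each hop adds latency.

```python
def asr(audio: bytes) -> str:
    """Stub ASR stage: transcribe audio to text.

    This is where tone, emotion, and sentiment are discarded in a
    cascading design: only the words survive the transcription.
    """
    return audio.decode("utf-8")  # pretend the audio bytes "are" the transcript


def llm(prompt: str) -> str:
    """Stub LLM stage: produce a text reply from the transcript."""
    return f"Reply to: {prompt}"


def tts(text: str) -> bytes:
    """Stub Text-to-Speech stage: synthesize audio from the reply text."""
    return text.encode("utf-8")


def cascading_pipeline(audio: bytes) -> bytes:
    """ASR -> LLM -> TTS: three sequential model calls per conversational turn.

    A speech-native model (in the style of GPT-4o) would instead map
    audio to audio in a single call, cutting latency and keeping the
    non-textual cues available to the model throughout.
    """
    transcript = asr(audio)
    reply_text = llm(transcript)
    return tts(reply_text)
```

The structural point is that latency and information loss are properties of the pipeline shape itself, independent of how good each individual stage is.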
These advancements will result in conversational voice experiences that are truly real-time and can help users resolve their issues faster and with far less frustration than prior generations of voice automation. AI voice applications are emerging in many industries, including auto dealerships, retail, restaurants, and home services. A large portion — or even a majority — of inbound sales calls are missed because they happen outside of business hours, and in these cases, AI is primed to pick up the slack. AI voice applications in sales tend to be incredibly high-ROI use cases because the AI is essentially picking up lost revenue for these businesses, and can thus offer a really compelling value proposition.

Entrepreneurs building at the forefront of voice AI are more equipped than ever to deliver interfaces that are increasingly natural and conversational, capable of providing near human-level performance. We expect to see an explosion of companies across the Voice AI stack (see below), many of which will experience truly breakout growth. In the process, we also expect consumer expectations around interacting with voice AI to change, as modern conversational voice applications start to deliver far more natural experiences for users and ultimately get them to resolution much faster.

Image / Video

Computer vision models have existed for years, but what is so exciting about the new generation of multi-modal LLMs is their ability to combine their understanding of image and text data (among other modalities), as this combination is extremely useful for many tasks. The initial wave of enterprise-based image applications was focused largely on data extraction use cases. We have seen companies like Raft ingest freight documents, extracting critical information to populate the customer’s ERP and automate invoice reconciliation workflows.
As the underlying models keep improving, we believe we will see a host of vertical-specific image and video processing applications emerge that will also be able to ingest increasing amounts of data to power their applications. We have also seen applications in engineering and design that leverage vision models and image generation models to help reason on graphical data, like schematics, or generate renderings of a building design. For example, Flux.ai offers an AI copilot that helps electrical engineers generate printed circuit board components in their design software, based on ingesting a PDF spec sheet for the component.

Autonomous AI Agents

One of the most exciting emerging themes in AI is the development of AI Agents, capable of handling complex multi-step tasks end-to-end, fully autonomously. While most AI agents don’t yet operate reliably enough to function autonomously in complex use cases, progress on agentic workflows is moving very quickly, and we are seeing glimpses of what is possible. Each new demo is better than the last, with Cognition AI’s Devin — the AI software engineer — hinting at what’s possible as AI’s planning and reasoning capabilities continue to expand. More applications are beginning to implement AI agents in highly constrained use cases in which they can limit the impact of compounding errors across multi-step processes. For example, enterprises are leveraging solutions like Bessemer portfolio company Leena AI, which provides AI agents to support employees with IT, HR, and Finance-related tasks, helping these teams free themselves of busywork and improving the employee experience. In addition, new models are emerging with stronger reasoning capabilities that can further empower agents to execute more complex workflows.
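The "highly constrained use cases" pattern described above can be sketched as a plan-act-observe loop with a hard step budget, which is one simple way to limit compounding errors. Everything here is hypothetical scaffolding under stated assumptions (the toy policy and tool names are invented for illustration), not any product's actual design:

```python
def run_agent(task, tools, policy, max_steps=5):
    """Run a tool-using agent until the policy emits a 'finish' action.

    policy: callable (task, history) -> (action, argument); in a real
            system this would be an LLM-backed planner.
    tools:  dict mapping action names to callables (file editor, shell,
            search, etc. in real agent stacks).
    The max_steps budget caps how far errors can compound before the
    task is escalated back to a human.
    """
    history = []
    for _ in range(max_steps):
        action, arg = policy(task, history)
        if action == "finish":
            return arg, history          # agent declares the task done
        observation = tools[action](arg)  # act, then record what happened
        history.append((action, arg, observation))
    return None, history                  # budget exhausted: hand off to a human


# Toy policy and tool to exercise the loop: look the task up once,
# then finish with whatever the lookup returned.
def toy_policy(task, history):
    if not history:
        return "lookup", task
    return "finish", history[-1][2]


tools = {"lookup": lambda query: f"answer-for-{query}"}
```

Research directions like self-reflection or multi-agent collaboration would replace the toy policy with richer control flow, but the budgeted loop around tool calls is the common skeleton.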
And perhaps more interesting, there is a flurry of research focused on new architectural approaches to improve agent implementations through various methods, including chain-of-thought reasoning, self-reflection, tool use, planning, and multi-agent collaboration.

2023 was the year we saw an explosion of AI applications focused on text-based use cases. In 2024, we predict multimodal models will open up new frontiers in terms of both the capabilities and the use cases in which we see AI being used at the application layer. This will lead to a new wave of applications bringing near-human capabilities to markets ranging from large enterprises to small businesses within specific verticals, and will even unlock exciting potential for consumer apps.

Deep Dive: A behind-the-scenes conversation on Multimodal Models and AI Agents

Trend 4: Vertical AI shows potential to dwarf legacy SaaS with new applications and business models

Vertical SaaS proved to be a sleeping giant that transformed industries during the first cloud revolution. Today, the top 20 US publicly traded vertical SaaS companies represent a combined market capitalization of ~$300 billion, with more than half of these companies having IPO'd in the last ten years. Now the rise of large language models (LLMs) has sparked the next wave of vertical SaaS, as we see the creation of new LLM-native companies targeting new functions, and at times industries, that were out of bounds for legacy vertical SaaS; notably, Vertical AI applications target the high-cost, repetitive, language-based tasks that dominate numerous verticals and large sectors of the economy. The US Bureau of Labor Statistics cites the Business and Professional Services industry at 13% of US GDP, making this sector alone, dominated by repetitive language tasks, ~10x the size of the software industry. Beyond the professional services sector, within every industry vertical, repetitive language-based tasks represent a significant share of activity.
We believe vertical AI will compete for a meaningful share of these dollars and will also drive activity in areas where human labor was insufficient. For example, Bessemer portfolio company EvenUp automates third-party legal services as well as internal paralegal workflows. Moreover, EvenUp opens up task areas where human labor was formerly too expensive or inconsistent to be applied. This multi-dimensional expansion holds implications for Vertical AI across all sectors of the economy.

Prediction: Vertical AI’s market capitalization will be at least 10x the size of legacy Vertical SaaS as Vertical AI takes on the services economy and unleashes new business models.

Copilots, Autopilots, and AI-enabled Services make up the three new business models of the Vertical AI economy. Vertical AI is being delivered via several different business models, thus increasing the odds of matching AI capability with a given industry need.

Copilots accelerate efficiency among workers by leveraging LLMs to automate tasks. Sixfold, for example, supercharges insurance underwriters to better analyze data and understand risk. In the copilot model, the AI application sits side-by-side with the human user to make the user more successful.

While copilots help employees do their work, Agents fully automate workflows and replace the user. Agents tend to focus on specific functions inside of vertical companies, like outbound sales or inbound call reception. Slang AI, for example, handles inbound calls for restaurants to book reservations, answer questions, and more.

Finally, we are seeing the emergence of AI-enabled Services. These are services typically outsourced to a third-party provider, like accounting, legal services, medical billing, etc. Because they are so people-intensive, these businesses have traditionally been lower margin, hard to scale, difficult to differentiate, and less valuable than technology businesses.
By using software to automate work, these AI-powered services companies aim to deliver cheaper, better, and faster services to the market and take share from incumbent service-oriented businesses. SmarterDx, for example, uses AI to audit inpatient claims on behalf of health systems and hospitals before the bill and corresponding clinical documentation are sent to a payer. These pre-bill services were traditionally outsourced to vendors that used physicians and nurses to do this audit work.

Early signal on Vertical AI business model strength from the Bessemer portfolio

Bessemer was fortunate to back the legacy SaaS leader in several verticals — and now we have one of the largest Vertical AI portfolios, particularly with businesses that have reached mid to growth stages. As a result, we already have meaningful data we can use to compare Vertical AI companies against legacy vertical SaaS comps. And while we’re as excited as any VC about the power of language models, we’re growing equally excited by the early data we’re seeing on Vertical AI business models. Three analyses of our Vertical AI portfolio hint at the strength of this new class of applications.

First, we’ll note that for the most part, Vertical AI players are leading with functionality that is not competing with legacy SaaS. The utility of these applications is typically complementary to a legacy SaaS product (if one exists at all), and thus they are not being asked to replicate and displace an incumbent. Equally exciting, these Vertical AI upstarts are already commanding ~80% of the ACV of the traditional core vertical SaaS systems. And these Vertical AI players are just getting started, with obvious potential to expand ACVs. This data demonstrates Vertical AI’s ability, by replacing services spending, to unlock significant spend within vertical end markets and deliver TAMs that may ultimately be a significant multiple of those enjoyed by legacy SaaS.
We’re also encouraged by the efficiency and growth profile of our Vertical AI companies with meaningful scale ($4M ARR+). This is a cohort of companies growing as fast as any we’ve ever seen, at ~400% year-over-year. Just as impressive, these companies are also demonstrating healthy efficiency, with an average ~65% gross margin and a ~1.1x BVP Efficiency Ratio (Net New CARR / Net Burn). We believe these companies will only improve margins over time, as we’ve historically seen in software, and thus are, as a category, well positioned to be enduring standalone public companies.

Finally, we analyzed the percent of revenue these Vertical AI companies are spending on model costs to address the concern that many of these applications are simply thin wrappers. On average, these companies are currently spending only ~10% of their revenue on model costs, or ~25% of their total COGS. Thus these vertical applications built on top of LLMs are already generating margins ~6x the underlying model expenses. With model costs dropping rapidly and these startups just starting to optimize their spend, we believe these attractive margins will only get better. Overall, while we expect massive value creation in the model layer, this data tells us that, as with past infrastructure innovations, the majority of enterprise value will once again be captured in the application layer.

Vertical software incumbents are not completely asleep at the wheel. Companies like Thomson Reuters (acquiring CaseText for $650M) and Docusign (acquiring Lexion for $165M) have made some of the first high-profile Vertical AI acquisitions. But we believe we’re still near the starting line for a Vertical AI marathon…albeit one where the runners may sprint the entire race. With early startup leaders such as EvenUp, Abridge, Rilla, Axion, and others growing at impressive clips, we expect new enduring public Vertical AI companies to be born in a few short years.
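The unit economics cited above reduce to simple ratios; this sketch restates that arithmetic, using only approximate figures taken from the text (the dollar inputs below are illustrative placeholders, not actual portfolio data):

```python
def bvp_efficiency_ratio(net_new_carr: float, net_burn: float) -> float:
    """BVP Efficiency Ratio = Net New CARR / Net Burn.

    A ratio of ~1.1x means a company adds roughly $1.10 of net new
    committed ARR for every $1.00 of net burn.
    """
    return net_new_carr / net_burn


def model_cost_multiple(gross_margin_pct: float,
                        model_cost_pct_of_revenue: float) -> float:
    """How many times gross margin covers model spend, as a share of revenue."""
    return gross_margin_pct / model_cost_pct_of_revenue


# Illustrative inputs only: $11M net new CARR against $10M net burn
# gives the ~1.1x ratio cited above.
ratio = bvp_efficiency_ratio(net_new_carr=11.0, net_burn=10.0)

# ~65% gross margin against ~10% of revenue spent on model costs
# gives the "margins ~6x model expenses" claim (6.5x here).
multiple = model_cost_multiple(gross_margin_pct=65, model_cost_pct_of_revenue=10)
```

Note the internal consistency check these figures allow: model costs at ~10% of revenue and ~25% of COGS imply total COGS of ~40% of revenue, i.e., a gross margin around 60%, in line with the ~65% average reported for the cohort.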
Based on the growth rates at scale we are already seeing, we predict we will see at least five Vertical AI Centaurs ($100M+ ARR) emerge within the next two to three years.

Prediction: The first Vertical AI IPO will occur within the next three years.

Deep Dive: A behind-the-scenes conversation on Vertical AI

Trend 5: AI brings Consumer Cloud back from the dead

It’s no secret that consumer cloud has had a slow decade. (We define consumer cloud as companies that provide cloud-based storage, compute, and digital applications directly to individual consumers, including, at times, concurrent B2B and “prosumer” offerings.) To illustrate just how slow, we analyzed the past eight years of Cloud 100 data — the definitive ranking of the top 100 private cloud companies published by Bessemer, Forbes, and Salesforce Ventures every year since 2016. Only 4% of the cumulative lists since inception represented companies with a consumer offering, sometimes alongside a much more prominent B2B offering (e.g., Zoom in 2016 and, more recently, OpenAI in 2023). Arguably, we have not seen an exit of a ‘pure’ consumer cloud company since the once-upon-a-time decacorn, Dropbox, which had its IPO in 2018.

Consumer cloud unicorns have historically been built in the aftermath of major enabling technology shifts. But we had not seen a widespread, relevant quake in consumer-facing technology since the launch of the iPhone and subsequent developments in social media platforms almost fifteen years ago. However, two years ago, consumers heard a major rumble. As the fast-evolving multi-modal capabilities of LLMs allow us to extend and enhance our text, visual, and auditory senses in previously impossible ways, we’re seeing potential for disruption open up in every category of legacy consumer cloud. One measure of AI’s consumptive power is how much these applications gobble up our time and attention.
For example, ChatGPT is now neck and neck with leaders in the Attention Economy, such as Reddit, with other general-purpose AI assistants, including Claude and Gemini, quickly gaining traction. In addition to the general-purpose assistants mentioned above, we’re already seeing examples of consumer AI companies that are driving innovation in their respective categories. These include Perplexity for search, Character.ai for companionship, Midjourney for image creativity, Suno and Udio for music generation, and Luma, Viggle, and Pika for video generation. These companies are demonstrating the potential of LLM-native applications to attract and retain a dedicated user base and, in some cases, effectively displace modern incumbents. With AI changing the way we engage and play with technology, this is one of the most exciting times to be a consumer cloud builder and investor. We expect multiple consumer cloud IPOs over the course of the next five years.

Prediction: With the startling rise of synthetic media, new consumer applications, and conversational AI agents, we predict that by 2030 the top three businesses dominating the Attention Economy will be based on AI-generated content or products.

We’re also seeing significant early-stage activity in the longer tail of functionally specific consumer AI applications (e.g., content generation and editing, education), as evidenced by monthly web and app visits. The good news is that these signals indicate strong consumer demand and excitement – an early indication that consumers are looking for AI to enhance their lives. The bad news is that there are not yet more than 10 category-specific consumer AI-native apps that have shown clear signs of product depth beyond thin wrappers, or proven sustained customer love in the form of strong retention. We believe there is still a clear opportunity for motivated entrepreneurs to build enduring cloud companies over the coming months and years to address many unmet consumer needs.
As we look across consumer needs, we’re asking ourselves two key questions to understand where value will accrue in this LLM moment: How acutely painful or labor-intensive is the status quo for the consumer? How much repetitive, predictable language, visual, or auditory effort is required? As we ask these questions, we are reexamining every daily need and pain point in a consumer’s life, but we are also not limiting our imagination to established consumer needs. We believe there are large businesses to be built delivering novel utility to consumers, such as clones and companions, creativity and creation, interactive entertainment, and memory augmentation, along with many other yet-to-be-invented markets. We are also excitedly tracking the novel form factors that are starting to surface to address specific consumer needs. Since we can’t predict the future, we can’t say exactly what form factor AI will take as it penetrates consumer life. However, hand-held devices, wearables, and household objects (toys, frames, mirrors) are already starting to emerge, at least as prototypes, as potential harbingers of startups to come. AI will not only reinvent our favorite pastimes (e.g., social, entertainment, shopping, travel), but also help us discover and reimagine new ways for people to connect, play, buy, and explore the world. There is plenty yet to be figured out. From an investment standpoint, we question which consumer demands will be fulfilled by general-purpose AI assistants (including, for example, Siri on the iPhone) versus standalone applications, not to mention the ethical considerations that will emerge alongside these future products. Despite many unknowns on the horizon, we believe the early signals clearly indicate that the LLM revolution will change all of our lives and rejuvenate the consumer cloud landscape.

Deep Dive: A behind-the-scenes conversation on Consumer Cloud

Conclusion — AI Cloud: reality vs.
hype

“We tend to overestimate the effect of a technology in the short run and underestimate the effect in the long run” - Roy Amara

Boy, did Roy Amara have VCs’ number in many past tech waves. Dot-com for sure. Nanotech. Cleantech. Blockchain. Even boring old SaaS flew too close to the sun in 2021. So does the hype around the AI Cloud exceed reality? Are we destined for a reckoning in the next year or two, when we admit the promise of AI got cloud VCs out over our skis? Or is AI threatening to shatter “Amara’s law” and become the first tech wave where reality outpaces the insane hype? A survey of several dozen Bessemer investors working around the globe, in all stages and across every technology sector, gives a clear answer: so far, the hype is well deserved. Everywhere we turn we see evidence of AI impact at levels without historical precedent. The vast majority of our portfolio has adopted AI technology internally and is updating product roadmaps to incorporate AI. AI-native portfolio companies are driving meaningful commercial traction, growing as rapidly and efficiently as any cohort we’ve ever witnessed. Our global investor group has responded, spending the vast majority of its cycles tracking the AI phenomenon in all regions of the cloud. Tellingly and remarkably, a half dozen trends within the AI wave are competing fiercely for our team’s interest — speaking again to the size and intensity of the AI Cloud phenomenon. A look back at our predictions from last year provides more evidence of our inability, despite our considerable optimism and excitement, to fully predict the speed and magnitude of this change. Specifically, we predicted that AI-native companies would reach $1 billion in revenue 50% faster than their legacy cloud counterparts. OpenAI reportedly reached $2 billion in revenue in February of this year and was just reported to cross a $3.4 billion run rate months later. Anthropic has reportedly projected reaching $850 million in annualized revenue by the end of 2024.
Other reports estimate that Midjourney brings in $200 million in revenue, with Character.ai at a similar scale. Yeah, we’re ready to call that prediction. Our final prediction is that when we look back a year from now in the 2025 State of the Cloud, AI will not have given up a photon of its current spotlight — while we’ll surely have hits and misses, the progress on average will exceed our most aggressive predictions, and we’ll be even more excited then than we are now to turn the clock forward again and see where this amazing moment takes us.