


Applying AI: How to find and prioritize AI use cases

“Don’t waste time on AI for AI’s sake. Be motivated by what it will do for you, not by how sci-fi it sounds.” Cassie Kozyrkov, Chief Decision Scientist at Google

Introduction and Motivation

Some companies truly struggle to define and prioritize AI use cases. Good for them! The real problem arises for the many enterprises that grossly underestimate the challenge, feeling they have done the same thing many times before. After all, everyone went through Digital programs, and most have identified application areas for Big Data. A common assumption is that the methods for selecting successful AI use cases should not be all that different. The results are failed prototypes, but more often great initial demos followed by a quagmire of disappointments. For internal processes, AI deployment hits all kinds of practical implementation hurdles; alternatively, scaling fails or the maintenance effort escalates. When AI is applied in offerings, glitches turn customers off, and the sales process draws out when customer data is involved or there is no willingness to pay. In all cases, the high hopes of value creation rarely materialize.

All of this might seem to be the inevitable collateral damage of applying a disruptive technology. But it is not! A strong body of experience and deep expertise on the subject is already available. With the proper preparation and understanding of the specific challenges of AI use cases and how to address them, many of the above-mentioned developments can be anticipated and avoided.

This paper shares critical lessons specific to identifying and prioritizing AI use cases and provides a guide around the pitfalls. Fundamentally, the intrinsic interweaving of data and learning algorithms introduces process and business aspects that were absent in both Digital and Big Data (without learning or even active decision making by a system). These critically influence the nature, the value, and the ease of implementation of AI use cases. We support our analysis with instructive practical examples from in-depth experience on the subject in order to render it as accessible as possible to the business reader. Enjoy the ride!

Elements of a comprehensive AI strategy

There is little doubt that AI will become relevant for all companies, regardless of their industry or size. When it comes to creating value from AI, several pitfalls can be observed in practice, including the isolation of AI use cases, the lack of resources and capabilities, and a poor understanding of use cases and applications. To avoid these, a systematic approach towards AI is needed. From the very beginning, you need to be clear on the overarching objectives or purpose of your company: What is its goal? Furthermore, it is necessary to understand how AI can help to achieve your objectives.

A comprehensive AI strategy consists of three parts: an AI vision, a portfolio of AI use cases, and a clear strategy for the required enabling factors. A company’s AI vision sets the high-level goals of any AI application to be developed or deployed. It includes an understanding of the current position of the company, its competitive position and industry dynamics, including potential changes to the industry’s business model. On this basis, it can be decided where the organization could benefit most from AI − within a specific product or service and/or by improving processes. The vision needs to be translated into a portfolio of AI use cases. To build this portfolio, you need to identify and prioritize relevant use cases. To execute the use cases, a set of enabling factors is required concerning the organization, the people, the technology, and the AI ecosystem.
All of these aspects need to be taken into account when it comes to the development of a comprehensive AI strategy and are further detailed in our report “Elements of a comprehensive AI strategy.”

[Figure: The AI Strategy House, an AI strategy aligned with the overall strategy. AI Vision: competitive positioning and value chain configuration, spanning product/service-centric AI and process-centric AI. AI use cases: ideation and prioritization. Enabling factors: Organization (structure, governance), People (know-how + talent, culture + collaboration), Technology (AI infrastructure, data), AI ecosystem (startup companies, other partners). Foundation: Execution.]

What is specific about AI use cases?

We study AI use cases, which are defined as: a set of activities taken to reach a specific goal from a business or customer perspective, which involve a substantial application of artificial intelligence. For the aficionados: we mostly consider the AI subfield of machine learning — that is, applications of truly learning algorithms. We draw no specific boundary on the sophistication of the learning (i.e. “how intelligent” the application is). In our experience, even simple learning might create high business value. The overall process of defining and prioritizing AI use cases is depicted in the chart below. It is an iterative approach with four stages, Preparation, 1. Ideation, 2. Assessment, and 3. Prioritization, followed by Execution:

Preparation
• Contents: Understanding AI, AI Search Fields, AI Maturity Assessment, Business Priorities
• Participants: Business Top Mgt., AI Strategists, Business Owners/Domain Experts
• Output: Good AI knowledge & relevant case studies, focus areas for AI, AI Maturity

1. Ideation: identify use cases aligned with the AI vision
• Contents: Demand side methods (customer journey and process map), supply side methods (data and AI capabilities)
• Participants: AI Strategists, AI Engineers, Business Owners/Domain Experts, Data Experts/Owners
• Output: Filled-out opportunity and solution space templates per identified use case

2. Assessment: assess use cases by value and complexity
• Contents: Structured assessment of all identified use cases along the dimensions value and ease of implementation
• Participants: AI Strategists, AI Engineers, Business Owners/Domain Experts, Data Experts/Owners
• Output: Filled-out opportunity and solution space templates per identified use case

3. Prioritization: cluster use cases and prioritize
• Contents: (Iterative) prioritization of the portfolio of use case ideas based on ease of implementation and value
• Participants: AI Strategists, AI Engineers, Business Owners/Domain Experts, Data Experts/Owners
• Output: Filled-out prioritization matrix (value + ease of implementation), draft roadmap for prioritized use cases

Execution: take use cases into the production stage
• Contents: Per use case: define scope for 1st data exploration and scope for MVP, data gathering, model dev. & validation
• Participants: Business Owner, AI Strategists, AI Engineers, External Solution Partners, Domain Experts, Data Experts/Owners
• Output: Narrow-scope definition for initial data exploration, broader scope for MVP

Let us summarize this and look at pertinent AI specifics. In the main body of this report we will analyse and illustrate them in detail:

Preparation

Before you start, you should enable your team and management with at least one “Introduction to AI” workshop. This “demystification and leveling of the playing field” is crucial, as even in well-prepared companies many business and functional leaders lack a (common) understanding of AI. This introduction should be followed by the development of a top-down AI vision, sketching out the areas where AI could add substantial business value — arguably, the most common trap remains the a priori focus on search fields of marginal value. Finally, as a company, you need to assess your AI maturity, including key enabling dimensions such as data and infrastructure, governance and culture, organization, and talent. As we shall see, “AI maturity” is a critical input for assessing to what extent more sophisticated applications can be implemented efficiently and add value.
A proper evaluation will help to preempt many of the most common failures.

Step 1: Ideation

When it comes to the ideation phase, three points are important to spark great use case ideas: First, identify friction points or improvement opportunities in the strategic fields defined in the AI vision — for example, along your customer journey or process maps. We stress this point since failing to target a business-critical problem remains, in our experience, a common root cause for AI use cases not delivering value. Second, complement your need identification step by leveraging both AI capabilities and data sources. For the former, you’ll require expertise in the true capabilities of AI. These capabilities include:
• Unstructured data, such as vision and language, that can now be interpreted
• Novel analytic capabilities, such as prediction and optimization, but also creative capabilities
• Finally, goal-driven robotics/control (see section 2)

For all these you need to understand: What are table stakes, what is hard but doable, what is expected to become available soon, and what has game-changing potential?

Data is the soil for AI algorithms and a source of inspiration in the ideation process — as well as a source of frustration in assessment and implementation. Therefore, it is critical that you conduct a review of sources for internal and external data (structured and unstructured), combined with options for data augmentation, synthetic data, transfer learning, simulated data, and more. Third, make sure you bring both parties to the table — deep domain expertise from the business and in-depth AI expertise.

Step 2: Assessment

Ultimately, we would like to prioritize AI use case ideas based on their value and their ease of implementation. This requires assessing these dimensions first:

• Ease of implementation: The difficulty involved in implementing an AI use case depends on four categories: the required data, the complexity of the required algorithm, the required adaptation of processes and systems, and the availability of the required know-how. Since the intelligence of AI depends on data, often the most difficult implementation issues involve data availability, quality, and updates. Assessing this properly requires experience, knowledge in data management, and some sampling. For lower maturity levels, this absolutely requires the use of (independent) AI experts.

• Value: Assessing the value demands domain knowledge/business owners. In addition, there are a few AI specifics, which are rooted in the inductive learning and entanglement of data and algorithm. For novel use cases or untested data repertoires, there remains some uncertainty on the performance of the AI application. While the expert knowledge for professional estimates is quickly increasing, in practice we continue to see gross misjudgements. And even for the top experts there remains the notorious “unknown unknown” risk. More common is succumbing to the AI paradox: It is deceptively easy to build successful AI prototypes, but fiendishly hard to scale AI — much harder, in fact, than with traditional software or in Digital. While one can view this as an “implementation” issue, it needs to be appreciated in the value estimate:
  • What scale (of possibly interacting AI applications) is required to derive the envisioned value?
  • What degree of maintenance is needed (such as adapting to new or changing data-sets/environments)?
Forgetting these facets is a common cause for naively overestimating the achievable value.
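The two assessment dimensions above can be made concrete with a small scoring sketch. This is an illustration only, not appliedAI's actual methodology: the weights and the 1–5 rating scale are assumptions invented for the example; only the category names come from the text.

```python
# Illustrative only: aggregate expert ratings for one use case into a value
# score and an ease-of-implementation score. Weights and the 1-5 scale are
# assumptions for this sketch, not a published standard.

def weighted_score(ratings: dict[str, int], weights: dict[str, float]) -> float:
    """Combine 1-5 ratings per category into one weighted score."""
    assert set(ratings) == set(weights), "every category needs a rating"
    total_weight = sum(weights.values())
    return sum(ratings[c] * weights[c] for c in ratings) / total_weight

# Value side: business value and strategic alignment (dimensions from the text).
value = weighted_score(
    {"business_value": 4, "strategic_alignment": 3},
    {"business_value": 0.7, "strategic_alignment": 0.3},
)

# Ease side: the four categories named in the text -- data, algorithm,
# processes/systems, know-how (higher rating = easier). Data is weighted
# highest here, reflecting the text's point that data issues dominate.
ease = weighted_score(
    {"data": 2, "algorithm": 3, "processes_systems": 4, "know_how": 3},
    {"data": 0.4, "algorithm": 0.2, "processes_systems": 0.2, "know_how": 0.2},
)

print(f"value={value:.2f}, ease={ease:.2f}")
```

In practice such scores are only as good as the expert judgement behind the ratings; the point of the exercise is to force the team to rate every category explicitly rather than to produce a precise number.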
One aspect combining value, ease of implementation, and a few strategic considerations is AI make-or-buy decisions. There are some distinct characteristics and criteria when suppliers require company data to train an algorithmic solution — from IP/risk issues to contract management and collaboration, all the way to maintenance. We discuss this in some detail in an insert below.

Finally, some AI flags should be set at this stage: They concern important implementation aspects that can increase the effort (or decrease the value) substantially and might require a separate “quick assessment” request.
• Regulatory and/or ethics flag: mostly when sensitive people-related data and/or decisions are involved
• Risk and transparency issues. These include:
  • Cyber-security flag: Truly critical when processes are fully automated and might get infiltrated
  • Black swan resilience/human-in-the-loop flag: When extraordinary events could lead to severe malfunctioning (e.g. financial trading algorithms facing COVID-19)

Step 3: Prioritization

Prioritization seems trivial once a proper assessment is done, and you may assume you can just plot the results and pick the most attractive ones. In practice, this is almost never the case for AI use cases. First, the complexity of the assessment typically requires an iterative process where you start with a rough assessment and prioritization, reassess what seemed attractive in more detail, conduct another review, and then perhaps check red flags before you give a final go. Throughout this process, you’ll almost always discover two things:
• All your high-value use cases might have high complexity — so you try to decompose them into intermediate “viable products” and approach the roadmap in an agile way.
• Even more common is that your use cases, while well-separated on the value axis (by definition), have technological interdependencies.
Clustering those can drastically speed up the collective implementation, while concurrently de-risking the approach.

Execution

While execution is not the focus of this report, we will comment on which aspects of the assessment and prioritization process require proper documentation and how they might influence the set-up of the execution.

We will cover all the above points in the ensuing sections, but the summary may already allow for an initial understanding of the most important specifics of AI and the sources of common failures quoted in the introduction.

Preparation

AI is both the most significant business transformation force and the largest business opportunity driver we have seen in a long time. As a consequence, managers should be aware that a programmatic approach to leveraging AI for their company encompasses many dimensions, which are summarized in The AI Strategy House. Use case ideation and prioritization is a key element of such a programmatic approach. In order to succeed, thorough preparation is required.

First, you need to ensure everyone has a good understanding of AI — its power, but also its unusual properties — and that the key areas of potentially high business impact have been defined. An Introduction to AI workshop is mandatory to ensure a common understanding and alignment of business, functional, and technical contributors on the team and beyond. This introduction should review basics, demystify the subject, provide concrete examples from your industry and beyond, and describe typical hurdles and pitfalls in a business context. This workshop can be held at different levels of sophistication, and for more mature companies becomes more of a joint review of AI, including examples of already successful company implementations and experiences.
When we at appliedAI were approached by a midsized company to help them identify relevant AI use cases within their automation business, we first had to convince them to start with such training, as they expected us to bring in the necessary AI expertise. This training proved to be the decisive success factor for the use case workshops that followed, as it ensured that everyone involved — from the managing director to the domain experts — had the same understanding of AI, so that concrete use cases could be determined and discussed quickly.

Second, there is a bit of a chicken-and-egg challenge between requiring a top-down AI Vision to define the broad areas where AI could add significant business value, and needing some use case experience to actually do so properly. Since a common trap continues to be teams wasting time on marginal AI application areas, senior business leaders — building on an initial understanding of AI — should make the effort to define and document the key areas where they expect value and why. It may be in specific products and services, some core processes (such as supply chain or customer-facing activities), or support processes (such as finance and HR). This vision serves as a business guideline for the use case ideation process, even if it might be revisited based on experiences gained during the process.

Specifically, there is one scenario where such an AI Vision top-level view is decisive: when there is the potential of a true business disruption via AI. Industries such as finance and insurance, healthcare, automotive/mobility, or media — to name a few — clearly have such exposed domains, which aggressive disruptors are trying to exploit. Addressing such a fundamental business model challenge via AI requires a separate approach, as it is rarely derived bottom-up from individual use cases.

Finally, companies are at drastically diverse levels of AI maturity.
We at appliedAI have developed a comprehensive AI Maturity Assessment, based on an efficient online survey, that allows companies to quickly assess themselves regarding AI. It covers all the elements of the AI Strategy House (see sidebar). We strongly recommend that companies complete such an assessment before embarking on prioritizing use cases, as the feasibility of more sophisticated applications critically depends on the maturity of the enabling factors — AI infrastructure and data, organization and talent, governance, risk mitigation, and ecosystem management. Lacking a comprehensive view of maturity will lead to highly inefficient and failure-prone processes of estimating the ease of implementation. With this, you are ready to embark on use case ideation.

Ideation: Identifying relevant cases

The ideation phase aims to identify new and relevant AI use cases within the organization. It requires bringing together deep domain expertise from the business and in-depth AI expertise. When the latter is not available from internal AI Centers of Excellence, leverage external expertise — but make sure that you use the process to strengthen the internal competence. To make the best use of scarce team time, use case ideation should follow an efficient process. Your company’s AI vision (see previous section) defines what areas to focus on to create maximum business value. At a high level, we distinguish two directions:

• For process-centric AI use cases, the focus lies on company processes, either internal or at the interface with customers, partners, or suppliers. Often, the competitive value lies in company core processes. Start with those that are costly and/or important to your business, then ask: Can this be automated? Could it be faster or more precise? Keep in mind that AI can be used either to support existing processes or create new ones.
One example of a disruptively transformed process is an insurance company using image recognition to automatically assess car accident claims with limited human supervision. This drastically lowers the cost of handling claims, while also saving time, thus boosting customer experience. Support processes in finance, HR, IT, and more should not be neglected. Truly strong organizations excel in them. Often, these processes allow the partial leverage of external suppliers (see make-or-buy below) and might thus trigger some quick wins.

• Product/service-centric AI use cases are often more challenging in their definition, as they require deeper technical expertise and customer experience. They may profit from including design/UX experts in the ideation session. On the other hand, compared to process-centric cases, AI in company offerings demands less change management later in the implementation. The focus is of course on those areas with the highest revenue/profit potential and/or strategic importance. Needs and friction points in customer journeys are at the core: How can AI be used to improve existing products or create new ones that address unmet needs of your current or potential customers? Entirely new AI-driven offerings can be generated by assessing potential uses of AI. This approach helps you develop solutions that actually solve existing user problems or daily challenges and, thus, increase their adoption rate. Ideally, the solution is intuitive and easily integrated into the customer’s daily habits.

The input for identifying use cases is somewhat similar to traditional ideation sessions on the demand side (for example, through process maps and customer journey/friction points), but is highly AI-specific on the supply side (AI capabilities and data maps). Let us discuss them sequentially.
Demand Side Methods: Process Map

Use a process map to systematically break down your processes into tasks and decisions, then identify individual elements that can be automated using AI. Remember, you want to consider automating tasks, not jobs.

Demand Side Methods: Customer Journey Map

The customer journey map is helpful for visualizing each step of the customer interaction and use of the product or service. The method involves considering the customer experience from the point of view of a specific persona (to force you to be very specific about your “customer”). For this approach, map out each persona’s main points of interaction, and identify critical sections that could be improved with AI solutions. Additionally, actual observations of the target group and personal interviews with users or external experts always deliver novel insights with respect to customer pain points.

With the spread of AI, still unconventional situations may arise where the users of your offerings are machine-based processes and decision makers, not humans (e.g. in production environments or finance). Getting access to the friction points in such processes is typically much more challenging, requiring access to company systems. AI can be particularly beneficial when there are multiple changing variables that determine a decision. As a learning system, AI can automate decision support and actions with complex dependencies on dynamic data, which can no longer be captured in traditional “rules.” For that, ideate in fields where AI can improve the overall process by eliminating pain points — or where you can redesign an entire process with the use of AI.

Supply Side Methods: AI Capabilities

Correctly applied, the most important novel input for AI use cases comes from studying the business-relevant capabilities AI has developed.
As an immediate caveat, AI capabilities that are naively applied, following hype reports or vendor marketing, have also been the source of sad flops. Thus, we at appliedAI have put significant effort into structuring these areas and making them accessible to business people. Overall, we distinguish the following eight capabilities.

[Figure: The eight AI capabilities — Computer Vision: process visual data, recognize objects, and understand the semantics of images or video sequences; Computer Audition: process and interpret audio signals; Computer Linguistics: process, interpret, and render text and speech; Robotics and Control: analyze, interpret, and learn from data representing physical systems (incl. IoT) and control their behavior; Forecasting: make predictions about the future course of time series or the likelihood of events; Discovery: process large amounts of data and find patterns and “logical” relationships; Planning and Search: look for optimal solutions to problems with a large solution space; Creation: generate images, music, speech, and more based on sample creations.]

The first three are capabilities that allow unstructured data to be used:
• Computer Vision
• Computer Audition (including speech to text)
• Computer Linguistics (chatbots, translators, etc.)

These are critical ancillary capabilities that allow machines to act in the real world and interact with humans, and are often used as advanced input for further processing. Second, there are new analytics capabilities:
• Forecasting
• Planning/Optimization
• Discovery (pattern recognition)

These are at the core of many business applications. Wherever you still use rules-based, linear, or manual approaches for forecasting and optimization, AI methods are a candy store for quantum leaps in performance. Also, look for areas where learning can be centralized while keeping actions decentralized (from next-best action in sales to navigating trains/planes/ships/cars). Lastly, check for areas where speed is of critical value (machine handling, financial markets, etc.), as AI systems have the potential to be several orders of magnitude faster than humans.
Finally, there are truly advanced capabilities:
• Creation
• Advanced Robotics and Control

There is a lingering belief that machines are not capable of creating new content. However, the last years have seen the rise of new generative methods involving “adversarial” algorithms, meaning they generate new input data from target outputs using two interacting algorithms. Such applications constitute an, albeit still embryonic, encroachment upon traditional domains of human creativity. Another truly sophisticated area is Advanced Robotics and Control, applicable primarily in industrial settings. It often involves agents learning and simulating their environment, trading off exploration and exploitation.

For all these capabilities it is truly critical to understand precisely: What are table stakes today? What is doable, but requires substantial effort (and data)? What is expected to become accessible over the next couple of years (e.g. in vision/speech)? What could be game changers (in particular around creativity and goal-based control/simulation)? What is simply irrelevant science fiction? Understanding these points will also be critical in make-or-buy decisions and vendor selection (see insert). We at appliedAI continuously educate our partners in these areas, and have developed AI application playing cards for illustrative purposes and as a great tool for ideation workshop settings.

Data Map

The other critical supply-based input for AI use cases is data. First, you should understand all possible sources of data — internal, external, structured, and unstructured. You should also be aware of methods to generate data (augmentation and synthetic data, but also simulation and generative methods). Finally, there should be some understanding of the option to learn from related data (aka “transfer learning”). Building on this, you would start by assessing your company’s potential data assets and options for complementing them.
Then determine which of the data assets are unique and could be the source for potential use cases. Finally, elaborate how you can apply the data in a useful way internally or for adjacent businesses. You may also already want to consider how accessible the data is today and how that can change over time. This will be a major focus in the assessment section.

Obviously, all such demand- and supply-side input areas should be complemented by best practice use case examples from within your industry and beyond. We at appliedAI provide a comprehensive use case library and continuously collaborate with our industry and national/international institutional partners to grow this repository.

Combining all the above in a concrete setting builds on classic brainstorming techniques that are often used in a workshop setting. For this, you should prepare content in advance for your top processes and products — data maps, cost drivers, revenue share, etc. — and share that content with the group. Carefully select the right mix of participants, including people familiar with AI technologies, data infrastructure, and existing AI initiatives, as well as people familiar with the company’s products and services. As in all ideation sessions, make sure to follow the ideation rules in the workshop and defer early judgement — you will have plenty of opportunity for that later. We at appliedAI have a wide set of offerings to successfully facilitate such ideation workshops.

When a European energy supplier set out to find AI use cases, they were faced with the challenge of the “AI divide”: On the one hand, there was a central AI team of experts who understood AI’s capabilities. At the same time, however, the problems were “trapped” in the various business areas with the domain experts. To overcome this gap, a series of use case workshops was set up with all the different business units, including time allocated for providing an introduction to AI.
Although this is a long and costly process that requires the coordination and support of a large number of participants, it is necessary for good results.

The outcome of the ideation phase should be captured in opportunity and solution space cards. Utilize the opportunity space card to identify the issue to be solved with AI (which product, service, or process is affected by the use case, and the user story), and ask the following questions: What is the objective of the use case? Where does AI contribute? Which products or processes could it be applied to? Once again, the level of detail is important. “Introducing a chatbot” is not a helpful description of a use case. The following would be more useful: “Use AI to reduce customer service costs by predicting customer intentions and automatically handling the most frequent requests via a chatbot interface in the online customer service process.” After that, use the solution space card to indicate the capability used to solve the issue, the desired output of the solution, and what information or data is needed to train the AI. Remember that a solution should always solve a specific problem.

Assessment: Assessing use cases

Once you have a good list of potential use cases, you are ready to address the second challenge: how to assess what a truly valuable and feasible use case is. To do so, you will need to evaluate each of the ideas on your list, starting with a precise description of each case. You need to decide how critical the use case is. Does it entail a high level of risk? Does it require extreme accuracy (e.g. for autonomous driving)? At the core, you need to make an assessment of both the value and the ease (or complexity) of implementation of a use case. Additionally, check a few common flags. For this, you can use the appliedAI Use Case Canvas.
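The opportunity and solution space cards described above can be captured as lightweight records so that every use case is documented in the same structure. A minimal sketch; the field names below are our assumptions for illustration, not the actual appliedAI card templates.

```python
# Minimal sketch of opportunity and solution space cards as records.
# Field names are illustrative assumptions, not the appliedAI templates.
from dataclasses import dataclass

@dataclass
class OpportunityCard:
    objective: str             # What is the objective of the use case?
    ai_contribution: str       # Where does AI contribute?
    affected_areas: list[str]  # Which products or processes could it apply to?
    user_story: str = ""

@dataclass
class SolutionCard:
    capability: str            # e.g. one of the eight AI capabilities
    desired_output: str        # What the solution should produce
    required_data: list[str]   # Information/data needed to train the AI

# The chatbot example from the text, at the recommended level of detail:
chatbot = OpportunityCard(
    objective="Reduce customer service costs",
    ai_contribution="Predict customer intentions, auto-handle frequent requests",
    affected_areas=["online customer service process"],
)
solution = SolutionCard(
    capability="Computer Linguistics",
    desired_output="Automated responses to the most frequent requests",
    required_data=["historical chat logs", "request categories"],
)
print(chatbot.objective, "->", solution.capability)
```

Keeping the cards uniform like this makes the later assessment step easier, since every idea arrives with the same fields filled in.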
The following aspects should be assessed:

Value
• Business value: What is the economic potential of the use case in terms of cost reduction, additional sales, speed-up, quality improvement, or customer satisfaction?
• Strategic alignment: How does the use case fit into the strategic goals of the company, and what advantage does it bring in terms of strategic market positioning, improved resource efficiency, etc.?

In principle, these dimensions are methodologically straightforward and familiar from business decisions in general. There are, however, some AI-specific challenges that mainly result from the entwinement of data and algorithm in the inductive learning process:
• For novel use cases or untested data repertoires, there remains some uncertainty on the performance of the AI application. This means you need to assess the value depending on what accuracy can be achieved — or, alternatively, define minimum accuracy levels required.
• You should also be aware of the AI paradox: It is deceptively easy to build successful AI prototypes, but fiendishly hard to scale AI — much harder than with traditional software or Digital. This applies to the build phase, but even more to maintaining potentially interacting learning applications at scale in an environment of dynamically changing data. Thus, it is critical to define:
  • What scale (of possibly interacting AI applications) is required to derive value?
  • Over what time period is the value derived?
  • What dynamic changes does the application need to adapt to?
  • What degree of maintenance is needed?
Forgetting to answer these questions is a standard value-destruction trap.

Ease of implementation

Estimating the effort required for each AI use case is notoriously difficult without in-depth knowledge of the quality and availability of all required data over time and full information about the AI maturity. The appliedAI Use Case Canvas offers a framework to derive a solid estimate.
You need to evaluate four main categories (data, algorithm, processes/systems, and know-how), while also estimating the duration until you have a fully working and verified solution. Aggregating those provides a score for the overall ease of implementation: the higher the score, the easier the use case should be to develop.

An AI expert with sufficient knowledge of the product or process related to the use case is typically required for the assessment. Generally speaking, the more high-quality data you have, the less explicit process know-how you will require in the implementation phase; in the assessment phase, such know-how is indispensable.

Below is a short description of the respective categories:
• Data: What data do you need for the use case? What quality of data do you have? What effort is required to integrate the relevant data and make it available? What dynamic updates are required over time, and how can they be performed?
• Algorithm: Are there other implementations of the use case already up and running, either within your company or elsewhere (including other companies or industries)?
• Processes/Systems: Which processes and systems are affected? Do you need to make major changes to existing processes or systems?
• Required Expertise: Do you have the necessary technical and domain knowledge? The AI Maturity Assessment performed in the preparation phase will help you tremendously in this evaluation.

Often, a rough first assessment is completed in a workshop setting and validated in detail afterwards. But be aware that there remains an "unknown unknown" risk that might only surface in the PoC phase.

Finally, there are some AI flags that should be set at this stage. These are vital implementation aspects that can turn out to increase the effort (or decrease the value) substantially — and might require a separate "quick assessment" request:
• Regulatory and/or ethics: This is mostly relevant when sensitive people-related data and/or decisions are involved. For instance, a regulatory authority might require transparency of the algorithmic decision making — often in terms of outdated metrics stemming from rule-based decision systems. Or there might be some legal or consumer-driven "fairness" expectations (often with only vaguely defined criteria). Since AI-based decisions can happen at unprecedented speed and scale, while at the same time everything is measured, such topics can become enormously important and need to be flagged in a value assessment.
• Risk and transparency: There is a wide range of further risks that need to at least be documented in the assessment.
  • Cybersecurity is becoming increasingly important the more we digitalize our businesses. Most executives think of data security, but automating with AI introduces a new quality of risk: process security (hackers have famously demonstrated the takeover of an autonomous car to make such a risk tangible). High-risk areas thus require an extra cybersecurity flag.
  • Resilience: Depending on the envisioned lifetime of the use case, it may be important to expose it to an extensive test of resilience against future extreme events not represented in the test sets, since generalization to such events has not been assessed. This requires algorithms that at least recognize they are on uncertain territory, often combined with human-in-the-loop systems. Again, a quick check of such an exposure should be performed and potentially flagged.

One aspect that affects value, ease of implementation, and a few strategic considerations is the AI make-or-buy decision. Since AI requires training, it is not plug-and-play. Specifically, when suppliers require access to internal data and the resulting algorithm contains critical business knowledge, deciding on, contracting, and managing the relationship involves some complex, unique issues. We discuss the topic in more detail in the sidebar.

With a robust assessment in place, we are finally ready for prioritization.
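The aggregation of the four category scores into an overall ease-of-implementation score can be sketched as follows. The 1-to-5 scale and the equal weighting are illustrative assumptions; the appliedAI Use Case Canvas does not prescribe a specific formula:

```python
# Illustrative sketch: aggregate the four ease-of-implementation
# categories into one score. Assumptions: each category is scored
# from 1 (very hard) to 5 (very easy) and all categories weigh equally.

CATEGORIES = ("data", "algorithm", "processes_systems", "know_how")

def ease_of_implementation(scores: dict) -> float:
    missing = [c for c in CATEGORIES if c not in scores]
    if missing:
        raise ValueError(f"missing category scores: {missing}")
    # Unweighted mean; a real assessment might weight categories
    # differently or flag any category scored below a knock-out threshold.
    return sum(scores[c] for c in CATEGORIES) / len(CATEGORIES)

# Hypothetical chatbot use case: good data, proven algorithm,
# significant process changes, partial in-house expertise.
chatbot_case = {"data": 4, "algorithm": 5, "processes_systems": 2, "know_how": 3}
print(ease_of_implementation(chatbot_case))  # 3.5
```

Keeping the per-category scores visible alongside the aggregate helps the workshop discussion, since a single low category (such as data quality) can matter more than the average suggests.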
Prioritization: Clustering and prioritizing use cases

Once a set of use cases has been identified and assessed, you still have to make sure you focus on those that are most relevant — after all, the potential applications of AI are endless. In order to apply your AI competence most effectively, the use cases must be prioritized. The basic approach involves plotting the results of your assessment phase in a prioritization matrix and choosing the most attractive cases. Here, there are several considerations to take into account. First, the process is not as linear as it seems. Assessing use cases is quite complex, as we have seen. Typically, there is a first round of rough assessment and prioritization, then a deep dive on the most promising-looking cases, followed by a second round of review and prioritization. In addition to this iterative approach, you will typically face two further issues: Often, many high-value use cases seem to face substantial implementation hurdles. It is worthwhile to assess to what extent these challenges can be resolved by breaking the use case down into smaller, more digestible units — each with well-defined value steps — and approaching the resulting roadmap in an agile way. A second issue occurs when use cases are technically interdependent. AI capabilities such as computer vision or natural language processing might be shared across use cases, and the same can hold true for data assets and pipelines. These often require substantial investments in skills and infrastructure. In other words, while the value of use cases is separable (this is the way we defined them), there may be cluster synergies that strongly affect the collective ease of implementation.
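The prioritization-matrix logic described above can be sketched as a simple quadrant rule. The 0-to-10 scales, the midpoint threshold, and the quadrant labels are illustrative assumptions, and the example cases and their scores are hypothetical:

```python
# Illustrative sketch: assign an assessed use case to a quadrant of the
# value vs. ease-of-implementation matrix. The 0-10 scales and the
# threshold of 5 are assumptions for illustration.

def quadrant(value: float, ease: float, threshold: float = 5.0) -> str:
    if value >= threshold and ease >= threshold:
        return "prioritize"              # high value, easy to implement
    if value >= threshold:
        return "decompose / deep dive"   # high value but hard: break it down
    if ease >= threshold:
        return "quick-win candidate"     # easy, but modest value
    return "deprioritize"

cases = {
    "chatbot intent prediction": (8, 7),
    "fully automated quality control": (9, 2),
    "meeting-room occupancy forecast": (3, 8),
}
for name, (value, ease) in cases.items():
    print(f"{name}: {quadrant(value, ease)}")
```

Note how the "high value but hard" quadrant maps to the decomposition advice above: rather than discarding such cases, break them into smaller units with well-defined value steps.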
Thus, it is useful to cluster use cases by the following parameters:
• The required input data
• The AI capability it will require
• The product(s) or process(es) to which it could be applied

Prioritize the clusters that have a large number of high-value use cases and at least some use cases that are relatively easy to implement within a previously determined timeframe. Within the prioritized clusters, pick one or two cases for validation. Ultimately, this will mean you are dealing with a total of about three to six cases in one wave. Good candidates can have different characteristics. For example, it might be possible to implement some of them quickly and realize "quick wins." Others might have a high strategic value or marketing relevance. Or they could be easily scalable to other products or processes. Ideally, these clusters are the base for building your use case roadmap. So far, we have rarely seen companies that consider the dependencies of their AI use cases. A notable exception is a major European manufacturing company that systematically considers the reusability of an AI model for future use cases as a key factor in prioritization. Prioritizing use cases works differently in companies that are just getting started with AI (i.e. "experimenters" in the maturity assessment). Here, it is especially important to manage the risk of the first use case, even though it may not yet be the most valuable one. A good choice might be one for which data is available and a similar solution already exists in another part of the organization. When mobilization is an important goal, use cases should also be "visible." Most likely the use case needs to be related to the core business. Visibility can be enhanced by the internal communications department, which should also ensure that the technical language is comprehensible and the value it adds to the business is clear. Is the goal to address a specific process?
Or to explore new fields of technology? Set expectations accordingly.

Besides centrally driven prioritization and roadmap-building efforts, there will usually also be some instances where decentralized units drive their own AI solutions with their own resources and budgets. This is most often the case when the use case may not end up high on a company-wide prioritization list but will solve an urgent, unit-specific problem. Clear guidelines are required for such decentralized activities. Typically, priority depends on the longevity of a use case and the likelihood that it will interact with other company processes in the future. For instance, distinct efforts for forecasting sales in individual business units will lead to escalating complexity when a consolidation of numbers is required. Also, at the very least, data governance and platform decisions should be coordinated holistically — such coherence is critical for being prepared for applications accessible at higher maturity levels. Decentralized entrepreneurial AI activities should be encouraged (or at least tolerated), but coordination of the above-mentioned dimensions needs to be assured and the company's overall AI vision should be respected at all times.

[Figure: Prioritization of identified ideas along the core dimensions of value (economic value, strategic alignment) and ease of implementation (data, algorithm, process/systems, required know-how), plotted in a low/high prioritization matrix; iterate as necessary.]

• Assess use case ideas along core dimensions of value and implementation effort (incl. AI specifics)
• Decompose very strategic, complex use cases into smaller solutions to make them manageable as intermediate viable products
• Cluster use cases with similar underlying technology (capabilities/data)

Make-or-Buy Decisions for AI Use Cases

Most organizations that want to apply AI do not think of themselves as technology companies and would intuitively try to find a supplier to "buy" from. Some even recall painful experiences from the early days of enterprise software applications: at first, all applications were approached as customized projects, only to run into extensive maintenance issues — until, finally, the relief of productized enterprise software arrived. The current state of AI implementations would seem to exhibit some parallels. That may be true, but there are critical differences — most importantly, that "make" and "buy" acquire quite a different meaning in the context of AI (i.e. inductive learning systems). The vast majority of raw AI algorithms are (still) open source, available for free, but without immediate business value. Only after training on data (often at least partially your own company data) does the trained algorithm exhibit intelligence. This is quite distinct from the mere parametrization of traditional enterprise software. An executive once compared this to new hires requiring training to provide business value: you can shorten the time by hiring them "pre-trained" by schools and universities, but they will still need to learn your specific company environment. Similarly, machine intelligence is not plug-and-play. In AI, make-or-buy is more of a continuum than a binary decision. At one extreme, even when taking full ownership, your teams will never build algorithms from scratch. They will always use packages, often pre-trained on some AI capabilities.
At the other extreme, even the most productized AI (think of a spam filter) needs to adapt to you, so there needs to be at least some agreement on data access, confidentiality, and more — and in a business context, the required adaptation is typically quite extensive. An area where many companies have started to experience the fluidity of the make-or-buy "decision" is chatbots. Most companies cannot — and should not — build them from scratch. On the other hand, if you have ever tried to simply deploy a pre-trained version without further learning, your experience will have been terrible. The best vendors offer pre-trained models with graphical user interfaces to support the further process of customization (i.e. continued learning). This is a general trend we are seeing in the industry: for simple learning processes, the AI "build" process is being supported by ever more accessible tools, lowering the requirements for technical expertise. The correct reformulation of make-or-buy is: To what degree do you want to do it yourself versus partner with regard to building and managing the AI application? With whom should you partner? And how should you structure the partnership or contract?

[Figure: Make-or-buy matrix plotting strategic value potential vs. competitors (low/high) against your unfair advantage in AI build (data, skills; low/high). Quadrants: protect your core business, build distinctive capabilities (e.g. acqui-hire); immediate AI sweet spots, do AI yourself and scale fast; outsource as much as possible, focus on contract management; potential AI business opportunity, explore whether it is of high value to other parties.]

Criteria: First, what is the strategic value for you? (Remember, you can only excel in what you own.) Second, can you build it? (Consider your AI capabilities and your preferred access to data.)
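The two criteria above reduce to a simple decision rule. The sketch below encodes the four quadrants of the matrix; treating both axes as booleans (they are really continua) is a simplification for illustration:

```python
# Illustrative sketch of the make-or-buy matrix: strategic value
# potential vs. competitors on one axis, your "unfair advantage" in
# building AI (data, skills) on the other. Boolean axes are an
# assumption for illustration; in practice both are continua.

def make_or_buy(high_strategic_value: bool, can_build: bool) -> str:
    if high_strategic_value and can_build:
        return "immediate AI sweet spot: do AI yourself and scale fast"
    if high_strategic_value:
        return "protect your core business: build distinctive capabilities (e.g. acqui-hire)"
    if can_build:
        return "potential AI business opportunity: explore whether it is of high value to other parties"
    return "outsource as much as possible; focus on contract management"

# A high-value use case the company cannot (yet) build itself:
print(make_or_buy(high_strategic_value=True, can_build=False))
```

In practice the continuum matters: a case near the middle of either axis is exactly where the partnering and contract-structuring questions above become decisive.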
A general consensus is that you should refrain from building your own AI solutions where ERP providers (SAP, Salesforce, and Oracle) or process specialists (such as ServiceNow or Workday) are already integrating AI capabilities for the data handled in their systems into their suites. Here you are towards the lower left of the matrix (at a disadvantage for building the AI solutions), and, often, these solutions are not at the core of your competitive differentiation. Further details of make-or-buy deserve a much deeper discussion and are the subject of a separate publication, where we also address:
• How to choose among major vendors, start-ups, academia, consultancies, etc.
• How to structure contracts: IP, confidentiality, maintenance, regulatory, and performance guarantees

Execution: Getting started

Before you move to the implementation of a prioritized use case, there is one last thing you need to do: double-check your assumptions, both from a business and a technical perspective. Make sure you fully understand the characteristics driving the value of the use case (level of accuracy, scale, and maintenance requirements). Do you really have the data, and is it accessible via solid pipelines? What red flags does your use case have (regulatory, ethical, risk and transparency), and are you sure you can address them? Finally, are you truly clear on the make-or-buy decision: Do you want to develop the case internally, find a partner to develop it, or buy a solution? It is also good practice to define gates and knock-out criteria that would make you abandon the pursuit. Now you are ready to start execution. Implementing AI use cases has its own body of best practices and traps, which we will cover in a future publication.
Our partner Siemens is finding use cases in 5 days

Conducting use case ideation in an established organization is often challenging, as the necessary employees are typically locked into their routines and other projects. Our partner Siemens has chosen a unique approach to address these problems: the Siemens AI Lab and its 5-day use case sprints. In the following, they share their experience.

As a leading provider of industrial solutions, Siemens is researching and developing AI technologies. "Our goal is to create value for our customers through AI, to improve our internal process efficiency, and to make a positive impact on society and our environment." Founded in 2017, the Siemens AI Lab represents a platform that drives the progress and adoption of AI technology with meaningful impact for Siemens. The Munich-based Lab explores the potential of emerging AI technologies for industrial use cases. As part of the Data Analytics & AI research hub of the Corporate Technology division, the Siemens AI Lab acts as an open innovation platform where business stakeholders and their domain experts are matched with internal AI researchers to find suitable solutions together. With an entrepreneurial mindset and a fail-fast culture, teams develop tangible prototypes for the next game-changer in a format spanning less than a week. The Lab provides tools and methods to bolster the potential of AI across multiple business units, with the aim of discovering scalable applications for the company. Siemens has adopted the format elsewhere, including Beijing and Berkeley.

The offering

[Picture: The Siemens AI Lab engine with its key elements]

Acceleration
Under the motto "fail fast in five days," the Siemens AI Lab teams up with internal customers and AI experts to build a proof of concept, a prototype, or a minimum viable product quickly and affordably. The Lab supports the project team throughout the whole process.
For example, it grants access to over 100 AI experts from Corporate Technology, which offers the perfect starting point for a detailed assessment of any use case hypothesis and the required dataset. In order to prepare project teams for the intensive Acceleration sprints, questions, details, and the expected results are clarified beforehand. The Siemens AI Lab brings the right people together in the right place and complements the customer's data and domain expertise with industrial experts as well as proven AI technologies.

Lessons learned:
1. Be there from the start. Support the shaping of project goals and setup from the very beginning.
2. Co-locate the team. No distractions, all the expertise in one (physical) room.
3. Fast go/no-go decision. Implement and test a proof-of-concept idea in 5 days.

Orientation

[Picture: Value Proposition Canvas adapted to AI specifics]

At the Siemens AI Lab, customers can learn in joint teams about the most impactful Industrial AI use cases and AI applications to improve their business.

Ingredients for a successful workshop:
1. Start with an introduction to Industrial AI (Introduction & Tech Session)
2. Have a look at the Customer Situation & Data Value
3. Get inspiration from Abstracted Use Cases
4. Use Case Ideation & Design Thinking
5. Adapt it to the Customer Situation & Data Value
6. Create the Action Plan & Way Forward

Exploration
Piloted in early 2020, the Residency Program offers promising talents in applied AI the opportunity to drive Industrial AI research. In this 9-month program, participants solve fundamental challenges of Industrial AI and have access to a broad network of researchers within Siemens.

"Culture eats strategy for breakfast." Whether it is acceleration sprints, orientation workshops, or exploration research, the Siemens AI Lab only succeeds thanks to its innovative culture. Three factors are key to ensuring the success of the AI Lab:
1. Holacratic team organization
2. Open-mindedness
3. Agile project management

Instead of having a job description to adhere to, the team members at the Siemens AI Lab are organized by purpose-driven roles that correspond to the "why" and "how" instead of the "what." Being open-minded is not just important for the sake of the work, but also for how colleagues collaborate. Our goal is to expand the Siemens comfort zone in AI by building a sustainable portfolio of skills and tools that drives AI@Siemens towards a new innovation culture, where diversity, agility, and openness are the building blocks of corporate intrapreneurship.

Authors

Hendrik Brakemeier is Senior AI Strategist at appliedAI. Before joining appliedAI, Hendrik pursued a doctoral degree at the software and digital business group at TU Darmstadt, during which he worked on the economics of data-based business models. Furthermore, he served as part of Accenture's analytics strategy and transformation advisory practice.

Philipp Gerbert is Future Shaper of UnternehmerTUM and Director at appliedAI. Previous activities include many years as Senior Partner and Lead of Digital Strategy at BCG, Fellow for AI in Business at the BCG Henderson Institute, as well as a Partner at the McKenna Group in Silicon Valley. He holds a PhD in Physics from the Massachusetts Institute of Technology (MIT).

Philipp Hartmann serves appliedAI as Director of AI Strategy. Prior to joining appliedAI, he spent four years at McKinsey & Company as a strategy consultant. Philipp holds a PhD from the Technical University of Munich, where he investigated factors of competitive advantage in Artificial Intelligence.

Andreas Liebl is Managing Director at UnternehmerTUM as well as appliedAI. Before joining UnternehmerTUM, he worked for McKinsey & Company for five years and completed his PhD at the Entrepreneurship Research Institute at the Technical University of Munich.

Maria Schamberger is Senior AI Strategist at appliedAI. She has a rich background within the financial services industry from her former role as Vice President at the Allianz Group, as well as consulting and research experience from McKinsey & Company. Maria studied Corporate Innovation at Stanford University and Banking at the Frankfurt School of Finance and Management.

Alexander Waldmann is Director of Operations at appliedAI. Before joining appliedAI, he served as Visionary Lead of [x], UnternehmerTUM's unit for disruptive technologies, and helped create ~300 startup teams during that time. Alex studied Computer Science at the Technical University of Munich and MIT.

Contributors

The authors would like to thank the Siemens AI Lab, namely Vera Eger and Bernd Blumoser, for their contribution to this paper. The authors would also like to thank Annette Bauer for her support in writing this report and Henrike Noack for designing this publication.

Bernd "Benno" Blumoser is the Innovation Head of the Siemens AI Lab, which he co-founded in late 2017. Open innovation and networks, trend scouting and foresighting, as well as the development of the corporate AI strategy are key areas he has influenced within Siemens in recent years and still strives to drive further. He holds a diploma in International Cultural and Business Studies and an M.A. in Musical Education from the University of Passau, and started his career in 2006 at Siemens Management Consulting.

Vera Eger has been part of the Siemens AI Lab family since November 2019. She studied Psychology in Regensburg and is pursuing her master's in Economic, Organizational, and Applied Social Psychology at LMU Munich. Vera is particularly interested in research on human-machine interactions and the resulting challenges for society and the economy.
About appliedAI

The appliedAI Initiative, Europe's largest non-profit initiative for the application of artificial intelligence technology, aims to bring Germany into the AI age and offers its wide ecosystem of established companies, researchers, and startups neutral ground on which to learn about AI, implement the technology, and connect with each other. NVIDIA, Google, MunichRe, Siemens, Deutsche Telekom, and many more are partners of the initiative, which started in early 2018. You can find more information about appliedAI at: www.appliedai.de

UnternehmerTUM GmbH
Lichtenbergstraße 6
D-85748 Garching
Germany
