Prescriptive Analytics and Explainable AI
Document Details
SKEMA Business School
Chitu Okoli
Summary
This document is the transcript of a slide deck on prescriptive analytics and explainable AI in business contexts. It covers the three stages of data analytics (descriptive, predictive, and prescriptive), explainable AI (XAI) and its stakeholders, and how managers can turn model explanations into actionable decisions.
Full Transcript
Prescriptive Analytics and Explainable AI
AI in Business Contexts, PGE M1
Chitu Okoli, Professor of Digitalization, SKEMA Business School, Paris

Conscientious commerce
Pure money commerce vs. conscientious commerce
§ Pure money commerce
§ Buy low, sell high, leave 'em dry
§ Conscientious commerce
§ It's all about creating value for people
§ Every transaction must be a good deal for the other person
§ You'd rather be cheated than cheat anyone
§ We need to take care of our conscience
Image sources: Max Pixel, Strategia Forex, Start Your Business Magazine

Artificial intelligence and data analytics
Three stages of data analytics
§ Descriptive analytics
§ Predictive analytics
§ Prescriptive analytics
Image from ICONS Library

Descriptive analytics
Image from ICONS Library

Descriptive analytics and data visualization
§ Descriptive analytics: analyze past data to describe it and reveal clear patterns and trends
§ Data visualization
§ Tells an interesting, insightful story
§ Is intuitive to understand accurately
§ Tells the truth: the intuitive interpretation is accurate, not misleading
§ It does NOT apply the tricks of How to Lie with Statistics
Image from ICONS Library

Role-playing during exercises: we are managers at a private health insurance provider
§ Health insurance lets people contribute in advance toward possible large future healthcare costs.
§ It is like the mutuelle in France.
§ In the United States, private health insurance usually covers ALL medical costs.
§ Some people have Medicaid (like France's Assurance maladie), which covers basic costs.
§ Most do not, so private health insurance is expensive and covers all costs.
§ We will play the role of health insurance managers for senior citizens.
§ Our health insurance plan members are all 66 years and older.
§ We want to be able to pay our members' healthcare costs so that they receive adequate care when needed.
§ But we also want our members to have fewer medical needs (and costs), so that they stay healthy and we can increase our profits.
Image source: Prominent Insurance Brokers

US National Medical Expenditure Survey (NMES) dataset
Variables and descriptions:
§ hospital_stays: Number of hospital stays. (Prediction target)
§ health: Self-perceived health status; levels are "poor", "average" (reference category), "excellent".
§ chronic_illnesses: Number of chronic conditions.
§ active: Whether the individual has a condition that limits activities of daily living ("limited") or not ("normal").
§ region: Region; levels are northeast, midwest, west, other (reference category).
§ age: Age in years.
§ black: Is the individual African-American?
§ gender: Female or male.
§ married: Is the individual married?
§ years_education: Number of years of education.
§ income: Family income in USD.
§ employed: Is the individual employed?
§ private_insurance: Is the individual covered by private insurance?
§ medicaid: Is the individual covered by Medicaid?
§ Sample of 4,406 senior citizens drawn from the general population aged 66 years and older
§ Not specifically our plan members
§ Health insurance coverage: 75% have private insurance; 9% have public insurance (Medicaid); 15% have no insurance at all
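A minimal descriptive-analytics sketch in Python, assuming the NMES sample has been exported to a CSV file with the column names listed above (the file name nmes_seniors.csv is a placeholder, not a file provided with the lecture):

```python
# Descriptive analytics sketch (illustrative only).
# Assumes the NMES sample of 4,406 seniors is available as a CSV file;
# "nmes_seniors.csv" is a hypothetical file name.
import pandas as pd

nmes = pd.read_csv("nmes_seniors.csv")

# Summary statistics for the numeric variables (age, income, hospital_stays, ...)
print(nmes.describe())

# Distribution of the prediction target
print(nmes["hospital_stays"].value_counts().sort_index())

# Share of the sample by insurance coverage (private insurance vs. Medicaid)
print(nmes.groupby(["private_insurance", "medicaid"]).size() / len(nmes))

# Average number of hospital stays by self-perceived health status
print(nmes.groupby("health")["hospital_stays"].mean())
```

The same summaries and charts can be produced point-and-click in Excel, as the next video illustrates.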
AI-powered descriptive analytics with Microsoft Excel
Excel AI - data analysis made easy - YouTube

Predictive analytics
Image source: ICONS Library

Predictive analytics
§ Predictive analytics: analyze the past to predict the future, assuming that the future will continue to resemble the past
§ Primary focus: high-accuracy estimation or prediction of the target outcome
§ The input factors that lead to the outcome are incidental
§ Legitimate uses for predictive analytics
§ Prioritize: managers can understand how things work and then choose priorities
§ Simulate: accurate models can be used to simulate alternative scenarios
§ Anticipate: for outcomes over which managers have very little control
§ Benchmark: for outcomes over which managers have some control
§ Establish a benchmark to beat for prescriptive analytics if nothing changes
§ Supervised machine learning primarily focuses on predictive analytics
§ Automated machine learning (AutoML) often gives excellent results
Image source: ICONS Library

Predictive analytics
Eric Siegel - Predictive Analytics - 7-minute keynote sample - YouTube

Altair AI Studio (RapidMiner)
Source: RapidMiner

wooclap.com (Code: GNAGXT)
Which is the best-performing model to predict the number of hospital stays?
§ Mean Absolute Error (MAE): lower error values mean more accurate predictions

wooclap.com (Code: GNAGXT)
Gradient Boosted Tree Model
What is the most important factor related to the number of hospital stays? Explain why you selected that factor based on these charts.
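A minimal predictive-analytics sketch in Python with scikit-learn, in the spirit of the exercise above. The gradient boosted tree model, the 70/30 split, and the default hyperparameters are illustrative assumptions, not the exact Altair AI Studio setup used in class:

```python
# Predictive analytics sketch: benchmark a gradient boosted tree model with MAE.
# Illustrative only; "nmes_seniors.csv" is a hypothetical export of the NMES sample.
import pandas as pd
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.metrics import mean_absolute_error
from sklearn.model_selection import train_test_split

nmes = pd.read_csv("nmes_seniors.csv")

# One-hot encode the categorical inputs; hospital_stays is the prediction target.
X = pd.get_dummies(nmes.drop(columns=["hospital_stays"]), drop_first=True)
y = nmes["hospital_stays"]

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=42
)

model = GradientBoostingRegressor(random_state=42)
model.fit(X_train, y_train)

# Lower MAE means more accurate predictions of the number of hospital stays.
mae = mean_absolute_error(y_test, model.predict(X_test))
print(f"MAE: {mae:.3f}")

# A rough view of which factors the model relies on most.
importances = pd.Series(model.feature_importances_, index=X.columns)
print(importances.sort_values(ascending=False).head(10))
```

The held-out MAE serves as the "benchmark to beat if nothing changes" mentioned above; the feature importances give a first, purely predictive hint of which factor matters most.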
Prescriptive analytics
Image source: ICONS Library

Prescriptive analytics
§ Prescriptive analytics: analyze the past for guidance on how to intervene to ensure that the future will be better than the past.
§ The goal is to intervene to improve the target outcomes, not to reproduce them.
§ Primary focus: the input features (independent variables)
§ What factors result in the target outcome?
§ Interpret the influence of input features to suggest priorities for intervention.
§ Optimize and simulate outcomes based on input features to suggest concrete actions.
Image source: ICONS Library

Accurate predictions without explanations: helpful but limited
§ Accurate predictions are inherently valuable.
§ Target variables are important in themselves.
§ Prescriptive analytics requires accurate predictions; otherwise any prescription is unreliable.
§ But without explanations, accurate predictions are limited.
§ If managers do not understand a model, they are unlikely to trust it.
§ If we depend blindly on an accurate model, we learn nothing from it.
§ Without understanding how a model was determined, we cannot tell when the past no longer resembles the future and the predictive model ceases to be reliable.
Source of images: 123RF

Meaningful explanations for prescriptive analytics
§ Managerial action is guided by understanding how and why various variables are related to the target variable.
§ Meaningful explanations are achieved through explainable AI (XAI) approaches.
§ XAI is also called "interpretable machine learning" (IML).

Simulations: minimal prescriptive analytics based on accurate models
§ Simulations are based on accurate predictive models.
§ We can effectively estimate the effects of various input values on the target variable.
§ Simulate the effects of prescribed actions using benchmark predictive models.
§ Simulating the full range of possible values can guide action in multiple scenarios.
§ Even if we cannot explain an accurate model, simulations give us some idea of its implications.
Altair AI Studio Simulator
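A minimal simulation sketch, assuming the fitted model and encoded feature matrix X from the predictive-analytics sketch above. The choice of chronic_illnesses as the swept input, its assumed range of 0 to 8, and the use of median values for the other features are illustrative assumptions:

```python
# Simulation sketch: hold the other inputs fixed and sweep one input across a range,
# then read off the model's predicted number of hospital stays for each scenario.
# Illustrative only; reuses `model` and `X` from the predictive-analytics sketch.
import numpy as np
import pandas as pd

# A "typical member" profile: the median value of every encoded feature.
baseline = X.median().to_frame().T

# Sweep the number of chronic conditions over an assumed range of 0 to 8.
scenarios = pd.concat([baseline] * 9, ignore_index=True)
scenarios["chronic_illnesses"] = np.arange(0, 9)

predictions = model.predict(scenarios)
for n, pred in zip(scenarios["chronic_illnesses"], predictions):
    print(f"chronic_illnesses = {n}: predicted hospital stays = {pred:.2f}")
```

This mirrors what the Altair AI Studio Simulator offers interactively: even without an explanation of the model's inner workings, the swept predictions suggest how the target would respond to a prescribed change.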
Explainable AI (XAI)
What is Explainable AI? - YouTube

Stakeholders of XAI
§ Managers or product owners
§ Those who commission and authorize AI deployments.
§ They ensure the appropriateness and adequacy of AI for organizational objectives.
§ Users
§ Consumers and anyone affected by the model's decisions.
§ They want to know how decisions are taken and how they could potentially be affected by the AI.
§ Developers of AI
§ Programmers, data scientists, data engineers, model risk analysts, and other AI specialists who develop the technical features of the model.
§ They need to understand how the AI works to effectively build models and improve their performance.
§ External regulators
§ Entities who inspect the legal compliance of AI and the impact of its decisions on users.
§ They need to ensure that the AI is not harming users, whether voluntarily or involuntarily.

Primary goals of the stakeholders of XAI
§ Algorithmic Transparency
§ Realistic Representation
§ Ethical Responsibility
§ Prescriptive Actionability
Image sources: pxfuel, Start Your Business Magazine, ICONS Library

XAI for Algorithmic Transparency
§ Algorithmic Transparency refers to XAI that explains in human-understandable terms how the AI arrives at its results.
§ Balances technical and human understanding:
§ High-level explanation of how the model works without getting into technical details.
§ Focuses on understanding the model's inner workings without needing the details of its internal structure.
§ Main audience: primarily benefits developers; less relevant for managers, users, or regulators.

XAI for Realistic Representation
§ Realistic Representation refers to XAI that explains how faithfully the AI represents the real-world scenario that it models.
§ Related concepts: trustworthiness, confidence, and generalizability are connected to how well a model represents reality and can be applied in other contexts.
§ Role of domain experts: specialists verify the model's real-world correspondence; they may also act as users or managers.
§ Scientific knowledge: often valued by medical, natural, and social scientists.
§ Model representation varies: some machine learning models might relate more closely to reality than others.
§ Main audience: relevant for managers and developers focused on how AI models represent real-world phenomena.
Image source: pxfuel

XAI for Ethical Responsibility
§ Ethical Responsibility refers to XAI that provides information to ensure that AI respects human values of fairness, free will, liberty, privacy, and so on.
§ Fairness: ensures impartiality by highlighting biases in the model's training data and results.
§ Privacy: addresses the risk of AI models inadvertently breaching private information.
§ Trade-off between accuracy and explainability: sometimes a more ethically responsible model might be preferred even if it is less accurate.
§ Sometimes required by law: industries and jurisdictions with strong regulatory requirements might require explicit documentation of AI compliance.
§ Main audience: crucial for ALL stakeholders: managers, users, developers, and regulators.
Image source: Start Your Business Magazine

XAI for Prescriptive Actionability
§ Prescriptive Actionability refers to XAI that explains the implications of AI results for recommended human decisions.
§ Causality: identifying cause-and-effect relationships between variables is needed to support decision-making.
§ Interactive explanations: users should be able to interactively simulate and compare alternative scenarios.
§ Main audience: managers and users.
§ Managers prioritize actionable insights from AI to achieve real objectives, beyond just understanding the AI's workings. XAI can persuade managers to adopt AI.
§ Users benefit from understanding actionable goals.
Image source: ICONS Library

Relationships among stakeholders' goals for XAI
Image sources: pxfuel, Start Your Business Magazine, ICONS Library

Accumulated local effects (ALE) plots for XAI
§ Y values indicate average predictions.
§ Rug plots (numeric X) indicate the distribution of X and Y values.
§ Percentage sizes indicate the prevalence of each category (categorical X).
§ The median line indicates how much of an effect each X value has on the predictions:
§ Approximately at the median band: not much effect
§ Far from the line: strong effect

wooclap.com (Code: GNAGXT)
Accumulated Local Effects (ALE) on hospital stays
Which variables make a meaningful difference?
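A rough sketch of how a first-order ALE curve can be computed for one numeric feature, assuming the model and X from the earlier sketches; the feature "age", the quantile binning, and the simple centring are illustrative choices meant only to convey the idea behind the plots (in practice a dedicated XAI library or Altair AI Studio would produce them):

```python
# Rough first-order ALE sketch for a single numeric feature (illustrative only).
# Reuses `model` and `X` from the predictive-analytics sketch; "age" is an example feature.
import numpy as np
import pandas as pd

def ale_numeric(model, X, feature, n_bins=10):
    # Bin edges based on quantiles of the feature.
    edges = np.unique(np.quantile(X[feature], np.linspace(0, 1, n_bins + 1)))
    # Assign each observation to a bin.
    bins = np.clip(np.digitize(X[feature], edges[1:-1]), 0, len(edges) - 2)
    local_effects = []
    for b in range(len(edges) - 1):
        rows = X[bins == b]
        if len(rows) == 0:
            local_effects.append(0.0)
            continue
        lower, upper = rows.copy(), rows.copy()
        lower[feature] = edges[b]
        upper[feature] = edges[b + 1]
        # Average change in prediction when the feature moves across the bin,
        # with all other features held at their observed values.
        local_effects.append(np.mean(model.predict(upper) - model.predict(lower)))
    ale = np.cumsum(local_effects)
    ale -= ale.mean()  # rough centring; standard ALE centres by the data-weighted mean
    return pd.Series(ale, index=edges[1:], name=f"ALE of {feature}")

print(ale_numeric(model, X, "age"))
```

Values near zero correspond to the "approximately at the median band: not much effect" reading above; values far from zero indicate a strong effect of that region of the feature on predicted hospital stays.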
Actionable explanation
Process of actionable explanation
§ Step 1: Gather all concepts.
§ Step 2: Classify concepts according to their ultimate importance and time frame.
§ Step 3: Conduct analysis according to the various appropriate methodologies available.
§ Step 4: After analysis, classify concepts according to actionable explanation types.

Key attributes of concepts in actionable explanation
§ Relevance: Ultimate, Relevant, or Not relevant
§ Control: High, Low, or No control

Relevance of concepts: Ultimate
§ Every project must have at least one distinct Ultimate concept.
§ There is usually only one Ultimate, but there might be more than one.
§ This is the target or label in supervised learning.
§ It is either highly desirable or highly undesirable.
§ If numeric, either maximize or minimize it.
§ If categorical, certain categories are preferred while others are to be avoided.
§ Managerial implications:
§ Explain the effects of the concepts that lead up to the Ultimate.

Relevance of concepts: Relevant
§ Studied and confirmed to be relevant in affecting the Ultimate.
§ Managerial implications:
§ If controllable, managers should take action to shape the concept in their favour.
§ If not controllable, managers should observe, anticipate and react to changes in the concept.

Relevance of concepts: Not relevant
§ Studied and confirmed to be irrelevant in affecting the Ultimate.
§ It is very valuable to be able to confidently designate a concept as Not Relevant.
§ Managers can deemphasize such concepts so that they do not waste resources (time, personnel, etc.) on their observation or control (at least as far as the current project is concerned).
§ But what is Not Relevant today may become Relevant tomorrow.
§ Whenever the model is reassessed, concepts previously categorized as Not Relevant should always be reverified.
§ The frequency of re-verification depends on the cost of data collection.
§ Managerial implications:
§ Do not waste resources on intervening on them.
§ Periodically verify whether they continue to be Not Relevant.

Controllability of concepts
§ The extent to which managers can intervene to take action to shape the concept in their favour.
§ The nature of the relationships determines the nature of the influence.
§ Concepts might interact to affect their controllability.
§ High control: managers have very much or total influence on the values of the concept.
§ Low control: managers have some influence on the values of the concept, but much of it depends on factors beyond the managers' actions.
§ No control: managers have no influence at all on the values of the concept.

Managerial implications of controllability of concepts
§ High control
§ It is managers' fundamental responsibility to shape the concept so that the Ultimate changes in the desired direction.
§ Low or no control
§ Although managers cannot directly control the values of the concept, they should measure and observe its values.
§ Thus, managers should understand its effects and anticipate them proactively.
§ Managers should identify effective responses to take based on changes in observed values.

wooclap.com (Code: GNAGXT)
Over which variables do managers have high or moderate control?
Regardless of controllability, what actions should managers take to minimize or respond to increased hospital stays?

Conclusion
Summary
§ The three stages of data analytics provide increasingly valuable insights:
§ Descriptive analytics helps managers see the big picture in past data.
§ Predictive analytics helps them anticipate and benchmark their reasonable expectations for the future.
§ Prescriptive analytics suggests how they may shape the future in their favour.
§ Explainable AI (XAI) helps managers understand the factors contributing to a machine learning model's predictions.
§ Actionable explanation helps managers focus on the analysis results that they can most feasibly respond to and that would have the greatest desirable impact (a toy classification of the NMES variables is sketched below).
Image source: ICONS Library
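As a closing illustration of the actionable-explanation classification, the sketch below tags each NMES variable with a relevance and a controllability label and derives where managerial intervention is plausible. The labels themselves are illustrative assumptions for the health-insurance role-play, not findings from the lecture's models:

```python
# Toy actionable-explanation classification of some NMES variables.
# Relevance: "ultimate", "relevant", or "not relevant"; control: "high", "low", or "no".
# All labels are illustrative assumptions, not analysis results.
concepts = {
    "hospital_stays":    {"relevance": "ultimate",     "control": "no"},
    "chronic_illnesses": {"relevance": "relevant",     "control": "low"},
    "health":            {"relevance": "relevant",     "control": "low"},
    "active":            {"relevance": "relevant",     "control": "low"},
    "age":               {"relevance": "relevant",     "control": "no"},
    "gender":            {"relevance": "not relevant", "control": "no"},
    "region":            {"relevance": "not relevant", "control": "no"},
}

# Concepts to intervene on: relevant to the Ultimate and at least somewhat controllable.
actionable = [
    name for name, tags in concepts.items()
    if tags["relevance"] == "relevant" and tags["control"] in ("high", "low")
]
print("Intervene on:", actionable)

# Relevant but uncontrollable concepts: observe, anticipate, and prepare responses.
watch = [
    name for name, tags in concepts.items()
    if tags["relevance"] == "relevant" and tags["control"] == "no"
]
print("Observe and anticipate:", watch)
```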