Lecture 11 - AI Ethics


IT and Society
Lecture 11: Moral Machines & The Ethical Dimensions of Artificial Intelligence
Prof. Jens Grossklags, Ph.D., Dr. Severin Engelmann
Professorship of Cyber Trust, Department of Computer Science
School of Computation, Information and Technology, Technical University of Munich
July 1, 2024

Recap: Misleading and Deceptive Information and Advice
Many approaches prey on desperate individuals, leading them to spend time and money on non-conventional treatments.
Post-truth, alternative facts, and the like affect many domains, including classical science-oriented fields such as medicine.
A complex problem, amplified by the echo chambers of social media and cost-free access to content hosting.
What are good solution approaches?
– Can technology help sufficiently? Are the incentives there for platforms to intervene?
– Can a social network's population reach consensus on what is acceptable? See the Facebook vote case.
– Fake news laws in France and Singapore: Do they apply?

Announcement: Public lecture by Sameer Patil
Abstract: Smartphone apps can often be privacy-invasive. Yet, billions around the world routinely use such apps in their everyday lives despite expressing desires for privacy protection. This talk presents the results of a number of empirical studies in which we have attempted to understand this seeming paradox with qualitative and quantitative approaches covering a variety of usage scenarios, including contact tracing during a public health emergency. In the end, I will tie the insights together by developing the concepts of affective discomfort, hyperbolic scaling, and speculative vulnerability that can help understand why people continue to use seemingly creepy apps.
Bio: Sameer Patil is an Associate Professor in the Kahlert School of Computing at the University of Utah. Previously, he has held several appointments in academia and industry, including Vienna University of Economics and Business (Austria), Helsinki Institute for Information Technology (Finland), University of Siegen (Germany), Yahoo Labs (USA), New York University (USA), and Indiana University Bloomington (USA). Sameer's research interests focus on human-centered investigations of cybersecurity, covering the fields of Human-Computer Interaction (HCI), Computer Supported Collaborative Work (CSCW), and social computing. His research has been funded by the National Science Foundation (NSF), the Department of Homeland Security (DHS), and Google. Sameer obtained a Ph.D. in Computer and Information Science from the University of California, Irvine and holds Master's degrees in Information (HCI) and Computer Science & Engineering from the University of Michigan, Ann Arbor.
This public lecture is part of and kindly supported by the TUM Global Visiting Professor Program.

Today:
1. Introduction to ethics/moral philosophy
– Utilitarianism
– Deontology
– Virtue ethics
What is the right thing to do?
2. A digital ethical dilemma
– A scenario with profound and detrimental consequences
– Working example: autonomous driving
How do moral principles cope with morally-charged scenarios caused by digital technology?

Ethics/Moral Philosophy
Key question: What is the right thing to do?

Scenario 1: What is the right thing to do? Reasons? [Classic trolley problem thought experiments]
Scenario 2: What is the right thing to do? Reasons?
Scenario 3: What is the right thing to do? Reasons?
Scenario 4: What is the right thing to do? Save 4 vs. save 1?
Moral principles have emerged from these examples:

1. Consequentialist principles (utilitarianism)
Locate morality in the consequences of an act.
Ask the question: What creates the most overall utility for the individuals involved?

2. Categorical principles (deontology)
Locate morality in certain absolute/universal moral duties and rights, regardless of the consequences.
Ask the question: What is the intrinsic quality of the act itself?

Origins of Utilitarianism (Bentham)
"Nature has placed mankind under the governance of two sovereign masters, pain and pleasure. It is for them alone to point out what we ought to do, as well as to determine what we shall do."
Jeremy Bentham in Principles of Morals & Legislation (1789)

Fundamentals of Utilitarianism
Moral justification of the Homo oeconomicus:
– Maximize beneficial outcomes (pleasure), minimize detrimental outcomes (pain).
– Pragmatic: finite resources, best outcome for all parties involved.
– Most constitutions in the Anglo-Saxon world are modeled after utilitarian principles (United Kingdom, USA, Australia, etc.).

Origins of Deontology (Kant)
"Act only according to that maxim whereby you can at the same time will that it should become a universal law without contradiction."
Immanuel Kant in Groundwork for the Metaphysics of Morals (1785)

Fundamentals of Deontology
Moral reasoning and action based on ideals:
– Categorical imperative: moral decision-making on the basis of duty (regardless of the resources given), whereby an individual must not be instrumentalized to serve the betterment of another person's condition.
Adopted by the German constitution (Article 1, sentence 1): "Human dignity is inviolable" (eternity clause) [Die Würde des Menschen ist unantastbar.]
Also reflected in the Universal Declaration of Human Rights.

Let's return to our original question: How do moral principles cope with morally-charged scenarios caused by digital technology?
In any algorithmic system, a morally-charged decision process can be guided and evaluated by utilitarian, deontological, or virtue ethics principles (see the illustrative sketch below).

Preview of Next Week
Fairness, accountability, and transparency of decision-making in algorithmic systems:
– Predicting employment success in hiring
– Predicting recidivism risk in court
– Predicting loan repayment in credit scoring
– Predicting recommendation success in targeted advertising

How should machines make moral decisions?
Key study: Awad et al. "The moral machine experiment." Nature 563:7729 (2018): 59-64.
Publication version [use library access]: https://doi.org/10.1038/s41586-018-0637-6
Working paper [public access]: https://ore.exeter.ac.uk/repository/bitstream/handle/10871/39187/Moral%20Machine.pdf

[Video excerpt of an interview with Barack Obama discussing ethical aspects of autonomous driving]

Key Statements from the Obama Video
"Technology is essentially here."
"Technology could drastically reduce traffic fatalities."
"Technology could resolve global warming."
"What are the values that we'll embed in the car when it makes life-and-death choices?"
"That's a moral decision."

Who should decide the principles underlying such moral decision-making? Engineers? Ethicists? Politicians? The "average Joe"?
How would the average Joe decide?
"Any attempt to devise artificial intelligence ethics must be at least cognizant of public morality."
Awad, Edmond, et al. "The moral machine experiment." Nature 563:7729 (2018): 59-64.
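The statement above, that a morally-charged decision process in an algorithmic system can be guided and evaluated by utilitarian or deontological principles, can be made concrete with a small example. The following Python sketch is purely illustrative and not part of the lecture materials; the scenario, the numbers, and all class and function names are hypothetical. It shows how the two principles can recommend different actions in a stylized trolley-style driving dilemma: the consequentialist rule minimizes total casualties, while the categorical rule refuses any option that instrumentalizes a bystander.

```python
# Illustrative sketch only (not from the lecture): how an algorithmic decision
# could be guided by a utilitarian rule versus a deontological constraint.
# All names, values, and the scenario itself are hypothetical.
from dataclasses import dataclass

@dataclass
class Option:
    label: str              # e.g. "stay on course" or "swerve"
    casualties: int         # people harmed if this option is chosen
    instrumentalizes: bool  # True if a bystander is used merely as a means

def utilitarian_choice(options):
    # Consequentialist principle: pick the option with the least overall harm.
    return min(options, key=lambda o: o.casualties)

def deontological_choice(options):
    # Categorical principle: never pick an option that instrumentalizes a
    # person, even if it would reduce total casualties; otherwise keep the
    # default course of action.
    permitted = [o for o in options if not o.instrumentalizes]
    return permitted[0] if permitted else options[0]

dilemma = [
    Option("stay on course", casualties=4, instrumentalizes=False),
    Option("swerve into bystander", casualties=1, instrumentalizes=True),
]
print("Utilitarian choice:  ", utilitarian_choice(dilemma).label)
print("Deontological choice:", deontological_choice(dilemma).label)
```

The point is not the code itself, but that the "values embedded in the car" correspond to which of these rules, or which weighting of them, the system is programmed to apply.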
A global study on moral reasoning across cultures.

About the Study
Binary, unavoidable outcomes.
39.61 million decisions recorded, from 233 countries and territories.
492,921 subjects completed the optional demographic survey at the end.
Results: individual variations, cultural clusters, country-level predictors.

Nine Factors Influence Moral Reasoning
1. Sparing humans versus pets.
2. Staying on course versus swerving.
3. Sparing passengers versus pedestrians.
4. Sparing more lives versus fewer lives.
5. Sparing men versus women.
6. Sparing the young versus the old.
7. Sparing pedestrians who cross legally versus jaywalking.
8. Sparing the fit versus the less fit.
9. Sparing those with higher status versus those with lower status.

The 20 characters in the Moral Machine experiment.

The "Game" Created a Hype...
[Video of an influencer introducing the Moral Machine Experiment]
[Video of a discussion between two participants of the Moral Machine Experiment]

Results of the Study: Global Preferences
Probability of sparing the characters on the right side relative to the characters on the left side (preference for right over left).
Relative advantage or penalty for each character, compared to an adult man or woman.

Results of the Study: Cultural Clusters
Cultural proximity results in converging preferences for machine ethics (in this context).
Do between-cluster differences pose greater challenges? Yes!
Aggregated preferences in the Western, Southern, and Eastern cultural clusters.

Stronger and Weaker Cultural Differences
Eastern: less preference for the young over the old.
Southern: more preference for the young over the old.
Southern: more preference for higher status.
Southern: less preference for humans over animals.
All: weak preference for sparing pedestrians over passengers.
All: moderate preference for sparing the lawful over the unlawful.
For all lines except the last two, read: "Compared to the other clusters..."

Cultural Differences in the Southern Cluster
Strong preference for sparing women over men.
Strong preference for sparing the fit over the "large".
Strong preference for sparing high status over low status.

Cultural Differences (Summary)
What implications do these cultural differences in ethical preferences have for manufacturers of autonomous cars? Should they implement these ethical preferences? What do you think?

Many subjects had the following preference: everyone would be better off if AVs were utilitarian (in the sense of minimizing the number of casualties on the road), but they all have a personal incentive to ride in AVs that will protect them at all costs.

AVs May Cause a Social Dilemma
Road fatalities will be reduced (morally good). But people will not buy a car that kills them...
"I would never buy a self-sacrificing car! But everyone else should!"
What do you think? (A stylized illustration of this dilemma follows below.)

Is virtue signaling the way out of this social dilemma?
Virtue ethics emphasizes the virtues, or moral character, in contrast to the approach that emphasizes duties or rules (deontology) or the one that emphasizes the consequences of actions (consequentialism).

Virtue signaling (receiving benefits): Moral praise? Tax reduction? Better medical insurance? More holidays?
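The social dilemma described above (most subjects want other people's AVs to be utilitarian but want their own car to protect them at all costs) has the structure of a collective-action problem. The following Python sketch is purely illustrative and not from the study; every number is made up and chosen only to expose that structure: choosing a self-protective car lowers my own risk regardless of what everyone else does, yet if everyone chooses it, everyone ends up worse off than if all cars were utilitarian.

```python
# Illustrative sketch only: a stylized payoff calculation for the AV social
# dilemma. All numbers are hypothetical and serve only to expose the structure.

def my_expected_risk(my_car: str, others_car: str) -> float:
    """Hypothetical yearly risk to me, given my car's programming and the
    programming used by everyone else on the road."""
    # Utilitarian cars lower the baseline risk for everyone on the road;
    # a self-protective car shifts a small part of my remaining risk onto others.
    baseline = 1.0 if others_car == "utilitarian" else 1.5
    my_discount = -0.2 if my_car == "self-protective" else 0.0
    return baseline + my_discount

for others in ("utilitarian", "self-protective"):
    for mine in ("utilitarian", "self-protective"):
        risk = my_expected_risk(mine, others)
        print(f"others drive {others:15s} | I drive {mine:15s} | my risk = {risk:.1f}")

# Whatever the others do, "self-protective" is better for me (a dominant
# strategy), yet universal self-protection (risk 1.3 for everyone) is worse
# than universal utilitarianism (risk 1.0): the signature of a social dilemma.
```

This is the sense in which fewer road fatalities overall can coexist with nobody wanting to buy the self-sacrificing car.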
How do countries usually make (legal) decisions about ethically-charged technologies (gene editing, autonomous weapons, cloning, etc.)?
Ethics commissions: a group of ethicists and other experts (appointed by the government).

June 2017
Experts (1)
Experts (2)

Public preferences from the Moral Machine experiment compared with the ethics commission's guidelines:
1. Humans over pets: "the system must be programmed to accept damage to animals and property... if this means that personal injury can be prevented."
2. More over fewer: "programming to reduce the number of personal injuries may be justifiable."
3. Young over old: "any distinction based on personal features (age, gender, physical or mental constitution...) is strictly prohibited."
4. High status over low status: "any distinction based on personal features (age, gender, physical or mental constitution...) is strictly prohibited."
5. Lawful over unlawful: "parties involved in the generation of mobility risks must not sacrifice non-involved parties."

Takeaways
Three major approaches to normative ethics:
1. Deontology (duty, rule-based)
2. Utilitarianism (consequences, outcomes)
3. Virtue ethics (moral character)
Potentially many digital technologies create morally-charged scenarios.
Different cultures may have different ethical preferences. Should these be taken into account, or should experts make the decisions?
Autonomous vehicles create an ethical and social dilemma.

Thank you! See you next week!
