Summary

This document covers human-centricity in AI ethics: human-AI interaction and examples of algorithmic bias, the three dimensions of ethics, degrees of human oversight, human-centric design, Value Sensitive Design (VSD), and the Judgement Call game for surfacing ethical concerns in product teams.

Full Transcript


Official (Open)

Human Centricity in AI Ethics

1. Human Centricity: Human-AI Interaction; 3 Dimensions of Ethics; Ethical Human-AI Interaction
2. Human-Centric Design: Value Sensitive Design (VSD); How to play Judgement Cards
3. AI Ethics & Governance Toolkit

Reference: SCS-NTU Certificate Programme in AI Ethics & Governance (AI E&G BoK) Module 1-5

Human-Computer Interaction (HCI) vs Human-AI Interaction

Key difference from traditional HCI: the internal locus of control is gone. With a conventional computer the human initiates every action; an AI system can act, recommend, and adapt on its own.

Human-AI Interaction: the AI system is ready; is the human?

Examples of AI bias:
- Computer vision: darker-skinned individuals are the most misclassified, likely because minorities are under-represented in the training dataset.
- Online records and ads: male job seekers were more likely to be shown high-paying jobs, and searches for black-sounding names were 25% more likely to return hits suggestive of a criminal record.
- Policing and criminal justice: a recidivism algorithm deemed black defendants more likely to reoffend.

AI Ethics Drawing Public Attention

In the real world, humans often constrain their actions according to a number of priorities:
- Business values
- Social norms
- Morality
- Religious values

The overriding concern is that AI systems may not obey such values when they try to maximize their objective functions. How do we design AI systems that act in line with our ethical values while still achieving their design objectives?

PDPC – Model AI Governance Framework (2nd Ed): https://www.pdpc.gov.sg/-/media/files/pdpc/pdf-files/resource-for-organisation/ai/sgmodelaigovframework2.pdf

Human Involvement

The Model Framework proposes a risk impact assessment that uses three approaches to classify the degree of human oversight in the decision-making process.
- Human-in-the-loop: the human retains full control; the AI only provides recommendations, and human approval is required for every recommendation.
- Human-out-of-the-loop: the AI system assumes full control with no option for human override; humans affected by the system cannot influence its outcomes.
- Human-over-the-loop: the human plays a supervisory role and can resume control (for example, by adjusting parameters) when the AI system produces unexpected or undesirable outcomes.

Effort in Building Human Centricity in AI Ethics: 3 Dimensions of Ethics

- Consequentialist ethics (utilities): weighing the consequences of each choice and choosing the option with the most moral outcome.
- Deontological ethics (rules): respecting the obligations, duties, and rights related to a given situation.
- Virtue ethics (values): acting and reasoning according to some moral values (e.g., bravery, justice).

H. Yu, Z. Shen, C. Miao, C. Leung, V. R. Lesser & Q. Yang, "Building Ethics into Artificial Intelligence," in Proceedings of the 27th International Joint Conference on Artificial Intelligence (IJCAI'18), pp. 5527–5533, 2018.

Incorporating Ethics into AI
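The consequentialist dimension above, weighing the consequences of each choice and picking the option with the best moral outcome, can be illustrated as a simple argmax over a moral utility function. This is only a toy sketch: the action names and utility scores are invented for illustration, and a real system would need a defensible way to assign them.

```python
# Toy consequentialist chooser: score each action's predicted outcomes by a
# hand-assigned moral utility and pick the action with the highest total.
# All actions and utility numbers here are invented for illustration.

def moral_utility(outcome: str) -> float:
    # Hypothetical scores; a real system would need a justified moral model.
    scores = {"harm_avoided": 2.0, "privacy_preserved": 1.0, "user_deceived": -3.0}
    return scores.get(outcome, 0.0)

def choose_action(actions: dict[str, list[str]]) -> str:
    """Return the action whose predicted outcomes have the highest total utility."""
    return max(actions, key=lambda a: sum(moral_utility(o) for o in actions[a]))

actions = {
    "show_ad_anyway": ["user_deceived"],
    "ask_consent_first": ["privacy_preserved", "harm_avoided"],
}
print(choose_action(actions))  # ask_consent_first
```

The deontological and virtue dimensions resist this kind of scoring, which is one reason the slides treat the three dimensions as complementary rather than interchangeable.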
Exploring Ethical Dilemmas

- Designing knowledge representation schemas for discussing ethical issues (e.g., features, duties, actions, cases, and principles).
- Accounting for cultural differences, application-domain specificity, and the framing effect.
- Making decisions rather than declaring preferences.

Source: http://moralmachine.mit.edu/

Individual Ethical Decision-making

Collective Ethical Decision-making

Ethical Human-AI Interaction

BoK Guiding Principles

- Transparent, Fair & Explainable: organisations should strive to ensure that their use or application of AI reflects the objectives of these principles as far as possible. This helps build trust and confidence in AI.
- Human-Centricity: as AI is used to amplify human capabilities, the protection of the interests of humans, including their well-being and safety, should be a primary consideration in the design, development, and deployment of AI.

AI systems are complex sociotechnical systems, so how do we raise awareness among AI product teams of the ethical considerations related to the technologies they design and build? To support product teams' exploration of these concerns, we need engaging, high-impact methods that facilitate envisioning around specific ethical principles.
Human-Centric Design

- The way actual users experience your system is essential to assessing the true impact of its predictions, recommendations, and decisions.
- Design features with appropriate disclosures built in: clarity and control are crucial to a good user experience.
- Consider augmentation and assistance: producing a single answer can be appropriate where there is a high probability that the answer satisfies a diversity of users and use cases. In other cases, it may be better for your system to suggest a few options to the user.
- Model potential adverse feedback early in the design process, then run specific live testing and iteration on a small fraction of traffic before full deployment.
- Engage with a diverse set of users and use-case scenarios, and incorporate feedback before and throughout project development.

Value Sensitive Design (VSD)

Value Sensitive Design is a methodology aimed at creating technology that aligns with human values (e.g., privacy, equity, sustainability) while considering moral and ethical principles: "technology should be designed not just for functionality but also for social and ethical well-being". For example, if "privacy" is a value, a corresponding norm might be "minimize data collection", and design requirements for that norm could include "data encryption".

VSD emphasizes integrating values into the technology design process through a structured progression from values to actionable design requirements, so that technology not only meets functional needs but also aligns with societal, ethical, and moral expectations.

1. Identify Values: start by identifying the key values relevant to the technology.
2. Establish Norms: develop norms that contextualize these values within specific cultural or societal frameworks.
3. Define Design Requirements: convert norms into practical design features or constraints to ensure the technology aligns with human values.

Applications of VSD:
- Ethical technology development: ensures technologies such as AI, IoT, or autonomous systems are designed with fairness, inclusivity, and accountability.
- Stakeholder engagement: involves diverse stakeholders (users, developers, policymakers) so the design reflects varied perspectives.
- Long-term impact: focuses on the societal and ethical implications of technology over time.

Value Sensitive Design (VSD) – Envisioning Cards

The Envisioning Cards draw attention to:
- Both direct and indirect stakeholders. Generating and integrating insights about indirect stakeholders into the design process, systematically identifying their roles and addressing their concerns, leads to more ethical, inclusive, and impactful outcomes.
- Distinctions among designer values, values explicitly supported by the technology, and stakeholder values. Value tensions highlight the challenge of balancing competing values in system design; identifying and addressing these tensions early through thoughtful design features helps systems align with ethical principles, stakeholder needs, and societal expectations, so that technology is both functional and value-sensitive.
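The VSD progression from values to norms to design requirements described above can be sketched as a small traceability structure. The privacy example mirrors the slides; the class shape and helper method are invented for this sketch, not part of any standard VSD tooling.

```python
# Minimal sketch of VSD traceability: value -> norms -> design requirements.
# The dataclass and the example entries are illustrative, not a standard API.
from dataclasses import dataclass, field

@dataclass
class Value:
    name: str
    # Each norm contextualizes the value; each norm maps to concrete requirements.
    norms: dict[str, list[str]] = field(default_factory=dict)

    def requirements(self) -> list[str]:
        """Flatten every design requirement traced back to this value."""
        return [req for reqs in self.norms.values() for req in reqs]

privacy = Value(
    name="privacy",
    norms={
        "minimize data collection": [
            "collect only fields needed for the task",
            "data encryption at rest and in transit",
        ],
    },
)
print(privacy.requirements())
```

Keeping the chain explicit means every design requirement can be traced back to the value it serves, which is the point of the structured progression in steps 1–3.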
Value Sensitive Design (VSD) – Envisioning Cards

- Individual, group, and societal levels of analysis. Considering national differences matters when deploying a system globally: identifying challenges specific to each country, as well as concerns common across regions, helps designers create systems that are ethical, adaptable, and widely accepted.
- The integrative and iterative interplay of conceptual, technical, and empirical investigations, and a commitment to progress (not perfection). One card, for instance, asks how the technology will affect friendships and family relationships over time; reflecting on both positive and negative influences helps designers create systems that foster stronger, healthier, and more sustainable relationships in the long run.

Judgement Call

Judgement Call is a game that draws on value sensitive design (VSD) and design fiction to surface ethical concerns.

How to play Judgement Call: the product team identifies its stakeholders, then players write fictional product reviews, pretending to be those stakeholders, tied to specific ethical principles.
The reviews scaffold a discussion in which specific ethical concerns are highlighted; solutions are then considered.

How to Play Judgement Call

Ethical Principles Card, Your Role Card, Your Stance Card

Step 1: Choose a Scenario

There are two types of scenarios: product scenarios and fictional scenarios. The game was designed to be used by product teams with real products, but either type of scenario can lead to excellent discussion about ethics and technology.

Tips for clarifying a product scenario:
- Take a moment to review the current design of the technology.
- Ask what elements of the design have recently changed and what features still need to be resolved.
- If the product is a platform technology, specify an application area: "facial recognition technology used in an airport" is better than "facial recognition technology".

Tips for choosing a fictional scenario:
- A good fictional scenario includes both a specific technology and an application area: "autonomous drones for commercial package delivery" is better than "autonomous drones".
- After you have decided on the scenario, spend a few minutes discussing the technology's features and the social context.
- Fictional scenarios are typically less detailed than product scenarios, and the technology at hand may be unfamiliar to many players.
Step 1: Choose a Scenario - Examples

Step 2: Identify Stakeholders

- Direct stakeholders interact directly with the technology: end users, designers, engineers, hackers, and administrators.
- Indirect stakeholders do not interact with the technology but are affected by its use: advocacy groups, families of end users, regulators, and society at large.
- Excluded stakeholders cannot or do not use the technology. Reasons for exclusion can include physical, cognitive, social, or situational constraints; for example, a technology that relies heavily on visual elements will exclude stakeholders with low vision.

Step 3: Draw a Hand

In each round, each player draws one rating card, one stakeholder card, and one ethical principle card, and uses these three cards to write a product review. Each player also receives one wild card per game, which can be played during any round to:
- Draw a new rating card, for when you would like to try the review with a different number of stars.
- Draw a new stakeholder card, for when you are unfamiliar with the stakeholder or uncomfortable representing their views.
- Give a zero-star review, for when the technology failed the stakeholder completely and even one star is too many.

Step 4: Write a Review

- Before you begin writing, think about the experiences and perspective of the stakeholder.
- Be specific about the features of the technology, even if they aren't plausible. This will make the discussion more concrete.
- Don't be afraid to use abbreviations, emoticons, and other playful language to express the stakeholder's feelings about the technology.

Step 5: Share and Discuss

Once all reviews are completed, have one player collect, shuffle, and redistribute them. Take turns reading the reviews aloud, listening for themes to emerge. After all the reviews have been read, discuss them as a group. Individually, the reviews highlight aspects of the technology from the perspective of a specific stakeholder; together, they paint a more holistic picture by providing multiple perspectives on the same technology.

Questions to consider in discussion:
- Are there both positive and negative reviews about the same features?
- Do any of the stakeholders' concerns surprise you?
- What changes could you make to the product based on what you learned today?
- Was it challenging to write from the perspective of different stakeholders?
- What can you change about the technology to alleviate some of the concerns you identified?

Fairness in Design Toolkit: https://youtu.be/nnowNLss_wQ
Privacy Preservation: https://federated-learning.org/

References

- AI E&G BoK – Section 3 – Internal Governance, Chapter 4.2 – Designing Programs with Human-Centricity in Mind.
- H. Yu, Z. Shen, C. Miao, C. Leung, V. R. Lesser & Q. Yang, "Building Ethics into Artificial Intelligence," in Proceedings of the 27th International Joint Conference on Artificial Intelligence (IJCAI'18), pp. 5527–5533, 2018.
- S. Ballard, K. M. Chappell & K. Kennedy, "Judgment Call the Game: Using Value Sensitive Design and Design Fiction to Surface Ethical Concerns Related to Technology," in Proceedings of the 2019 Designing Interactive Systems Conference (DIS'19), pp. 421–433, 2019.
