AI Ethics Governance Framework for Organisations
Summary
This document provides an AI ethics governance framework for organizations. It covers adapting the framework to organizational needs, managing AI risks, and governance in different sectors, such as banking and healthcare. The document draws on various resources and principles.
Full Transcript
Official (Open)

AI Ethics Governance Framework for Organisations

1. AI Ethics Governance Framework
- The benefits of AI ethics governance
- Risks in trying to be ethical
- AI ethics principles
- Corporate governance and IT governance models
- Ethics Guidelines for Trustworthy AI (European Commission)
- PDPC Model AI Governance Framework

2. Adapting AI Ethics Governance in Organisations
- Moral culture
- Internal governance structures
- Roles & responsibilities
- Scenario analysis
- AI ethics training & awareness

3. Managing & Addressing AI Risks
- Identifying risks
- Risk management
- Risk assessment & mitigation across the AI lifecycle

4. Governance in Banking & Finance
- FEAT Principles (Fairness, Ethics, Accountability & Transparency)
- Veritas Initiative
- Digital advisory services

5. Governance in Healthcare
- Singapore's standing operations management
- IHiS (Integrated Health Information Systems)
- EMR (Electronic Medical Record) governance
- Drugs & devices

Reference: SCS-NTU Certificate Programme in AI Ethics & Governance (AI E&G BoK), Modules 1-5

AI Ethics Governance Framework

AI ethics refers to a set of moral principles that help define the boundaries and responsibilities for actions and decisions related to AI development and use. AI ethics governance is the process by which rules and actions are regulated, structured and maintained through an appropriate assignment of accountability, to achieve AI solutions that are trustworthy and ethical.
AI E&G BoK – Section 3 Internal Governance, Chapter 3.1 – Setting up internal AI governance structures

Trustworthy AI – Why the Need?

AI systems are increasingly making decisions that have a direct and serious impact on our livelihoods, freedom, and physical and mental well-being (for example, job applications screened and rejected by software).
BBC News – The computers rejecting your job application – https://www.bbc.com/news/business-55932977

Why AI Ethics Governance Is Important and Relevant to Me

People in decision-making positions involved in AI ethics governance can influence how AI technology is designed, developed, deployed and/or used in the organisation or the wider community. Everyone should therefore know how AI systems can impact their lives and their rights with regard to privacy and protection from unfair discrimination.
AI E&G BoK – Section 3 Internal Structures, Chapter 3.1 – Setting up internal AI governance structures (Part 2.2 – Role of the Board)

AI Ethics Governance Framework – Benefits

Benefits for an organisation of having an ethical and implementable AI governance framework:
1. Creates and maintains a strong brand reputation
2. Builds trust among stakeholders
3. Addresses increasing public awareness and concerns about AI
4. Stays ahead of regulatory and governance standards

1. Creates and maintains a strong brand reputation
Practising ethical AI governance builds a reputable brand that is less likely to succumb to ethical lapses.
Although awareness of potential AI issues and risks is starting to grow, AI ethics is still not widely discussed or well understood among the C-suite [Raconteur]. Organisations have to put in place governance structures upfront for employees to adopt when designing or deploying AI technology.
Raconteur – Fears of reputational damage holds back AI deployment – https://www.raconteur.net/technology/artificial-intelligence/reputation-ai-adoption/

2. Builds trust among stakeholders
The successful adoption of any technology depends on how well that technology is trusted by all stakeholders. Loss of trust can significantly impact subsequent undertakings by an organisation, despite its best and genuine efforts thereafter [CNA]. Technology is only as ethical as its creators and users design it to be. Companies have to evaluate how they can use technology in ways that align with their core principles.
CNA – WhatsApp's new T&Cs could spark changes to how data and privacy are managed – https://www.channelnewsasia.com/news/commentary/whatsapp-terms-update-facebook-telegram-signal-data-privacy-13986088

3. Increasing public awareness and concerns about AI
The growing concern about the widespread use of AI technology is shared across sex, age, income and education demographics. A poll of 20,000 people across 27 countries was compiled by polling firm Ipsos for the WEF's Annual Meeting of the New Champions in Dalian (July 2019). Almost half of those surveyed said companies using AI should be regulated more strictly, and attitudes towards AI varied little across the different demographic groups.
WEF – Public Concern Around Use of Artificial Intelligence is Widespread, Poll Finds – https://www.weforum.org/press/2019/07/public-concern-around-use-of-artificial-intelligence-is-widespread-poll-finds

4. Stay ahead of regulatory and governance standards
Example 1: The Changing AI Regulatory Landscape in the USA
Currently, the US approach is to issue guidelines (principles) and leave it to federal agencies and the industry to develop AI solutions in an unregulated environment. Attempts have been made in the past to introduce legislation (e.g. the Algorithmic Accountability Act) to make large companies more accountable for their AI systems in terms of bias, privacy or security risk, but with limited success so far.
The Verge – White House encourages hands-off approach to AI regulation – https://www.theverge.com/2020/1/7/21054653/america-us-ai-regulation-principles-federal-agencies-ostp-principles
The Verge – A new bill would force companies to check their algorithms for bias – https://www.theverge.com/2019/4/10/18304960/congress-algorithmic-accountability-act-wyden-clarke-booker-bill-introduced-house-senate

4. Stay ahead of regulatory and governance standards
Example 2: The Changing AI Regulatory Landscape in the European Union
The EU is currently planning legislation to govern AI systems, especially in areas considered to be of "high risk". This legislative effort is based on the Ethics Guidelines for Trustworthy AI published by the EC High-Level Expert Group (HLEG) on AI in April 2019. The guidelines deal with accountability, non-discrimination, transparency, robustness, privacy, environmental well-being and human oversight.
NS Tech – The European Commission's AI legislation won't be unveiled until 2021 – https://tech.newstatesman.com/policy/ursula-von-der-leyen-ai-legislation-2021
EC High-level Expert Group on AI – Ethics Guidelines for Trustworthy AI – https://ec.europa.eu/digital-single-market/en/news/ethics-guidelines-trustworthy-ai

4. Stay ahead of regulatory and governance standards
Example 3: The Changing AI Regulatory Landscape in Canada
On 1 April 2019, the Government of Canada's Directive on Automated Decision-Making (ADM) took effect, with the goal of ensuring that ADM systems in government services are implemented with minimal risk, incorporate concepts of fairness, and make consistent and interpretable decisions. The Directive requires an algorithmic impact assessment for each ADM system. This assessment results in a classification from Level 1 to Level 4 according to the system's impact on the rights of individuals and society, and whether that impact is reversible.
Lexology – Federal Government's Directive on Automated Decision-Making – https://www.lexology.com/library/detail.aspx?g=4c96398f-0f16-4f35-90a0-ae280838a2e1
Government of Canada – Responsible use of AI – https://www.canada.ca/en/government/system/digital-government/digital-government-innovations/responsible-use-ai.html

4. Stay ahead of regulatory and governance standards
Example 4: The Changing AI Regulatory Landscape in China
On 17 June 2019, China's Ministry of Science and Technology (MOST) published on its website the Governance Principles for a New Generation of AI: Develop Responsible AI. These principles are not binding but seek to provide a framework and action guidelines for responsible AI governance. Earlier, on 28 May, the Beijing Academy of AI had released the "Beijing AI Principles".
MOST – Governance Principles for a New Generation AI – (in Chinese) http://most.gov.cn/kjbgz/201906/t20190617_147107.htm , (in English) https://perma.cc/V9FL-H6J7
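The Level 1 to Level 4 classification in Example 3 (Canada) lends itself to a simple worked sketch. The real Directive scores a detailed questionnaire; the toy rubric below is purely illustrative and uses only the two factors named in the text: impact on the rights of individuals and society, and reversibility. The class name, field names and scale are invented for this example.

```python
from dataclasses import dataclass

# Illustrative sketch only: the real Directive on Automated Decision-Making
# scores a detailed questionnaire. This toy rubric uses just the two factors
# named in the text: impact on rights, and whether the impact is reversible.

@dataclass
class AdmAssessment:
    impact_score: int   # 0 (little impact) .. 3 (very high impact), hypothetical scale
    reversible: bool    # can the decision's effects be undone?

def impact_level(a: AdmAssessment) -> int:
    """Map an assessment to an impact level from 1 to 4 (4 = highest)."""
    level = a.impact_score + 1          # base level from impact on rights
    if not a.reversible and level < 4:  # irreversible effects push the level up
        level += 1
    return min(level, 4)

# A reversible, low-impact system stays at Level 1;
# an irreversible, high-impact one reaches Level 4.
print(impact_level(AdmAssessment(impact_score=0, reversible=True)))   # 1
print(impact_level(AdmAssessment(impact_score=3, reversible=False)))  # 4
```

Under the Directive, a higher level then triggers stricter requirements (e.g. peer review, human intervention), which a deployment pipeline could gate on in the same way.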
CGTN – China issues 8 principles for AI governance – https://news.cgtn.com/news/2019-06-17/China-issues-eight-principles-for-AI-governance-HByDeSd3Ko/index.html

4. Stay ahead of regulatory and governance standards
Example 5: The Changing AI Regulatory Landscape in Singapore
On 23 January 2019, the PDPC released the 1st edition of the Model AI Governance Framework at the WEF in Davos; the 2nd edition was released a year later. It does not currently impose any binding requirements but provides a framework for adoption by organisations that deploy AI solutions. The need for legislation of AI systems is growing; the issue is how extensive it should be and when it should be enacted.
PDPC – Model AI Governance Framework (2nd Edition) – https://www.pdpc.gov.sg/-/media/files/pdpc/pdf-files/resource-for-organisation/ai/sgmodelaigovframework2.pdf
The Straits Times – AI sans ethics can endanger everyone – https://www.straitstimes.com/opinion/ai-sans-ethics-can-endanger-everyone

Risks when translating ethical principles into practices:
Ethics Shopping – developing one's own set of ethical guidelines by picking and choosing from the variety available to fit current practice, without any incentive to change ethical behaviour.
Ethics Bluewashing – making misleading claims or implementing superficial measures to appear more ethical than one really is; ethics is merely a performance.
Ethics Lobbying – exploiting self-regulation and digital ethics in order to delay or avoid good and necessary legislation and its enforcement.
Ethics Dumping – exporting unethical R&D activities to places where such activities are less problematic, and then importing back the outcomes of such unethical activities.
Ethics Shirking – reducing efforts in ethical practice in situations and places where ethical standards are lower and the (mistakenly) perceived risk of appearing unethical seems minimal.
Being ethical takes commitment and sustained effort from all departments within an organisation, as well as a change of organisational mindset and culture.
Floridi L., Translating Principles into Practices of Digital Ethics: 5 Risks of Being Ethical, Philosophy & Technology (2019) 32:185-193 – https://link.springer.com/article/10.1007/s13347-019-00354-x

AI Ethics Principles
There have been numerous ethical guidelines and principles issued. Floridi & Cowls studied 6 reputable documents recently published by authoritative multi-stakeholder bodies and proposed 5 convergent AI principles: the four traditional bioethics principles (Beneficence, Non-Maleficence, Autonomy, Justice) plus a new enabling principle for AI (Explicability). However, the adoption and implementation of AI ethical principles still faces deep disagreements and conflicting priorities.
Beneficence – Do only good. AI technology is to benefit humanity. It should promote the well-being and dignity of humans, and ensure sustainability and a good environment for future generations.
Non-Maleficence – Do no harm. Consider the negative outcomes of overusing and misusing AI technology, particularly infringements of personal privacy.
Autonomy – Strike a balance between the decision-making power retained by humans and that delegated to artificial agents, which should be restricted and made intrinsically reversible.
Justice – Ensure fairness by avoiding unfair discrimination. Promote shared benefits and shared prosperity, diversity, and the preservation of solidarity.
Explicability – Promote transparency by incorporating the attributes of intelligibility ("how does it work?") and accountability ("who is responsible for the way it works?").
L. Floridi and J. Cowls, A unified framework of five principles for AI in society, Harvard Data Science Review, Issue 1.1 (2019) – https://hdsr.mitpress.mit.edu/pub/l0jsh9d1/release/7

Corporate Governance
Corporate governance is the system of rules, practices, and processes by which an organisation is directed and controlled. It involves balancing the interests of a company's many stakeholders, such as shareholders, management, customers, suppliers, financiers, the government, and the community. A company's board of directors is the primary force influencing corporate governance, and bad governance can cast doubt on a company's operations and its ultimate profitability. Corporate governance entails the areas of environmental awareness, ethical behaviour, corporate strategy, compensation, and risk management. The basic principles of corporate governance are accountability, transparency, fairness, and responsibility.
Investopedia – What Is Corporate Governance? – https://www.investopedia.com/terms/c/corporategovernance.asp

Is there a need for another governance framework for AI?
Most organisations already have processes and systems in place for governance, planning, budgeting, risk management and technology audit. But there are two key differences between conventional IT and AI-based systems: AI models are non-deterministic and their operation is often opaque, and AI models learn from data and evolve over time. Organisations should therefore pay equal attention to the model code and the training dataset. Governance issues like "how was it trained?" and continuous monitoring are important.
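The "how was it trained?" question above implies keeping a record of each trained model's provenance. A minimal sketch, assuming nothing beyond the Python standard library; all field names here are illustrative, not from any standard:

```python
import hashlib
import json
import datetime

# Hypothetical minimal "model provenance" record: because AI models learn
# from data and evolve, governance needs an answer to "how was it trained?"
# long after deployment. Field names are illustrative only.

def provenance_record(model_name: str, train_rows: list, params: dict) -> dict:
    # A stable hash of the training data lets auditors verify later that
    # the dataset on file is the one actually used for training.
    data_bytes = json.dumps(train_rows, sort_keys=True).encode()
    return {
        "model": model_name,
        "trained_at": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "training_data_sha256": hashlib.sha256(data_bytes).hexdigest(),
        "hyperparameters": params,        # what settings produced this model
        "n_training_rows": len(train_rows),
    }

rec = provenance_record("attrition-v1", [{"age": 41, "left": 0}], {"C": 1.0})
print(rec["n_training_rows"], rec["training_data_sha256"][:8])
```

In practice such records would be written at every retraining run, giving continuous monitoring a concrete artefact to compare against.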
AIEG BoK – Section 3, Chapter 3.2 – Assigning Roles and Responsibilities (Part 1 – Introduction)

Where Should We Start?
AI solutions are deployed through a combination of software systems and data pipelines, so existing IT and technology governance models are good starting points. Two well-known governance models are ISO/IEC 38500 and COBIT 2019.
AIEG BoK – Section 3 Internal Governance, Chapter 3.2 – Assigning roles and responsibilities (Part 2 – AI Ethics & Governance: Standards & Frameworks)

ISO/IEC 38500 Governance Model
An international standard for the corporate governance of IT (current edition published in 2015). It provides guiding principles and a model for governing bodies to use in evaluating, directing and monitoring the use of IT within their organisation, and is applicable to all organisations, regardless of purpose, design and ownership structure. The standard sets out six principles for the good governance of IT:
1. Responsibility – Individuals and groups in the organisation understand and accept their responsibilities and have the authority to carry them out.
2. Strategy – Have a strategic plan that takes into account the current and future capabilities of IT, and that satisfies the current and ongoing needs of the organisation.
3. Acquisition – Acquisitions are made on the basis of appropriate analysis, with clear and transparent decision-making that balances short-term and long-term benefits, costs and risks.
4. Performance – IT is fit for purpose in supporting the organisation, providing the level and quality of service required to meet current and future business needs.
5. Conformance – IT complies with all mandatory legislation and regulations. Policies and practices are clearly defined, implemented and enforced.
6. Human Behaviour – IT policies, practices and decisions demonstrate respect for human behaviour, including the current and evolving needs of all the people involved.
AIEG BoK – Section 3 Internal Governance, Chapter 3.2 – Roles & Responsibilities (Part 2.1 – ISO/IEC Standards)
ISO/IEC 38500 Standards – https://www.iso.org/standard/62816.html

ISO/IEC 38500 Governance Model (continued)
The ISO 38500 standard was subsequently extended by ISO/IEC 38505-1:2017 to examine data and its use by an organisation. It sets out principles for the effective, efficient and acceptable use of data, in conformance with regulatory, legal and contractual obligations, using the six principles laid out in ISO/IEC 38500. It discusses the risks of different classes of data and the constraints on the use of data (e.g. privacy, copyright, commercial interests, ethical and societal obligations), and covers data accountability issues such as collection, storage, reporting, decisions, distribution and disposal.

COBIT 2019 Governance Model
Control Objectives for Information and Related Technologies (COBIT 2019), created by ISACA and released in November 2018, is a widely accepted framework for IT governance and management. It distinguishes governance, which ensures the organisation's objectives are achieved, from management, which plans, builds, runs and monitors activities in alignment with the direction set by the governance body. Its 40 core governance and management objectives for establishing a governance strategy are grouped into five domains:
Governance:
1. Evaluate, Direct and Monitor (EDM)
Management:
2. Align, Plan and Organise (APO)
3. Build, Acquire and Implement (BAI)
4. Deliver, Service and Support (DSS)
5. Monitor, Evaluate and Assess (MEA)
AIEG BoK – Section 3 Internal Governance, Chapter 3.2 – Roles & Responsibilities (Part 2.1 – ISO/IEC Standards)

The Ethics Guidelines for Trustworthy AI
Standards and governance frameworks for IT and data like ISO/IEC 38500 and COBIT 2019 are useful for defining processes for AI governance and control. However, these standards alone do not address the ethical challenges in developing and deploying AI systems. Organisations should be mindful of the various ethical guidelines available and incorporate ethical considerations at the various stages of their AI work processes (the ethical themes in AI: Beneficence, Non-Maleficence, Autonomy, Justice, Explicability).
L. Floridi and J. Cowls, A unified framework of five principles for AI in society, Harvard Data Science Review, Issue 1.1 (2019) – https://hdsr.mitpress.mit.edu/pub/l0jsh9d1/release/7

The Ethics Guidelines for Trustworthy AI – European Commission (EC)
These ethics guidelines were released by the EC in April 2019. They define 4 ethical principles:
- Respect for human autonomy
- Prevention of harm
- Fairness
- Explicability
These ethical principles are then translated into 7 requirements for AI system implementation. An assessment list allows organisations to operationalise the requirements by tailoring them to specific AI applications.
EC High-level Expert Group on AI – Ethics Guidelines for Trustworthy AI – https://ec.europa.eu/digital-single-market/en/news/ethics-guidelines-trustworthy-ai

The Ethics Guidelines for Trustworthy AI – European Commission (EC)
The EC Ethics Guidelines for Trustworthy AI, with their 7 key requirements, provide organisations with useful guidance to develop, deploy and use AI systems:
1. Human agency & oversight
2. Technical robustness & safety
3. Privacy & data governance
4. Transparency
5. Diversity, non-discrimination & fairness
6. Societal & environmental well-being
7. Accountability

EC's Framework for Trustworthy AI
1. Human agency & oversight
AI systems should support human autonomy and decision-making.
Fundamental rights – an impact assessment should be undertaken prior to a system's development in situations where the reach and capacity of the AI system may violate the rights and freedoms of people in a democratic society.
Example questions from the Trustworthy AI Assessment List under Fundamental Rights: Did you consider whether the AI system should communicate to end users that a decision, content, advice or outcome is the result of an algorithmic decision? In the case of a chatbot or other conversational system, are the human end users made aware that they are interacting with a non-human agent?
The IBM Watson blog's code of ethics for AI chatbots raises similar questions: Who should a chatbot serve? Am I talking to a chatbot or a human? Who owns the data shared with a chatbot? How is chatbot abuse prevented? How should chatbots handle privacy?
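The "am I talking to a chatbot?" requirement above can be enforced mechanically rather than left to policy. A minimal sketch, assuming a hypothetical bot class and disclosure message (neither comes from the guidelines):

```python
# Minimal sketch of the non-human-agent disclosure requirement discussed
# above. The wrapper class and message text are illustrative only.

DISCLOSURE = "You are chatting with an automated assistant, not a human."

class DisclosingChatbot:
    def __init__(self):
        self.disclosed = False

    def reply(self, user_message: str) -> str:
        answer = f"Echo: {user_message}"  # stand-in for a real dialogue backend
        if not self.disclosed:            # disclose once, at the start of the chat
            self.disclosed = True
            return f"{DISCLOSURE}\n{answer}"
        return answer

bot = DisclosingChatbot()
first = bot.reply("hello")   # contains the disclosure
second = bot.reply("thanks") # plain answer
```

Putting the disclosure in the transport layer, as here, means no individual dialogue flow can forget it.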
EC High-level Expert Group on AI – Ethics Guidelines for Trustworthy AI – https://ec.europa.eu/digital-single-market/en/news/ethics-guidelines-trustworthy-ai
IBM Watson Blog – The code of ethics for AI and chatbots that every brand should follow – https://www.ibm.com/blogs/watson/2017/10/the-code-of-ethics-for-ai-and-chatbots-that-every-brand-should-follow/

EC's Framework for Trustworthy AI
2. Technical robustness & safety
AI should reliably minimise unintentional harm and prevent unacceptable harm.
Resilience to attack & security – systems should be protected against vulnerabilities that allow them to be exploited by adversaries (e.g. hacking). Three common attacks that can compromise machine learning algorithms and systems:
- Adversarial inputs – constant probing of a classifier with new inputs purposely designed by an attacker to cause the model to make a mistake, evading detection and bypassing the trained classifier.
- Data poisoning – feeding polluted training data to a classifier (i.e. manipulating the training dataset used to create AI and ML models), blurring the boundary between what is classified as good and bad, in the adversary's favour.
- Model stealing – using techniques to recover models (valuable intellectual property) or information about the (potentially sensitive) data used during training.
Fallback plan (safety) – in case of problems, e.g. switching from a statistical to a rule-based procedure, or asking a human operator before continuing an action.
Accuracy – the ability to make correct predictions or decisions. A high level of accuracy is required in situations that directly affect human lives.
Reliability & reproducibility – the system should work properly within a specific range of inputs and situations, and exhibit consistent behaviour when repeated under similar conditions.
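The fallback-plan point above (switching from a statistical to a rule-based procedure) can be sketched as a confidence gate. The model, threshold and rule below are all hypothetical stand-ins, not from the guidelines:

```python
# Sketch of the "fallback plan" safety idea: when the statistical model is
# not confident enough, fall back to a rule-based procedure (in practice,
# possibly a human operator). Model, threshold and rule are illustrative.

def statistical_score(transaction_amount: float) -> float:
    """Stand-in for an ML model's fraud probability (purely illustrative)."""
    return min(transaction_amount / 10_000, 1.0)

def rule_based(transaction_amount: float) -> str:
    """Deterministic, auditable rule used as the fallback procedure."""
    return "review" if transaction_amount > 5_000 else "approve"

def decide(amount: float, confidence_threshold: float = 0.2) -> str:
    p = statistical_score(amount)
    # Treat distance from the 0.5 decision boundary as confidence; below
    # the threshold we do not trust the model and switch to the rule.
    if abs(p - 0.5) < confidence_threshold:
        return rule_based(amount)
    return "review" if p >= 0.5 else "approve"
```

For example, `decide(100)` is confidently approved by the model, while `decide(4000)` falls near the boundary and is settled by the rule instead.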
Plug&Play – Machine Learning Security: 3 Risks To Be Aware Of – https://www.plugandplaytechcenter.com/resources/machine-learning-security-3-risks-be-aware/
EC High-level Expert Group on AI – Ethics Guidelines for Trustworthy AI – https://ec.europa.eu/digital-single-market/en/news/ethics-guidelines-trustworthy-ai

EC's Framework for Trustworthy AI
3. Privacy & data governance
Ensure the quality and integrity of the data used, and that the process protects the privacy of the data.
Privacy & data protection – guarantee privacy and data protection throughout the system's entire lifecycle. Ensure data is collected from users in a trusted manner and is not used to unlawfully or unfairly discriminate against them.
Quality & integrity of data – issues such as socially constructed biases, inaccuracies and errors are to be addressed before the data is used in model training. The processes and datasets used must be tested at each step (i.e. during planning, training, testing and deployment).
Access to data – have adequate data protocols governing who can access data and under what circumstances.
EC High-level Expert Group on AI – Ethics Guidelines for Trustworthy AI – https://ec.europa.eu/digital-single-market/en/news/ethics-guidelines-trustworthy-ai
Yonhap News Agency – Chatbot Luda controversy leaves questions over AI ethics, data collection – https://en.yna.co.kr/view/AEN20210113004100320

EC's Framework for Trustworthy AI
4. Transparency
Encompasses transparency of the data, the system and the business models.
Traceability – the datasets, processes and decisions of the AI system should be properly documented to facilitate identification of the reasons for an AI decision.
Explainability – decisions made by AI systems should be able to be understood and traced by a human user, especially those with a high impact on human lives.
For business-model transparency, an explanation of the degree to which an AI system influences business decision-making processes, and of the AI design and deployment rationale, should be available.
Communication – users have a right to be informed that they are interacting with an AI system. Its capabilities and limitations should be communicated to the relevant parties.
EC High-level Expert Group on AI – Ethics Guidelines for Trustworthy AI – https://ec.europa.eu/digital-single-market/en/news/ethics-guidelines-trustworthy-ai

EC's Framework for Trustworthy AI
5. Diversity, non-discrimination & fairness
Inclusion, diversity and accessibility should be consistently enabled.
Avoidance of unfair bias – put in place oversight processes to analyse and address unfair and historic biases in the datasets used during training and operation, and strengthen this analysis by encouraging a diversity of opinion within the development team.
Accessibility & universal design – should be adopted to allow all people (regardless of their age, gender and physical abilities) to use AI products and services.
Stakeholder participation – can help develop AI systems that are trustworthy and widely acceptable. It is advisable to consult stakeholders who may be directly or indirectly affected by the system throughout its lifecycle.
EC High-level Expert Group on AI – Ethics Guidelines for Trustworthy AI – https://ec.europa.eu/digital-single-market/en/news/ethics-guidelines-trustworthy-ai

EC's Framework for Trustworthy AI
6. Societal & environmental well-being
The sustainability and ecological responsibility of AI systems should be encouraged.
Sustainable & environmentally friendly AI – should be encouraged during development, deployment and use, through a critical examination of resource usage and energy consumption during training.
Social impact – ubiquitous exposure to social AI systems may inadvertently weaken social structures or compromise social relationships. The effects of such systems on our physical and mental well-being should be carefully monitored and considered.
Society & democracy – the use of AI systems should be carefully considered in situations relating to democratic processes such as political decision-making and electoral activities.

EC's Framework for Trustworthy AI
7. Accountability
Auditability – enables the assessment of algorithms, data and design processes. Safety-critical systems, and systems affecting fundamental rights, must be independently audited.
Handling negative impacts – the ability to report on actions that contribute to a certain system outcome, and to respond to their consequences, must be ensured. The use of impact assessments (e.g. red teaming or forms of Algorithmic Impact Assessment) before and during development, deployment and use can minimise negative impact.
Trade-offs – decisions should be reasoned and properly documented, with decision-makers held accountable for the manner in which the trade-offs were conceived.
Redress – accessible mechanisms to ensure adequate redress when unjust adverse impact occurs should be foreseen and made known, to foster trust.
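Auditability and the ability to report on actions that contribute to a system outcome, as described above, presuppose a per-decision log. A minimal sketch using only the standard library; the class and field names are illustrative, not from the EC guidelines:

```python
import datetime

# Hedged sketch of an auditable decision log: record enough about every
# AI decision that its reasons can be reconstructed later by an auditor
# or in a redress process. All field names are illustrative.

class DecisionLog:
    def __init__(self):
        self._entries = []

    def record(self, model_version: str, inputs: dict, output, top_factors: list):
        self._entries.append({
            "when": datetime.datetime.now(datetime.timezone.utc).isoformat(),
            "model_version": model_version,  # which model made the call
            "inputs": inputs,                # what it saw
            "output": output,                # what it decided
            "top_factors": top_factors,      # why (e.g. feature attributions)
        })

    def entries_for_review(self) -> list:
        """Everything an independent auditor would replay."""
        return list(self._entries)

log = DecisionLog()
log.record("credit-v2", {"income": 52_000}, "approve", ["income"])
```

In a real deployment the log would be append-only and access-controlled, since it is itself sensitive personal data.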
EC High-level Expert Group on AI – Ethics Guidelines for Trustworthy AI – https://ec.europa.eu/digital-single-market/en/news/ethics-guidelines-trustworthy-ai
WIRED – High-Stakes AI Decisions Need to Be Automatically Audited – https://www.wired.com/story/ai-needs-to-be-audited

Other Ethics Guidelines
The IEEE Ethically Aligned Design guidelines are a very comprehensive document that looks at the ethical design of autonomous and intelligent systems (A/IS) from many perspectives. They discuss different ethics systems, legal frameworks, policies for education, and practical issues like methods for ethical R&D, affective computing, etc. The EAD1e guidelines are particularly relevant to those developing AI systems, managing AI projects or doing AI R&D. In Singapore, the PDPC Model AI Governance Framework is more relevant to organisations planning to deploy an AI solution.
IEEE Ethically Aligned Design – https://ethicsinaction.ieee.org/
PDPC – Model AI Governance Framework (2nd Ed) – https://www.pdpc.gov.sg/-/media/files/pdpc/pdf-files/resource-for-organi

PDPC Model AI Governance Framework
The Model Framework is based on two guiding principles that promote trust in AI and a better understanding of its use.
1. Organisations should strive to ensure that the AI applications they deploy make decisions that are explainable, transparent and fair.
2. AI solutions should be human-centric, with the well-being and safety of humans being the primary considerations in the design, development and deployment of AI.
Human risk – Minimise the risk of harm to humans.
Human Oversight – Determine levels of human involvement in the AI-augmented decision-making process.
Human-focused Design & Deployment – Understand how humans will behave, interact & respond to the AI system, so that its design & deployment cater for the human user's usage patterns, dignity and rights.

PDPC Model AI Governance Framework
Proposed guidance and measures for adopting responsible AI in 4 key areas.

Model AI Governance Framework – Case Study
MSD is a leading global biopharmaceutical company. Its IT Hub in Singapore uses AI methods to understand employee engagement & attrition risks for one of MSD's offices.
Specialised role – a Data Scientist (AI Ethics & Data Privacy) role was created with responsibility for the ethical use of AI products.
Ethics awareness – knowledge-sharing sessions were organised for relevant personnel & senior leaders to raise awareness of the relevance of ethics in developing and deploying AI models.
User training – HR personnel & unit leads were trained to interpret the AI model's output and decisions, in order to help them improve employee satisfaction.
PDPC – Compendium of Use Cases – https://www.pdpc.gov.sg/-/media/files/pdpc/pdf-files/resource-for-organisation/ai/sgaigovusecases.pdf

Model AI Governance Framework – Case Study
MSD considered managing attrition risk to be a sensitive subject.
Allowing the algorithm to act on an inaccurate prediction could result in unfair treatment (e.g. it may impact employee benefits).
Human oversight – was considered critical, and a human-in-the-loop approach to the AI-augmented decision-making process was adopted.
Human-centred – this approach allowed the AI model to flag employees with a high risk of attrition, while decisions that impact employees are made by management and the HR team.

Model AI Governance Framework – Case Study
The project team explained the AI model's results to its management & HR team to establish a fair assessment and build trust for the project.
Ensure data quality – the project team was made to understand the meaning & origin of features, and a data professional from the business unit shared & explained the transformations done to the dataset with the team.
Explainability – explanations for the predictive scores were implemented at both the model and individual levels. At the model level, explainability allowed management & HR to understand the overall factors for employee attrition. At the individual level, insights were obtained on the different factors behind the attrition risk scores for two individuals.

Model AI Governance Framework – Case Study
The project team collaborated with the User Experience (UX) team in the design of a user-friendly AI interface and data visualisation displays.
The UX team used observation and elicitation techniques rooted in design thinking to surface employee concerns, and kept their interests central to the solution during the process of developing the AI model.
Counterfactual fairness testing – the model's results were compared with the human predictions of business unit leads. Differing results spurred useful data-driven conversations about directions for employee engagement & retention.
Confidentiality – access to the results of the employee attrition model is limited to selected individuals in the leadership team.

Moral Culture – Core Values
Core values are the fundamental beliefs of a person or organisation. These guiding principles dictate behaviour and help distinguish between right and wrong decisions or actions. Some examples of companies' core values:
NYP's core values: Nurturing & Caring; Integrity; Can-Do Spirit; Innovative & Enterprising; Teamwork.
NYP's mission: Empowering Learners for Work & Life; Co-Creating with Industry for Growth & Sustainability.

Moral Culture – Core Values with AI Ethics
Before allocating roles and responsibilities, it is essential to have a clearly defined space within the organisation's decision-making framework where ethical issues can be raised and addressed. It is futile to assign people the duty of ensuring ethical deployment of AI unless the organisational culture supports them in carrying out their responsibilities in a meaningful way. Strong and committed leadership from the senior management team is fundamental in creating this supportive moral culture within the organisation.
Balancing acts: organisations often have to balance commercial objectives against the risk of deploying a new AI solution. Open discussions are needed with various stakeholders in the organisation to align priorities. Clarity & alignment with the company's core values will empower the project team to proceed, or to tweak the AI solution to comply with ethical guidelines.
AI E&G BoK – Section 3 Internal Governance, Chapter 3.1 – Setting up internal AI governance structure
AI E&G BoK – Section 3 Internal Governance, Chapter 3.2 – Assigning Roles & Responsibilities

Moral Culture
Help from outside the organisation: external subject matter experts (SMEs) can provide the organisation with unbiased & objective guidance, to help governance & ethical considerations remain neutral and focused on doing what is right. The SMEs selected should come from diverse expertise, disciplines and backgrounds. Ideally, they should have no vested interest in the company and harbour no hidden agenda that could undermine the credibility of the governance structure.
Google Blog – https://blog.google/technology/ai/external-advisory-council-help-advance-responsible-development-ai/
Vox – Google cancels AI ethics board in response to outcry – https://www.vox.com/future-perfect/2019/4/4/18295933/google-cancels-ai-ethics-board

Internal Governance Structures – Internal Structure
All organisations have some internal control structures & policies to monitor their activities in order to meet objectives, satisfy ethical guidelines and take corrective actions when required. These are applied pre-emptively during the process of evaluating, developing & deploying AI solutions, and post-operationally during early-stage deployment and on an ongoing basis. Internal governance structures should involve all stakeholders, both internal (e.g.
directors, management at all levels, operational staff, etc.) and external (e.g. suppliers, channel partners, etc.) to the organisation. Such structures can be set up using a model that is centralised, decentralised, or a hybrid of the two.
AI E&G BoK – Section 3 Internal Governance, Chapter 3.1 – Setting up internal AI governance structure

Internal Structures – Centralised
Centralised governance structures can set policies that can be applied more consistently and visibly across different departments in the organisation. Centralising the ethics review process helps the company deal with a multitude of related issues and evolve compliance policies across the whole organisation. Centralised decision-making is preferred when the deployment of an AI solution is deemed high risk or could be potentially contentious. The respective departments should bring the issue to senior management or the AI ethics committee.

Internal Structures – Decentralised
Decentralising the ethics review allows more empowered and rapid decision-making by teams who are more knowledgeable about the issues at hand. A governance process that is decentralised or democratised allows a wider range of potential ethical issues to be identified, particularly those specific to each department or local entity within the organisation. Decentralised decision-making can be facilitated by having clear policies that describe off-limits practices (i.e.
a blacklist of AI applications that would likely cause harm, injury or ethical compromises), as referenced in the PDPC & WEF guide.
AI E&G BoK – Section 3 Internal Governance, Chapter 3.1 – Setting up internal AI governance structure
PDPC & WEF – Implementation and Self-Assessment Guide for Organizations – https://www.pdpc.gov.sg/-/media/Files/PDPC/PDF-Files/Resource-for-Organisation/AI/SGIsago.pdf

Centralised & Decentralised Structures – Which Is Better?
Centralised structures facilitate a consistent and standardised process to evaluate the risk and impact of AI solutions, and provide greater control over the review process. Decentralised structures facilitate more timely decision-making by teams that are more familiar with the specific issues and context, such as local ethical and legal norms.
Hybrid: a compromise hybrid approach is to use a risk-impact assessment matrix to help decide on the probability and severity of impact of the AI solution, and whether a decentralised decision should be escalated to a central governance team for review.

Roles and Responsibilities
People within the organisational structure and their roles & responsibilities are fundamental to the effective deployment of trustworthy AI. There are two broad approaches to determining and assigning roles and responsibilities to ensure the ethical deployment of AI.
Role-First Approach – List the various new & existing roles that may be involved in addressing AI ethical issues, then identify people for the roles.
Concern-First Approach – Identify the concerns related to specific AI use cases, then determine the strategies, tasks and relevant teams needed to address the concerns.
Once concerns, strategies and tasks have been identified, they are allocated to the appropriate teams within the organisation to take responsibility.
AI E&G BoK – Section 3 Internal Governance, Chapter 3.2 – Roles & Responsibilities

Roles and Responsibilities – Case Study: DBS
DBS Bank developed an AI-based Anti-Money Laundering Filter Model (AMLFM) to identify predictive indicators of suspicious transactions, reducing the number of time-consuming false-positive cases generated by the non-AI system. Internal governance structures ensured oversight of the AI deployment.
RDU Committee – a Responsible Data Use (RDU) committee was set up to oversee & govern the deployment. The RDU committee evaluated and managed the risks of all data used by DBS. Senior leaders from different units provided diversity of views, checks & balances.
GRM Committee – a Global Rules & Models Committee (GRMC) was set up within the Group Legal, Compliance & Secretariat (GLCS) to assess AML surveillance rules, models & score setting. The GLCS head gives deployment approval after a successful review of the AML filter model.

AI Ethics Training and Awareness
Relevant staff involved in the management, development & use of AI systems must be trained contextually, based on their individual roles & responsibilities.
Strategic roles – (e.g. Chief AI Ethics Officer) should have a broad understanding of the technical & ethical concerns in AI and the skills to communicate them effectively.
Compliance roles – (e.g. Data Protection Officer) should have in-depth knowledge of current & emerging trends in privacy law (e.g. PDPA, GDPR, etc.).
AI developer roles – (e.g.
AI Model Engineer) should be trained to interpret AI model outputs and manage biases in the data.
Customer-facing roles – (e.g. Customer Relationship Officer) should be trained to be aware of the benefits, risks & limitations of the AI system, so they know when to alert subject-matter experts.
AI E&G BoK – Section 3 Internal Governance, Chapter 3.2 – Roles & Responsibilities
PDPC – Model AI Governance Framework (2nd Ed) – https://www.pdpc.gov.sg/-/media/files/pdpc/pdf-files/resource-for-organisation/ai/sgmodelaigovframework2.pdf

AI Ethics Training and Awareness – Case Study: Apple Credit Card
Apple's new credit card is being investigated by financial regulators after David H. Hansson's November 2019 Twitter post went viral, triggering many similar complaints about its algorithm discriminating against women.
Should customer relations officers dealing with AI systems be properly trained to handle customer complaints with greater sensitivity and empathy?
Should technical personnel be trained to monitor and investigate relevant customer feedback on irregular automated decisions?
The Verge – Apple's credit card is being investigated for discriminating against women – https://www.theverge.com/2019/11/11/20958953/apple-credit-card-gender-discrimination-algorithms-black-box-investigation

Roles and Responsibilities – Roles by Functionary
Based on the functionalities of each role, what are their roles and responsibilities? Roles and responsibilities of various leadership functionaries in an organisation:

Board of Directors: Understand AI ethics issues and ensure oversight. Ask probing questions and make informed decisions.
CEO: Overall management & execution. Strategic decisions on where and how to use AI. Recommendations to the Board of Directors.
AI Ethics Committee: Independent oversight of AI ethics solutions/deployment. Draft and uphold AI ethics principles/code of conduct. Iteratively learn from internal & external examples. Take the necessary actions to prevent ethics violations.
Chief AI Ethics Officer: Manage administrative processes. Conduct periodic ethics reviews of all AI-related projects. Take the necessary actions to prevent ethics violations.
Chief Data Officer: Ensure consent for the collection & use of customer data. Ensure that outcomes are always beneficial to customers. Ensure that customer data is protected & secure. Ensure that customers cannot be identified by AI technology.
Chief Risk Officer: Provide the CEO & Board of Directors with a probabilistic assessment of any adverse outcome from AI projects.
Chief Legal Officer / Counsel: Assess potential legal liabilities. Prepare a mitigation strategy.
Chief Information Officer: Audit the risk & security of databases used by AI systems. Audit the risk of AI solutions procured or used on a SaaS model.
Chief Technology Officer: Audit the risk & security of AI customer interfaces. Audit the risk of misuse of AI systems.
Chief HR Officer: Manage ethical issues in hiring, assessment & promotions. Manage the impact of AI technology on jobs & careers.
AI E&G BoK – Section 3 Internal Governance, Chapter 3.1 – Setting up internal AI governance structure

Risks Associated with AI Systems – Examples
Unexplainability – the BlackRock black box: in 2018, quantitative analysts at BlackRock decided to shelve promising AI liquidity-risk models because they were not able to explain the models' output to senior management. Unless financial services firms can find a way to make such models explainable to consumers, management, investors or regulators, some of their vast potential will go untapped.
Bias – Goldman Sachs' wilful blindness: not using gender in the dataset does not remove the bias, as there is proxy data, such as salary and job titles, that can correlate with the gender attribute. Possible bias should be discovered before deployment by a separate team whose job is not to develop the algorithms but to pen-test & stress-test them.
Pre-mortem analysis: anticipating pitfalls to increase project success – https://www.processexcellencenetwork.com/business-process-management-bpm/articles/pre-mortem-analysis-anticipating-pitfalls-to-incre

Identifying Risks
Pre-mortem analysis can be used by a cross-functional team of subject matter experts to postulate a project failure and work backward to determine what could potentially lead to that failure, thereby anticipating and mitigating risks upfront. The team brainstorms possible causes of failure & assigns severity & likelihood scores to each. A risk mitigation & control plan is developed for those with high scores.
Step 1 – Imagine a fiasco: the facilitator proposes a hypothetical failure scenario to the team.
Step 2 – Generate reasons: each person writes down all the reasons they can think of for the failure.
Step 3 – Consolidate the list: each person takes a turn to share one item.
Each item is recorded on a flipchart until all concerns have been shared.
Step 4 – Review priorities: address the two or three items of greatest concern, discussing mitigation & control strategies.

Risk Management – Consider Appropriate Justifications
It is crucial for an organisation planning to implement an AI solution to consider whether there are appropriate justifications for such decisions to be automated.
Scalability – cost-efficiency & improved performance are typical considerations that motivate the adoption of AI systems when they bring enough value to justify deployment.
Significant harm – automated decisions that can cause significant harm (e.g. medical diagnosis, recidivism prediction, etc.) should be augmented with human decision-making, if still deemed worth deploying.
Routine & frequent – routine and frequent decisions can be automated with AI, with irregular events & exceptions flagged for human decision-making.
External risks – risks such as the impact on political, social & economic agendas of an erroneous AI decision must be considered holistically in assessing risks.
AI E&G BoK – Section 3 Internal Governance, Chapter 3.3 – Managing & Addressing Risk

Risk Management – Consider Human Involvement
The Model Framework proposes a risk impact assessment that uses three approaches to classify the degree of human oversight in the decision-making process.
Human-in-the-loop – the human retains full control: human approval is required for every AI recommendation, and the AI only provides recommendations.
Human-out-of-the-loop – humans affected by the AI system are unable to influence its outcomes; the AI system assumes full control with no option for human override.
Human-over-the-loop – the human has a supervisory role over the AI system and can resume control (such as adjusting parameters) when the AI system encounters unexpected or undesirable outcomes.

Risk Management – Risk Impact Matrix
To create trustworthy AI systems, organisations must consider the risks that their systems may pose to users, society, the environment and the business. A risk impact matrix (RIM) can be used to assess potential harms, their reversibility and their likelihood. An organisation can plot each risk on the RIM to guide the selection of an appropriate level of human oversight.
AI E&G BoK – Section 4 Human-Centricity, Chapter 4.2 – Designing programs with human-centricity in mind

Risk Management – Risk Impact Matrix (Green Zone)
Use cases in the green region represent the most practical problems for deploying AI systems. Erroneous decisions have minimal impact on users beyond a minor inconvenience, and any harm is reversible, so organisations can use a human-out-of-the-loop model.
Level of human oversight: human-out-of-the-loop. AI is well-suited for solving these problems.
Example: a shopping purchase recommendation system. No harm is done, as customers are not bound to accept the occasional poor automated recommendation.
Risk Management – Risk Impact Matrix (Yellow Zone)
Use cases in the yellow region will benefit from AI automation, but it may be unethical if the decision is fully automated with no consideration for end-user feedback. The potential for minor or moderate reversible harm to users & the business warrants keeping human judgement on standby.
Level of human oversight: human-over-the-loop. AI should be used cautiously for these problems.
Example: a news recommendation system. There is no need for a human to review every decision, but a human needs to monitor & review flagged articles for harmful or offensive content.

Risk Management – Risk Impact Matrix (Orange Zone)
Use cases in the orange region need human decision-making and oversight, as errors can cause moderate to significant harm to individuals and society. Careful & extensive testing is required, and such AI systems must have a human-in-the-loop at all times.
Level of human oversight: human-in-the-loop. AI should be used to amplify human capabilities for these problems; it must not replace human judgement.
Example: an AI cancer diagnosis tool must not replace but support the radiologist in decision-making. Both false positives and false negatives may cause harm to the patient.
RSNA – AI for Mammography and Digital Breast Tomosynthesis: Current Concepts and Future Perspectives – https://pubs.rsna.org/doi/full/10.1148/radiol.2019182627

Risk Management – Risk Impact Matrix (Dark Red Zone)
Use cases in the dark red region are too dangerous for AI, as errors can cause irreversible and life-altering impact on individuals & society. A manual process is preferred, as users may become complacent or fatigued & make errors when accepting or rejecting automated decisions.
Level of human oversight: do not do. It is too dangerous to use AI for these problems.
Example: automated large-volume trading or missile launches. A tiny lapse in human attention or a delay in intervention can result in catastrophic outcomes.

Risk Assessment & Mitigation across the AI Lifecycle – 4 Phases
Business teams and AI designers & developers must be aware of the typical sources of risk in the AI lifecycle. The four phases of the AI lifecycle are design, development, deployment and post-deployment. Quantifying and categorising identified risks based on their severity of impact will help the organisation evolve appropriate mitigation strategies. This will also allow the management and governance teams to weigh the benefits against the associated risks.
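The zone-based selection of human oversight described above can be sketched as a simple decision rule. This is a minimal illustration only: the numeric severity/probability scales, the score thresholds and the zone boundaries below are invented assumptions, not values prescribed by the Model Framework.

```python
# Hypothetical sketch of a risk impact matrix (RIM) lookup. An organisation
# scores each risk on severity of harm and probability of occurrence, then
# maps the resulting zone to a level of human oversight. All thresholds are
# illustrative assumptions.

def oversight_level(severity: int, probability: int) -> str:
    """severity, probability: 1 (low) .. 4 (very high)."""
    score = severity * probability
    if severity >= 4 and probability >= 3:
        return "do-not-deploy"          # dark red zone: too dangerous for AI
    if score >= 9:
        return "human-in-the-loop"      # orange: human approves every decision
    if score >= 4:
        return "human-over-the-loop"    # yellow: human supervises, can intervene
    return "human-out-of-the-loop"      # green: fully automated, harm reversible

# e.g. a shopping recommender (low severity, moderate probability of a poor
# recommendation) falls in the green zone:
print(oversight_level(severity=1, probability=2))
```

A real matrix would be calibrated by the governance team and reviewed periodically; the point of the sketch is that the mapping from (severity, probability) to oversight level should be explicit and auditable, not decided ad hoc per project.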
AI E&G BoK – Section 4 Human-Centricity, Chapter 4.1 – Balancing commercial objectives against real/perceived risks

Risk Assessment & Mitigation across the AI Lifecycle – Design Phase
Key risks: not understanding the commercial objectives or the user's intent, and not translating commercial intent into the right AI problem statement.
Mitigation: document the business intent or objectives and get approval from a cross-functional governance team (i.e. representatives from business, technical, legal and ethics).
Example: DBS Bank set out to apply ML capabilities in combination with its existing rule-based system for its anti-money laundering (AML) programme. To ensure robust oversight of the AI design for the AML system, it set up a Responsible Data Use (RDU) framework and committee consisting of senior leaders from different DBS units to ensure diversity and due checks and balances.
Forbes – The Future Of Work Now: AI-Driven Transaction Surveillance At DBS Bank – https://www.forbes.com/sites/tomdavenport/2020/10/23/the-future-of-work-now-ai-driven-transaction-surveillance-at-dbs-bank/?sh=780c74293f7f

Risk Assessment & Mitigation across the AI Lifecycle – Development Phase
Key risks: selection of datasets that violate the company's & society's definition of fairness; data samples that are not representative of the population of concern.
Mitigation: use the entire dataset if possible. Perform data profiling & exploratory analysis. Share the results with the governance team to ensure no discrimination or weighting towards superficial attributes.
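The data-profiling step above can include a check for proxy attributes of the kind seen in the Goldman Sachs example: features that correlate strongly with a sensitive attribute even when that attribute is excluded from training. A minimal sketch, in which the field names, toy data and the 0.6 correlation threshold are all illustrative assumptions:

```python
# Hypothetical proxy-attribute check for data profiling. Flags features whose
# Pearson correlation with a sensitive attribute (e.g. gender) exceeds a
# threshold, so the governance team can review them before model development.
from statistics import mean, pstdev

def pearson(xs, ys):
    mx, my = mean(xs), mean(ys)
    cov = mean((x - mx) * (y - my) for x, y in zip(xs, ys))
    return cov / (pstdev(xs) * pstdev(ys))

def flag_proxies(rows, sensitive="gender", threshold=0.6):
    """rows: list of dicts of numeric fields. Returns {feature: correlation}
    for features that correlate strongly with the sensitive attribute."""
    s = [r[sensitive] for r in rows]
    flags = {}
    for field in rows[0]:
        if field == sensitive:
            continue
        r = pearson([row[field] for row in rows], s)
        if abs(r) >= threshold:
            flags[field] = round(r, 2)
    return flags

# Toy dataset: salary tracks gender closely, tenure does not.
data = [
    {"gender": 0, "salary": 40, "tenure": 3},
    {"gender": 0, "salary": 45, "tenure": 9},
    {"gender": 1, "salary": 70, "tenure": 4},
    {"gender": 1, "salary": 75, "tenure": 8},
]
print(flag_proxies(data))  # salary is flagged as a potential proxy
```

A flagged feature is not automatically excluded; the result is evidence to bring to the governance team, which decides whether the correlation reflects a legitimate signal or unacceptable indirect discrimination.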
Example: to mitigate the risk of inherent bias in DBS Bank's AML model, the full dataset was used to train, test and validate the AML Filter Model. The DBS team focused on understanding the data lineage: data from its banking system with traceable links to customer transaction activities was used to build the model. Inherent bias was minimised by using the full dataset, separated into different datasets for training, testing & validation.
PDPC – Compendium of Use Cases – https://www.pdpc.gov.sg/-/media/files/pdpc/pdf-files/resource-for-organisation/ai/sgaigovusecases.pdf

Risk Assessment & Mitigation across the AI Lifecycle – Deployment Phase
Key risks: decisions made by the system cannot be explained, and front-line staff are not properly trained to deal with subsequent customer complaints.
Mitigation: develop standard operating procedures and tools (e.g. data visualisation tools) for the interpretation & explanation of AI outcomes to key stakeholders. Ensure staff at all levels are properly trained to handle & communicate the AI process.
Example: DBS took almost two years to develop and test the system, giving the team an intimate understanding of how the AML Filter Model (FM) arrives at its decisions. A good understanding of the data lineage & transaction alert triggers, coupled with a transparent computation of the results generated by the AML FM, gives DBS the ability to explain how the AI model arrived at a particular risk rating prediction.
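One simple form of the explanation tooling described above is reporting per-feature contributions for an individual prediction. The sketch below assumes a plain linear scoring model (w · x + b), which is far simpler than a real AML system such as the one DBS built; the feature names and weights are invented for illustration only.

```python
# Hedged sketch of an individual-level explanation for a linear risk score.
# Each feature's contribution (weight * value) is reported in order of
# magnitude, so a front-line officer can explain what drove the score.
# Feature names and weights are hypothetical.

def explain(weights: dict, bias: float, record: dict):
    contributions = {f: weights[f] * record[f] for f in weights}
    score = bias + sum(contributions.values())
    # Sort so the biggest drivers of the score are reported first.
    ranked = sorted(contributions.items(), key=lambda kv: -abs(kv[1]))
    return score, ranked

weights = {"txn_volume": 0.8, "cross_border_ratio": 1.5, "account_age": -0.3}
score, ranked = explain(weights, bias=0.1,
                        record={"txn_volume": 2.0,
                                "cross_border_ratio": 1.0,
                                "account_age": 4.0})
for feature, contrib in ranked:
    print(f"{feature}: {contrib:+.2f}")
print(f"risk score: {score:.2f}")
```

For non-linear models the same idea is usually delivered via model-agnostic attribution methods; the design point is that the explanation output, not just the score, should be part of the deployment's standard operating procedure.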
Risk Assessment & Mitigation across the AI Lifecycle – Post-deployment Phase
Key risks: new incoming data may alter the dynamics and accuracy of the AI model, leading to poor or questionable decisions or actions.
Mitigation: perform regular data profiling & quality analysis to detect major drifts. Design & deploy robust AI model metrics that are generated & tracked regularly. Develop escalation matrices that enable the right governance unit to be notified to take action to tweak or retire the model, depending on the level of deterioration.
Example: DBS tracks the model metrics every month to ensure the stability of the AML Filter Model. The ML team reviews the model every six months. Any fine-tuning recommendations by the ML team are reviewed and approved by the Global Rules & Models Committee before deployment.
WBUR – Is 'Google Flu Trends' Prescient Or Wrong? – https://www.wbur.org/commonhealth/2013/01/13/google-flu-trends-cdc

Governance in Banking & Finance – FEAT Principles
The Monetary Authority of Singapore (MAS) is Singapore's central bank and financial regulator. MAS establishes rules for financial institutions, which are implemented through legislation, regulations, directions and notices. Banking & finance has the highest level of government regulation and oversight.
On November 12, 2018, MAS published broad principles to promote Fairness, Ethics, Accountability and Transparency (FEAT) in the use of AI and Data Analytics (AIDA) in Singapore's financial institutions (FIs). The principles provide guidance to firms offering financial products and services on the responsible use of AIDA, to strengthen internal governance around data management and use. The FEAT Principles "are not intended to be prescriptive" and are not "intended to replace existing relevant internal governance frameworks". However, MAS does recommend that FIs "consider [these principles] while assessing existing or developing new internal frameworks to govern the use of AIDA".
MAS paper, Principles to Promote Fairness, Ethics, Accountability and Transparency (FEAT) in the Use of Artificial Intelligence and Data Analytics in Singapore's Financial Sector, published November 12, 2018 – https://www.mas.gov.sg/~/media/MAS/News%20and%20Publications/Monographs%20and%20Information%20Papers/FEAT%20Principles%20Final.pdf

Governance in Banking & Finance – FEAT Principles
Ethic