GRC Week 11-13 PDF
Document Details
Uploaded by PleasedPearTree
Nanyang Polytechnic
2022
Summary
This document is a presentation on AI governance, risk, and compliance, specifically focusing on AI ethics and governance. It covers topics such as AI model governance framework, human-centricity in AI implementations, risk-impact matrices, and guiding principles. The presentation also details internal governance structures, operations management, stakeholder communication, and relevant case studies.
Full Transcript
Governance, Risk & Compliance, TOPIC 10: AI ETHICS & GOVERNANCE 2 (AY2022/23 S1)

Objectives
§ AI Model Governance Framework
§ Human-Centricity in AI Implementations
§ Risk-Impact Matrix of Human Involvement

Model AI Governance Framework (SG)
§ Recognizing the importance of AI technologies, Singapore released the first edition of the Model AI Governance Framework in 2019, with a second edition in 2020.
§ It provides practical guidance to organizations to address key ethical and governance issues surrounding AI implementations.
§ https://www.pdpc.gov.sg/-/media/Files/PDPC/PDF-Files/Resource-for-Organisation/AI/SGModelAIGovFramework2.pdf

Features of the Model
§ Algorithm-agnostic: it does not focus on a specific AI methodology; it applies to the design, application and use of AI in general.
§ Technology-agnostic: it does not focus on specific systems, software or technologies; it applies regardless of the technologies used.
§ Sector-agnostic: the model includes considerations that are common across the different industries that may adopt AI technologies.
§ Scale- and business-model-agnostic: it does not focus on organizations of a particular scale or form of business (B2C, B2B).

Objectives of the Model
§ Build stakeholder confidence in AI through organizations' responsible use of AI to manage the different risks in AI deployment.
§ Assist organizations in demonstrating reasonable efforts to align internal policies, structures and processes with relevant practices in data management and protection, such as the PDPA and the OECD Privacy Principles.

Guiding Principles
1. Organizations using AI in decision-making should ensure that the decision-making process is explainable, transparent and fair.
2. AI solutions should be human-centric. AI systems should protect the interests of human beings, including their well-being and safety. These should be the primary considerations in the design, development and deployment of AI.
Model AI Governance Framework, 2020: Internal Governance Structures & Measures
§ Clear roles and responsibilities need to be defined to monitor and manage the implementation, use and maintenance of AI systems.
§ Relevant roles should be defined with specific responsibilities towards the use of AI.
§ Risks posed by AI systems can be managed within the organization's risk management framework/system.
§ Ethical considerations can be kept in line with the organization's current corporate values and similar structures.

Model AI Governance Framework, 2020: Operations Management
§ Ensure (unfair) bias is minimized through data and model design.
§ Good governance ensures AI implementations balance the (competing) needs of operational effectiveness and efficiency against the organization's obligations to various stakeholders.
§ Since AI systems are built (and trained) on data, the quality and selection of data is of utmost importance.
§ Data lineage: understanding where data comes from, through to its end-use.
§ Data provenance record: a trace of where the data originates and its subsequent transformations.

Model AI Governance Framework, 2020: Stakeholder Interaction and Communication
§ Ensure appropriate policies are crafted and made known to users and stakeholders.
§ Provide two-way communication to gather feedback for the purpose of identifying areas for improvement and enhancement.
§ The communication style needs to be appropriate for different audiences.
§ In the case of consumers, consider the option of opting out, if possible.

Human Centricity in AI Implementations
§ Human centricity in AI implementations refers to the incorporation of human-centric properties such as transparency, trustworthiness and explainability.
§ Another aspect is the level of human involvement in the operation of the AI system, such that these transparency, trustworthiness and explainability properties are maintained.
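The data provenance record mentioned under Operations Management can be sketched as a small structure that logs a dataset's origin and each transformation applied to it. This is an illustrative sketch only; the class and field names are assumptions, not part of the framework:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class ProvenanceRecord:
    """A minimal data provenance record: the origin of a dataset plus a
    timestamped log of every transformation applied to it."""
    source: str
    steps: list = field(default_factory=list)

    def record(self, step: str) -> None:
        # Append the transformation together with a UTC timestamp.
        self.steps.append((datetime.now(timezone.utc).isoformat(), step))

    def lineage(self) -> str:
        # Forward lineage: from the source through each transformation.
        return " -> ".join([self.source] + [step for _, step in self.steps])
```

For example, a record created as `ProvenanceRecord(source="crm_export.csv")` with two recorded steps, "remove duplicates" and "anonymise customer names", would report its forward lineage as `crm_export.csv -> remove duplicates -> anonymise customer names`.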
§ Keeping the AI system under human control is just one of the methods of assuring human centricity. However, it often goes against the very properties of AI systems: their ability to make intelligent decisions at a much faster rate, with higher accuracy.

Model AI Governance Framework, 2020: Level of Human Involvement
§ A unique feature of AI is its potential ability to replace the role of humans during operations.
§ For various reasons, organizations may intentionally involve humans in key areas to ensure trustworthiness, despite the potential downsides.
§ Human-in-the-loop: human oversight is active and involved, with the human retaining full control and the AI providing recommendations.
§ Human-out-of-the-loop: no human oversight. The AI has full control, without the option of human override.
§ Human-over-the-loop: human oversight is involved in a monitoring or supervisory role. The human can take over control whenever the AI encounters undesirable events.

Human Involvement Models
§ Human-in-the-loop. Key considerations: the AI cannot be left to make decisions independently, either due to the high impact of decisions and/or unacceptable performance of the AI. Examples: medical diagnosis; operational technology applications such as utilities.
§ Human-out-of-the-loop. Key considerations: the application of AI cannot integrate the role of the human in the process; the impact of the decisions made is deemed low, and sub-par performance does not result in high-impact scenarios. Examples: personal assistants such as Siri and Alexa; online recommendation engines.
§ Human-over-the-loop. Key considerations: the application of AI cannot fully integrate the role of the human in the process, but requires some intervention or augmentation through human supervision. Examples: simulators, such as flight, driving or marine simulators; supply chain systems.

Risk Impact Matrix
§ A Risk Impact Matrix (RIM) is used to assess risk based on the potential harm, its reversibility and its likelihood.
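As a rough sketch, the assessment can be expressed as a lookup from the severity and probability of harm to a recommended level of human involvement. The specific zone boundaries in this matrix are illustrative assumptions, not values prescribed by the framework:

```python
from enum import Enum

class Involvement(Enum):
    HUMAN_OUT_OF_THE_LOOP = "human-out-of-the-loop"
    HUMAN_OVER_THE_LOOP = "human-over-the-loop"
    HUMAN_IN_THE_LOOP = "human-in-the-loop"
    UNADVISABLE = "unadvisable"

# Illustrative matrix: keys are (severity of harm, probability of harm).
# The zone assignments below are assumptions for this sketch only.
RIM = {
    ("low", "low"): Involvement.HUMAN_OUT_OF_THE_LOOP,     # green
    ("low", "medium"): Involvement.HUMAN_OUT_OF_THE_LOOP,  # green
    ("low", "high"): Involvement.HUMAN_OVER_THE_LOOP,      # yellow
    ("medium", "low"): Involvement.HUMAN_OVER_THE_LOOP,    # yellow
    ("medium", "medium"): Involvement.HUMAN_IN_THE_LOOP,   # orange
    ("medium", "high"): Involvement.HUMAN_IN_THE_LOOP,     # orange
    ("high", "low"): Involvement.HUMAN_IN_THE_LOOP,        # orange
    ("high", "medium"): Involvement.UNADVISABLE,           # red
    ("high", "high"): Involvement.UNADVISABLE,             # red
}

def required_involvement(severity: str, probability: str) -> Involvement:
    """Look up the recommended level of human involvement for a use case."""
    return RIM[(severity, probability)]
```

The lookup encodes the intuition that low-harm, easily reversible use cases can run without oversight, while high-severity use cases demand a human in the loop or should not be automated at all.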
§ Based on the result, the level of human involvement in the AI system can be determined:
§ Green: human out of the loop
§ Yellow: human over the loop
§ Orange: human in the loop
§ Dark Red/Red: unadvisable

Risk Impact Matrix (Green Zone)
§ Currently the most practical of all zones for deploying AI systems.
§ The benefits and efficiency of using AI systems are brought to the forefront, as adoption involves potentially minor impact or harm that is easily reversible.
§ A human-out-of-the-loop implementation allows the AI system to realise its true potential.
§ Example: a shopping purchase recommendation system, in which little or no harm is done, as customers are not bound to accept the occasional 'poor' recommendation. The most serious issue could be the recommendation of 'inappropriate' items based on a customer's gender or race.

Risk Impact Matrix (Yellow Zone)
§ Implementing AI automation will be beneficial, but it is too risky for a fully automated implementation.
§ A feedback system should be implemented to alert on abnormal behaviour.
§ A human-over-the-loop implementation is recommended.
§ Example: a news recommendation system where not all news items require oversight, but a human may be needed to monitor and review flagged items for harmful or potentially libellous content.

Risk Impact Matrix (Orange Zone)
§ Implementations in the orange zone require human oversight, as errors can cause moderate or significant harm to individuals.
§ Because the potential harm is typically not reversible, a human must always be in the loop to make the final decisions.
§ The AI system typically makes recommendations, not decisions.
§ Example: a medical diagnosis system that helps the doctor by making recommendations, but the doctor makes the 'final call' as to the actual diagnosis.

Risk Impact Matrix (Red Zone)
§ The severity and impact make use cases in the red zone too dangerous.
Any errors caused may lead to irreversible impact, such as loss of life.
§ With the current state of AI technologies, these use cases have to rely on manual human operation. Even if an AI system is involved, humans may become complacent or fatigued and make errors when accepting or rejecting recommendations from the AI system.
§ Example: automated large-volume trading or missile launches. Any lapse in human judgement can result in catastrophic outcomes.

Conclusion
§ The Model AI Governance Framework is a technology-agnostic framework developed by Singapore to address the governance and management of AI technologies.
§ It addresses various issues pertaining to the adoption of AI technologies as they become widespread in society.
§ The RIM is a comprehensive tool to guide implementors and decision-makers on the risk issues surrounding the adoption of AI technologies in various use cases.

Governance, Risk & Compliance, TOPIC 9: AI ETHICS & GOVERNANCE 1 (AY2022/23 S1)

Objectives
▪ Introduction to AI and AI Ethics
▪ Need for AI Ethics
▪ AI Risks and their mitigations
▪ AI Ethics Body of Knowledge

What is AI (Artificial Intelligence)?
▪ The use of software to mimic the way humans learn and solve complex problems, in a bid to allow machines to do the same.
▪ Originally proposed by Alan Turing, who explored the mathematical possibility of artificial intelligence in his 1950 paper, Computing Machinery and Intelligence.
▪ Since then, algorithms and methods of 'testing for AI' have been proposed and developed.
▪ Our ability to increase computing performance, store big data and connect disparate systems (via the Internet) has increased the applicability of AI in everyday life.
▪ Can Machines Think?, Harvard University, 2017. https://sitn.hms.harvard.edu/flash/2017/history-artificial-intelligence/
AI Governance
▪ Artificial Intelligence (AI) and Machine Learning (ML) are technologies that promise to bring a new level of automation by having machines or programs make 'intelligent' decisions at a pace faster than humans.
▪ Like any technology, AI needs to be designed, implemented and maintained with an eye towards helping the organization achieve its mission.
▪ Due to the unique nature of AI, special care needs to be taken to ensure that AI is able to perform its stated objectives without violating any policies or (ethical) expectations placed on it by its implementors.
▪ A common term used to describe this is trustworthy AI.

AI Governance and IT Governance
▪ AI is essentially a 'branch' of IT, albeit currently an ill-defined, less mature technology that is still undergoing innovation, change and application: an emerging technology.
▪ Management of emerging technologies is an essential part of today's IT governance practice. COBIT 2019 has a process devoted to the management of innovation (APO04, Managed Innovation).
▪ The approach towards AI can (currently) benefit from managing it as an emerging technology, to account for the heightened risk and the lack of established processes and procedural best practices.

Need for AI Ethics
▪ Due to the 'intelligent' nature of AI, this technology is expected to take over some (or most) of the decision-making process in place of humans.
▪ Just as humans often need to make decisions based on our ethics (or code of morality), so too must AI systems make similar ethical decisions.
▪ AI systems are capable of making unfair and/or discriminatory decisions; AI ethics is needed to ensure that the risk of such decisions is mitigated or, if possible, eliminated.
▪ AI ethics requires us to maintain sufficient control and oversight over AI systems as they begin to make more decisions in place of humans.
Need for AI Ethics
▪ Some questions we can ask of our AI implementations are:
▪ How can we ensure that the AI system does not go rogue or make bad decisions?
▪ How can we keep human centricity at the core of all AI systems without impeding the benefits of AI?
▪ How can we ensure the outcomes that AI delivers fall within the boundaries that we have set?
▪ Should AI be regulated, especially in sensitive sectors such as healthcare, law and engineering?
▪ What is Singapore's approach to AI governance?

AI Risks
▪ Even as we develop AI technologies and implement them for various uses, the risks associated with their use continue to evolve. Some that we should consider today are:
▪ Unexplainability
▪ Bias or discrimination
▪ Misclassification

Unexplainability
▪ AI systems tend to be based on complex algorithms whose behaviour is the result of training the machine with training data.
▪ Unexplainability refers to the inability to explain the decisions made by AI systems.
▪ Explainability is a prerequisite for gaining trust and acceptance from stakeholders, among other things.

Unexplainability
▪ BlackRock is a large asset management company that leverages AI to develop liquidity risk (financial) models.
▪ However, its inability to explain how the AI system produces the models caused it to shelve the project, even though initial results showed promise.
▪ https://www.risk.net/asset-management/6119616/blackrock-shelves-unexplainable-ai-liquidity-models

Unexplainability
▪ Explainability is required for:
▪ Understanding the process by which the AI system arrives at a decision.
▪ Confidence in the continued performance of the AI system (assurance).
▪ Gaining trust and acceptance from users.
▪ Confidence that the system will not function in unexpected ways.
▪ Mitigations for unexplainability:
▪ Increased effort in understanding the AI system.
▪ Implementing human oversight over decisions made by the AI system.

Bias or Discrimination
▪ Refers to the tendency of AI systems to produce results that favour a particular group of people due to gender, race or other inherent characteristics.
▪ Improperly designed or trained AI systems can produce biased results that can have disastrous consequences.
▪ The elimination or reduction of bias is fairness.

Bias or Discrimination
▪ Bias is usually the product of an AI system trained using biased training data.
▪ A biased collection of training data will likely result in an AI system that tends to produce biased results.
▪ Mitigations include:
▪ Ensuring training data includes equal representation of the groups present in the data.
▪ Tuning the AI system manually to reduce bias.

Bias Case Study (Goldman Sachs)
▪ Goldman Sachs is the issuing bank for Apple's first credit card, the Apple Card.
▪ Credit lines were granted to customers based on an AI algorithm that takes in data related to creditworthiness. However, gender information was NOT included.
▪ In late 2019, users noticed that smaller lines of credit were offered to women than to men. It created a social media frenzy, which was very detrimental to Apple, a company that prided itself on equal opportunities across gender and race.

Bias Case Study (Goldman Sachs)
▪ It was discovered that the data, although it did not include gender, was inherently biased against women, due to the inherent bias in real-life data.
▪ The gender-blind algorithm was helpless when trained with data that correlates with gender.
▪ Mitigations:
▪ Understand the data and attempt to reduce bias, either through stricter evaluation of training data or through statistical adjustments.
▪ Implement an algorithm that reduces the impact of the (known) bias.
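A first-pass check for the 'equal representation' mitigation above can be sketched as a comparison of group shares in the training data against reference population shares. The function and field names here are hypothetical, and a share comparison is only a crude screen for selection bias, not a full fairness audit:

```python
from collections import Counter

def representation_gaps(records, group_key, reference_shares):
    """Compare each group's share of the dataset against a reference
    population share. A positive gap means the group is over-represented
    in the training data; a negative gap means it is under-represented."""
    counts = Counter(record[group_key] for record in records)
    total = len(records)
    return {group: counts.get(group, 0) / total - share
            for group, share in reference_shares.items()}
```

For a toy dataset of three male and one female record checked against a 50/50 reference population, the function reports a gap of +0.25 for men and -0.25 for women, flagging the skew before training begins.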
▪ https://www.wired.com/story/the-apple-card-didnt-see-genderand-thats-the-problem/

Bias in University Admissions
▪ For some time, universities have experimented with and implemented various AI systems to assist in the onerous but essential task of improving admission outcomes.
▪ In most cases, universities are interested in potential candidates who are most likely to achieve positive outcomes, that is, good results, profile, success, etc.
▪ However, since these systems rely on past data to identify 'good' candidates, they often tend to steer towards certain races or genders.
▪ This results in a bias towards a particular group of candidates.
▪ A 'fairer' outcome could be for the proportions in student admissions to reflect the general proportions of the local population by race or gender.

Misclassification
▪ Refers to the inadvertent and unexpected result of a wrong decision made by an AI system due to issues not considered by the system, including language diversity, cultural differences and varying contexts.
▪ Misclassification is usually the result of an incomplete or poorly trained AI system being applied in a complex scenario where insufficient factors were included in the training of the system.
▪ In April 2021, Facebook had to deal with the consequences of the automatic removal of the page of the small French town of Bitche, whose name was misclassified as an English insult.
▪ https://www.bbc.com/news/world-europe-56731027

Misclassification
▪ Mitigations:
▪ Carefully evaluate the 'fit' of the AI system to the context in which it is applied. The AI system may be inadequate.
▪ Include exceptions where human involvement is activated in anomalous scenarios.

AI Ethics BoK
▪ The AI Ethics Body of Knowledge was developed by SCS to address the adoption of AI-based technologies as Singapore seeks to harness this technology to build the future.
▪ It seeks to inform, educate and build consensual best practices in the Singapore AI ecosystem.
▪ It aims to be a handbook for three stakeholders: AI solution providers, businesses and end-user organisations, and individuals or consumers.

AI Ethics BoK
▪ The BoK is built on two high-level guiding principles that promote trust in AI and understanding of the use of AI technologies:
▪ Transparent, Fair & Explainable: organisations should strive to ensure that their use or application of AI is undertaken in a manner that reflects the objectives of these principles as far as possible. This helps build trust and confidence in AI.
▪ Human-Centricity: as AI is used to amplify human capabilities, the protection of the interests of humans, including their well-being and safety, should be the primary consideration in the design, development and deployment of AI.

AI Ethics BoK Approach to AI
▪ Internal Governance: internal governance structures and measures. Accountability involves adapting existing structures, or setting up new internal governance structures and measures, to incorporate the values, risks and responsibilities relating to algorithmic decision-making.
▪ Human Centricity: a methodology to aid organisations in setting their risk appetite for the use of AI. This includes determining acceptable risks and identifying appropriate levels of human involvement in AI-augmented decision-making.
▪ Operations Management: issues to be considered when developing, selecting and maintaining AI models, including data management (such as auditability).
▪ Stakeholder Communications: stakeholder interaction and communications. Strategies for communicating with an organisation's stakeholders and managing relationships with them.
Conclusion
▪ As Singapore strives to develop the AI ecosystem and harness AI-related technologies, it is important to ensure that AI is implemented within the boundaries of ethical behaviour and governance principles.
▪ AI-related risks are real and need to be identified and mitigated.
▪ SCS developed the AI Ethics BoK to help guide the implementation of AI-related technologies and to encourage discussion among the stakeholders in this ecosystem.

Governance, Risk & Compliance, TOPIC 11: AI ETHICS & GOVERNANCE 3 (AY2022/23 S1)

Objectives
§ EC Recommendations on AI Governance
§ Data and AI Governance

EC Recommendations on AI Governance
§ The European Commission released the Ethics Guidelines for Trustworthy AI in April 2019.
§ It defines FOUR ethical principles:
§ Respect for human autonomy
§ Prevention of harm
§ Fairness
§ Explicability
§ Based on these principles, seven requirements for AI system implementations were proposed.

EC Recommendations on AI Governance
§ Respect for human autonomy
§ AI systems should not encroach upon the freedom and autonomy of human beings.
§ AI systems should be designed to augment, complement and empower human cognitive, social and cultural skills.
§ In some cases, human oversight may be necessary in order to achieve this principle.
§ Prevention of harm
§ Includes both mental and physical harm.
§ AI systems must be safe and secure, while taking into account the special needs of vulnerable persons.
§ Prevention of harm also entails consideration of the natural environment and all living beings.

EC Recommendations on AI Governance
§ Fairness
§ AI systems should ensure equal and just distribution of both benefits and costs.
§ Where possible, AI systems should be implemented such that they increase societal fairness, even if the current situation is not fair.
§ The use of AI systems should not lead to people being deceived or unjustifiably impaired.
§ Includes the ability for affected parties to contest decisions and seek redress should fairness be seen to be violated.
§ Explicability
§ AI systems should be transparent, and the capabilities and purpose of such systems openly communicated.
§ Decisions made by the AI system should be explainable to those directly and indirectly affected.

Requirements of the EC Framework
§ Human agency and oversight: fundamental rights, human agency and human oversight.
§ Technical robustness and safety: resilience to attack, including security, fall-back plans and general safety; includes accuracy, reliability and reproducibility.
§ Privacy and data governance: respect for privacy, quality and integrity of data, and access to data.
§ Transparency: traceability, explainability and communication.

Requirements of the EC Framework
§ Diversity, non-discrimination and fairness: avoidance of unfair bias, accessibility and universal design, and stakeholder participation.
§ Societal and environmental well-being: sustainability and environmental friendliness, social impact, society and democracy.
§ Accountability: auditability, minimisation and reporting of negative impact, trade-offs and redress.

Realising AI
§ Technical methods
§ Adopting architectures for AI systems that ensure the requirements for trustworthy AI are adopted in the design, implementation and operation of AI systems.
§ Ensuring ethics and the rule of law are implemented in the design of AI systems.
§ Adopting explanation methods to explain why AI systems behave in a particular way.
§ Testing and validating AI systems.
§ Identifying and implementing quality-of-service indicators.

Realising AI
§ Non-technical methods
§ Implementing regulations to safeguard and enable trustworthy AI.
§ Enacting codes of conduct and having organizations pledge to implement them.
§ Setting and implementing standards for trustworthy AI.
§ Having AI systems certified against a set of requirements.
§ Accountability via governance frameworks.
§ Education and awareness.
§ Stakeholder participation and dialogue.
§ Having diverse and inclusive design teams implement AI systems.

Data & AI Governance
§ Today's AI systems and models depend on large datasets, for both training and verification purposes.
§ The use of datasets requires sound and secure management of data to ensure data quality and security.
§ AI governance requires sound data quality policies so as to reduce the risk of malfunction in AI systems, such as misclassification and bias.
§ Good data accountability practices are required for effective AI governance.

Data Accountability Practices (Data Lineage)
§ Refers to knowing where data originated and how it was collected, processed and moved within the organization.
§ Backward data lineage looks at the data from its end-use and traces it back to its source.
§ Forward data lineage begins from the data's source and follows it to its end-use.
§ End-to-end data lineage combines both forward and backward lineage for a holistic view of the use of the data.
§ A data provenance record can be used to trace the data's lineage, as well as the quality aspects of the data.

Data Accountability Practices (Data Lineage)
§ In some cases, the source of the data can be neither ascertained nor controlled. This may hinder attempts to determine or control the quality of the data from its source.
§ Organizations can mitigate this risk by putting controls on the data in place at the earliest point possible and by reacting quickly through monitoring of key metrics associated with the data.

Data Accountability Practices (Data Quality)
§ Factors affecting the quality of data include:
§ Accuracy of the dataset: how well the data reflects the true characteristics of the entities purported to be described by the dataset.
§ Completeness of the dataset: how comprehensive the dataset is with respect to the context(s) being described by the data.
§ Veracity of the dataset: the credibility of the data, that is, whether the data originated from a reliable source.
§ Timeliness of the dataset: how recently the data was compiled or updated and whether it is still valid.
§ Relevance of the dataset to the context in which it is applied.

Data Accountability Practices (Data Quality)
§ Integrity of the dataset: if the dataset has been joined from multiple sources or datasets, how well the extraction and transformation have been performed.
§ Usability of the dataset: whether it is in a machine-understandable form.
§ Human interventions: whether any human has filtered, labelled or edited the data.

Data Accountability Practices (Data Bias)
§ Inherent bias
§ Bias can occur in datasets, which can result in unintended discriminatory decisions by the AI system trained on the data.
§ Organizations should be aware of the risk of inherent bias, which can be one of the following:
§ Selection bias: a dataset consisting of data that is not fully representative of the actual environment, e.g., including only data of persons of a particular race.
§ Measurement bias: a dataset obtained using a device that skews readings in a particular direction.

Data Accountability Practices (Training, Testing and Validation)
§ As far as possible, have separate groups of data for training, testing and validation.
§ This practice can mitigate the risk of systematic bias through tendencies such as over-fitting, which are prevalent in AI systems.
§ Where the amount of data does not permit separation into groups, organizations should take steps to mitigate this through various techniques, depending on the type of AI system used.
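The separation into training, testing and validation groups can be sketched as a deterministic shuffle-and-slice. The 70/15/15 fractions below are illustrative assumptions, not figures from the framework:

```python
import random

def split_dataset(rows, train_frac=0.7, test_frac=0.15, seed=42):
    """Shuffle a dataset and split it into three disjoint groups for
    training, testing and validation. Validation receives whatever
    remains after the training and testing fractions are taken."""
    rows = list(rows)
    # Seeded shuffle so the split is reproducible across runs.
    random.Random(seed).shuffle(rows)
    n_train = round(len(rows) * train_frac)
    n_test = round(len(rows) * test_frac)
    train = rows[:n_train]
    test = rows[n_train:n_train + n_test]
    validation = rows[n_train + n_test:]
    return train, test, validation
```

Because the three slices are taken from one shuffled copy, no record can appear in more than one group, which is the property that guards against over-fitting going undetected.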
Data Accountability Practices (Updating)
§ Datasets should be reviewed periodically to ensure accuracy, quality, currency, relevance and reliability.
§ Where available, new data can be added to the datasets and used to update the AI models to improve performance.
§ Should the new input data be generated by the current AI systems, any bias already present in the system can be amplified when that data is used to update the datasets, resulting in reinforcement bias.

Case Study (Suade)
§ Suade provides an AI-enabled solution that helps financial institutions generate data and reports that comply with various regulatory requirements.
§ Data tagging: Suade tags datasets with metadata so that various information about the datasets (e.g., the data source and where the dataset was used) is available and can be evaluated when needed.
§ Where the data being tagged is subject to the personal bias of data taggers, multiple taggers work on each dataset to reduce the chance of personal bias from any individual tagger.
§ Prior to training, Suade validates the datasets to ensure that there are no errors in data formatting and content.

Case Study (MSD)
§ MSD is a multinational pharmaceutical company that implemented an in-house chatbot to answer IT-related queries.
§ The design team conducted user research to understand users' expectations of the chatbot in order to provide a user-friendly interface.
§ The team studied human behaviour patterns when interacting with the chatbot in order to create a human-centric AI system.
§ The bot is designed to forward unanswerable queries to a human for follow-up, where necessary.

Conclusion
§ The European Commission's recommendations for trustworthy AI provide a strong basis for an ethical and governance-centric approach towards AI implementation.
§ Data forms the foundation of AI system training.
Hence, sound data governance methodologies need to be implemented in order to ensure sound AI implementation and operations.