AI Ethics and Oversight Quiz
45 Questions

Questions and Answers

What is the primary reason for having human oversight in AI systems within the orange zone of the Risk Impact Matrix?

  • Errors made by these systems can lead to minimal consequences.
  • AI systems in this zone can replace human decision-making entirely.
  • Missing human judgment may cause moderate to significant harm. (correct)
  • AI systems can operate independently without any oversight.
In what scenario should AI be considered a supportive tool rather than a replacement for human judgment?

  • In news recommendation systems for enhancing engagement.
  • In financial forecasting where speed is prioritized.
  • In medical diagnoses, like AI cancer diagnosis tools. (correct)
  • For administrative processes where decisions have little impact.

Which of the following statements best describes the concept of 'human-in-the-loop' in AI systems?

  • AI systems should operate autonomously, with no human intervention.
  • Humans should review every output produced by AI systems without exception.
  • Humans should monitor and intervene only in specific situations when errors are detected. (correct)
  • AI systems must completely replace human judgment to function effectively.

    What type of problems warrant a 'human-over-the-loop' approach when utilizing AI?

    Problems that require careful monitoring due to potential risks.

    Which of the following is an example of a scenario where human oversight is critical when using AI?

    An AI system diagnosing cancer alongside a radiologist.

    What is the primary objective of the Algorithmic Accountability Act?

    To increase accountability for AI systems regarding bias, privacy, or security

    What area is the EU focusing on with its upcoming AI legislation?

    High-risk AI systems

    Which of the following is NOT included in the Ethics Guidelines for Trustworthy AI?

    Profit maximization

    According to the content, what was the outcome of the Algorithmic Accountability Act so far?

    It has had limited success

    What does the EU's AI legislation aim to enhance besides accountability?

    Environmental standards

    Who published the Ethics Guidelines for Trustworthy AI?

    European Commission High-level Expert Group on AI

    Why is the AI legislation in the EU considered urgent?

    To address the ethical implications of AI development

    What principle is emphasized alongside privacy in the guideline framework for trustworthy AI?

    Non-discrimination

    What is the primary focus of the COBIT 2019 Governance Model?

    Framework for IT governance and management

    Which of the following is NOT one of the governance objectives included in COBIT 2019?

    Ensure regulation compliance

    How many core governance and management processes are encompassed within the COBIT 2019 framework?

    40

    Which standard is mentioned as a guideline for IT and data governance in relation to AI systems?

    ISO/IEC 38500

    What critical aspect is emphasized for organizations developing AI systems?

    Incorporating ethical considerations

    What is one primary responsibility of the Board of Directors concerning AI ethics?

    Understand AI ethics issues and ensure oversight

    Which of the following statements about COBIT 2019 is true?

    It includes a method for managing data.

    Which role is responsible for assessing potential legal liabilities associated with AI?

    Chief Legal Officer

    What is a limitation of the ISO/IEC 38500 and COBIT 2019 standards according to the content?

    They do not address ethical challenges in AI.

    Which of the following best describes the role of management according to COBIT 2019?

    To align activities with governance direction

    What kind of assessment does the Chief Risk Officer provide to the CEO and Board of Directors?

    Probabilistic assessment of adverse outcomes

    What is a key responsibility of the AI Ethics Committee?

    Draft and uphold AI ethics principles

    Which functionary is involved in auditing the risk and security of databases used by AI systems?

    Chief Audit Officer

    Should technical personnel be trained to monitor customer feedback on automated decisions?

    Yes, to address relevant customer complaints effectively

    What is the role of the CEO regarding AI in an organization?

    Overall management and strategic decisions on AI usage

    Why is it important for customer relation officers dealing with AI to be trained in sensitive complaint handling?

    To maintain customer satisfaction and trust

    What should be preferred in scenarios where AI use could lead to irreversible impacts?

    Manual processes

    Which of the following use cases is considered too dangerous for AI deployment?

    Automated large volume trading

    In which phase of the AI lifecycle should risks be quantified and categorized?

    Throughout all four phases

    What is the primary reason to weigh benefits against associated risks in AI management?

    To develop appropriate mitigation strategies

    What is a potential risk of relying on automated decisions without sufficient human oversight?

    Complacency or fatigue among users

    What is characterized by a dark red zone in the risk impact matrix?

    High-risk AI applications

    Which is NOT one of the four phases of the AI lifecycle?

    Analysis

    What is the consequence of a tiny lapse in human attention in high-risk AI scenarios?

    It can cause catastrophic outcomes.

    What is one of the responsibilities of the Chief AI Officer?

    Manage administrative processes related to AI

    Which role is responsible for conducting periodic ethics reviews of AI-related projects?

    Ethics Officer

    What is a significant risk associated with AI systems as highlighted by the example of BlackRock?

    Unexplainability of model outputs

    What action should a Chief Data Officer ensure regarding customer data?

    Ensure consent for collection and use of customer data

    Which issue is related to bias in AI systems as exemplified by Goldman Sachs?

    Proxy data can still induce bias

    What should the Chief HR Officer manage related to AI technology?

    Ethical concerns in hiring and promotions

    Which of the following is NOT a responsibility of the Chief AI Officer?

    Conducting ethical reviews

    What is a potential consequence of unexplainable AI models in financial services?

    Poor trust from consumers and stakeholders

    Study Notes

    AI Ethics Governance Framework for Organisations

    • AI Ethics Governance Framework outlines the benefits and risks of ethical AI governance for organizations.
    • It details AI ethics principles, corporate and IT governance models, the Ethics Guidelines for Trustworthy AI from the European Commission, and the PDPC Model AI Governance Framework.

    Adapting AI Ethics Governance in Organizations

    • Moral culture, internal governance structures, roles and responsibilities, and scenario analysis are crucial for adapting AI ethics in organizations.
    • AI ethics training and awareness are key components of this adaptation process.

    Governance in Banking & Finance

    • FEAT Principles (Fairness, Ethics, Accountability & Transparency) and Veritas Initiative are critical governance frameworks for AI in banking and finance.
    • Digital advisory services are also considered.

    AI Ethics Governance Framework

    • AI ethics refers to a set of moral principles defining boundaries and responsibilities in AI development and use.
    • AI ethics governance is a process for regulating rules and actions, structured and maintained through appropriate assignment of accountability.

    Why the need? Job Application

    • AI systems are impacting livelihoods, freedom, and well-being.
    • AI is used in job applications across various stages, from ad writing to interviews.

    Why AI Ethics Governance is important & relevant

    • AI ethics governance in decision-making positions influences how AI technologies are designed, developed, deployed, and used within an organization or community.
    • Individuals need to be aware of the impact AI systems have on their lives, including privacy and protection from unfair discrimination.

    AI Ethics Governance Framework - Benefits

    • Creates & maintains strong brand reputation
    • Builds trust among stakeholders
    • Increases public awareness & concerns about AI
    • Stays ahead of regulatory & governance standards

    Creates and maintains strong brand reputation

    • Practicing ethical AI governance builds a reputable brand less likely to be affected by ethical lapses.
    • Awareness of potential AI issues and risks is growing, but AI ethics is still not broadly discussed or well understood.

    Builds trust among stakeholders

    • Successful technology adoption depends on trust from stakeholders.
    • Loss of trust can significantly impact organizations' future actions and genuine efforts.
    • Technology's ethics depend on the creators' and users' design choices.

    Increase public awareness and concerns about AI

    • Growing concern about extensive AI use is shared across different demographics, like sex, age, income, and education.
    • A poll of 20,000 people across 27 countries showed that almost half believe companies using AI should be more strictly regulated.

    Stay ahead of regulatory and governance standards

    • The US approach to AI regulation involves guidelines and leaves the development of AI solutions to federal agencies and the industry, with limited success
    • The EU is developing legislation to govern high-risk AI systems based on Ethics Guidelines for Trustworthy AI and other crucial values. Likewise, Canada and China have taken progressive measures to govern AI deployment.

    The Changing AI Regulatory Landscape in Singapore

    • Singapore's Personal Data Protection Commission (PDPC) released its first edition of the Model AI Governance Framework.
    • A second edition was published in January 2020.
    • The need for AI system regulations is growing; the issue is how extensive it should be and when it should be enacted.

    Risks when translating ethical principles into practices

    • Ethics shopping involves choosing ethical guidelines to fit current practices without incentives for change.
    • Ethics bluewashing makes misleading claims or uses superficial measures to appear more ethical than one is.
    • Ethics dumping exports unethical R&D to locations with less stringent practices, then imports the outcomes.

    AI Ethics Principles

    • Numerous ethical guidelines and principles have been issued.
    • Floridi and Cowls' study of several documents highlights the divergence in AI ethics principles and the existence of disagreements and conflicting priorities across different organizations.
    • Principles like beneficence, non-maleficence, autonomy, justice, and explicability are crucial for trustworthy AI.

    Corporate Governance

    • Corporate governance is the system of rules, practices, and processes governing how an organization is directed and controlled.
    • It balances stakeholders' interests (shareholders, management, customers, etc.).
    • The organization’s board primarily influences governance and ensures ethical conduct, environmental awareness, compensation, and risk management

    Is there a need for another Governance Framework for AI?

    • Most organizations have governance processes for IT, but AI differs from conventional IT in its non-deterministic and often opaque operations. AI models learn and evolve over time and therefore require explicit consideration of the model code and training datasets, along with continuous monitoring.

    Where Should We Start?

    • AI solutions are deployed through a combination of software systems and data pipelines. Existing IT and technology governance models (ISO/IEC 38500 and COBIT 2019) are good starting points.

    ISO/IEC 38500 Governance Model

    • ISO/IEC 38500 is an international standard guiding corporate governance of IT.
    • It provides guiding principles for evaluating, directing, and monitoring the use of IT across organizations, regardless of purpose, design, or ownership.
    • The standard has six principles: responsibility, strategy, acquisition, performance, conformance, and human behaviour.

    COBIT 2019 Governance Model

    • COBIT 2019 is a widely accepted framework for IT governance and management created by ISACA.
    • It includes 5 governance & management objectives encompassing 40 core governance and management processes to establish a governance strategy and manage data effectively.

    The Ethics Guidelines for Trustworthy AI - European Commission (EC)

    • The EC Ethics Guidelines for Trustworthy AI, released in April 2019, set out seven key requirements that guide the development, deployment, and use of AI systems. The guidelines are grounded in the ethical principles of respect for human autonomy, prevention of harm, fairness, and explicability.
    • The guidelines translate these principles into requirements and offer an assessment list for organizations to operationalize them and tailor them to specific AI applications.

    EC's Framework for Trustworthy AI – Human Agency & Oversight

    • AI systems should support human autonomy and decision-making.
    • The assessment includes whether the system discloses that decisions are made algorithmically and, when a chatbot is used, the nature of the interaction.

    EC's Framework for Trustworthy AI – Technical Robustness & Safety

    • AI systems need to minimize unintentional harm and prevent unacceptable harm or vulnerabilities that adversaries could exploit.
    • Common attacks include adversarial inputs that aim to make systems err, data poisoning that manipulates training data, and model stealing.
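    To make the adversarial-input threat concrete, the sketch below shows the classic fast gradient sign method (FGSM) in PyTorch; the tiny model, the random input tensor, and the epsilon value are placeholders invented for this illustration, not part of the guidelines.

```python
import torch
import torch.nn as nn

# Placeholder classifier and input, purely for illustration.
model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))
x = torch.rand(1, 1, 28, 28, requires_grad=True)  # a stand-in "image"
y = torch.tensor([3])                             # its true label

# Compute the loss gradient with respect to the input.
loss = nn.CrossEntropyLoss()(model(x), y)
loss.backward()

# FGSM: nudge every pixel in the direction that increases the loss.
epsilon = 0.1
x_adv = (x + epsilon * x.grad.sign()).clamp(0.0, 1.0)
```

    A robustness review would check how much the model's prediction changes between x and x_adv for small epsilon values.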

    EC's Framework for Trustworthy AI – Privacy & Data Governance

    • Organizations must ensure quality and integrity of used data and protect privacy.
    • Processes must protect privacy throughout the whole lifecycle, data must be collected responsibly, and it should not be used to violate user rights or unjustly discriminate against them.
    • Quality & integrity of data should be analyzed before training, testing, and deployment with appropriate precautions to avoid biases & inaccuracies

    EC's Framework for Trustworthy AI – Transparency

    • Transparency involves the traceability and explainability of decisions made by an AI system, including knowledge of the processes involved and the rationale for deploying the AI system to influence business decisions.
    • Users have the right to be informed about the capabilities and limitations of an AI system.

    EC's Framework for Trustworthy AI – Diversity, Non-Discrimination & Fairness

    • Ensure diversity, inclusion, and accessibility in AI systems.
    • Identify, avoid, and rectify biases in training data through oversight processes.
    • Encourage a diverse team in the development process.
    • AI systems should incorporate universal design, considering the widest possible range of users, and encourage stakeholder participation.

    EC's Framework for Trustworthy AI – Societal & Environmental Well-being

    • Sustainability and ecological responsibility should be prioritized during AI design, development, deployment, and use.
    • Monitor the potential social impacts of using AI systems to mitigate risks to society and to physical and mental well-being.
    • Consider the societal and democratic implications of using AI systems in policymaking and electoral activities.

    EC's Framework for Trustworthy AI – Accountability

    • Auditability enables the assessment of algorithms, data, and processes.
    • Ensure actions contributing to outcomes are properly accounted for.
    • Appropriate mechanisms are essential for decision-making and accountability across stakeholders to address any adverse impacts. The system must include provisions for redress for adverse impacts.

    Other Ethics Guidelines

    • IEEE Ethically Aligned Design provides a comprehensive framework for the ethical design of AI systems (e.g., addressing ethics and legal frameworks).
    • The EAD1e guidelines are relevant to AI systems development, project management, and R&D.
    • Singapore's PDPC Model AI Governance Framework is more suitable for organizations that plan to deploy AI solutions.

    PDPC Model AI Governance Framework

    • The framework is based on essential principles that promote trust in AI and a better understanding of AI systems.
    • Organizations should strive to ensure that AI applications make decisions that are explainable, transparent, and fair, in line with these principles.

    PDPC Model AI Governance Framework – Human-Centric Approach

    • AI solutions should be human-centric: the well-being and safety of humans are prioritized over other considerations.
    • Human risk minimisation protects humans from harm.
    • Human oversight determines the extent of human involvement in the AI decision-making process.
    • A human-centred design process is also essential to ensure ethical development, deployment, and application.

    PDPC Model AI Governance Framework – Operational Guidance

    • The framework provides guidance and measures in four key areas: (1) internal governance structures; (2) the level of human involvement in AI-augmented decision-making; (3) operations management; (4) stakeholder interaction and communication.

    Model AI Governance Framework - Case Study (MSD)

    • MSD, a biopharmaceutical company, used AI to improve employee engagement and manage attrition risk.
    • The company created a data scientist role for ethical AI practices, implemented ethics awareness training for personnel, and trained HR on using the AI model's output to improve employee satisfaction.
    • HR and managers were trained in interpreting and using the AI outputs so that decisions balanced commercial objectives with ethical concerns.

    Model AI Governance Framework - Case Study (MSD) - Attrition Risk Management

    • MSD recognized the sensitivity surrounding workforce attrition and used a human-in-the-loop approach in its AI-augmented process for workforce decisions.
    • The model flagged at-risk employees, but decisions impacting employees were made by managers and HR to mitigate unfair treatment.

    Model AI Governance Framework - Case Study (MSD) - Explainability

    • The project team explained AI results to management and ensured data quality.
    • The meaning and origin of features were explained by data professionals from the business unit to the team.
    • Clear explanations were provided for predictive scores at both the model level and the individual level.
    • The approach facilitated better understanding and built trust.

    Model AI Governance Framework - Case Study (MSD) – Stakeholder Interaction and Communication

    • The project team in collaboration with UX design teams developed user-friendly interfaces.
    • Elicitation techniques were used to understand staff concerns and this directly affected the model's design.
    • Fairness testing was done by comparing the model's results with human predictions, supporting data-driven conversations with business unit leaders. Access to results was limited to relevant stakeholders (leadership teams).
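    A minimal sketch of this kind of comparison, assuming both model flags and managers' judgments are available for the same employees; the example values and the simple agreement metric are illustrative, not drawn from the MSD case.

```python
# Hypothetical flags for the same ten employees: 1 = at risk of attrition.
model_flags = [1, 0, 0, 1, 1, 0, 0, 1, 0, 0]
human_flags = [1, 0, 0, 0, 1, 0, 1, 1, 0, 0]

# Simple agreement rate between the model and experienced managers.
agreement = sum(m == h for m, h in zip(model_flags, human_flags)) / len(model_flags)
print(f"Model/human agreement: {agreement:.0%}")
```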

    Moral Culture – Core Values

    • Core values are fundamental beliefs guiding behavior and decisions.
    • They differentiate between right and wrong actions.
    • Companies (e.g., Microsoft, NYP) outline core values guiding organizational conduct and decisions.

    Moral Culture – Core Values with AI Ethics

    • Establish clear areas within the organization's decision-making framework to address and raise any ethical concern regarding AI deployment, and allocate roles to carry out responsibilities.
    • Strong leadership, proactive discussions with stakeholders, clarity, and alignment with core values are essential elements to support, reinforce, and monitor the ethical deployment of AI and ensure organizational objectives can be balanced with ethical concerns.

    Moral Culture – Helps from outside organizations

    • External subject matter experts can offer objective, unbiased guidance and help prevent biased decision-making.
    • These SMEs should bring diverse expertise to foster credibility.

    Internal Governance Structures - Internal Structure

    • Organizations have established internal control structures and policies.
    • Policies must be applied proactively throughout the AI solution's development, including early deployment and ongoing review, to ensure accountability.
    • Such structures can be centralized, decentralized, or hybrid.

    Internal Structures - Centralised

    • A centralized governance structure implements policies consistently across departments.
    • Centralized decision-making is appropriate for high-risk or contentious AI deployments.
    • In these cases, units escalate issues to higher levels for review.

    Internal Structures – Decentralized

    • Decentralized ethics review empowers teams with in-depth expertise and accelerates decision-making.
    • Policies can be tailored to specific departments to address local ethical issues.
    • This approach can also facilitate clearer guidelines on best practices

    Centralised & Decentralised Structures - Comparison

    • Centralised structures provide consistent, standardized processes for evaluating AI risk and impact, while decentralization promotes quicker, localized decision-making.
    • A hybrid approach combining centralized and decentralized strategies is recommended for better risk management.

    Roles and Responsibilities

    • Organizational roles and responsibility are fundamental to the successful and trustworthy deployment of AI.
    • Two approaches address role assignment: (a) role-first: list out roles and assign people; (b) concern-first: identify concerns, then assign roles and tasks to address those concerns.

    Roles and Responsibilities - Case Study in DBS

    • DBS Bank's Anti-Money Laundering Filter Model (AMLFM) is supported by internal governance structures that ensure robust oversight of its deployment.
    • The bank utilizes a Responsible Data Use (RDU) committee and a Global Rules and Models Committee (GRMC) to evaluate and manage risks associated with implementing the AMLFM and approve its deployment.

    AI Ethics Training and Awareness

    • Staff involved in AI management, development, or use must receive contextually relevant training.
    • Training focuses on the individual's role and responsibilities regarding ethics in Al, technical skills, compliance, and understanding current privacy laws.
    • Training should prepare stakeholders to interpret AI model outputs and identify/mitigate biases in data.

    AI Ethics Training and Awareness – Case Study: Apple Credit Card

    • Apple's new credit card faced investigations after a complaint about potential algorithmic discrimination.
    • The case prompted discussion about the need for customer relationship officers to be well trained to handle complaints and to be empathetic when interacting with customers using AI-based systems.
    • It also highlighted the need for technical personnel to monitor customer feedback and investigate any irregularities in automated decisions.

    Roles and Responsibilities - Roles by functionary

    • Presents various leadership roles within an organization and highlights their responsibilities. These include responsibilities for risk, legal, information, technology, and human resources roles.

    Risks Associated with AI Systems – Examples

    • AI systems can face challenges with explainability, as in the BlackRock case, because their processes can be hard to understand.
    • Unfair biases in datasets used to train AI models, such as in the Goldman Sachs case, can manifest in the outputs.

    Identifying Risks

    • Pre-mortem analysis, a method of brainstorming and assessing risks, can be used to anticipate project failures and mitigate them proactively.
    • This method helps to anticipate possible causes and assign severity and likelihood to them, facilitating risk mitigation planning.
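    As a minimal illustration of turning a pre-mortem session into a ranked risk register, the sketch below scores each hypothesised failure by severity x likelihood; the failure modes and the 1-5 scales are invented for the example, not taken from the framework.

```python
# Hypothetical pre-mortem output: failure mode -> (severity 1-5, likelihood 1-5).
premortem = {
    "training data misses a key customer segment": (4, 3),
    "model output cannot be explained to regulators": (5, 2),
    "staff over-trust the automated decision": (3, 4),
}

# Rank failure modes by a simple severity x likelihood score.
ranked = sorted(premortem.items(), key=lambda kv: kv[1][0] * kv[1][1], reverse=True)

for failure, (severity, likelihood) in ranked:
    print(f"{severity * likelihood:>2}  {failure} (severity={severity}, likelihood={likelihood})")
```

    The highest-scoring items are the ones whose mitigation plans would be drafted first.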

    Risk Management - Consider Appropriate Justifications

    • Scalability, cost-efficiency, and performance improvements often justify AI solutions.
    • Where automated decisions could cause significant harm, human involvement may be required, or deployment avoided altogether.
    • Routine decisions can be effectively automated, while rare events should be managed with human oversight to minimize risks from potentially erroneous AI decisions.

    Risk Management - Consider Human Involvement

    • The Model Framework proposes a risk impact assessment with three approaches to classify human oversight levels in decision-making processes.
    • These approaches include human-in-the-loop, human-out-of-the-loop, and human-over-the-loop.

    Risk Management – Risk Impact Matrix

    • Risk Impact Matrices (RIMs) help organizations evaluate the potential harm, likelihood, and reversibility of risks from AI solutions to inform decisions on the level of human involvement needed. Based on this analysis, organizations can classify their risks and choose the appropriate level of human oversight.

    Risk Management - Risk Impact Matrix – Green Zone

    • Use cases in the green zone are the most practical for deploying AI systems.
    • Erroneous decisions cause users only minimal inconvenience.
    • Such AI systems are properly categorized as "human-out-of-the-loop".

    Risk Management - Risk Impact Matrix – Yellow Zone

    • AI usage in this zone warrants human oversight, as full automation may not be ethical.
    • Cases may cause harm but are reversible, prompting the need for human oversight during risky or potentially harmful decisions by the AI.

    Risk Management – Risk Impact Matrix – Orange Zone

    • Orange zone use cases need human oversight due to potential for moderate to significant harm.
    • This warrants careful testing and a human-in-the-loop approach to verify and minimize harm.

    Risk Management – Risk Impact Matrix – Dark Red Zone

    • Use cases in the dark red zone involve high risks leading to irreversible harm to individuals/society.
    • Human-in-the-loop oversight is essential to prevent or greatly mitigate the potential harm.
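    A minimal sketch of how the zones described above could be encoded, assuming simple ordinal ratings for harm severity and probability plus a reversibility flag; the cut-off values and the mapping to oversight levels are illustrative only, not taken from the Model Framework.

```python
def risk_zone(severity: int, probability: int, reversible: bool) -> str:
    """Classify an AI use case into a risk-impact zone.

    severity and probability are ordinal ratings from 1 (low) to 5 (high);
    the thresholds below are illustrative only.
    """
    score = severity * probability
    if severity >= 4 and not reversible:
        return "dark red: human-in-the-loop essential, or avoid AI"
    if score >= 12:
        return "orange: human-in-the-loop, careful testing"
    if score >= 6:
        return "yellow: human-over-the-loop oversight"
    return "green: human-out-of-the-loop acceptable"

# Example: a routine recommendation vs. automated large-volume trading.
print(risk_zone(severity=1, probability=2, reversible=True))
print(risk_zone(severity=5, probability=3, reversible=False))
```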

    Risk Assessment & Mitigation across AI Lifecycle – 4 Phases

    • Business teams and AI designers must understand the phases of the AI lifecycle. The four phases are design, development, deployment, and post-deployment.
    • Risks should be quantified and mitigation strategies created according to their severity.

    Risk Assessment & Mitigation across AI Lifecycle – Design Phase

    • Understanding commercial objectives and user intent for AI problems is vital for successful deployment
    • To mitigate risk, well-documented processes, and approval from a cross-functional team (e.g. business, technical, legal, and ethics) are necessary before deploying an AI.
    • DBS bank’s example demonstrates this process by establishing a committee to oversee the ethical use of AI in their systems.

    Risk Assessment & Mitigation across AI Lifecycle – Development Phase

    • Dataset selection for training algorithms must accurately represent the population of concern.
    • Exploratory data analysis is crucial to ensure no discrimination or bias toward superficial attributes.
    • In DBS Bank's example, the whole dataset was used for training, testing, and validation via a traceable approach, and the data was categorized to mitigate inherent biases.
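    As an illustration of the kind of exploratory check described above, the sketch below compares group representation and outcome rates across a sensitive attribute using pandas; the column names and example records are hypothetical, not from any real dataset.

```python
import pandas as pd

# Hypothetical training records; "gender" and "approved" are illustrative columns.
df = pd.DataFrame({
    "gender":   ["F", "M", "F", "M", "M", "F", "M", "F"],
    "approved": [1,   1,   0,   1,   1,   0,   1,   1],
})

# How well is each group represented in the training data?
representation = df["gender"].value_counts(normalize=True)

# Do outcome rates differ sharply between groups (a possible bias signal)?
outcome_rates = df.groupby("gender")["approved"].mean()

print(representation)
print(outcome_rates)
```

    Large gaps in either table would prompt a closer review of data collection and labelling before training proceeds.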

    Risk Assessment & Mitigation across AI Lifecycle – Deployment Phase

    • AI outcomes need to be explained so that front-line staff can respond effectively to customer complaints.
    • Standard operating procedures (SOP) and resources are required to effectively explain AI outcomes to stakeholders, and training for all levels is critical.
    • A comprehensive understanding of the data lineage and transaction alert triggers, coupled with a transparent computation process, improves the explanation of algorithms and risk predictions.

    Risk Assessment & Mitigation across AI Lifecycle – Post-deployment Phase

    • New data coming into an AI model may affect its dynamics and accuracy over time.
    • Regular data monitoring, profiling, and quality analysis is crucial to detect and address any significant drifts.
    • Designing AI systems with robust escalation and/or tracking mechanisms can quickly alert governance teams and help them address changes, keeping system reliability under continuous watch.
    • DBS's example demonstrates the importance of continuous monitoring of the AML filter model to ensure its efficacy and stability through review of the results and approval of the findings by the appropriate committee every six months.
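    A minimal sketch of the kind of drift check this monitoring could involve, assuming a numeric feature from the training set and a recent production sample; the use of SciPy's two-sample Kolmogorov-Smirnov test, the synthetic data, and the 0.05 threshold are assumptions for illustration, not part of the framework.

```python
import numpy as np
from scipy.stats import ks_2samp

# Hypothetical feature values from training time and from recent production traffic.
rng = np.random.default_rng(0)
train_values = rng.normal(loc=0.0, scale=1.0, size=5_000)
live_values = rng.normal(loc=0.4, scale=1.0, size=5_000)  # distribution has shifted

# Two-sample Kolmogorov-Smirnov test: a small p-value suggests the
# live data no longer follows the training distribution.
statistic, p_value = ks_2samp(train_values, live_values)
if p_value < 0.05:
    print(f"Drift detected (KS statistic={statistic:.3f}); escalate for review.")
else:
    print("No significant drift detected.")
```

    In practice such a check would run on a schedule and feed the escalation mechanism described above rather than a print statement.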

    Governance in Banking & Finance – FEAT Principles

    • The Monetary Authority of Singapore (MAS) promotes fairness, ethics, accountability, and transparency (FEAT) in the use of AI and data analytics in financial institutions.
    • These principles guide financial institutions offering products and services concerning AI and data usage.

    Governance in Banking & Finance – FEAT Principles - Ethics

    • The use of AIDA (artificial intelligence and data analytics) must align with the organization's ethical standards, values, and conduct.
    • AIDA-driven decisions are held to the same ethical standards as human decisions.

    Governance in Banking & Finance – FEAT Principles – Internal Accountability

    • AIDA decision-making processes require internal approval before use.
    • Firms using AIDA are accountable for both internally and externally developed AIDA models.
    • Firms utilizing AIDA should increase internal and board awareness of its usage.

    Governance in Banking & Finance – FEAT Principles – External Accountability

    • Stakeholders should be able to question decisions, make appeals, and request reviews (through proper channels).
    • Organizations can consider providing verified and related data for review when necessary.

    Governance in Banking & Finance – Veritas Initiative

    • The MAS-led Veritas initiative provides a framework for incorporating FEAT principles into AIDA solutions using mathematically-verifiable methods.
    • The framework uses open-source tools for use in retail banking, corporate finance, and foreign markets, and focuses on customer marketing, risk scoring, and fraud detection.

    Governance in Banking & Finance – Digital Advisory Services

    • MAS provides guidelines on Digital Advisory Services (robo-advisors) for investment products.
    • Robo-advisors use algorithm-based tools and operate similarly to conventional financial advisors.
    • MAS requires effective oversight and governance by boards of directors and senior management.

    Governance in Healthcare - Operation Management in CDSS

    • Clinical decision support systems (CDSS) focus on improving diagnosis and error reduction using medical records and patient history via IT applications.
    • Data can be sourced from Electronic Medical Records (EMRs) or Electronic Health Records (EHRs) for a complete healthcare history, which may differ depending on healthcare provider.
    • An AI-powered CDSS can predict diagnoses and treatment plans, drawing on a holistic healthcare history for each patient.

    Governance in Healthcare - Operation Management in OPI

    • Robotic process automation (RPA) combined with AI is used for routine healthcare tasks, including insurance claims processing, clinical documentation, and medical records management.
    • AI supports staff by automating the coding of diagnosis-related groups (DRGs), freeing staff from time-consuming manual coding.

    Governance in Healthcare – IHIS

    • Integrated Health Information Systems (IHIS) acts as a central data authority for integrated hospitals in Singapore, digitalizing, connecting, and analyzing all aspects of healthcare.
    • IHIS collates patient medical data in the National Electronic Health Record (NEHR) for a seamless patient history.
    • AI systems that analyze discharge summaries are used at IHIS to predict readmissions with a high accuracy of 86%.

    Governance in Healthcare – EMR Governance

    • EMR systems manage healthcare records electronically with secure access.
    • Embedding AI solutions into EMR systems requires compliance with, and approval from, the appropriate governance bodies.
    • Implementing AI solutions for drug reactions (e.g., HLA screening) can be proactive, preventing adverse effects before drug initiation.

    Governance in Healthcare - Medical Devices

    • Singapore's Health Sciences Authority (HSA) encompasses several specialized agencies and provides guidelines for AI-enabled medical devices, drawing on WHO recommendations and other relevant acts.
    • AI implementations are governed in conjunction with existing medical device regulations.

    Governance in Healthcare - Drugs Discovery

    • Large amounts of research and development (R&D) are involved in bringing a drug to market.
    • Big pharmaceutical companies are adopting AI-powered systems for faster & more efficient drug discovery processes.
    • Examples include utilizing AI to find immuno-oncology drugs (Pfizer), metabolic-disease therapies (Sanofi), and cancer treatments (Genentech).

    Governance in Two Critical Sectors - Finance & Healthcare

    • The aim of governance for technologies such as AI is not to inhibit their use but to ensure they are used ethically and safely.
    • Singapore's approach entails providing frameworks and guidelines, educating stakeholders about the risks of using AI, and developing appropriate tools to ensure AI applications are safe and reliable.


    Description

    Test your understanding of ethical considerations and human oversight in AI systems. This quiz covers critical aspects such as the risks associated with AI, the importance of human judgment, and legislation surrounding AI accountability. Explore how ethical guidelines and laws shape the future of artificial intelligence.
