Full Transcript

Transparency and Explainability in AI
By: Quinn Downing, James Saber, Jimmy Schwinne

Defining Transparency and Explainability in AI
The goal of transparency in Artificial Intelligence is to ensure that stakeholders have a clear understanding of AI systems and how they make decisions. Explainability refers to the ability to describe how the model's algorithm reached its decisions in a way that is comprehensible to nonexperts. With the rapid adoption of AI, a growing concern is how these systems will maintain transparency and integrity for users. AI ethics requires avoiding bias, ensuring the privacy of users and their data, and mitigating risks. Understanding transparency and explainability is essential for building trust in AI systems, ensuring ethical usage, and meeting regulatory standards. Transparency fosters trust by helping users understand how decisions are made.

User Trust
- If users can't grasp AI and its decisions, they're less likely to trust and adopt the technology.

Accountability
- The GDPR mandates transparency in automated decision-making.

Ethical AI and Bias Reduction
- Explainability is essential for detecting and correcting biases; it allows organizations to identify and remove biases in their models.
- AI used in hiring or loan approvals has faced scrutiny for biased decisions.

Better Decision Making and Public Trust
- AI began with simple decisions and automation; now it is making important technical decisions.

How to Gain Trust
- Transparency: ensure that stakeholders have a clear understanding of AI systems and how they make decisions.
- Explainability: the ability to describe how the model's algorithm reached its decisions in a way that is comprehensible to nonexperts.

Case Study #1 - OpenAI
- OpenAI, the creators of ChatGPT, have been accused of a lack of transparency about the data used to build and train their models.
- OpenAI was breached in 2023, leading to scrutiny of security and privacy in AI models.
- The breach was not disclosed by the company to law enforcement or the general public.
- The hacker took information from an employee discussion forum on OpenAI's technology.
- Class action: according to Forbes, "This has led to lawsuits from artists and writers claiming that their material was used without permission."
- The hack and subsequent silence have cast doubt on the effectiveness of OpenAI's data security protocols, with growing demands for transparency and explainability in AI usage.

Case Study #2 - Google Gemini
- Aimed to provide a versatile tool capable of creating diverse images.
- Model issues: inconsistent output; over-corrected for inclusivity.
- Need for consumer trust: trust relies on accuracy and reliability, and on transparency about the development process.

Quiz questions generated through https://www.jotform.com/myforms/
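The deck notes that explainability lets organizations identify biases in their models, for example in hiring or loan approvals. One common technique for this is permutation feature importance: shuffle one input feature at a time and measure how much the model's output moves. A minimal sketch is below; the loan-scoring `model` and its feature names are hypothetical, invented purely for illustration, and the approach shown is a generic technique, not anything specific to the systems discussed in the slides.

```python
import random

# Hypothetical loan-approval scorer (illustrative only, not from the slides).
# Features: [income, credit_score, zip_code]. Suppose the model quietly leans
# on zip_code, which can act as a proxy for a protected attribute.
def model(features):
    income, credit_score, zip_code = features
    return 0.2 * income + 0.4 * credit_score + 0.4 * zip_code

def permutation_importance(model, rows, trials=100, seed=0):
    """Estimate each feature's influence by shuffling its column and
    measuring the average absolute change in the model's output."""
    rng = random.Random(seed)
    baseline = [model(r) for r in rows]
    importances = []
    for col in range(len(rows[0])):
        total_shift = 0.0
        for _ in range(trials):
            shuffled = [r[col] for r in rows]
            rng.shuffle(shuffled)
            perturbed = [r[:col] + [v] + r[col + 1:]
                         for r, v in zip(rows, shuffled)]
            total_shift += sum(abs(b - model(p))
                               for b, p in zip(baseline, perturbed)) / len(rows)
        importances.append(total_shift / trials)
    return importances

# Synthetic applicant data: 50 rows of 3 uniform features in [0, 1).
rows = [[random.Random(i * 3 + j).random() for j in range(3)]
        for i in range(50)]
print(permutation_importance(model, rows))
```

A disproportionately large importance for a feature like zip_code is exactly the kind of signal auditors look for when checking whether a model's decisions rest on a biased proxy rather than legitimate criteria.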
