Fair Information Practices Lecture Notes
Summary
These lecture notes cover Fair Information Practices (FIPs): guidelines for handling data with privacy, security, and fairness, including disambiguation, history, and common principles. They also survey AI ethics topics: key terminology, major principle frameworks (OECD, the White House Blueprint, UNESCO, Asilomar, IEEE, CNIL, and the EU), AI-related harms, types of bias, and trustworthy AI.
Full Transcript
Fair Information Practices
Guidelines for handling data with privacy, security, and fairness

In this lecture:
- Disambiguation
- What are the FIPs?
- History of the FIPs
- Common principles

Disambiguation
- Fair Information Practices (FIPs)
- Fair Information Practice Principles (FIPPs)
- Fair Information Privacy Principles (FIPPs)
- Federal Information Processing Standards (FIPS)

What are the FIPs?
- Guidelines for handling data with privacy, security, and fairness
- OECD Guidelines (1980)
- Various global interpretations

History of the FIPs
- U.S. Department of Health, Education, and Welfare (HEW) Report (1973)
- 5 principles:
  - No secret collection
  - Access and amendment
  - Consent
  - No secondary use
  - Appropriate safeguards

Common Principles
- Access/individual participation
- Purpose specification
- Data minimization/collection limitation
- Data quality, relevance
- Safeguards/security
- Notice/openness
- Accountability
- Use limitation

Mnemonic Device
At Paradise, Dalmatian Dogs Snooze Near Aerial Unicorns. (Access, Purpose, Data minimization, Data quality, Safeguards, Notice, Accountability, Use limitation)

Access / Individual Participation
- Access: data subjects have the right to request access to (and amend) their personally identifiable information (PII)
- Individual participation: data should be collected from the data subject (not a second or third party)

Purpose Specification
- The purpose of collection should be specified (i.e., why is that data being collected?)

Data Minimization
- AKA collection limitation
- Only collect what is necessary
- Maintain data only for as long as needed

Data Quality, Relevance
- Data should be accurate, complete, and up to date
- Data collected should be relevant to the purpose specified

Safeguards / Security
- Appropriate administrative, technical, and physical safeguards

Notice / Openness
- Notice: advance statement of data collection
- Openness: transparency of policies, procedures, etc.

Accountability
- Organization takes ownership of its policies, procedures, etc.

Use Limitation
- Data collected is used only for specified purposes
- No secondary use

Review: What are the FIPs?
- Fair Information Practices: guidelines for handling data with privacy, security, and fairness
- OECD Guidelines (1980)
Common principles:
- Access/individual participation
- Purpose specification
- Data minimization/collection limitation
- Data quality, relevance
- Safeguards/security
- Notice/openness
- Accountability
- Use limitation

Buzzword Bingo
Important terms for understanding AI ethics

In this lecture:
- Accountability
- Contestability
- Explainability (XAI)
- Fairness
- Reliability
- Robustness
- Safety
- Transparency
- Trustworthy AI

Accountability
- Obligation and responsibility of creators and regulators
- Ensure the system operates in an ethical, fair, transparent, and compliant manner
- System actions, decisions, and outcomes are traceable

Contestability
- Ensure AI system outputs and actions can be questioned and challenged
- Promotes transparency and accountability

Explainability (XAI)
- Ability to describe AI's output and decision making
- Promotes trust and transparency

Fairness
- Appropriate standards determined for each system
- Consistent, accurate prioritization of relatively equal treatment of individuals and groups
- Decisions should not adversely impact sensitive/protected characteristics (e.g., gender, race, religion, etc.)
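The fairness definition above is often screened numerically with a disparate impact ratio. Below is a minimal sketch (not from the notes): the 0.8 threshold echoes the U.S. EEOC "four-fifths rule" for selection rates, and the decisions and group labels are hypothetical.

```python
# Illustrative sketch: a disparate impact ratio as a simple group-level
# fairness screen. The 0.8 cutoff comes from the EEOC four-fifths rule;
# the data and group labels below are hypothetical.

def disparate_impact_ratio(outcomes, groups, positive=1,
                           protected="B", reference="A"):
    """Ratio of the protected group's positive-outcome rate to the
    reference group's. A value below ~0.8 is a common red flag."""
    def rate(g):
        selected = [o for o, grp in zip(outcomes, groups) if grp == g]
        return sum(1 for o in selected if o == positive) / len(selected)
    return rate(protected) / rate(reference)

# Hypothetical loan-approval decisions (1 = approved) for two groups.
outcomes = [1, 1, 0, 1, 1, 0, 1, 0, 0, 1, 0, 0]
groups   = ["A", "A", "A", "A", "A", "A", "B", "B", "B", "B", "B", "B"]

ratio = disparate_impact_ratio(outcomes, groups)
print(f"Disparate impact ratio: {ratio:.2f}")  # ~0.50 here: worth investigating
```

A ratio this far below 0.8 does not prove discrimination on its own, but under the four-fifths heuristic it flags the system for closer review.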
Reliability
- Ensuring a system behaves as expected
- Performs its intended function consistently and accurately, especially with unseen data

Robustness
- Maintains functionality
- Performs accurately in a variety of circumstances, e.g., new environments, unseen data, adversarial attacks (a quick check is sketched below)
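Robustness, as defined above, can be probed empirically. Below is a minimal sketch (not from the notes, and a deliberately crude stand-in for real robustness testing): it assumes scikit-learn and the Iris dataset, perturbs test inputs with Gaussian noise, and watches accuracy degrade.

```python
# Illustrative sketch: compare a classifier's accuracy on clean test data
# versus the same data perturbed with Gaussian noise. Dataset and model
# choices are arbitrary stand-ins.
import numpy as np
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = load_iris(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

model = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)

rng = np.random.default_rng(0)
for sigma in (0.0, 0.1, 0.5, 1.0):          # increasing perturbation strength
    X_noisy = X_te + rng.normal(0.0, sigma, X_te.shape)
    acc = model.score(X_noisy, y_te)
    print(f"noise sigma={sigma:.1f}  accuracy={acc:.2f}")
# A steep accuracy drop as sigma grows suggests the model is not robust
# to even mild distribution shift.
```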
Safety
- Systems designed to minimize potential harm to individuals, groups, society, and the environment

Transparency
- Extent to which information is made available
- E.g., use of AI, functioning of the model, decision making

Trustworthy AI
- Principle-based AI governance
- Used interchangeably with "responsible AI" and "ethical AI"

Review:
- Accountability
- Contestability
- Explainability (XAI)
- Fairness
- Reliability
- Robustness
- Safety
- Transparency
- Trustworthy AI

OECD AI Principles
Values to guide AI actors to develop trustworthy AI and AI policies

In this lecture:
- About the principles
- OECD AI principles

About the Principles
- 2019: initially adopted
- 2024: updated
- First intergovernmental AI standard
- Used to shape policies and create risk management frameworks (RMF)
- 47 adherents, including the EU, US, and United Nations

OECD AI Principles
- Inclusive growth, sustainable development, well-being
- Human rights, democratic values, fairness, privacy
- Transparency and explainability
- Robustness, security, safety
- Accountability

AI Principle #1: Inclusive growth, sustainable development, and well-being
"Stakeholders should proactively engage in responsible stewardship of trustworthy AI in pursuit of beneficial outcomes for people and the planet, such as augmenting human capabilities and enhancing creativity, advancing inclusion of underrepresented populations, reducing economic, social, gender and other inequalities, and protecting natural environments, thus invigorating inclusive growth, well-being, sustainable development and environmental sustainability."

AI Principle #2: Human rights, democratic values, fairness, privacy
"AI actors should respect the rule of law, human rights, democratic and human-centred values throughout the AI system lifecycle. These include non-discrimination and equality, freedom, dignity, autonomy of individuals, privacy and data protection, diversity, fairness, social justice, and internationally recognised labour rights. This also includes addressing misinformation and disinformation amplified by AI, while respecting freedom of expression and other rights and freedoms protected by applicable international law.

"To this end, AI actors should implement mechanisms and safeguards, such as capacity for human agency and oversight, including to address risks arising from uses outside of intended purpose, intentional misuse, or unintentional misuse in a manner appropriate to the context and consistent with the state of the art."

AI Principle #3: Transparency and explainability
"AI Actors should commit to transparency and responsible disclosure regarding AI systems. To this end, they should provide meaningful information, appropriate to the context, and consistent with the state of art: to foster a general understanding of AI systems, including their capabilities and limitations, to make stakeholders aware of their interactions with AI systems, including in the workplace, where feasible and useful, to provide plain and easy-to-understand information on the sources of data/input, factors, processes and/or logic that led to the prediction, content, recommendation or decision, to enable those affected by an AI system to understand the output, and, to provide information that enable those adversely affected by an AI system to challenge its output."

AI Principle #4: Robustness, security, and safety
"AI systems should be robust, secure and safe throughout their entire lifecycle so that, in conditions of normal use, foreseeable use or misuse, or other adverse conditions, they function appropriately and do not pose unreasonable safety and/or security risks.

"Mechanisms should be in place, as appropriate, to ensure that if AI systems risk causing undue harm or exhibit undesired behaviour, they can be overridden, repaired, and/or decommissioned safely as needed.

"Mechanisms should also, where technically feasible, be in place to bolster information integrity while ensuring respect for freedom of expression."

AI Principle #5: Accountability
"AI actors should be accountable for the proper functioning of AI systems and for the respect of the above principles, based on their roles, the context, and consistent with the state of the art.

"To this end, AI actors should ensure traceability, including in relation to datasets, processes and decisions made during the AI system lifecycle, to enable analysis of the AI system's outputs and responses to inquiry, appropriate to the context and consistent with the state of the art.

"AI actors should, based on their roles, the context, and their ability to act, apply a systematic risk management approach to each phase of the AI system lifecycle on an ongoing basis and adopt responsible business conduct to address risks related to AI systems, including, as appropriate, via co-operation between different AI actors, suppliers of AI knowledge and AI resources, AI system users, and other stakeholders. Risks include those related to harmful bias, human rights including safety, security, and privacy, as well as labour and intellectual property rights."
(A minimal traceability sketch follows.)
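Principle #5's call for traceability "in relation to datasets, processes and decisions" can be grounded with even very simple tooling. Below is a minimal logging sketch (not from the notes); the field names, model version string, and JSONL destination are hypothetical choices.

```python
# Illustrative sketch: log one traceable record per model decision so that
# outputs can later be tied back to a model version and an input.
# Field names and the log destination are hypothetical.
import hashlib
import json
import time

def log_decision(model_version: str, features: dict, output,
                 logfile: str = "decisions.jsonl"):
    """Append a record of what went in, what came out, which model
    produced it, and when."""
    record = {
        "timestamp": time.time(),
        "model_version": model_version,
        # Hash the raw input so the record is linkable without storing PII.
        "input_hash": hashlib.sha256(
            json.dumps(features, sort_keys=True).encode()).hexdigest(),
        "output": output,
    }
    with open(logfile, "a") as f:
        f.write(json.dumps(record) + "\n")

# Hypothetical usage: one line per decision, queryable after the fact.
log_decision("credit-model-1.3", {"income": 52000, "tenure": 4}, "approved")
```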
Review:
- Inclusive growth, sustainable development, well-being
- Human rights, democratic values, fairness, privacy
- Transparency and explainability
- Robustness, security, safety
- Accountability

Blueprint for an AI Bill of Rights
Principles from the White House Office of Science and Technology Policy

In this lecture:
- What is the WH OSTP?
- Blueprint principles:
  - Safe and effective systems
  - Algorithmic discrimination protections
  - Data privacy
  - Notice and explanation
  - Human alternatives, consideration, and fallback

What is the WH OSTP?
- Office of Science and Technology Policy (OSTP)
- Mission:
  - Advise the U.S. President
  - Strengthen American science and technology
  - Work with the Executive and Legislative branches
  - Engage with external partners (e.g., academia, industry, local governments)
  - Ensure equity, inclusion, and integrity in all aspects of science and technology

Blueprint Principles
- Safe and effective systems
- Algorithmic discrimination protections
- Data privacy
- Notice and explanation
- Human alternatives, consideration, fallback

Safe and Effective Systems
- Diverse input
- Pre-deployment testing
- Continuous monitoring
- Independent evaluation and monitoring

Algorithmic Discrimination Protections (1)
- "Algorithmic discrimination occurs when automated systems contribute to unjustified different treatment or impacts."
- Systems designed and used equitably
- Proactive equity assessments
- Use of representative data

Algorithmic Discrimination Protections (2)
- Accessibility
- Plain language reporting
- Algorithmic impact assessment
- Publication of assessments

Data Privacy (1)
- Protection from abusive practices
- Data subject agency: collection, use, access, transfer, deletion
- Sensitive data:
  - Only used for necessary functions
  - Protected by ethical review
  - Use prohibitions

Data Privacy (2)
- Free from unchecked surveillance
- Surveillance technologies subject to increased oversight
- Pre-deployment assessments
- Privacy and civil liberties impact assessment

Notice and Explanation
- Know when an automated system is being used
- Understand system impact and outcomes
- Plain language documentation

Human Alternatives
- Alternatives, consideration, and fallback
- Automated decision-making (ADM) opt-out, where appropriate
- Escalation to human review option
- Systems within sensitive domains: meaningful oversight
- Reporting should include a description of human governance

Review: What is OSTP?
- Office of Science and Technology Policy (OSTP)
- Advises the U.S. President
- Strengthens American science and technology
Principles:
- Safe and effective systems
- Algorithmic discrimination protections
- Data privacy
- Notice and explanation
- Human alternatives, consideration, fallback

UNESCO Recommendation
Recommendation on the Ethics of Artificial Intelligence

In this lecture:
- What is UNESCO?
- Values
- Principles

What is UNESCO?
- United Nations Educational, Scientific, and Cultural Organization
- Promotes world peace and security through international cooperation in education, arts, sciences, and culture
- 194 member states
- Partners with NGOs, intergovernmental organizations, and the private sector

Values
- Respect, protection, promotion of human rights, fundamental freedoms, human dignity
- Environment and ecosystem flourishing
- Ensuring diversity and inclusiveness
- Living in peaceful, just, and interconnected societies

Principles (1)
- Proportionality and Do No Harm
- Safety and security
- Fairness and non-discrimination
- Sustainability
- Right to privacy and data protection
- Human oversight and determination

Principles (2)
- Transparency and explainability
- Responsibility and accountability
- Awareness and literacy
- Multi-stakeholder adaptive governance and collaboration

Review (1): What is UNESCO?
- United Nations Educational, Scientific, and Cultural Organization
- Promotes world peace and security through international cooperation in education, arts, sciences, and culture
Values:
- Respect, protection, promotion of human rights, fundamental freedoms, human dignity
- Environment and ecosystem flourishing
- Ensuring diversity and inclusiveness
- Living in peaceful, just, and interconnected societies

Review (2): Principles
- Proportionality and Do No Harm
- Safety and security
- Fairness and non-discrimination
- Sustainability
- Right to privacy and data protection
- Human oversight and determination
- Transparency and explainability
- Responsibility and accountability
- Awareness and literacy
- Multi-stakeholder adaptive governance and collaboration
Asilomar AI Principles
23 total principles across 3 categories

In this lecture:
- What is Asilomar?
- Asilomar AI principles

What is Asilomar?
- Refers to the Asilomar Conference Grounds, Pacific Grove, California
- Home to the 2017 Asilomar Conference on Beneficial AI
- Organized by the Future of Life Institute
- More than 100 thought leaders

Asilomar AI Principles
- 23 principles across 3 categories:
  - Research (5)
  - Ethics and values (13)
  - Longer-term issues (5)

Asilomar Principles (1)
- Safety
- Responsibility / accountability
- Human values (e.g., dignity, rights, freedoms, cultural diversity)
- Personal privacy
- Liberty and privacy
- Human control

Asilomar Principles (2)
- Value alignment
- Shared benefit: systems should empower as many people as possible
- Shared prosperity: benefit all of humanity

Asilomar Principles (3)
- Failure transparency: determine why a system caused harm
- Judicial transparency: involvement in judicial decision making should be explainable and auditable
- Non-subversion: power conferred by control of AI should respect and improve social and civic processes
- AI arms race: an arms race in lethal autonomous weapons should be avoided

Review (1): What is Asilomar?
- Asilomar Conference Grounds, Pacific Grove, CA
- Asilomar Conference on Beneficial AI (2017)
- Future of Life Institute
Asilomar Principles:
- 23 principles across 3 categories: Research (5), Ethics and values (13), Longer-term issues (5)

Review (2):
Principles (1):
- Safety
- Responsibility / accountability
- Human values
- Personal privacy
- Liberty and privacy
- Human control
Principles (2):
- Value alignment
- Shared benefit
- Shared prosperity

Review (3): Principles (3)
- Failure transparency
- Judicial transparency
- Non-subversion
- AI arms race

Ethically Aligned Design
IEEE Global Initiative on Ethics of Autonomous and Intelligent Systems

In this lecture:
- What is the IEEE?
- What is the IEEE Global Initiative?
- Objective
- General principles

What is the IEEE?
- The Institute of Electrical and Electronics Engineers
- Largest technical organization for engineering, electronic engineering
- Formed in 1963
- 420,000 members across 160 countries
What is the IEEE Global Initiative?
- Mission: "To ensure every stakeholder involved in the design and development of autonomous and intelligent systems is educated, trained, and empowered to prioritize ethical considerations so that these technologies are advanced for the benefit of humanity."
- Document title: Ethically Aligned Design, v2

Objective
- Articulate high-level ethical concerns that apply to autonomous and intelligent systems (A/IS)
- Prioritize benefits to humanity and the natural environment
- Mitigate risks and negative impacts

General Principles
- Human rights
- Well-being
- Accountability
- Transparency
- Awareness of misuse

Review (1): What is the IEEE?
- The Institute of Electrical and Electronics Engineers
- Largest technical organization for engineering, electronic engineering
What is the IEEE Global Initiative?
- Implement ethical considerations into autonomous and intelligent systems

Review (2): Objective
- Articulate high-level ethical concerns that apply to autonomous and intelligent systems (A/IS)
- Prioritize benefits to humanity and the natural environment
- Mitigate risks and negative impacts
General principles:
- Human rights
- Well-being
- Accountability
- Transparency
- Awareness of misuse

CNIL AI Action Plan
Four key objectives from the French Data Protection Authority

In this lecture:
- What is the CNIL?
- CNIL AI Action Plan

What is the CNIL?
- Commission nationale de l'informatique et des libertés
- French Data Protection Authority

CNIL AI Action Plan
Four key objectives:
- Understand AI and its impact on people
- Respect personal data
- Support innovation players in France and Europe
- Auditing and controls

Understand AI and its Impact
- Fairness and transparency
- Protect publicly available data (e.g., against scraping)
- Protect data transmitted by users (e.g., user inputs)
- Understand the consequences of AI on individual rights
- Protect against bias and discrimination
- Understand and mitigate security challenges

Respect Personal Data
- Guide General Data Protection Regulation (GDPR) compliance
- Publish policy on "enhanced" surveillance
- Guide on applicable data sharing and re-use
- Guidance on ethical use of AI systems

Support Players in France & Europe
- Provide sandbox and support to encourage innovation

Auditing and Controls
- Protect against fraud
- Investigate complaints
- Ensure an EU-coordinated approach to data processing
- Ensure AI providers' Data Protection Impact Assessment (DPIA) compliance

Review: What is the CNIL?
- Commission nationale de l'informatique et des libertés
- French Data Protection Authority
CNIL AI Action Plan:
- Understand AI and its impact on people
- Respect personal data
- Support innovation players in France and Europe
- Auditing and controls

Ethics Guidelines for Trustworthy AI
European Commission High-Level Expert Group on AI

In this lecture:
- What is the European Commission?
- Ethics Guidelines for Trustworthy AI
- Guidelines' 7 requirements

What is the European Commission?
- European Union's (EU) politically independent executive arm
- Proposes new legislation
- 27 "commissioners"
- Headed by a President

Ethics Guidelines for Trustworthy AI
Trustworthy AI should be:
- Lawful: respect all applicable laws and regulations
- Ethical: respect principles and values
- Robust: from a technical perspective, with consideration for the social environment

Guidelines' 7 Requirements
- Human agency and oversight
- Technical robustness and safety
- Privacy and data governance
- Transparency
- Diversity, non-discrimination, and fairness
- Societal and environmental well-being
- Accountability

Review (1): European Commission
- European Union's (EU) politically independent executive arm
- Proposes new legislation
Trustworthy AI:
- Lawful
- Ethical
- Robust

Review (2): Guidelines' 7 requirements
- Human agency and oversight
- Technical robustness and safety
- Privacy and data governance
- Transparency
- Diversity, non-discrimination, and fairness
- Societal and environmental well-being
- Accountability

Creating a Culture of Ethical AI
Key principles, foundational controls, and roles and responsibilities

In this lecture:
- Key principles
- Foundational controls
- Creating a culture of ethical AI

Key Principles
- Lawfulness
- People and planet
- Protection from bias and discrimination
- Choice over personal data
- Appropriate human intervention
- Accountability

Foundational Controls
- Develop organization-specific principles
- Build a cross-functional, demographically diverse oversight body
- Assess and implement appropriate policies and procedures to identify and mitigate risks (e.g., disparate impact, privacy, cybersecurity, data governance)

Creating a Culture of Ethical AI
Roles and positions:
- Legal and compliance
- Equitable design
- Transparency and explainability
- Privacy and cybersecurity
- Data governance

Legal and Compliance
- Relevant policies and procedures
- Legal review of AI
- Mitigate bias

Equitable Design
- Cross-functional, demographically diverse teams
- Throughout the AI lifecycle: plan, design, develop, deploy/implement
- Evaluate products and processes

Transparency and Explainability
- AI products require appropriate labeling and notice
- AI decisions should be explainable to consumers

Cybersecurity and Privacy
- Use of PII to develop or train AI should be disclosed in privacy notices
- Consent must comply with applicable regulations
- Consumers should be able to access and delete PII as applicable
- Data minimization (a small sketch follows this slide)
- AI should mitigate risks associated with unauthorized access and exfiltration
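As a concrete illustration of the data minimization item above, here is a minimal sketch (not from the notes): the column names and the "needed" set are hypothetical; the point is simply to keep only purpose-relevant fields before storage or training.

```python
# Illustrative sketch: apply data minimization by selecting only the
# fields the stated purpose requires. Column names are hypothetical.
import pandas as pd

raw = pd.DataFrame({
    "name":      ["Ana", "Ben"],        # PII, not needed for the model
    "email":     ["a@x.io", "b@y.io"],  # PII, not needed for the model
    "age":       [34, 29],
    "purchases": [12, 3],
})

NEEDED = ["age", "purchases"]   # only what the specified purpose requires
minimal = raw[NEEDED].copy()    # drop the rest before it is ever stored
print(minimal)
```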
Data Governance
- Data governance: overall management of data availability, usability, and integrity throughout the lifecycle across an organization
- Ensure data quality and integrity

Review (1): Key principles
- Lawfulness
- People and planet
- Protection from bias, discrimination
- Choice over personal data
- Appropriate human intervention
- Accountability
Foundational controls:
- Organization-specific principles
- Cross-functional, demographically diverse oversight body
- Implement policies, procedures to identify, mitigate risks

Review (2): Creating a Culture of Ethical AI
- Legal and compliance
- Equitable design
- Transparency and explainability
- Privacy and cybersecurity
- Data governance

Harms
To individuals and groups

In this lecture:
- Those at risk
- Harms to individuals: general, specific, privacy, economic
- Harms to groups

Those at Risk
- Individuals
- Groups
- Society
- Companies / institutions
- Ecosystems

Harms to Individuals (General)
- Civil rights
- Economic opportunity
- Safety

Harms to Individuals (Specific)
- Employment and hiring (e.g., Amazon resume algorithm)
- Insurance and social benefits
- Housing (e.g., tenant selection)
- Education (e.g., enrollment, recruitment)
- Credit (e.g., financial lending, differential pricing)

Harms to Individuals (Privacy)
- Aggregation: combining de-identified data can re-identify it (see the sketch below)
- Incorporation into training data
- Inference: deriving logical conclusions from existing data
- Secondary use: data collected for one purpose, used for another
- Lack of transparency: inputs may be used to retrain the model
- Inaccuracy: hallucinations
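The aggregation harm above is worth seeing concretely. Below is a minimal sketch (not from the notes) of a classic linkage attack: joining a "de-identified" dataset to a public one on shared quasi-identifiers re-attaches names. All data and column names here are made up.

```python
# Illustrative sketch: re-identification by joining on quasi-identifiers
# (ZIP code, birth year, sex). All records are fabricated.
import pandas as pd

# A "de-identified" health dataset: names removed, quasi-identifiers kept.
health = pd.DataFrame({
    "zip":        ["02139", "02139", "94105"],
    "birth_year": [1980, 1975, 1990],
    "sex":        ["F", "M", "F"],
    "diagnosis":  ["diabetes", "asthma", "hypertension"],
})

# A public dataset (e.g., a voter roll) with names and the same attributes.
voters = pd.DataFrame({
    "name":       ["J. Smith", "R. Jones"],
    "zip":        ["02139", "94105"],
    "birth_year": [1980, 1990],
    "sex":        ["F", "F"],
})

# Joining on the shared quasi-identifiers re-attaches names to diagnoses.
reidentified = health.merge(voters, on=["zip", "birth_year", "sex"])
print(reidentified[["name", "diagnosis"]])
```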
Harms to Individuals (Economic)
- Job displacement (e.g., a job is automated)
- AI bias discriminating against certain groups
- Opportunities may fail to reach certain groups

Group Harms (1)
- Facial recognition:
  - Algorithms unreliable
  - High false positive rate on people of color
  - London police system demonstrated 81% inaccuracy
- Mass surveillance:
  - Protected groups may receive less privacy

Group Harms (2)
- Civil rights:
  - Right to protest, freedom of assembly, profiling
  - Identify protestors
- Deepening of racial and socio-economic divides
- Increased discrimination
- Distrust

Review (1): Those at risk
- Individuals
- Groups
- Society
- Companies / institutions
- Ecosystems
Individuals: General
- Civil rights
- Economic opportunity
- Safety

Review (2): Individuals: Specific
- Employment and hiring
- Insurance and social benefits
- Housing
- Education
- Credit
Individuals: Privacy
- Aggregation
- Incorporation into training data
- Inference
- Secondary use
- Lack of transparency
- Inaccuracy

Review (3): Individuals: Economic
- Job displacement
- AI bias discriminating against certain groups
- Opportunities may fail to reach certain groups
Groups:
- Facial recognition
- Mass surveillance
- Civil rights
- Deepening racial divides

Harms
To society

In this lecture:
- Harms to society

Harms to Society (1)
- Democratic process
- Trust in institutions (e.g., hallucinations and deepfakes make it difficult to tell fact from fiction)
- Access to public services (e.g., education, health care)
- Employment (e.g., job redistribution)

Harms to Society (2)
- Disinformation: deliberately deceptive or misleading information; used to intentionally confuse (think "d" = deliberate)
- Misinformation: incorrect or misleading information; includes inaccurate or incomplete information; the intent may not be to cause harm (think "mis" = mistake)

Harms to Society (3)
- Deepfakes: synthetic content intentionally manipulated or generated to cause harm or spread disinformation
- Hallucinations: generative AI creates contradictory or factually inaccurate content
- Echo chambers: individuals exposed only to ideologically similar content; lack exposure to differing viewpoints
- Safety: lethal autonomous weapons; AI systems without oversight could lead to accidents

Profiling
- Tracking, predictive analytics
- Aggregate behavior and data over multiple apps and websites
- Create a profile of behavior, preferences, habits
- ML infers predictions
- Target advertisements, deepfakes, mis/disinformation (e.g., Cambridge Analytica scandal)
- Data may carry over to different users of the same device

Cambridge Analytica
- App collected data on 87 million Facebook users
- Created profiles
- Targeted political advertising

Review: Harms to society
- Democratic process
- Trust in institutions
- Access to public services
- Employment
- Misinformation
- Disinformation
- Deepfakes
- Hallucinations
- Echo chambers
- Safety
- Profiling

Harms
To companies / institutions and ecosystems

In this lecture:
- Harms to companies
- Harms to ecosystems

Harms to Companies / Institutions
- Reputational
- Cultural
- Economic
- Legal/regulatory
- Acceleration risks

Reputational
- Loss of customer trust, revenue
- Brand impact
- Share price drop
- Company becomes the target of a social media campaign

Cultural
- AI exceptionalism: the assumption that computer systems are infallible and better than humans
- Employees less likely to challenge outputs
- How to avoid:
  - Evaluate the system using multiple factors: system output; human interpretation and analysis of the output
  - Consider secondary, unintended outputs

Economic
- Litigation
- Regulatory fines
- Class action suits

Legal / Regulatory
- Existing laws and regulations may apply (e.g., privacy, trade, tax)
- Sanctions
- Fines
- Injunctions

Acceleration Risks
- "Move fast and break things"
- Unable to foresee potential problems
- Factors: volume of data, speed of processing, complexity of AI

Harms to Ecosystems (General)
- Natural resource depletion
- Environment
- Supply chains

Harms to Ecosystems (Specific): How AI Can Help the Environment
- Autonomous vehicles
- Higher agricultural yields
- Weather forecasting; satellites identify wildfires, droughts, flooding, etc.

Review: Harms to companies
- Reputational
- Cultural
- Economic
- Legal / regulatory
- Acceleration risks
Harms to ecosystems:
- Natural resources
- Environment
- Supply chains

Types of Bias
There are many different biases that can affect AI planning, design, development, and deployment.

In this lecture:
- What is a bias?
- Types of bias

What is bias?
- A preference or inclination that inhibits impartiality
- An unfair act or policy stemming from prejudice
- Can impact outcomes and create risks to individual rights and liberties

Types of Bias (1)
- Algorithmic: systematic, repeatable errors; create "unfair" outcomes, such as privileging one group over another
- Computational: systematic error or deviation from the true value of a prediction
- Cognitive: inaccurate judgment, distorted thinking
- Societal: systemic prejudice, favoritism, or discrimination in favor of one group (or against another)

Types of Bias (2)
- Implicit: unconscious association, belief, or attitude toward a social group that can affect behavior; stereotyping
- Sampling: when a data sample does not represent the statistical diversity of the population
- Temporal: when a model does not work consistently as expected over time (e.g., performance degrades as data drifts from the training period)

Types of Bias (3)
- Overfitting: model works well with training data, but not unseen data
- Underfitting: model fails to capture the complexity of the training data due to too few parameters or an insufficient set of features in the training data
- Edge cases/outliers: data that falls outside the boundaries of the training data
- Noise: data that negatively impacts the ML model
(A short sketch contrasting overfitting and underfitting follows.)
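Overfitting and underfitting, as defined above, show up clearly in the gap between training and test error. A minimal sketch (not from the notes), assuming scikit-learn and a synthetic sine-wave dataset:

```python
# Illustrative sketch: polynomial models of increasing degree fit to noisy
# data. Degree 1 underfits (both errors high); degree 15 overfits (train
# error near zero, test error much larger); degree 4 balances the two.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_squared_error
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import PolynomialFeatures

rng = np.random.default_rng(0)
X = np.sort(rng.uniform(0, 1, 40)).reshape(-1, 1)
y = np.sin(2 * np.pi * X).ravel() + rng.normal(0, 0.2, 40)
X_tr, y_tr = X[::2], y[::2]          # every other point for training
X_te, y_te = X[1::2], y[1::2]        # the rest held out for testing

for degree in (1, 4, 15):
    model = make_pipeline(PolynomialFeatures(degree), LinearRegression())
    model.fit(X_tr, y_tr)
    err_tr = mean_squared_error(y_tr, model.predict(X_tr))
    err_te = mean_squared_error(y_te, model.predict(X_te))
    print(f"degree={degree:>2}  train MSE={err_tr:.3f}  test MSE={err_te:.3f}")
```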
Review (1): What is a bias?
- A preference or inclination that inhibits impartiality
- An unfair act or policy stemming from prejudice
Types of bias (1):
- Algorithmic
- Computational
- Cognitive
- Societal
- Implicit
- Sampling
- Temporal

Review (2): Types of bias (2)
- Overfitting
- Underfitting
- Edge / outlier
- Noise

Trustworthy AI
Human-centric, accountable, transparent, and operates legally and fairly

In this lecture:
- AI opportunities
- Importance of trustworthy AI
- What is trustworthy AI?
- Four main characteristics: human-centric; accountable; transparent; acts in a legal, fair manner
- How to operationalize

AI Opportunities
- Can be faster and more accurate
- Process vast amounts of data
- Support medical assessments, legal reviews
- Automate and accelerate repetitive tasks

Importance of Trustworthy AI
- Suspicion when technology replaces humans
- Increased potential for cybersecurity incidents
- Potential embedded bias, privacy concerns

What is Trustworthy AI?
- Often interchangeable with "responsible AI" and "ethical AI"
- Principle-based AI governance
- Principles: accountability, explainability, non-discrimination, privacy, safety, security, transparency

Four Main Characteristics
- Human-centric
- Accountable
- Transparent
- Acts in a legal, fair manner

Human-centric
- Amplifies human agency
- Positive impact on the human condition

Accountable
- Obligation and responsibility of creators, operators, and regulators
- Ensure AI is compliant, fair, safe, secure, resilient, reliable, and valid
- Ensure outputs are traceable to a responsible entity

Transparent
- Extent to which information is available about how the AI works
- Includes openness, comprehensiveness, accountability
- Explainable: sufficient information about how an output/decision is reached (a sketch follows the final review)
- Understandable by the target audience

Operates as Expected
- In a legal, fair manner

How to Operationalize
- Get leadership buy-in
- Create principles for the organization
- Understand AI's role and business purpose
- Embed trustworthy AI into the RMF
- Develop technical standards, playbooks, and guidelines
- Update organizational structures with roles and responsibilities
- Ensure human oversight (e.g., monitoring)

Review (1): AI opportunities
- Can be faster, more accurate
- Process vast amounts of data
- Support medical assessments, legal reviews
- Automate, accelerate repetitive tasks
Importance of trustworthy AI:
- Suspicion when technology replaces humans
- Increased potential for cybersecurity incidents
- Potential embedded bias, privacy concerns

Review (2): What is trustworthy AI?
- Human-centric
- Accountable
- Transparent
- Acts in legal, fair manner
How to operationalize:
- Leadership buy-in
- Create principles
- Understand AI's role, business purpose
- Embed trustworthy AI into RMF
- Develop technical standards, playbooks, guidelines
- Update organizational structures with roles, responsibilities
- Ensure human oversight
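Finally, the "explainable" characteristic above can be approached with simple model-agnostic tools. A minimal sketch (not from the notes) using permutation feature importance; the dataset and model are arbitrary choices:

```python
# Illustrative sketch: permutation feature importance, one simple way to
# give "sufficient information about how an output is reached": shuffle
# each feature and see how much held-out accuracy drops.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

data = load_breast_cancer()
X_tr, X_te, y_tr, y_te = train_test_split(data.data, data.target,
                                          random_state=0)

model = RandomForestClassifier(random_state=0).fit(X_tr, y_tr)
result = permutation_importance(model, X_te, y_te,
                                n_repeats=10, random_state=0)

# Report the features whose shuffling hurts accuracy the most.
ranked = result.importances_mean.argsort()[::-1]
for i in ranked[:5]:
    print(f"{data.feature_names[i]:<25} {result.importances_mean[i]:.3f}")
```

Note this explains model behavior globally; per-decision explanations for consumers typically need additional, instance-level methods.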