Lesson 11: Quest for Social Justice - AI & Human Values, Fall 2024, ETH Zurich
Document Details
ETH Zurich
2024
Dr. Kebene Wodajo
Summary
This document explores the interplay between social justice and artificial intelligence, including concerns about biased decision-making in public services, criminal justice, and predictive policing. It also touches on the design and implementation of AI systems and highlights the need to consider human biases and systemic inequalities. The presentation, delivered on 28 November 2024 at ETH Zurich, surveys these social justice issues and discusses approaches to addressing them.
Full Transcript
Lesson 11: Quest for Social Justice
AI & Human Values, Fall 2024, ETH Zurich
Dr. Kebene Wodajo, 28 Nov. 2024

Animating questions
- How is social justice of concern in the age of AI?
- What is the interplay between substantive and procedural justice, and how might it be engaged in the real world, technically and socially?
- What are the mainstream ways in which issues of social justice and AI are understood? What do people do about them? What's missing?

What social justice issues are of concern in the age of AI?
Criminal justice, socio-economic justice, racial justice, labour justice, environmental justice, historical justice, etc.

Examples: Biased decision-making in public services
Economic mobility and access to key public services.
Case example: the Dutch SyRI case. SyRI (System Risk Indication) is a risk-profiling method that was employed by the Dutch government to detect individual risks of welfare, tax, and other types of fraud. The system offered immense informational power that could be deployed by the Dutch state against citizens, by bringing together databases of different executive bodies and effectively building dossiers on citizens. SyRI is believed to have been used to unfairly target people as likely to commit fraud based on their place of living or socio-economic background; it was primarily deployed in low-income neighbourhoods. The court ruled that the SyRI legislation is unlawful because it does not comply with the right to privacy and equality under the European Convention on Human Rights.
See Digital Freedom Fund (DFF), 2020, https://digitalfreedomfund.org/the-syri-welfare-fraud-risk-scoring-algorithm/

Examples: Criminal justice system, predictive policing
"In Italy, a predictive system used by police called Delia includes ethnicity data to profile and 'predict' people's future criminality. Other systems seek to 'predict' where crime will be committed, repeatedly targeting areas with high populations of racialised people or more deprived communities."
"In the Netherlands, the 'Top 600' list attempts to 'predict' which young people will commit certain crimes. One in three of the 'Top 600' – many of whom have reported being followed and harassed by police – are of [a certain ethnic & national] descent."
Report by Fair Trials, https://www.fairtrials.org/campaigns/ai-algorithms-data/
See also Public Sector Tech Watch, https://joinup.ec.europa.eu/collection/public-sector-tech-watch/cases-viewer-statistics

How is inequity engineered through & into AI systems? What are the sources of bias, for example?

Types of AI bias
Bias can be introduced purposefully or inadvertently into an AI system, or it can emerge as the AI is used in an application:
- Statistical and computational biases
- Systemic biases
- Human biases
(NIST Special Publication 1270, 2022)

Theorising social justice in the context of AI systems
Principles of justice:
- First principle of justice: basic liberties (equality, freedom of thought…)
- Second principle of justice: social and economic inequalities are to satisfy fair equality of opportunity and the difference principle
Alongside these: substantive justice (fairness of the outcomes) and procedural justice (fairness of the processes).
"we don't just want better funded schools…. We also demand the power to shape the programs and institutions in our communities" (Benjamin, 2019, p. 168) – this requires us to consider not only the ends but also the means.
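To connect the substantive/procedural distinction to technical practice, here is a minimal illustrative sketch (not from the lecture; the data, threshold, and bias term are all invented) of how outcome-oriented fairness is commonly quantified, using two standard group metrics, demographic parity difference and equal opportunity difference, on toy predictions:

```python
# Illustrative sketch only: toy data, invented numbers.
# Two common outcome-oriented ("substantive") fairness metrics.
import numpy as np

rng = np.random.default_rng(0)
n = 10_000
group = rng.integers(0, 2, n)            # 0 = group A, 1 = group B
y_true = rng.binomial(1, 0.3, n)         # true label (e.g., "repays loan")
# A biased scorer: systematically lower scores for group B.
score = y_true * 0.6 + rng.normal(0, 0.2, n) - 0.1 * group
y_pred = (score > 0.25).astype(int)      # positive decision (e.g., approval)

def demographic_parity_diff(y_pred, group):
    """Difference in positive-decision rates between the two groups."""
    return y_pred[group == 0].mean() - y_pred[group == 1].mean()

def equal_opportunity_diff(y_true, y_pred, group):
    """Difference in true-positive rates between the two groups."""
    tpr = lambda g: y_pred[(group == g) & (y_true == 1)].mean()
    return tpr(0) - tpr(1)

print(f"Demographic parity difference: {demographic_parity_diff(y_pred, group):+.3f}")
print(f"Equal opportunity difference:  {equal_opportunity_diff(y_true, y_pred, group):+.3f}")
```

A score like this can surface disparate outcomes, but it says nothing about who designed the system or whether affected communities had any say in it, which is the procedural dimension the Benjamin quote points to.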
Theorising social justice…
These are matters of distributive justice: the disciplinary power of institutions and how this is mediated by AI systems. Distributive justice spells out how major institutions ought to allocate opportunities and resources. The claim here is not only that the basic structure contains social and technical elements, but also that these elements interact dynamically to constitute new forms of stable institutional practice and behaviour.

Examples
- Workplace monitoring through AI systems: employers must inform employees and their representatives of the use of high-risk AI systems (HRAIS) that have an impact on the employment relationship (EU context).
- Environmental justice

How are these social justice issues being addressed?
Approaches to addressing social justice issues:
- Legislation & regulatory instruments (soft & hard laws)
- Adjudication: lawsuits
- Technical fixes
- "Design justice"

Legislation & regulatory instruments (often rights-based): AI Act, DSA, DMA, GDPR, PLD, AILD, UNGPs, etc.; mandatory human rights due diligence (mHRDD); the UN B-Tech project and the UNGPs.

Adjudication: lawsuits based on legislation, e.g., the GDPR, the DSA, and potentially the PLD or AILD. See Hacker, The European AI liability directives (2023).

What's missing or left unsaid, and why should we care?
The peculiarity of systemic and structural injustices: they are produced collectively by multiple actors over a period of time, and they are built into the system that created intersectional inequalities (NIST Special Publication 1270, 2022).
"…we will have to change the people…. To change those people, we will have to change the culture in which they – and we – live. To change that culture, we'll have to work … towards a radical rethinking of the way we live …rethinking will eventually need to involve all of us" (Benjamin, 2019, p. 182)

Breakout Groups
Brainstorm: how would you "design justice"? What alternatives do you see, or are on the horizon? What other ways are being imagined?
- Counterdata/coding: rewriting and controlling narratives with data (Chicago Stories, WTTW).
- Community-based movements: "Data-based technologies are changing our lives, and the systems our communities currently rely on are being revamped. These data systems do and will continue to have a profound impact on our ability to thrive. To confront this change, we must first understand how we are both hurt and helped by data-based technologies. This work is important because our data is our stories. When our data is manipulated, distorted, stolen, or misused, our communities are stifled, and our ability to prosper decreases." (Our Data Bodies)
- Indigenous Data Sovereignty (IDSov): the right of Indigenous Peoples, communities, and Nations to "govern the collection, ownership, and application" of datasets created with or about Indigenous communities, Indigenous Lands, and the community's non-human relations.
- Afro-isms

Key takeaways
- Social justice concerns in the context of AI systems take different forms: they are not static but emergent.
- The multiple sources of inequity (e.g., bias) in AI systems – technical, institutional/historical, and human – continuously feed into each other in a perpetual feedback loop (a toy simulation of this dynamic follows the takeaways below).
- The moral properties of AI systems are not necessarily internal to the models (i.e., the technical); they are rather a product of the social systems within which the models are designed, developed, regulated, and deployed.
- Mainstream approaches to addressing social justice sometimes overlook systemic and structural injustice.
- Imagining beyond: an abolitionist ethos – through redesigning/codesigning for justice, counter-coding/data, rewriting, community-based movements, and reasserting/reclaiming agency (e.g., through data sovereignty, self-determination, etc.).
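To make the feedback-loop takeaway concrete, the sketch below is a toy simulation (not from the lecture; every parameter is invented and no real system is modelled) of the dynamic often attributed to place-based predictive policing: patrols are sent where incidents were recorded, incidents are only recorded where patrols are present, so a small initial skew in the records compounds over time even though the underlying rates are identical:

```python
# Toy simulation of a predictive-policing feedback loop.
# All numbers are invented; this does not model any real deployment.
import numpy as np

rng = np.random.default_rng(1)
true_rate = np.array([10.0, 10.0])    # two areas, identical underlying incident rates
records = np.array([12.0, 8.0])       # historical records start slightly skewed

for step in range(20):
    # "Predictive" dispatch: patrol share grows superlinearly with
    # recorded incidents, as hotspot prioritisation tends to do.
    weights = records ** 2
    patrol_share = weights / weights.sum()
    # Incidents are only recorded where patrols are present.
    observed = rng.poisson(true_rate * patrol_share * len(records))
    records = 0.8 * records + observed   # rolling record of incidents

print("Final patrol shares:", np.round(patrol_share, 2))
# Despite equal true rates, the initially over-recorded area absorbs
# nearly all patrols: the recorded "crime" confirms the prediction.
```

The same structure, predictions shaping the data that future predictions rely on, is part of what makes systems like those criticised in the Fair Trials report difficult to audit from their outputs alone.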