Ethics, Law, and AI
ETHICS, LAW AND AI - Larese
MODULE ON THE ECONOMIC AND POLITICAL IMPACTS OF AI

INTRODUCTION
In the book "Automation and Utopia," John Danaher advances the idea that human obsolescence is imminent: humans will be replaced by something newer, better, or more fashionable. "Obsolescence" does not imply nonexistence or death, but rather no longer being useful or used. The idea that humans might become unnecessary seems strange, especially since we live in a time when we have more control over the planet than ever before. Our technology has allowed us to shape the Earth to suit our needs. However, the same technology is now making humans less important, as automation is growing quickly, leading us into a new age in which machines might take over.

THE AGRICULTURAL REVOLUTION
Until approximately 10,000 years ago, most humans lived in small, nomadic hunter-gatherer tribes. That all changed with agriculture, which enabled a boom in population. Complex, sedentary societies emerged, with large government officialdoms, laws, and institutions. Until quite recently, many economies in the Western world were, in effect, agricultural in nature, with the majority of the population employed in tilling fields, harvesting crops, and tending to livestock. All that began to change a little over two hundred years ago:
- In Western European countries, 30–70% of the population was employed in agriculture in 1800. By 2012, the figures had declined to below 5%.
- In the United States, approximately 40% of the population was employed in agriculture as recently as 1900. By 2000, the figure had declined to 2%.
Agricultural productivity nevertheless increased throughout this period, thanks to technology and the rise of machine labor, on which farmers could rely to do the work instead of requiring armies of humans.
It is important to note, however, that humans have not been completely replaced and that certain tasks (such as fruit picking) have been resistant to automation. Nevertheless, companies such as Abundant Robotics and FFRobotics are trying to satisfy American fruit growers' demand to automate this process as well (early trials of Abundant's apple-picking robots have been impressive and have led companies such as Google to invest in the technology's future).

THE INDUSTRIAL REVOLUTION
Starting in the United Kingdom around 1750 and spreading across the rest of the Western world, the Industrial Revolution was the process by which our predominantly agricultural economies were displaced by predominantly industrial ones. The Industrial Revolution has always been premised on human obsolescence. It brought with it the first major wave of automating technologies: skilled human labor was replaced by the relentless, and sometimes brutal, efficiency of the machine. Since then, the automation of manufacturing has been normalized and extended. The assembly line of a modern factory is the paradigm of automation.

RESHORING AND THE CASE OF THE US TEXTILE INDUSTRY
It is fair to mention, however, that at least within the manufacturing sector in the United States and other developed countries, the introduction of labor-saving innovations is having a mixed impact on employment.
1. The US textile industry was decimated in the 1990s as production moved to low-wage countries, especially China, India, and Mexico. About 1.2 million jobs vanished between 1990 and 2012.
2. The last few years, however, have seen a dramatic rebound in production. Between 2009 and 2012, US textile and apparel exports rose by 37% to a total of nearly $23 billion.
This case is an example of a now significant "reshoring" trend under way (i.e., the practice of transferring a business operation that was moved overseas back to the country from which it was originally relocated).
A CAVEAT
Automation technology is improving so much that machines can do the work of even the cheapest workers in other countries. At the same time, hiring workers overseas is becoming more expensive, and new political factors are influencing decisions. While robots can replace some simple jobs, they also help US factories compete with low-wage countries. However, there is a downside to bringing factories back to the US: the new jobs created by reshoring might not last long. As robots and new technologies keep getting better, many factories could become almost fully automated, which would reduce the need for human workers even more.

THE LAST RESISTANCE?
As humans are replaced by machines in industries like manufacturing and farming in Western countries, many jobs have shifted to the service sector. This includes jobs like hairdressing, food preparation, customer support, and managing client relationships. These jobs involve physical skills and emotional intelligence, which have traditionally been difficult to automate. Some people believe that such jobs offer hope because they rely on skills that machines cannot easily replicate. However, even the service sector is starting to lose jobs to automation. For example, machines like ATMs and self-checkout systems have already replaced some service jobs, and in the next decade many more service jobs are expected to be automated.

THE CASE OF ONLINE RETAILERS
Let's take, as a case study, the retail sector (i.e., all companies that sell goods and services to consumers). Three major forces will shape employment in the retail sector going forward. The first will be the continuing disruption of the industry by online retailers like Amazon, eBay, and Netflix. The competitive advantage that online suppliers have over brick-and-mortar stores is already evident in the demise of major retail chains like Blockbuster.
Both Amazon and eBay are providing same-day delivery in a number of US and European cities, with the objective of undermining one of the last major advantages that local retail stores still enjoy: the ability to provide immediate gratification after a purchase. In theory, the encroachment of online retailers should not necessarily destroy jobs, but rather transition them from traditional retail settings to the warehouses and distribution centers used by the online companies. In reality, however, once jobs move to a warehouse they become far easier to automate, given the enormous progress in warehouse robotics.

THE CASE OF SELF-SERVICE
The second transformative force is likely to be the explosive growth of the fully automated self-service retail sector (i.e., intelligent vending machines and kiosks), which makes it possible to reduce the costs of real estate, labor, and theft by customers and employees. In addition to providing 24-hour service, many of the machines include video screens and can offer targeted point-of-sale advertising similar to what a human sales clerk might provide. They also offer many of the advantages of online ordering, with the added benefit of instant delivery.

THE CASE OF ROBOTICS
The third major force likely to disrupt employment in the retail sector will be the introduction of increased automation and robotics into stores, which would allow brick-and-mortar retailers to remain competitive. The same innovations that are enabling manufacturing robots to advance the frontier in areas like physical dexterity and visual recognition will eventually allow retail automation to move from warehouses into more challenging and varied environments, such as stocking shelves in stores.

THE CASE OF MEDICAL DIAGNOSIS
The automation of diagnosis is perhaps the best example.
Sebastian Thrun, the founder of Google X, wants to create a future "medical panopticon" in which we are constantly under the diagnostic scrutiny of machine-learning algorithms that can detect cancers faster, earlier, and more accurately. Our cell phones would analyze shifting speech patterns to diagnose Alzheimer's. A steering wheel would pick up incipient Parkinson's through small hesitations and tremors. A bathtub would perform sequential scans as you bathe, via harmless ultrasound or magnetic resonance, to determine whether there is a new mass in an ovary that requires investigation. Big Data would watch, record, and evaluate you: we would shuttle from the grasp of one algorithm to the next. There is little room for human diagnosticians in this picture.

THE CASE OF MEDICAL CARE
The provision of care in healthcare is evolving due to automation. While care has traditionally been seen as resistant to automation, the growing elderly population and the decreasing number of young caregivers have shifted this view. As a result, significant resources are now being invested in developing carebots to address this care gap. Carebots are already widely used in Japan, where some people prefer them to human caregivers, and they are being tested in Europe, particularly for patients with dementia and early-onset Alzheimer's.

UTOPIAS
Cyborg utopia: humans will integrate with technology, becoming cyborgs. This would have undoubted advantages: it would allow us not only to preserve and extend what we value in the world, but also to overcome the physical limitations that prevent us from thriving and fulfilling ourselves.
Virtual utopia: humans will retreat to virtual worlds that are created and sustained by the technological infrastructure we have built. At first glance this seems tantamount to giving up, but there are compelling philosophical and practical reasons for favoring this approach.
DYSTOPIAS
Premise 1: The trajectory of artificial intelligence reaches systems with a human level of intelligence. These systems would themselves be able to develop AI systems that surpass the human level of intelligence: superintelligent systems, out of human control and hard to predict.
Premise 2: Superintelligence does not imply benevolence (contrary to Kantian traditions in ethics, which have argued that higher levels of rationality or intelligence go along with a better understanding of what is moral and a better ability to act morally). Rationality and morality are entirely independent dimensions. This is sometimes explicitly argued for as an "orthogonality thesis".
Conclusion: Superintelligent systems may well have preferences that conflict with the existence of humans on Earth and may thus decide to end that existence. Given their superior intelligence, they will have the power to do so (or they may end it incidentally, because they would not really care).

ECONOMIC AND PHILOSOPHICAL PRELIMINARIES
AlphaGo is a program, developed by DeepMind, that uses deep neural networks combined with human training data to play the board game Go (a complex, open-ended game that is significantly more challenging to model than games like chess). In 2016 AlphaGo defeated the Go champion Lee Sedol, an event seen as an AI landmark. The match against Sedol was notable not just for the result, but also for the unusual moves AlphaGo made during its gameplay. A reporter from Wired wrote that it showed machines are now capable of "moments of genius [...] in Game Two, the Google machine made a move that no human ever would." Sedol described the move by saying: "Yesterday, I was surprised. But today I am speechless." In 2017, DeepMind unveiled AlphaZero, the successor to AlphaGo.
AlphaZero extended this approach through "pure reinforcement learning," dropping even high-level human instruction and simply playing against itself, with the positions on the board as inputs. The researchers from DeepMind characterized AlphaZero's performance as "superhuman," purporting to "master" the game "without human knowledge". Later, DeepMind CEO Demis Hassabis compared the system's performance to a chess-playing alien, or "chess from another dimension". The case study shows that an AI system is described using aesthetic categories: beauty, mystery, surprise, and virtuosic genius, i.e., in terms of the sublime. Leaders in the field, such as the computer scientist Stuart J. Russell, have started describing deep learning models as magical: "We are just beginning now to get some theoretical understanding of when and why the deep learning hypothesis is correct, but to a large extent, it's still a kind of magic, because it really didn't have to happen that way. There seems to be a property of images in the real world, and there is some property of sound and speech signals in the real world, such that when you connect that kind of data to a deep network it will – for some reason – be relatively easy to learn a good predictor. But why this happens is still anyone's guess."

THE CASE OF DETECTING SEXUAL ORIENTATION FROM FACIAL IMAGES
In 2018 Y. Wang and M. Kosinski of Stanford University published an article titled "Deep Neural Networks Are More Accurate Than Humans at Detecting Sexual Orientation From Facial Images". The study followed a familiar pattern for applying deep learning techniques in social settings:
i. The researchers used Face++ to extract faces from images originally posted on an unnamed US dating website. This website included self-identified data on sexual orientation that could be used for validation.
ii. Human workers on Amazon Mechanical Turk cleaned the face data, verifying gender, race (the study only looked at Caucasian faces), and a few other parameters. Gender was treated as a binary category: men or women.
iii. The researchers extracted facial features from the set of cleaned images using a deep neural network called VGG-Face, which translates each facial image into 4,096 scores.
iv. The authors used a model to classify each face's sexual orientation based on the VGG-Face scores.
The deep neural networks could correctly identify sexuality from a facial image 81% of the time for men and 71% of the time for women, accuracy rates higher than those of the human judges, who scored 61% for male and 54% for female images. Despite the seemingly clear-cut percentages, the social implications of these results are not easily interpretable. Even the authors find it difficult to explain why their model produced these higher accuracy scores. They suggest that it is due to the ability of deep learning to somehow process social signals at superhuman levels: "the findings reported in this work show that our faces contain more information about sexual orientation than can be perceived or interpreted by the human brain".

DETERMINISM
Determinism is the philosophical view that events are completely determined by previously existing causes, implying that human decisions and actions, too, are causally inevitable and, in principle, predictable. Deep learning systems are at their most deterministic when they are applied to ascribe identity or other social characteristics from a set of inputs understood as signals. That includes predicting sexuality from a photograph of a face, whether a person will or will not commit a crime after being released on bail, whether a person is a credit risk, or whether a crime was "gang-related". These systems are applied in critical social areas with consequences that even their designers may not fully understand or control.
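The final step of the pipeline described above (step iv: fitting a classifier on top of fixed deep-network scores) can be sketched in a few lines. This is a minimal illustration, not the study's actual code: the 4,096-dimensional "embeddings" below are random synthetic vectors standing in for VGG-Face output, and plain gradient-descent logistic regression stands in for whatever classifier the authors used.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in for VGG-Face embeddings: each "face" is a
# 4,096-dimensional score vector (here random, purely for illustration).
n, d = 400, 4096
true_w = rng.normal(size=d) / np.sqrt(d)
X = rng.normal(size=(n, d))
y = (X @ true_w + 0.5 * rng.normal(size=n) > 0).astype(float)

def train_logreg(X, y, lr=0.1, steps=200):
    """Plain gradient-descent logistic regression on fixed features."""
    w = np.zeros(X.shape[1])
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-(X @ w)))   # predicted probabilities
        w -= lr * X.T @ (p - y) / len(y)      # gradient of the log-loss
    return w

w = train_logreg(X, y)
pred = (X @ w > 0).astype(float)
accuracy = (pred == y).mean()
print(f"training accuracy: {accuracy:.2f}")
```

The point of the sketch is how thin this last layer is: all of the "superhuman" signal-processing lives in the frozen feature extractor, while the classifier itself is an ordinary linear model.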
Kate Crawford terms this ensemble enchanted determinism: "a discourse that presents deep learning techniques as magical, outside the scope of present scientific knowledge, yet also deterministic, in that deep learning systems can nonetheless detect patterns that give unprecedented access to people's identities, emotions and social character".

WEBER'S THEORY OF DISENCHANTMENT
This phenomenon can be profitably analyzed through Max Weber's theory of disenchantment. Disenchantment, or "de-magification" (a translation of the German Entzauberung), is an epochal diagnosis of Western modernity that encompasses a widespread decline of mystical or religious forces and their replacement by processes of "rationalization and intellectualization". Disenchantment "means that principally there are no mysterious incalculable forces that come into play, but rather that one can, in principle, master all things by calculation". Rationalization allows us to control the world in ways that were previously unimaginable, producing a calculative confidence in public life. As Kate Crawford notes: "What makes contemporary deep learning systems interesting is their ambivalent position with respect to Weber's larger thesis. They certainly embody aspects of a disenchanted world in that they work to master or control new domains of social life through technical forms of calculation. At the same time, these systems seem to violate the epistemology of disenchantment, the idea that there are no longer 'mysterious' forces acting in the world."

CRITIQUE OF THE AUTOMATION-OBSOLESCENCE AND THE ENCHANTED-DETERMINISM DISCOURSES
The automation-obsolescence discourse supports the idea that AI is following a path regardless of human actions. The enchanted-determinism discourse confers on AI the status of an enchanted object.
Both discourses propagate the ideology of Cartesian dualism in AI: the fantasy that AI systems are disembodied brains that absorb and produce knowledge independently from their creators, infrastructures, and the world at large, and the idea that AI is free from subjective human decision-making, which is positioned as arbitrary and biased by comparison. But these illusions can create a blindness to forms of risk. They might:
- cover over the ways in which AI can reproduce and intensify discriminatory or harmful processes of prediction and categorization when applied to humans and social institutions;
- situate AI and deep learning applications outside of understanding, regulation, and responsibility;
- distract from far more relevant questions: Who builds AI systems? Who chooses the ethical values embedded in them? What are the political, economic, and social dimensions of their construction? And what are the wider planetary consequences?

PHILOSOPHICAL PERSPECTIVE
Kate Crawford: "Artificial intelligence is not an objective, universal, or neutral computational technique that makes determinations without human direction. Its systems are embedded in social, political, cultural, and economic worlds, shaped by humans, institutions, and imperatives that determine what they do and how they do it."

AI AND ECONOMIC INEQUALITY
Global income and wealth inequalities have worsened in recent decades, as reported by the OECD in 2015. In the 1980s, the richest 10% in OECD countries earned seven times more than the poorest 10%; by the 2010s, they earned nearly ten times more. When factoring in property and wealth, the disparity is even starker: in 2012, the richest 10% controlled 50% of household wealth, while the poorest 40% held only 3%. Technological advancements have contributed to this inequality by benefiting capital owners and highly skilled workers, often at the expense of low-skilled workers.
Inequality is a global issue, with rising income disparities in countries like China, India, and Indonesia, despite overall poverty reduction. Furthermore, redistributive tax systems have become less effective due to tax competition, allowing the wealthy to hide vast sums offshore. In 2012, the Tax Justice Network reported that over $21 trillion was hidden in tax havens. This undermines the provision of public goods and leads to market inefficiencies, as the wealthy benefit from national services without contributing their fair share.

THE CASE OF ITALY: RELATIVE CHANGES IN THE SHARE OF WEALTH HELD BY THE TOP 1% AND BOTTOM 90%
The graph presents relative changes in the share of wealth held by the richest 1% and the poorest 90% of the population, using 1995 as the base year. Over the past 20 years, the gap between the top 1% and the bottom 90% has widened: the former's share of wealth has increased from 17% to 21% of the total, while the share of the poorest 90% has shrunk by 11 percentage points, from 55% to 44%.

THE CASE OF ITALY: ECONOMIC INEQUALITY FROM THE EARLY 1900S TO TODAY
The chart presents the trend in income inequality as measured by the Gini index (a measure of relative inequality whose values range from 0, when there is complete equality and everyone enjoys the same income, to 100, when there is maximum inequality and a single person enjoys all the income). Inequality, which had been declining until the 1970s, began to rise again in the late 1970s, and this rise coincided with a reversal of public policy and a change in common sense.

PERCEIVED INEQUALITY IN THE US
The actual level of inequality is often underestimated in public perception.
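The Gini index defined in the Italian case above can be computed directly from a list of incomes. A minimal sketch, using the standard mean-difference formula on the 0–100 scale (the sample incomes are made up for illustration):

```python
def gini(incomes):
    """Gini index on the 0-100 scale: 0 = complete equality,
    100 = all income concentrated in a single person (in the limit)."""
    xs = sorted(incomes)
    n = len(xs)
    total = sum(xs)
    # Rank-weighted sum formulation of the Gini coefficient:
    # G = (2 * sum_i(i * x_i)) / (n * sum_i(x_i)) - (n + 1) / n
    cum = sum(i * x for i, x in enumerate(xs, start=1))
    g = (2 * cum) / (n * total) - (n + 1) / n
    return 100 * g

print(gini([1, 1, 1, 1]))    # 0.0 -- everyone has the same income
print(gini([0, 0, 0, 100]))  # 75.0 -- the maximum for n=4 is 100*(n-1)/n
```

Note that with a finite population the index tops out at 100·(n−1)/n rather than exactly 100, which is why the single-earner example above yields 75 for four people.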
Respondents vastly underestimated the actual level of wealth inequality in the US, believing that the wealthiest quintile held about 59% of the wealth when the actual figure is closer to 84%. They also constructed ideal wealth distributions that were far more equitable than even their erroneously low estimates of the actual distribution, reporting a desire for the top quintile to own just 32% of the wealth.

PERCEIVED INEQUALITY IN ITALY
Italian respondents were asked about policies to reduce inequality: citizens feel a strong need to act on inequality, as evidenced by the fact that 80% of respondents consider policies to combat inequality a priority.

PRINCIPLES OF EQUALITY
Equality is closely connected to morality and justice, especially distributive justice. Philosophers have debated its exact role in theories of justice, leading to various principles and conceptions of it. One such principle is formal equality, which states that if two individuals are equal in at least one normatively relevant respect, they must be treated equally in that respect. This idea, originating with Aristotle and Plato, hinges on identifying which respects are normatively relevant and which are not.

PROPORTIONAL EQUALITY: A COMPARISON WITH NUMERICAL EQUALITY
The principle of proportional equality can be better appreciated in comparison with the principle of numerical equality, as proposed by Aristotle in the Nicomachean Ethics. A way of treating others, or a distribution arising from it, is numerically equal when it treats all persons as indistinguishable, treating them identically or granting them the same quantity of a good per capita. In contrast, a distribution is proportional, or relatively equal, when it treats all relevant persons according to their due. Just numerical equality is a special case of proportional equality.
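Aristotle's distinction can be made concrete with a toy calculation. The "dues" below are hypothetical weights chosen purely for illustration; the point is only the relation between the two rules:

```python
def numerical_distribution(total, persons):
    """Numerical equality: identical per-capita shares for everyone."""
    return {p: total / len(persons) for p in persons}

def proportional_distribution(total, dues):
    """Proportional equality: shares in proportion to each person's due."""
    whole = sum(dues.values())
    return {p: total * due / whole for p, due in dues.items()}

print(numerical_distribution(100, ["A", "B"]))           # {'A': 50.0, 'B': 50.0}
print(proportional_distribution(90, {"A": 2, "B": 1}))   # {'A': 60.0, 'B': 30.0}
```

With equal dues, the proportional rule collapses into the numerical one, which is exactly the sense in which numerical equality is a special case of proportional equality.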
Numerical equality is just only under special circumstances, namely when persons are equal in the relevant respects, so that the relevant proportions are equal.

PROPORTIONAL EQUALITY: INEGALITARIAN THEORIES
This principle demands equal output for equal input. Aristocrats and meritocrats alike believe that persons should be assessed according to their differing deserts, and that reward and punishment, benefits and burdens, should be proportional to such deserts. Since this definition leaves open who is due what, there can be great inequality when it comes to presumed fundamental (natural) rights, deserts, and worth; this is apparent in both Plato and Aristotle.

PROPORTIONAL EQUALITY: THE RELATION WITH FORMAL EQUALITY
Proportional equality further specifies formal equality; it is the more precise and comprehensive formulation of formal equality, indicating what produces an adequate equality. However, both formal and proportional equality are merely conceptual schemas. They need to be made precise, i.e., their open variables need to be filled in. The formal postulate remains empty as long as it is unclear when, or through what features, two or more persons or cases should be considered equal. The next two accounts, by contrast, are substantive principles of equality, in that they identify a certain notion of equality.

MORAL EQUALITY
Definition: until the eighteenth century, it was assumed that human beings are unequal by nature. This postulate collapsed with the advent of the idea of natural right, which assumed a natural order in which all human beings were equal and in which, therefore, everyone deserves the same dignity and respect.
History of the concept: the Stoics first developed the principle of moral equality, emphasizing the natural equality of all rational beings. New Testament Christianity envisioned all humans as equal before God, although this principle was not always adhered to in the later history of the church.
This important idea was also taken up in the Talmud and in Islam, where it was grounded in both Greek and Hebraic elements. In the modern period, starting in the seventeenth century, the dominant idea was that of natural equality in the tradition of natural law and social contract theory. Hobbes (1651) postulated that in their natural condition individuals possess equal rights, because over time they have the same capacity to do each other harm. Locke (1690) argued that all human beings have the same natural right to both (self-)ownership and freedom. Rousseau (1755) declared social inequality to be the result of a decline from the natural equality that characterized our harmonious state of nature, a decline catalyzed by the human urge for perfection, property, and possessions. For Rousseau, the resulting inequality and rule of violence can only be overcome by binding individual subjectivity to a common civil existence and popular sovereignty. In Kant's moral philosophy (1785), the categorical imperative formulates the equality postulate of universal human worth: his transcendental, philosophical reflections on autonomy and self-legislation lead to a recognition of the same freedom for all rational beings as the sole principle of human rights. During the French Revolution, equality, along with freedom and fraternity, became a basis of the Declaration of the Rights of Man and of the Citizen of 1789. This fundamental idea of equal respect for all persons and of the equal worth or equal dignity of all human beings is widely accepted; moral equality constitutes the "egalitarian plateau" for all contemporary political theories.

PRESUMPTION OF EQUALITY: THE LINK TO DISTRIBUTIVE JUSTICE
The first three principles of equality (formal, proportional, and moral equality) hold generally and primarily for all actions upon others and affecting others, and for their resulting circumstances.
The presumption of equality, by contrast, is a procedural principle for the construction of a theory, lying at a higher formal and argumentative level. It is a principle of equal distribution for all distributable goods. A strict principle of equal distribution is not required, but it is morally necessary to justify impartially any unequal distribution: the burden of proof lies with those who favor any form of unequal distribution. For example, the following factors are usually considered eligible grounds for justified unequal treatment in the economic sphere:
(a) need or differing natural disadvantages (e.g., disabilities);
(b) existing rights or claims (e.g., private property);
(c) differences in the performance of special services (e.g., desert, effort, or sacrifice);
(d) efficiency; and
(e) compensation for direct, indirect, or structural discrimination (e.g., affirmative action).

THEORIES OF DISTRIBUTIVE JUSTICE
The presumption of equality principle focuses on distributive justice and on evaluating how benefits and burdens are distributed in society. Historically, economic positions were seen as fixed by nature or divine will, with little opportunity for change. However, the realization that governments could influence this distribution brought distributive justice into focus. Today, governments constantly make decisions that affect the distribution of economic advantages and burdens. The role of distributive justice theory is to offer moral guidance for these decisions, ensuring fair distribution in society.

FIVE QUESTIONS
Most conceptions of distributive justice result from a combination of specific answers to the following questions:
1. Equality among whom? (subject)
2. Equality when? (time)
3. Why does equality matter? (justification)
4. What should be equal? (metric)
5. How should goods be distributed? (pattern)

1. EQUALITY AMONG WHOM?
Individuals vs. groups
Justice is primarily related to individual actions.
Individual persons are the primary bearers of responsibility (the key principle of ethical individualism). One could, however, regard the norms of distributive equality as applying to groups rather than individuals. It is often groups that rightfully raise the issue of an inequality between themselves and the rest of society, as with women and racial and ethnic groups. The question arises whether inequality among such groups should be considered morally objectionable in itself, or whether, even in the case of groups, the underlying concern should be how individuals (as members of such groups) fare in comparative terms.

Local vs. global justice
This question examines whether distributive equality should apply universally to all individuals, regardless of location, or be limited to members of specific states or nations. While most theories of equality focus on distribution within a single society, there is no clear rationale for such a restriction. Universal morality argues that all individuals are equally entitled to resources unless valid reasons for unequal distribution exist. For example, natural resources discovered on someone's property, like oil, raise the question of whether these should belong solely to the person who found them or be shared more broadly. However, global justice (extending distributive equality worldwide) may seem overly demanding for individuals and states, potentially requiring significant redistribution across borders. Some argue that special relations, such as those within a nation, justify local equality: nationalism suggests that members of a society share unique bonds that do not apply globally.

Intergenerational justice
A further question concerns whether the norms of distributive equality (whatever they are) apply to all individuals regardless of when they live. This raises the question of the relationship between generations.
Does the present generation have an egalitarian obligation towards future generations regarding equal living conditions? One argument in favor of this conclusion is that people should not end up unequally well off as a result of morally arbitrary factors. However, the issue of justice between generations is notoriously complex.

2. EQUALITY WHEN?
Starting-gate principles
"Starting-gate" principles are a type of distributive justice framework that focuses on achieving a fair and equal distribution of goods, such as wealth or resources, at a specific initial point. According to this approach, everyone should start with an equal share of goods or opportunities at the beginning (the "starting gate"), but once that initial equality is established, individuals are free to use or manage their resources as they see fit. This means that after the initial distribution no further efforts are made to maintain equality, and future outcomes will likely become unequal based on individual choices, efforts, or circumstances.

Equality in time-frames
Starting-gate principles can lead to significant inequalities over time as individuals freely use their resources after an initial equal distribution. In response, egalitarians propose strict equality principles, under which income is kept equal at each point in time. However, even with equal income, differences in savings can create wealth disparities. To address this, strict equality principles are often paired with societal rules regulating saving behavior, ensuring that both income and wealth remain more balanced over time.

3. WHY DOES EQUALITY MATTER?
Intrinsic egalitarianism and pluralistic egalitarianism
The leveling-down objection argues that strict equality can lead to making everyone worse off just to achieve equality. Pluralistic egalitarianism avoids this by not focusing solely on equality but also valuing welfare, the principle that it is better when people are doing well.
Instead of leveling down (for example, worsening the condition of the sighted), it advocates raising up those who are worse off, such as supporting the blind. This approach balances equality with improving overall well-being and avoids extreme measures taken merely for the sake of a more equal outcome.
Instrumental egalitarianism
Instrumental egalitarianism holds that equality is valuable because it leads to desirable outcomes, not for its own sake. For example, redistributing wealth helps the poor escape poverty, promoting economic growth and reducing social unrest. This is an instrumental justification of equality because the inequality itself matters less than what reducing it achieves, such as helping people out of poverty. A more equal distribution of resources can serve several such goals. Economic growth suffers when too many people cannot invest in education or have poor health because of poverty: this is a problem not only for the well-being of the poor but also for the economy, since a less educated and less healthy population contributes less to growth. Additionally, if inequality shrinks and weakens the middle class, demand for goods and services falls and investment in education and skills declines, further harming growth. Large wealth gaps can also fuel social conflict and instability, making political agreement harder to reach. However, these instrumental arguments for a more equal distribution do not demand complete equality. As long as the poor receive enough support to avoid severe harm to economic growth or social stability, some level of inequality may still be acceptable. The key is to ensure that inequality does not reach levels that threaten these broader societal goals.
Constitutive egalitarianism
Constitutive egalitarianism views equality as part of a larger framework that has intrinsic value (e.g., justice). 
It means equality isn't valued for its own sake but as a necessary component of a system (like justice) that is itself inherently valuable. Thus, if we take justice as an intrinsic good, and part of what makes a social system just is that persons have an equal claim to some goods, then equality can be taken to be a constitutive (essential) component of justice, not a means to an end.
4.WHAT SHOULD BE EQUAL?
Income
Income is the most straightforward example of a resource metric for evaluating how well people are doing. In contemporary market economies, income is a very polyvalent good that gives people access to a large variety of external goods. It gives people choice and avoids making controversial moral judgments about what should matter in people's lives. It also facilitates the implementation and monitoring of distributive policies. A challenge of this approach is that people differ in their capacity to convert resources into well-being. Depending on personal abilities or disabilities, a person may not be able to derive the same well-being or opportunities from a given income. Treating people fairly may therefore require giving some people more than others to ensure they attain the same level of well-being or opportunities.
Opportunities and luck egalitarianism
The metric of opportunities is defended by so-called luck egalitarians. It asks that people be given equal opportunities and equal chances to succeed in life, and that they be compensated for life's accidents.
Opportunities and responsibility
Luck egalitarians believe individuals should be accountable for their own choices and actions but not for circumstances beyond their control. The responsibility principle is central to assessing which inequalities are justifiable. When applied positively, the principle holds that inequalities resulting from voluntary and self-chosen actions are fair; individuals are responsible for these outcomes and do not deserve compensation, except in cases of dire need. 
In contrast, the negative formulation rejects inequalities stemming from circumstances beyond personal control as unjust, suggesting that those disadvantaged in this way deserve compensation. Inequalities must be based on factors for which individuals can be held responsible.
Opportunities and formal equality of opportunities
A difficulty of this approach lies in identifying which inequalities stem from personal choice and which arise from circumstances. Formal equality of opportunity ensures that discrimination based on factors like race, ethnicity, age, or gender is not allowed, as these traits are beyond an individual's control. When such uncontrollable traits significantly impact an individual's economic prospects, it leads to unfairness because people cannot choose these characteristics. In societies where economic advantages depend on being born into a favored race or gender, outcomes become a matter of luck rather than merit, violating the principle of fairness and equal opportunity.
Opportunities and a more substantial form of equality of opportunities
Even with formal equality of opportunity, there will remain many factors over which people have no control but which will affect their lifetime economic prospects, such as whether a person's family can afford to purchase good quality educational opportunities or health care. A society will therefore have reason to adopt a more substantial equality of opportunity principle, with equal opportunities for education, health care… These social influences, which children have no control over, are considered part of the "social lottery," highlighting the randomness of one's social circumstances at birth. Additionally, the "natural lottery" refers to the unequal distribution of innate talents among individuals, which also impacts their prospects. 
Opportunities and the race metaphor
A race where the starting line is arbitrarily staggered, where people's prospects for winning are not largely determined by factors for which they are responsible but rather largely by luck, is not considered a fair race. Similarly, if society is structured so that people's prospects for gaining more economic goods are not largely determined by factors for which they are responsible but rather largely by luck, then the society is open to the charge of being unfair.
Well-being and preference-satisfaction maximization
Welfarists argue that the focus should be on ensuring everyone has access to similar levels of well-being. However, they must carefully define what constitutes well-being. Historically, utilitarians, whose theory is a form of welfarism, have used the term "utility" to describe well-being, and it has been interpreted in various ways, such as pleasure, happiness, or the satisfaction of preferences. Jeremy Bentham, a key figure in utilitarianism, claimed that pleasure was the only intrinsic value, meaning everything else held value only if it led to pleasure or helped avoid pain. John Stuart Mill expanded this concept to include happiness or a sense of fulfillment as intrinsic values. Modern philosophers, beginning with Kenneth Arrow, have shifted the focus to preference-satisfaction as the core of intrinsic value. According to this view, intrinsic value is achieved when individuals' preferences or desires are satisfied. For preference utilitarians, the principle for distributing economic benefits is to maximize the satisfaction of these preferences. This is done by calculating the total sum of all satisfied preferences (with unsatisfied ones counted negatively) and weighing them based on their intensity. In cases of uncertainty about outcomes, the welfare function is adjusted to maximize expected utility, rather than actual utility, to account for the possible variations in outcomes. 
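The preference-utilitarian calculus just described, intensity-weighted sums of satisfied and frustrated preferences, plus expected utility under uncertainty, can be sketched in a few lines of code. This is only an illustration; all numbers, names, and the two helper functions are hypothetical.

```python
def total_utility(preferences):
    """Sum preference satisfaction: satisfied preferences count positively,
    unsatisfied ones negatively, each weighted by its intensity.
    preferences: list of (satisfied: bool, intensity: float)."""
    return sum(intensity if satisfied else -intensity
               for satisfied, intensity in preferences)

def expected_utility(outcomes):
    """Under uncertainty, the welfare function maximizes the
    probability-weighted sum of utilities.
    outcomes: list of (probability, utility)."""
    return sum(p * u for p, u in outcomes)

# Three individuals' preferences under some policy (illustrative only):
society = [
    (True, 2.0),   # strongly held preference, satisfied
    (False, 1.0),  # mildly held preference, frustrated
    (True, 0.5),   # weakly held preference, satisfied
]
print(total_utility(society))  # 2.0 - 1.0 + 0.5 = 1.5

# An uncertain policy: 70% chance of utility 2, 30% chance of utility -1.
print(expected_utility([(0.7, 2.0), (0.3, -1.0)]))  # ≈ 1.1
```

A preference utilitarian would then choose, among feasible policies, the one with the highest total (or, under risk, expected) value of this function.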
The first critique The first critique of utilitarianism, articulated by John Rawls, argues that utilitarianism fails to respect the distinctness of individuals. In utilitarianism, maximizing preference-satisfaction is considered prudent when applied to an individual’s life because people may accept temporary suffering or sacrifices for an overall better outcome in their lives. However, Rawls contends that applying this principle to society as a whole is problematic. Society is not a single entity like an individual; it consists of separate individuals with different experiences. In the context of society, some people may be made to suffer so that others can benefit, which is morally questionable. Unlike the individual case, where a person can choose to endure hardship for a future benefit, in a utilitarian society, people are not necessarily consenting to their suffering. Moreover, the decision to impose such sacrifices is not always agreed upon by everyone, making it unfair to impose suffering on some for the benefit of others without their consent. The second critique The second critique relates to how utilitarianism deals with individual preferences concerning other people's well-being or possessions. Utilitarianism, in its classical form, treats all preferences equally when determining the best distribution of resources. This approach can become problematic when preferences are discriminatory. For instance, if a majority group has a preference for a minority group to receive fewer material benefits, utilitarianism would count this preference the same as any other. If the majority’s preference outweighs the minority’s contrary preference due to their larger number, utilitarianism may justify an unequal distribution based on race, as long as it maximizes overall utility. This reveals a significant flaw: utilitarianism can potentially endorse discriminatory or unjust outcomes if they align with the majority’s preferences, regardless of their fairness. 
5.HOW SHOULD GOODS BE DISTRIBUTED? Sufficientarianism and the Indifference Objection Sufficientarianism is a conception of justice that focuses on ensuring individuals have enough resources or well-being to meet a certain basic threshold, rather than striving for equality among all individuals. It is one of the least demanding views of justice, emphasizing humanitarian concern by aiming to alleviate suffering and help those who are worse off. The positive thesis of sufficientarianism argues that it is morally important and non-instrumental to secure a sufficient level of resources or opportunities for everyone. The negative thesis states that once individuals have reached this sufficient threshold, justice no longer demands concern for how benefits and burdens are distributed, regardless of any remaining inequalities. In his seminal work, Equality as a Moral Ideal (1987), Harry Frankfurt criticizes egalitarianism, arguing that it distracts from more important moral issues. He suggests that if everyone has enough, it doesn’t matter morally if some have more than others. Sufficientarianism, therefore, is not egalitarian, as its goal is not to minimize relative differences between people but to elevate the absolute condition of the worst off to a sufficient level. However, a critique known as the Indifference Objection points out that sufficientarianism allows for significant inequalities to persist, even if these inequalities are undeserved or result from luck, such as one's place of birth, talents, or social background. As long as the minimum threshold is met, the approach is indifferent to large disparities in wealth or resources that may remain. Strict egalitarianism and communism Strict egalitarianism, the notion that everyone should have the same level of material goods and services, is largely seen as unrealistic and is rarely advocated by any significant political movements or thinkers. 
Although egalitarianism is often linked with the demand for economic equality, and further associated with socialist or communist ideologies, neither communism nor socialism actually promotes absolute economic equality. Marx’s orthodox view on economic equality, as outlined in his Critique of the Gotha Program (1875), opposes the concept of legal equality for several reasons. First, Marx argues that equality considers only a limited range of morally relevant factors, overlooking others and thus creating unequal effects. He emphasizes that the economic structure forms the fundamental basis for society's development and should be the reference point for understanding its features. Second, he criticizes justice theories for focusing too much on distribution rather than addressing the more fundamental issues of production. Lastly, Marx asserts that in a future communist society, there would be no need for law or justice, as social conflicts would no longer exist. Strict egalitarianism and objections Strict egalitarianism faces several objections. 1. The first is the leveling-down objection, which criticizes the idea that reducing everyone to the same level—even if it means lowering the well-being of some without improving that of others—achieves equality but does not create any benefit. This approach is seen as counterproductive. 2. The second objection argues that strict equality can distort economic incentives. If everyone receives the same regardless of their effort or achievement, there is less motivation for individuals to excel or innovate in the economic field. Additionally, the administrative costs associated with redistributing resources equally can lead to inefficiencies and waste, so it’s necessary to find a balance between equality and efficiency. 3. The third critique emphasizes that a strict, mechanical distribution ignores individual differences and preferences. 
People have diverse desires, needs, and circumstances, so giving everyone the same goods doesn't account for these variations: a one-size-fits-all approach is inadequate. 4. Finally, there is concern that strict equality could lead to uniformity instead of promoting pluralism and respect for diversity. This critique is especially prominent in feminist and multiculturalist theory, where the focus is on respecting and valuing differences rather than imposing a single standard.
The maximin principle and the veil of ignorance
John Rawls, in his book A Theory of Justice (1971), proposes an approach to developing principles of justice through a thought experiment called the original position. In this scenario, individuals decide on the principles of justice from behind a veil of ignorance, which blinds them to all personal details about their identities and circumstances. This veil ensures that people do not know their place in society, their class or social status, their natural talents (such as intelligence or strength), or even their personal beliefs and psychological tendencies. Because no one can tailor principles to their own advantage, the veil leads to the selection of impartial and fair principles of justice that benefit all members of society.
The maximin principle and Rawls' principles of justice
John Rawls proposes two principles of justice to establish a fair society. 1. The first principle ensures that each person has an equal right to a comprehensive set of basic rights and liberties that are compatible with the same set of rights for all individuals. This includes guaranteeing that political liberties are protected fairly and have genuine value for everyone. 2. 
The second principle addresses social and economic inequalities, specifying that such inequalities are acceptable only if they meet two conditions: (a) they must be linked to positions and opportunities that are accessible to all under conditions of fair equality of opportunity, and (b) they must benefit the least advantaged members of society to the greatest extent possible. When these principles conflict, Rawls assigns them a lexical priority: Principle (1), concerning equal rights and liberties, takes precedence over Principle (2), which deals with social and economic inequalities. Within Principle (2), fair equality of opportunity (2a) has priority over maximizing benefits for the least advantaged (2b). Although Principle (1) pertains to the distribution of liberties rather than economic resources, it is foundational for justice, ensuring that all individuals have equal political and personal freedoms before addressing economic inequalities. The maximin principle and the difference principle The Difference Principle, proposed by Rawls, allows for inequalities in the distribution of goods, but only if these inequalities ultimately benefit the worst-off members of society. Rawls argues that representatives in the original position would rationally choose this principle, as it protects them against the worst possible scenarios they might face. Therefore, any redistribution of wealth or resources is justified only if it improves the situation of the least advantaged group. This approach, called the “maximin” principle, aims to maximize the welfare of those who are at the lowest level in society. Although the Difference Principle might initially seem to demand equal distribution of goods, it permits inequality when it is beneficial for everyone, including the least advantaged. 
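The maximin rule can be illustrated with a toy comparison of candidate distributions. The figures below are made up purely for illustration; the point is only that maximin ranks alternatives by the welfare of their worst-off member, not by the total or the average.

```python
def maximin_choice(distributions):
    """Pick the distribution whose worst-off (minimum) share is largest."""
    return max(distributions, key=lambda d: min(d))

strict_equality   = [10, 10, 10]  # everyone equal
with_incentives   = [12, 15, 30]  # unequal, but the worst-off is better off
more_total_wealth = [5, 40, 40]   # highest total, worst-off does worst

best = maximin_choice([strict_equality, with_incentives, more_total_wealth])
print(best)  # [12, 15, 30]
```

Maximin here prefers the unequal distribution [12, 15, 30] over strict equality, because its worst-off member (12) does better than under equality (10); it rejects [5, 40, 40] despite its larger total. This is exactly the sense in which the Difference Principle permits inequality when it benefits the least advantaged.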
If we accept that incentives and the efforts of the most talented individuals can increase productivity and overall societal wealth, the principle becomes compatible with significant inequality. Such inequalities can raise the overall resources available, ultimately providing a larger share for the worst-off than would be possible under strict equality. This creates a strong justification for tolerating some inequality, as it can lead to better outcomes for everyone, including those at the bottom.
AI AND WORK
Inequalities have increased worldwide in the past decades. The causes of this process are complex. One important driver of rising inequality is technological change. AI contributes to this phenomenon in several ways; one of them is its impact on the world of work.
AI for recruitment
Building, maintaining, and testing AI systems
AI and the labor market
AI at the workplace
AI for recruitment
Strategeion was founded by a group of enthusiastic Army veterans who were skilled in programming and computer engineering. After their honorable discharge, they launched this non-profit organization driven by a sense of civic duty. The platform they developed aimed to provide a range of public services, with a focus on veterans, though it was open to other groups as well. Their motto, "leave no one behind," reflected their commitment to inclusivity. In contrast to other tech companies that typically hire young graduates from elite universities, Strategeion primarily employed ex-military personnel. This hiring policy led to high employee satisfaction and retention, making Strategeion stand out in the public eye. The organization's visibility increased, and job applications began to flood in, including many from candidates who would typically seek positions at larger, for-profit tech firms. 
The Efficiency Drive: PARiS System As the number of job applications overwhelmed the HR team, Strategeion’s developers introduced an AI-based system called PARiS to streamline the hiring process. PARiS used natural language processing (NLP) and machine learning to analyze resumes and identify the best candidates. The system was trained using resumes of current and past employees who were classified as either exemplary or poor based on their professional attributes and fit within the company. PARiS would then rate incoming resumes and reject those that didn’t meet a certain threshold. As HR developed increasing trust in PARiS, they relied less on manual checks, allowing the system to make decisions autonomously. Question 1: Efficiency vs. Other Values in Hiring PARiS promised to make the hiring process more efficient. But are there other values that might be desirable in hiring? Diversity? Equity? Creativity? What, if anything, do companies risk losing when hiring procedures are so singularly focused on maximizing efficiency? While PARiS improved hiring efficiency, it overlooked other important values, such as diversity, equity, and creativity. By singularly focusing on efficiency, companies risk filtering out candidates who might bring new and valuable perspectives. A system like PARiS may unintentionally favor candidates who fit established patterns, reinforcing homogeneity and stifling innovation. Diversity is crucial for fostering creativity and ensuring a variety of viewpoints, which can lead to more innovative solutions and better decision-making. By focusing solely on efficiency, Strategeion risked missing out on the benefits that come with a more diverse workforce, such as higher adaptability and enhanced problem-solving abilities. Hara’s Case: Rejection and Bias Hara, a bright computer science student from Athens, applied to Strategeion and was promptly rejected by the PARiS system. 
Despite her strong qualifications and civic work with non-profit organizations supporting wheelchair users like herself, her application was dismissed without human review. Confused, Hara contacted the company for feedback. The HR department, after reviewing her resume, was puzzled by the rejection. Hara’s skills and interests aligned well with the company’s values and objectives. Upon investigating further, HR discovered that PARiS had flagged her application as a poor fit based on an unexpected factor: sports. Many of Strategeion’s employees, being veterans, had a history of participating in athletics, which PARiS had correlated with good job performance. Hara, who had used a wheelchair her entire life, had no sports history, leading to her unfair rejection. Question 2: Addressing Bias in AI Systems Biased data sets pose a problem for ensuring fairness in AI systems. What could Strategeion’s engineers have done to counteract the skewed employee data? To what extent are such efforts the responsibility of individual engineers or engineering teams? PARiS’s biased decision-making stemmed from a skewed training dataset, which overemphasized characteristics like athletic participation. To prevent this kind of bias, Strategeion’s engineers should have diversified the training data to include a broader range of experiences, ensuring that the system didn’t unfairly favor irrelevant traits like sports. Regular audits and updates to the system could help catch and correct biases as they arise. This responsibility falls not only on individual engineers but on the entire development team, HR department, and the company leadership, all of whom must work together to ensure that the AI system operates fairly and without discrimination. Hara’s Complaint: Fairness, Dehumanization, and Consent After discovering why she had been rejected, Hara received an apology from Strategeion and was invited to interview. 
However, she declined and filed a formal complaint, raising concerns about fairness, the dehumanizing nature of automated decision-making, and the lack of consent regarding the use of her data. Hara argued that PARiS had treated her unfairly by rejecting her based on an irrelevant factor—her lack of sports participation. Given the marginalization that people with disabilities often face, Hara felt that Strategeion should have taken steps to actively support disabled candidates rather than allowing the system to inadvertently disadvantage them. She also criticized the use of an automated system to make such important decisions without human oversight. This process, she argued, felt dehumanizing, reducing her to a data point rather than considering her full potential. Hara suggested that this dehumanization might extend to the HR workers whose roles had been diminished by the AI system. Finally, Hara was dismayed to learn that her resume, and those of other applicants, may have been used to train the system without her explicit consent. Many current and former employees shared her concern, feeling that they had been subjected to decisions made by AI without their knowledge or agreement.
Question 3: Addressing Insidious Discrimination
The type of discrimination practiced by PARiS might not seem as blatantly demeaning as a blanket hiring policy against those with physical disabilities, but is it any different from a moral standpoint? How might this kind of insidious discrimination, which is, by definition, difficult to spot, be avoided? PARiS did not explicitly discriminate against Hara based on her disability, but it still indirectly excluded her due to its focus on athletic history. This type of discrimination is subtle and harder to detect but still morally problematic. While it may seem less demeaning than outright exclusion based on disability, it perpetuates inequality in a more covert way. 
Avoiding this form of discrimination requires careful design, continuous monitoring, and transparency in how AI systems are trained and applied. It's essential to ensure that AI systems are regularly checked for any patterns that may lead to unintended exclusions, especially for marginalized groups.
Rethinking Workforce Homogeneity
Strategeion's founders began reconsidering their emphasis on a homogeneous workforce of veterans. Studies in management suggest that diverse teams outperform homogeneous ones, as they bring a variety of perspectives and are better equipped to innovate. Diversity helps prevent "group think" and fosters a more inclusive and creative work environment. However, homogeneity also has advantages, such as smoother communication and a shared understanding that can reduce internal conflicts.
Question 4: Balancing Homogeneity and Diversity
Social science increasingly shows that there are advantages to a heterogeneous workforce, but there are also advantages to homogeneity. A diverse workforce helps protect organizations against "group think," for example, but groups that share certain experiences and backgrounds may find it easier to communicate with and understand one another, thereby reducing collective action problems. If you were a manager in charge of hiring at Strategeion, for which position would you advocate? Would you try to maintain the corporate culture by hiring people who resemble current employees, or would you argue that PARiS should be realigned to optimize for a broader range of types? If I were a manager at Strategeion, I would advocate for hiring a more diverse workforce. While shared experiences can create a strong corporate culture, diversity brings a wider range of ideas, perspectives, and solutions that can enhance problem-solving and drive innovation. Reconfiguring PARiS to prioritize diversity would not only enrich the company's work environment but also align with the company's commitment to inclusivity. 
This would ensure that Strategeion remains adaptable and better equipped to face future challenges while maintaining its foundational values of leaving no one behind. History of hiring technology Hiring technology has advanced significantly since the 1990s, starting with online job boards like Monster.com, which offered cheaper alternatives to newspaper ads. Soon after, search engines and pay-per-click advertising allowed recruiters to find talent more effectively. The internet also made it easier for candidates to apply for multiple jobs. Recruiters then began using technology to identify not just active candidates but also passive ones by leveraging online professional profiles like LinkedIn. As data collection and analysis methods evolved, employers adopted more advanced screening tools, including assessments enhanced by new technology. This shift also aligned with efforts to promote diversity, as tech vendors developed tools to minimize biases in hiring. Today, hiring technology uses machine learning to make predictions throughout the recruitment process. These tools analyze patterns in historical data to score and rank candidates, improving the efficiency and effectiveness of hiring decisions. Why employers adopt predictive tools Employers aim to reduce the time it takes to fill positions because delays consume resources, risk losing candidates to competitors, and may impact seasonal hiring needs. They also focus on minimizing the cost per hire, which averages around $4,000 in the U.S. Additionally, employers seek to maximize the quality of hire, evaluating candidates based on performance, output, and career growth within the company. Maximizing employee tenure is another priority, as turnover is expensive due to recruitment and training costs. Lastly, many employers set diversity goals based on factors like gender, race, age, and socioeconomic background to create more inclusive workplaces. 
Direct (or interpersonal) discrimination Hiring technology vendors often claim that their tools help eliminate bias in recruitment by making the process more consistent, efficient, and fair. They suggest that these tools reduce discrimination by masking applicants’ sensitive attributes, such as race, gender, or age, which are legally protected characteristics. These claims typically focus on preventing direct or interpersonal discrimination, where unfavorable outcomes are explicitly tied to these protected traits. In computer science and legal contexts, these protected attributes are those identified by laws like the Fair Housing and Equal Credit Opportunity Acts, which include race, religion, sex, age, disability, and more. Indirect (or systemic) discrimination Institutional discrimination occurs when company policies and cultures favor certain groups, disadvantaging others. For instance, using “culture fit” as a hiring criterion can lead to excluding qualified candidates from diverse backgrounds if the company primarily hires from a privileged, homogenous group. Structural discrimination, meanwhile, reflects broader societal patterns of inequality, such as racism or unequal economic opportunities. An example is employers favoring candidates from elite universities, which are often accessible primarily to privileged individuals, despite efforts to diversify admissions. Discrimination can also be internalized, affecting job seekers' behavior, such as their willingness to apply for certain positions based on perceived biases or barriers. Discrimination and bias How can predictive tools perpetuate discrimination? Through bias, which can exist and emerge in predictive tools in several distinct ways. Similar to discrimination, bias is also a source of unfairness. 
Discrimination can be seen as a source of unfairness due to human prejudice and stereotyping based on sensitive attributes, whether intentional or unintentional, while bias can be seen as a source of unfairness arising from data collection, sampling, and measurement. The distinction, however, is not strict.
1) Most AI systems and algorithms are data-driven and must be trained on data. Where the underlying training data contains biases, the algorithms trained on it will learn those biases and reflect them in their predictions.
2) Existing biases in the data thus produce biased outcomes, and algorithms can even amplify and perpetuate them. In addition, algorithms themselves can display biased behavior due to certain design choices, even if the data itself is not biased.
3) The outcomes of these biased algorithms are then fed into real-world systems and affect users' decisions, which results in more biased data for training future algorithms.
Amazon developed an AI-based hiring tool to rate job candidates from one to five stars, similar to product ratings. However, by 2015, the company discovered that the tool was biased against women for technical roles. The AI model had been trained on resumes submitted over a decade, most of which were from men, reflecting the tech industry's male dominance. As a result, the system favored male candidates, penalizing resumes that mentioned terms like "women's" and downgrading graduates from all-women's colleges.
Bias can take many shapes and forms, and it can be classified according to the data, algorithm, and user interaction loop:
1. Data-to-algorithm bias: biases in the data which, when used by algorithms, may result in biased algorithmic outcomes.
2. Algorithm-to-user bias: biases that result from algorithmic outcomes and in turn affect user behavior. 3. 
User-to-data bias: many data sources used for training ML models are user-generated. Any inherent biases in users may be reflected in the data they generate.

DATA-TO-ALGORITHM-BIAS

Omitted-variable bias

Omitted-variable bias occurs when one or more important variables are left out of the model.

EXAMPLE: A model predicting subscription cancellations failed to account for the emergence of a new competitor offering a similar service at half the price. This unforeseen competitor influenced customers’ decisions to cancel but was not included as a factor in the model, making it an "omitted variable." As a result, the model could not accurately predict the cancellations caused by this new market competition.

Representation bias

Representation bias arises from how we sample from a population during data collection.

EXAMPLE: ImageNet is an image database instrumental in advancing computer vision and deep learning research. Images from the US and Great Britain make up the largest share of the dataset, resulting in a demonstrable bias towards Western cultures.

Aggregation bias

Aggregation bias arises when false conclusions are drawn about individuals from observations of the entire population. One type of aggregation bias is Simpson’s paradox, where an association observed in aggregated data disappears or reverses when the same data is disaggregated into its underlying subgroups.

EXAMPLE: One of the better-known examples of this paradox arose during the gender-bias lawsuit over university admissions at UC Berkeley. After analyzing graduate-school admissions data, there appeared to be bias against women, a smaller fraction of whom were admitted to graduate programs compared to their male counterparts. When the admissions data was separated and analyzed by department, women applicants had parity and in some cases even a small advantage over men. The paradox arose because women tended to apply to departments with lower admission rates for both genders.
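Simpson’s paradox is easy to reproduce with a small numeric example. The figures below are invented for illustration (they are not the actual Berkeley data), but they show the same reversal: women are admitted at equal or higher rates within each department, yet at a lower rate overall.

```python
# Hypothetical two-department admissions data illustrating Simpson's paradox.
# department: {gender: (admitted, applied)} -- numbers are made up.
admissions = {
    "A": {"men": (50, 80), "women": (18, 25)},   # high-admission department
    "B": {"men": (10, 40), "women": (30, 100)},  # low-admission department
}

def rate(admitted, applied):
    return admitted / applied

# Per-department rates: women do as well as or better than men in each.
for dept, groups in admissions.items():
    m = rate(*groups["men"])
    w = rate(*groups["women"])
    print(dept, f"men {m:.0%} women {w:.0%}")

# Aggregated rates: the association reverses, because more women applied
# to the department with the lower admission rate for everyone.
men_adm = sum(g["men"][0] for g in admissions.values())
men_app = sum(g["men"][1] for g in admissions.values())
women_adm = sum(g["women"][0] for g in admissions.values())
women_app = sum(g["women"][1] for g in admissions.values())
print(f"overall men {rate(men_adm, men_app):.0%} "
      f"women {rate(women_adm, women_app):.0%}")
```

Aggregating hides the confounder (which department each group tended to apply to), which is exactly why the disaggregated analysis reversed the apparent bias in the Berkeley case.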
ALGORITHM-TO-USER-BIAS

Algorithmic bias

Algorithmic bias is bias that is not present in the input data and is added purely by the algorithm. Algorithmic design choices (such as the use of certain optimization functions, regularizations, choices in applying regression models to the data as a whole or to subgroups, and the general use of statistically biased estimators) can all contribute to biased algorithmic decisions.

Presentation bias

Presentation bias is a result of how information is presented.

EXAMPLE: On the Web, users can only click on content that they see, so the seen content gets clicks while everything else gets none, and users may not see all the information available.

Popularity bias

Items that are more popular tend to be exposed more. However, popularity metrics are subject to manipulation, for example by fake reviews or social bots.

EXAMPLE: This type of bias can be seen in search engines or recommendation systems, where popular items are presented to the public more often. But this exposure may not reflect good quality; it may instead be due to other biased factors.

USER-TO-DATA-BIAS

Historical bias

Historical bias is the already existing bias and socio-technical issues in the world, which can seep into the data-generation process even given perfect sampling and feature selection.

EXAMPLE: In a 2018 image search, searching for women CEOs returned few female CEO images because only 5% of Fortune 500 CEOs were women, causing the search results to be biased towards male CEOs.

Social bias

Social bias happens when others’ actions affect our judgment.
EXAMPLE: An example of this type of bias is a case where we want to rate or review an item with a low score, but, influenced by other high ratings, we change our score thinking that perhaps we are being too harsh.

ALGORITHMIC FAIRNESS

Fighting bias and discrimination has roots in philosophy and psychology, and more recently, machine learning. Defining fairness is crucial but challenging, as there is no universal agreement. Broadly, fairness implies no prejudice or favoritism based on inherent or acquired traits in decision-making. Fairness can be categorized into two types:
1. Individual fairness (i.e., give similar predictions to similar individuals).
2. Group fairness (i.e., treat different groups equally).

Individual fairness

1. One approach to achieving fairness is "fairness through unawareness", where sensitive attributes (e.g., race, ethnicity) are removed from the data to prevent discrimination. For example, when training an algorithm used by judges for parole decisions, excluding ethnic origin could be seen as fair while using objective data like the number of previous offenses. However, even supposedly neutral data might carry historical biases, such as racial bias in policing. Additionally, removing sensitive attributes makes it impossible to check whether opportunities or outcomes are equal for all groups. Some countries, like Germany, apply this approach to demographic statistics to prevent discrimination. However, it has limitations, as removing relevant information can obscure disparities that need addressing.

2. Another approach is called "fairness through awareness", which means that similar individuals should get similar predictions. (EX. If two people are alike except for their sexual orientation, say, an algorithm that displays job advertisements should display the same jobs to both.)
>The main issue with this concept is how to define similarity.
In the example, the problem is that the training data may have been distorted by the fact that one in five individuals from gender or sexual minorities report discrimination against them in hiring, promotions, and pay.
>This makes individual fairness hard to use in practice.

Group fairness

1. A predictive algorithm satisfies demographic (or statistical) parity if, on average, it gives the same predictions to different groups. As a consequence, the likelihood of a positive outcome should be the same regardless of whether the person is in the protected group. (EX. Demographic parity refers to the idea that both males and females should have the same loan approval rates, regardless of their actual ability to repay the loan. The 80% rule, as used by the US Equal Employment Opportunity Commission, allows for some imbalance but ensures that no group’s acceptance rate is less than 80% of the rate for the highest-accepted group.)

Critiques:
It might not make sense to use demographic parity in certain settings, such as a fair arrest rate for violent crimes (men are significantly more likely to commit acts of violence).
Demographic parity does nothing to ensure individual fairness; a well-qualified applicant might be denied and a poorly qualified applicant might be approved, as long as the overall percentages are equal.
The ethical dilemma is whether to emphasize fairness (redressing past inequalities) or accuracy (evaluating based on present conditions): if a man and a woman are equal in every way, except that the woman receives a lower salary for the same job, should she be approved because she would be equal if not for historical biases, or should she be denied because the lower salary does in fact make her more likely to default?

2.
The principle of equal opportunity is that of giving the same beneficial predictions to individuals in each group: the probability of a person in a class being assigned a positive outcome should be equal for protected and unprotected group members.

THE HIRING FUNNEL

Hiring is a series of decisions that culminate in a job offer or a rejection.
1. Employers start by sourcing candidates, attracting potential candidates to apply for open positions through advertisements, job postings, and individual outreach.
2. During the screening stage, employers assess candidates – both before and after those candidates apply – by analyzing their experience, skills, and characteristics.
3. Through interviewing applicants, employers continue their assessment in a more direct, individualized fashion.
4. During the selection step, employers make final hiring and compensation determinations.

1. SOURCING AND MATCHING

In the sourcing stage, employers use predictive technologies to find candidates by optimizing job ads, notifying job seekers about relevant positions, and identifying candidates who could be lured from competitors or are open to reentering the job market. These sourcing technologies have a profound impact on shaping the candidate pool before any applications are received. Matching follows, where job opportunities are compared with candidates, often leading to ranked recommendations. Jobseekers receive tailored job suggestions, while recruiters get a prioritized list of candidates. Although these tools promise to connect the right people to the right jobs, they can also inadvertently limit visibility, hiding certain opportunities from candidates and suppressing potential applicants from recruiters. Personalized job boards and matching technologies have become a preferred method for many, often replacing traditional employment and staffing agencies.
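Before looking at concrete tools in the funnel, the group-fairness criteria defined earlier (demographic parity, the EEOC 80% rule, and equal opportunity) can be made concrete in code. The sketch below uses invented loan decisions and group labels purely for illustration:

```python
# Checking group-fairness criteria on hypothetical loan decisions.
# The groups, labels, and data are invented for illustration.

def selection_rate(decisions, group):
    """Fraction of applicants in `group` receiving a positive outcome."""
    rows = [d for d in decisions if d["group"] == group]
    return sum(d["approved"] for d in rows) / len(rows)

def true_positive_rate(decisions, group):
    """Among qualified members of `group`, the fraction approved
    (the quantity equalized by 'equal opportunity')."""
    rows = [d for d in decisions if d["group"] == group and d["qualified"]]
    return sum(d["approved"] for d in rows) / len(rows)

decisions = [
    {"group": "A", "qualified": True,  "approved": True},
    {"group": "A", "qualified": True,  "approved": True},
    {"group": "A", "qualified": False, "approved": False},
    {"group": "A", "qualified": True,  "approved": True},
    {"group": "B", "qualified": True,  "approved": True},
    {"group": "B", "qualified": True,  "approved": False},
    {"group": "B", "qualified": False, "approved": False},
    {"group": "B", "qualified": True,  "approved": True},
]

rate_a = selection_rate(decisions, "A")  # 3/4
rate_b = selection_rate(decisions, "B")  # 2/4

# Demographic parity: equal selection rates across groups.
print("demographic parity:", rate_a == rate_b)

# EEOC 80% rule: the lower rate must be at least 80% of the higher one.
print("80% rule satisfied:", min(rate_a, rate_b) / max(rate_a, rate_b) >= 0.8)

# Equal opportunity: equal approval rates among the qualified.
print("equal opportunity:",
      true_positive_rate(decisions, "A") == true_positive_rate(decisions, "B"))
```

Here group B’s 50% approval rate is only two-thirds of group A’s 75%, so the 80% rule is violated even before asking whether the qualified members of each group were treated alike.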
The case of ZipRecruiter

ZipRecruiter serves as an example of a matching system that uses personalized features for both employers and jobseekers. As a recommender system, similar to Netflix or Amazon, it predicts user preferences to rank and filter job opportunities. To personalize recommendations, ZipRecruiter primarily relies on two methods:
1. Content-based filtering: This approach examines a user’s own behavior, such as clicks or applications, to suggest similar jobs or candidates.
2. Collaborative filtering: This method predicts interests by analyzing the behavior of similar users. If two jobseekers apply for similar jobs, ZipRecruiter strengthens their connection. When one of them receives positive feedback (like a "thumbs up" from an employer), the system nudges the other to apply for the same job.

However, ZipRecruiter’s algorithms also demote or suppress job postings for candidates deemed less likely to be a good fit, limiting the visibility of certain opportunities. Similarly, it prioritizes job openings for users based on their past behavior, elevating some positions while pushing others down based on the candidate’s activity.

Risks

Job matching platforms like ZipRecruiter, and recommender systems in general, introduce significant equity concerns. While these tools aim to enhance efficiency and reduce bias, they can unintentionally reinforce or replicate the biases they are meant to counteract. Content-based filtering can reinforce a user’s own biases. For example, if a woman consistently clicks on lower-level job postings out of self-doubt about her qualifications, the system will gradually limit her exposure to higher-paying, senior-level opportunities, reinforcing her initial behavior rather than presenting her with roles she is qualified for. Collaborative filtering, which draws conclusions from the actions of similar users, risks creating stereotypes.
Even if a woman actively clicks on management positions, the system might show her fewer senior roles if other women with similar profiles tend to apply for junior positions. This stereotyping effect arises from group behavior, not her personal preferences, potentially disadvantaging her compared to male candidates in similar situations.

2. SCREENING

During the screening stage, employers review applications to filter out unqualified or weaker candidates, prioritizing the most suitable ones for further consideration. Predictive technologies play a key role here by assessing, scoring, and ranking applicants based on their qualifications, soft skills, and other factors. These tools enable hiring managers to quickly narrow down the pool of candidates, allowing them to focus more attention on those deemed the strongest. A large portion of applicants are automatically or summarily rejected at this stage, often without human review, as the technology filters out candidates who do not meet predefined criteria.

The case of Pymetrics

Pymetrics is a notable platform that uses neuroscience-based web and mobile games to assess candidates’ cognitive, social, and emotional traits, such as processing speed, memory, and perseverance. For example, one of their games involves clicking when a red dot appears on the screen, ostensibly measuring reaction time but in reality assessing traits like impulsivity, attention span, and the ability to learn from mistakes. Pymetrics creates custom predictive models for employers based on the traits of their top-performing employees. Initially, the company collects data from a large, generalized pool of players to establish baseline "trait profiles." Employers then have their own employees play Pymetrics games, allowing the platform to apply machine learning to identify which traits differentiate top performers from the rest of the workforce.
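The model-building and scoring pipeline just described (derive trait weights from current employees, then score candidates against them) can be sketched in a few lines. Everything here is an illustrative assumption, not Pymetrics’ actual model: the trait names, the difference-of-means weighting, the 0-100 rescaling, and the cutoff are all invented.

```python
# Hypothetical sketch of a screening model in the style described above.
from statistics import mean

# Current employees play the games; each gets a trait profile
# (values normalized to 0..1) and a top-performer label.
employees = [
    ({"reaction": 0.9, "memory": 0.7, "risk_tolerance": 0.4}, True),
    ({"reaction": 0.8, "memory": 0.8, "risk_tolerance": 0.5}, True),
    ({"reaction": 0.4, "memory": 0.5, "risk_tolerance": 0.6}, False),
    ({"reaction": 0.5, "memory": 0.4, "risk_tolerance": 0.7}, False),
]

def trait_weights(employees):
    """Weight each trait by how strongly it separates top performers
    from the rest (difference of group means)."""
    weights = {}
    for t in employees[0][0]:
        top = mean(p[t] for p, is_top in employees if is_top)
        rest = mean(p[t] for p, is_top in employees if not is_top)
        weights[t] = top - rest
    return weights

def fit_score(candidate, weights):
    """Map a candidate's weighted trait sum onto a 0-100 'fit' score."""
    raw = sum(weights[t] * candidate[t] for t in weights)
    best = sum(w for w in weights.values() if w > 0)   # best possible raw sum
    worst = sum(w for w in weights.values() if w < 0)  # worst possible raw sum
    return round(100 * (raw - worst) / (best - worst))

weights = trait_weights(employees)
candidate = {"reaction": 0.85, "memory": 0.75, "risk_tolerance": 0.45}
score = fit_score(candidate, weights)
print(score, "accept" if score >= 60 else "reject")  # 60 is an arbitrary cutoff
```

Even in this toy version, the risk discussed below is visible: the weights encode whatever happens to distinguish the current top performers, whether or not those traits actually cause good performance.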
Once the predictive model is built, job candidates are required to play these games as part of the screening process. Pymetrics calculates a percentage score for each candidate, indicating how well their traits align with the employer’s ideal profile for the job. Candidates who do not meet the predefined threshold are automatically rejected, making the platform a key tool in early-stage applicant filtering.

Risks

Predictive hiring tools like Pymetrics raise concerns about bias and fairness because they differentiate between high and low performers based on subjective evaluations, potentially reinforcing existing social inequalities. The models often reflect current workforce patterns that may not be relevant to actual job performance. Even when these tools identify traits shared by successful employees, they risk unfairly excluding capable candidates who don’t exhibit the same characteristics, as these inferred traits might not have a causal link to success but may instead be products of historical hiring trends. The psychological theories behind these tools are shaped by specific historical and social contexts, often relying on research conducted with college-student samples. This raises doubts about their applicability to broader, diverse populations. Additionally, assigning numerical "fit" scores can create a false sense of significant differences between candidates, leading recruiters to prioritize certain applicants based on overstated or arbitrary distinctions.

3. INTERVIEWING

In the interview stage, employers interact directly with individual applicants. Prominent tools at this stage claim to measure applicants’ performance in video interviews by automatically analyzing verbal responses, tone, and even facial expressions. Employers might use these tools to save interviewers time, relieve scheduling burdens, and standardize what is often seen as an inescapably subjective part of the hiring process.
The case of HireVue

HireVue offers a video interviewing tool that allows employers to collect recorded interview responses from applicants and "grades" them against answers given by successful current employees. Using machine learning, the tool analyzes video signals such as facial expressions, eye contact, vocal enthusiasm, word choice, complexity, and discussion topics. It builds a model that links these signals to workplace performance based on the employer’s metrics. For each candidate, HireVue generates an "insight score" from 0 to 100. High-scoring candidates can be automatically advanced for further review, while those below a threshold may be automatically rejected. HireVue claims to test its models for bias, evaluating them against demographic subgroups (e.g., gender, race, age) to detect adverse impacts. The model is also periodically reviewed for both accuracy and fairness once hiring begins.

Risks

1. Accuracy Issues: Speech recognition can perform poorly for people with regional or nonnative accents, and facial analysis may struggle with darker skin tones, particularly for women.
2. Questionable Validity: Critics argue that using physical features or facial expressions as hiring criteria lacks a credible link to actual job performance.
3. Fairness and Dignity: Even if legally compliant, assessments based on immutable characteristics may violate candidates’ sense of dignity and justice, limiting their opportunity to fairly showcase their skills.
4. Unintended Bias: Interviewees might be unfairly advantaged by exaggerated expressions or penalized for factors like disabilities or speech impediments, leading to biased evaluations based on irrelevant traits.

4. SELECTION

At the selection stage, employers make their final decisions, which may involve background checks and negotiating offer terms.
Here, predictive tools can help employers estimate a candidate’s likelihood of accepting an offer, as well as optimize the offer by adjusting salary, bonuses, and benefits.

The case of Oracle’s Recruiting Cloud

Oracle’s Recruiting Cloud offers employers predictive insights into a candidate’s likelihood of accepting a job offer. Employers can adjust variables such as salary, bonuses, stock options, and other benefits to observe, in real time, how these changes affect the acceptance probability. Additionally, the tool updates its predictions over time by incorporating data from previous offers and outcomes, enhancing its accuracy and relevance.

Risks

Tools like Oracle’s may widen pay gaps for women and minorities, as HR data often include proxies for socioeconomic and racial status, affecting salary predictions. Giving employers detailed insights into a candidate’s salary expectations increases information asymmetry during negotiations. This may also conflict with laws banning the use of salary history to prevent pay disparities, as employers could estimate past salaries, undermining these protections.

In each stage of the hiring funnel, from sourcing to final selection, AI tools offer efficiencies but also introduce complex ethical challenges around fairness, bias, and equity.

AI and the labor market

Resistance to technologies and innovation

World GDP growth remained mostly stagnant for millennia, with significant per capita increases only beginning around 1800 due to the Industrial Revolution. This period marked a shift in economic growth and productivity, challenging two common narratives about human innovation: that nothing substantial happened in human history until recent centuries, and that humans were not inventive before the 18th century. A closer look at history reveals groundbreaking preindustrial innovations and ideas, as well as significant resistance to technologies that threatened workers’ jobs.
This opposition to labor-replacing technologies has deep roots. The Roman Emperor Vespasian, for instance, rejected a labor-saving invention for transporting heavy columns in the first century, choosing to maintain jobs instead. In the 16th century, Elizabeth I denied William Lee’s request to patent his knitting machine, citing concern for the livelihood of knitters. Many European cities in the 17th century similarly banned new machinery like automatic looms to prevent unrest. The British government prohibited the gig mill in 1551 for similar reasons. The resistance continued into the early 19th century with the rise of the Luddites, English textile workers who protested new machinery that threatened their craft. Facing little government support, they destroyed mechanized looms and knitting frames, leading to riots between 1811 and 1816. The government, however, increasingly defended technological progress and employed military force to suppress the Luddite uprisings, arguing that the machines ultimately benefited the national economy. This gradual acceptance of technology was driven partly by the rise of a politically dominant property-owning class in Britain, who saw profits in manufactured goods, and by a shift in how technological gains were distributed, gradually benefiting a broader labor force.

Schumpeter’s creative destruction

Economist Joseph Schumpeter later described this shift as "creative destruction": old, inefficient methods are replaced by innovative ones, fueling economic growth and productivity by continuously driving the market to improve. Innovation, especially under capitalism, gained momentum as market competition incentivized progress.

Workerist Marxism

Italian workerist Marxism offered an alternative view, arguing that class struggle motivated continuous innovation.
Italian philosopher Mario Tronti explained that capitalists, facing pressure from labor demands for better rights and wages, found it economically convenient to replace workers with machines. Without this pressure, as in a slave-based system, there would be little incentive for capitalists to innovate. Today, similar dynamics exist with emerging technologies like artificial intelligence. As in previous eras, AI is likely to disrupt traditional industries but promises to bring significant economic benefits, reflecting a historical pattern of balancing innovation with social and economic impacts.

Automation anxiety

The trend towards automation is increasing, introducing labor-saving technologies with two distinct impacts on jobs:
1. Replacing Technologies: These make jobs and skills redundant, taking over tasks previously performed by workers. Examples include lamplighters replaced by electricity, liftmen replaced by automatic elevators, and ticket choppers replaced by turnstiles, drastically reducing employment in these roles.
2. Enabling Technologies: These enhance productivity in existing jobs or create new job opportunities. Examples include computer-aided design software improving productivity for architects, statistical programs like Stata aiding analysts, and office machines like typewriters creating new clerical roles.

Periods of rapid technological change often lead to automation anxiety. This anxiety has recurred throughout history, such as during the late 1920s and early 1930s, and again in the late 1950s to early 1960s. Today the anxiety persists, as studies and reports from corporate research groups and economists frequently predict the extent to which jobs are at risk from automation, and worry often outweighs optimism about automation’s potential.

➔ A landmark study by Frey and Osborne of Oxford University in 2013 estimated that 47% of jobs in the U.S. were at "high risk" of automation within the next two decades.
This research has been widely cited and has influenced similar predictions globally.
➔ However, not everyone agrees with these alarming numbers. Some experts, such as Coelli and Borland, have criticized Frey and Osborne’s method as too subjective, arguing that it was based on a limited understanding of job roles.
➔ Arntz, Gregory, and Zierahn further challenged the findings by emphasizing that Frey and Osborne’s estimates overlooked the variety of tasks within jobs and how roles adapt during digital transformations. Their analysis, which considered detailed task data, suggested that when accounting for job complexity and adaptability, the share of U.S. jobs at risk of full automation could fall from 38% to just 9%.

These differing views show that while automation will continue to reshape the job market, its impact may be less catastrophic than initially feared, especially if jobs evolve and adapt alongside technological advancements.

The labor market

In discussions about the impact of technology on the labor market, Acemoglu and Restrepo highlight a false dichotomy that divides opinion. On one side, many economists believe that, as in the past, technological advancements will ultimately boost labor demand and wages. On the other side, some predict that the rise of AI and robotics will lead to widespread job loss and the end of human work. Classical economists point out that technological progress has two main effects on employment. The destruction effect occurs when machines replace workers, forcing them to find new jobs or adjust their skills. The capitalization effect arises when more companies invest in highly productive industries, which leads to job growth in those sectors. While it is easy to see which jobs are lost to automation, it is harder to predict the new roles that will emerge or to identify who will benefit. New technologies create jobs in different ways. Directly, they lead to positions in design, maintenance, and the growth of industries around them.
Indirectly, companies using these technologies save on labor costs, allowing them to expand, develop new products, or lower prices. Consumers save money and use those savings to buy more goods and services, which creates new jobs. The futurist perspective argues that each wave of technological innovation is unique. While some dismiss concerns based on historical patterns, technological shifts, like the invention of flight, show that once a new technology is developed, it can redefine what is possible. This time may indeed be different, depending on how advanced future technologies become. Acemoglu and Restrepo’s framework bridges the divide by focusing on the displacement effect—automation replacing human workers. This effect can reduce labor demand and wages, leading to a decoupling of output per worker and wage growth. But there are also counteracting forces, such as the productivity effect, where cheaper machine labor can boost economic output and create demand for non-automated tasks. A classic example is the spread of ATMs. While ATMs automated many banking tasks, banks expanded by opening more branches, increasing the demand for bank tellers to handle specialized roles. This illustrates how automation in one area can create growth in related jobs. The productivity effect can also raise real incomes, leading to higher demand for goods and services and creating jobs in other sectors. For instance, mechanization in agriculture reduced food prices, which allowed consumers to spend more on non-agricultural goods, creating jobs in other industries. However, not all automation has the same impact. ➔ The real risk lies in “so-so” automation—technologies that are productive enough to be adopted and cause job displacement but not productive enough to stimulate strong economic growth or new job creation. Deepening automation—improving the productivity of machines in tasks already automated—can also increase productivity without displacing more workers. 
An example is the shift from horse-powered agricultural tools to diesel tractors, which significantly boosted productivity and wages without widespread job loss. The most important counterbalance to the displacement effect is the creation of new tasks. When new functions and activities emerge in which human labor has an advantage over machines, a reinstatement effect supports labor demand. The creation of these tasks is not automatic; it depends on the actions of firms, workers, and society. New automation technologies can fuel this process, ensuring that the labor market continues to evolve and adapt.

Potential sources of new labor

Several trends and developments could shape the global workforce:
1. Rising incomes and consumption, especially in emerging economies, are set to play a major role. Between 2015 and 2030, global consumption is expected to increase by $23 trillion, mainly from middle-class growth in these regions. This surge will not only create job opportunities locally but also boost economies that export goods and services to these countries. It is estimated that 250 to 280 million jobs could emerge from consumer goods, with an additional 50 to 85 million in health and education.
2. Aging populations are another driver of labor demand. By 2030, there will be 300 million more people aged 65 and older than in 2014. This demographic shift means more spending on healthcare and related services, increasing the need for medical professionals, home-health aides, and personal-care workers. The healthcare sector could see a boost of 50 to 85 million new jobs globally due to aging populations.
3. Investments in infrastructure and buildings could also generate significant labor demand. Bridging infrastructure gaps and tackling housing shortages could create up to 80 million new jobs under normal investment scenarios and up to 200 million if investments are accelerated.
These roles would span architects, engineers, construction workers, and various skilled trades.
4. Renewable energy and climate action present another source of job creation. Investments in renewable technologies, energy efficiency, and climate adaptation could lead to millions of new jobs in sectors like manufacturing, construction, and installation.
5. Marketization of unpaid domestic work refers to the trend of monetizing services that were once unpaid and done at home, such as childcare and housekeeping. As more women join the workforce globally, this trend could create between 50 and 90 million jobs in these service sectors.
6. Development and deployment of technology will also drive job growth. Spending on tech, expected to increase by over 50% between 2015 and 2030, will create jobs related to IT services and technology development. While the number of jobs in tech may be smaller than in other sectors like healthcare, they tend to be higher-paying roles.

However, not all jobs in developing and deploying AI technologies are high-wage. Many underpaid workers contribute to building, maintaining, and testing AI systems. This hidden labor spans supply-chain work, on-demand crowdwork, and traditional service jobs.
➔ Exploitative work exists throughout the AI pipeline, from resource extraction in the mining sector to microtasks in software development, often paying very little. Mary Gray and Sid Suri call this ghost work, while Lilly Irani refers to it as human-fueled automation. This labor involves repetitive tasks like data labeling and content moderation, essential for AI training but typically poorly compensated. A UN study found that many crowdworkers on platforms like Amazon Mechanical Turk and Clickworker earned below their local minimum wage, despite being highly educated. Content moderators, who review harmful content, also face low pay.
For instance, at the start-up x.ai, the AI agent "Amy" was actually supported by contract workers putting in long shifts to sustain the illusion of automation. The story of the Mechanical Turk, a fake automaton from 1770 with a hidden human chess master, reflects today’s reality of microwork. Amazon named its crowdsourcing platform Mechanical Turk; there, humans handle microtasks to fill the gaps in AI systems. This is what Jeff Bezos calls "artificial artificial intelligence," as humans effectively emulate and improve on AI’s limitations. In summary, while there are many potential sources of new jobs, including advances in technology and shifts in consumption and demographics, much of the labor that sustains AI systems today remains hidden and poorly paid.