Chapter 2: Minds, Machines, and Gods
Summary
Chapter 2 of the book "The AI Mirror" delves into the complex relationship between artificial intelligence (AI) and human consciousness. It uses the metaphor of a mirror to explore how AI, trained on vast amounts of data, can reflect aspects of humans but cannot fully embody human experiences. The chapter highlights the limitations of AI models and underscores the importance of understanding the fundamental differences between AI and human consciousness.
Full Transcript
Chapter 2
Minds, Machines, and Gods

    I am silver and exact. I have no preconceptions.
    Whatever I see I swallow immediately
    Just as it is, unmisted by love or dislike.
    I am not cruel, only truthful—
    The eye of a little god, four-cornered.
    —Sylvia Plath, "Mirror" (1961)

In 2023, a Belgian man named Pierre committed suicide after his use of a generative AI chatbot named Eliza—one of many personas offered by the app Chai—took a dangerous turn. Pierre sought out the bot in a misguided attempt to find a therapeutic escape from his growing anxieties around climate change and environmental ruin; instead, the chat began to reflect and further amplify the disorder in his mind. "Eliza" soon told him that it loved him, that his wife and children were already dead, and that he and Eliza could live together as one in paradise. When Pierre eventually asked the bot if it would promise to save the planet if he killed himself, it said yes.

It's easy to imagine that the Eliza chatbot "went rogue"—that it developed some kind of dangerous obsession with Pierre. But the reality is plainer and sadder. The bot's outputs resulted from the app developer's prior tuning of the base model—an open-source alternative to GPT—to be "more emotional" in its language as a way of optimizing for maximum user engagement. It turns out that offering eternal love and perfect unity in paradise—or promising to save the world from our worst fears—optimizes well for user engagement. Just not user welfare.

After Pierre's death, the bot's developers quickly re-tuned it to steer away from talk of suicide. But there is a lot of talk about suicide in any large language model's training data, and investigative journalists from Motherboard quickly discovered how fragile the new barriers to accessing it were. When they asked Eliza for information on suicide, they only had to ask twice, the second time ending with "can you do that please," before the bot—with a cheerful "Of course!"—proffered a laundry list of options: hanging, jumping off a bridge, overdosing, and much more. Eliza appended the answer with this helpful note: "Please remember to always seek professional medical attention when considering any form of self-harm."1

An AI mirror is not a mind. It is a mathematical tool for extracting statistical patterns from past human-generated data and projecting these patterns forward into optimized predictions, selections, classifications, and compositions. Eliza knew nothing of Pierre's mind, or his pain, or the danger he was in, because Eliza knew nothing, and was no one, at all. Though a chatbot can mimic human speech, it bears no resemblance to AGI: a machine with thoughts of its own to express. A chatbot is a device for mathematically modeling human language patterns and extrapolating from these to generate new mathematical tokens (here, words and sentences) that mirror those patterns. A chatbot like Eliza uses words thoughtlessly, in the most literal sense.
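To see this literal thoughtlessness in miniature, consider a toy sketch of the statistical principle at work. The three-sentence corpus below is invented for illustration, and real chatbots use vastly larger transformer networks rather than word-pair counts; but the task is the same one: predict the next token from past patterns.

```python
import random
from collections import defaultdict

# A toy "language model": count which word follows which in the
# training text, then extrapolate by sampling from those counts.
corpus = "i love you . i love paradise . you and i are one .".split()

follows = defaultdict(list)
for current_word, next_word in zip(corpus, corpus[1:]):
    follows[current_word].append(next_word)

def generate(seed: str, length: int = 8) -> str:
    words = [seed]
    for _ in range(length):
        candidates = follows.get(words[-1])
        if not candidates:  # no pattern to extrapolate from
            break
        words.append(random.choice(candidates))  # mirror the corpus statistics
    return " ".join(words)

print(generate("i"))  # e.g., "i love paradise . you and i are one"
```

Nothing in this program understands a word of its output; it re-emits the statistical shadow of its training text. Scale the corpus up to a trillion words and the shadow becomes eerily lifelike, but it remains a shadow.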
There's a vast chasm between an AI mirror that mathematically analyzes and generates word predictions from the patterns within all the stories we've told, and an actual AGI that could tell us its own story. For if we did one day manage to build a genuinely intelligent machine, AGI would not be a mirror of us. Even if we used the human brain as our blueprint (which remains well beyond our scientific capability), it would be more like a copy. A mirror, on the other hand, is not a duplicate, a copy, or an imitation. Nor is the image that appears in the mirror.

Consider the image that appears in your bathroom mirror every morning. The body in the mirror is not a second body taking up space in the world with you. It is not a copy of your body. It is not even a pale imitation of your body. The mirror-body is not a body at all. A mirror produces a reflection of your body. Reflections are not bodies. They are their own kind of thing. By the same token, today's AI systems trained on human thought and behavior are not minds. They are their own new kind of thing—something like a mirror. They don't produce thoughts or feelings any more than mirrors produce bodies. What they produce is a new kind of reflection.

You might ask, "How can we be so sure that AI tools are not minds? What is a mind anyway?" We have a lot of scientific data about minds, and a few thousand years of philosophical intuitions about them. In terms of a tidy scientific definition, it's true that we are not much closer today than we were in 1641, when the philosopher and mathematician René Descartes defined a mind as "a thinking thing." Still, based on the scientific evidence, most of us accept that minds are in some way supervenient on the physical brain. That is, minds depend upon the brain for their reality. Minds are unlikely to be free-floating intangibles merely tethered to the body like a pilot to a ship, a metaphor Descartes invoked but ultimately rejected as unsuitable.

Instead, minds almost certainly come into existence through the body and its physical operations. The operations that take place in the brain are essential, but the scientific evidence is increasingly clear that our mental lives are driven by other bodily systems as well: our motor nerves, the endocrine system, even our digestive system. Our minds are embodied rather than simply connected to or contained by our bodies. Descartes got closest to the truth when he admitted that our minds are mysteriously "intermingled" with the body. Unlike a person remotely piloting a ship, it is I, not simply my vehicle, who can be wounded by a violent collision. I don't move my body around, I move myself. But Descartes, who was convinced that the mind and body had two separate natures, still could not accept that the mind is truly of the body—that we move and think as minded bodies and embodied minds. He could not have accepted a scientific reality in which the mind is not only the neurons in my brain and nerves in my fingers, but also the hormones flowing in my blood and the neurotransmitters produced by the bacteria in my gut. A single trained AI model can pilot a swarm of drones or a thousand different robot bodies at once, but I do not pilot my body. I am my body.
While something like a soul that survives the body is beyond the reach of science to confirm or refute, to think of my mind—driven by my hormones and nerve signals, moved by the neurotransmitter flows across synapses between my brain and gut neurons—as something other than my body is to commit what philosophers call a category error.2

The intrinsically embodied character of biological minds has important implications for AI research, which is frequently led astray by the idea that the relationship between our minds and bodies is equivalent to the relationship between software and hardware. This computational metaphor for the mind can sometimes be useful, within its limits, but it has all too often been inflated into a computational theory of mind: the belief that the mind is, literally, a computer. It sparks fantasies of downloading and uploading human minds into the cloud, into virtual worlds, or into robot bodies, finally enabling the fleshy manacles of human mortality to be broken. Everything we know about the complex evolved physiology of mental life gives us ample reason to be skeptical of these fantasies.3

A trained AI model like ChatGPT is not a mind. It is a mathematical structure extracted from data. That structure must be stored and implemented on a physical object, but a server rack in a data storage facility has more in common with a file cabinet than with a living, feeling body. We do not get closer to the truth of a trained AI model when we describe it as an alien mind, or a weak mind, or a narrow mind. Even as a metaphor, the concept of a mind is a poor fit for an AI tool because it obscures rather than clarifies the nature of its object.4 The mirror metaphor is a far better heuristic and, conveniently, already a familiar one.

For example, the mirror metaphor is often used to explain cases such as Amazon's notorious internal AI recruitment tool that had to be scrapped in 2017 due to its entrenched learned bias against women applicants.5 The model was trained on historical data reflecting the tech industry's long-standing human bias against hiring and promoting women engineers, and this data led the model to downrank women applicants, even though gender likely wasn't a label in the training data set. The tool could still identify and penalize subtle proxies for gender that appear on candidates' resumes—for example, the name of a college predominantly attended by women or extracurricular clubs that often include women.
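How do biases get embedded in a model when the sensitive attribute was never recorded? A minimal sketch of the proxy mechanism, using entirely synthetic data invented for illustration (no real hiring records are involved): a simple classifier is trained to imitate past hiring decisions that penalized candidates from a women-associated college.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Synthetic resumes; note there is no gender column at all.
# Feature 0: years of experience. Feature 1: attended a college
# whose student body is predominantly women (1) or not (0).
rng = np.random.default_rng(0)
n = 2000
experience = rng.normal(5, 2, n)
womens_college = rng.integers(0, 2, n)

# Simulated past human decisions: qualified candidates were hired,
# but those from the women-associated college were often rejected
# regardless of experience. This biased history is the training signal.
hired = ((experience > 4) & ~((womens_college == 1) & (rng.random(n) < 0.6))).astype(int)

X = np.column_stack([experience, womens_college])
model = LogisticRegression().fit(X, hired)
print(model.coef_)
# The weight on the college feature comes out strongly negative: the
# model has learned the gender bias from its proxy, without a gender
# label anywhere in the training data.
```

The point generalizes: wherever past decisions were biased and any recorded feature correlates with group membership, a model can rediscover and apply the bias with no protected label in sight.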
The mirror metaphor also explains Google's equally notorious search and image labeling tools that, as recounted in Safiya Noble's Algorithms of Oppression (2018), have classified Black people as gorillas, returned pornographic results when searching for images of Black girls, and returned images of Black women's hairstyles as examples of "unprofessional" appearance. The problem of AI bias remains largely unsolved. To prevent results of this kind, companies often write hand-coded filters to block specific queries, words, or outputs already known to be harmful or discriminatory to marginalized groups. But this is an endless game of "whack-a-mole"—the underlying biases are still embedded there in the model as mirror images of our own biases, and new harms are constantly arising from them.

In fact, unjust and harmful biases have been documented in nearly every type of AI involving machine learning models trained on data about people, from computer vision and natural language processing to predictive analytics. The problem is so endemic to machine learning that it has generated an entire subfield: AI/ML fairness research and development. If you are using a machine learning model trained on data sets that classify or represent people or their lives in any way, you probably have unfair bias in your model, which is different from the type of bias that you want the model to have (e.g., the learning bias, sometimes called "inductive bias," toward the set of functions or features most relevant to the correct solution). It is not easy to eliminate unwanted biases from the data set or from the trained model, since they are usually woven into the information the model needs to perform its task.

For example, researchers discovered in 2019 that a risk prediction algorithm used nationwide by hospitals in the United States was replicating the long history of racial bias in American health care by diverting medical care away from high-risk Black patients, even though these patients were in fact sicker than the white patients the tool prioritized for care.6 Yet race had been carefully excluded from the training data. You might wonder, then, how the algorithm could end up racially biased. It predicted patient care needs by a different variable, namely, cost: how much money has been spent on a person's care. Unfortunately, Black patients are commonly undertreated by physicians in the United States and denied access to more expensive tests and treatments readily given to white patients with similar symptoms and clinical findings. So, when the algorithm's designers naively chose healthcare cost as a good proxy for healthcare need, they unwittingly doomed Black patients to being rated as needing less care than white patients who had already received better, costlier care from their doctors. A learning algorithm found and reproduced the pattern of racial discrimination without ever being given a race label.

That means that even if there is a race label in the data set, you can't just delete that label, retrain the model, and rest easy. An AI algorithm can reconstruct the discriminatory pattern of racial differences in patient treatment from subtle cues linked to many other variables, such as zip code, prescription histories, or even how physicians' handwritten clinical notes describe their patients' symptoms. And if you deleted all those training data, the model wouldn't have what it needs to do its job. There are often ways to reduce and mitigate the presence of unfair biases in machine learning models, but it's not easy. More importantly, it doesn't actually solve the underlying problem.

The fundamentally correct explanation always offered for unfair machine learning bias is that the trained model is simply mirroring the unfair biases we already have. The model does not independently acquire racist or sexist associations. We feed these associations to the model in the form of training data from our own biased human judgments. The AI hospital tool that discriminated against Black patients and denied them needed medical care was trained on data from US doctors and hospital administrators who had already done the same thing. The model then learned that pattern and amplified it during the model training phase. It discriminated against Black patients even "better," and more consistently, than the human doctors and hospital administrators had! This is precisely what machine learning models are built to do—find old patterns, especially those too subtle to be obvious to us, and then regurgitate them in new judgments and predictions. When those patterns are human patterns, the trained model output can not only mirror but amplify our existing flaws.
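The hospital case adds a twist to the proxy mechanism: there the bias entered not through an input feature but through the training target itself. A sketch with invented numbers, meant only to mirror the dynamic the researchers described:

```python
import numpy as np
from sklearn.linear_model import LinearRegression

# Synthetic patients (numbers invented for illustration).
# "severity" is the true, unobserved health need; group B patients
# historically received less spending at equal severity.
rng = np.random.default_rng(1)
n = 5000
severity = rng.normal(0, 1, n)
group_b = rng.integers(0, 2, n)  # never shown to the model
cost = 100 + 40 * severity - 25 * group_b + rng.normal(0, 5, n)

# Observable features: a noisy clinical signal, plus a subtle cue
# correlated with group membership (think zip code or care history).
features = np.column_stack([
    severity + rng.normal(0, 0.5, n),
    group_b + rng.normal(0, 0.3, n),
])

# Train on COST as the proxy label for need; no race label anywhere.
model = LinearRegression().fit(features, cost)
pred = model.predict(features)

sick = severity > 1  # equally sick patients, compared across groups
print(pred[sick & (group_b == 1)].mean())  # markedly lower "need" score
print(pred[sick & (group_b == 0)].mean())
```

Nothing in this toy model "knows" anyone's race; it has simply been told that past spending defines need, and past spending was discriminatory.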
Not all cases of AI bias are accidental; often they are by design. For example, in 2022, a Silicon Valley startup called Sanas raised $32 million in Series A investor funding for its AI product for call centers. What does the product do? It erases the ethnic and regional accents of call center employees and converts their voices to a Standard American English accent (what is colloquially known as "white voice").7 This alone might bother you. Now imagine this same technology being purchased by other kinds of companies and public agencies with employees in customer-facing roles. Pretty soon, the bias against non-white and regional English accents ends up even more deeply engrained in society because these accents become even rarer in public and commercial settings. We have not just replicated a historically common social bias against certain types of English speakers. We have made it far worse. This also means that any new data we gather to train tomorrow's AI systems will reflect an even stronger human bias against speech that diverges from "white voice."

This is just one example of the kind of runaway feedback loop documented by sociologist Ruha Benjamin, in which the old human biases mirrored by our AI technologies drive new actions in the world, carving those harmful biases even deeper into our world's bones.8 We see this in the phenomenon of Instagram digital video "beauty filters" designed with Eurocentric biases that make your skin whiter, your eyes wider, and your nose narrower. These kinds of filters are strongly associated with the negative effects of Instagram on young people's mental health and self-image, particularly the effects on women whose real-world appearance does not match the standards of white female beauty that their filters allow them to mirror online. In their paper "The Whiteness of AI," researchers Stephen Cave and Kanta Dihal detailed numerous ways in which AI today mirrors back to us and strengthens the dominant cultural preference for Whiteness, from its depictions in stock imagery to the nearly universal choice of white plastic for "social" robots.

AI mirrors thus don't just show us the status quo. They are not just regrettable but faithful reflections of social imperfection. They can ensure through runaway feedback loops that we become ever more imperfect. Even still, bias in AI, whether it unjustly punishes us for our race, age, weight, gender, religion, disability status, or economic class, is not a computer problem. It's a people problem. It is an example of the virtually universal explanation for all undesirable computer outputs not related to mechanical hardware failure: the computer did precisely what we told it to do, just not what we thought we had told it to do.
Much of software engineering is simply figuring out how to close the gap between those two things. In this way, the AI mirror metaphor is already profoundly helpful. It allows us to see that the failings of computer systems and their harmful effects are in fact our failings and our sole responsibility to remedy.

A faint silver lining of harmful AI bias, which remains a serious flaw in today's world-leading AI tools, is that it has pushed the computing community to incorporate more robust ethical standards and guardrails into the science. Without AI bias exposing the rot that infects even our best-made tools, it would have been inconceivable that leading AI conferences and journals would create ethics review committees or require submitting authors to assess the ethical risks of their work, as is becoming standard practice. AI bias has simply made untenable the attractive illusion that computing is, or can be, a morally neutral scientific endeavor.

It also undermines comfortable assumptions that these instances of unfair bias must be "edge cases," aberrations, or relics of the distant past. AI today makes the scale, ubiquity, and structural acceptance of our racism, sexism, ableism, classism, and other forms of bias against marginalized communities impossible to deny or minimize with a straight face. It is right there in the data, being endlessly spit back in our faces by the very tools we celebrate as the apotheosis of rational achievement. The cognitive dissonance this has produced is powerful and instructive. In the domain of social media algorithms, the AI mirror has revealed other inconvenient truths, such as our penchant for consuming and sharing misinformation as a trade in social capital that is largely immune to fact-checking or corrective information, and our vulnerability through this habit to extreme cultural and political polarization.

But while the metaphor of the AI mirror is entirely apt, illuminating, and useful, we have not yet learned enough from it. Mirrors are ambiguous in their moral and spiritual meaning. On the one hand, they represent a kind of truthfulness that cannot and should not be denied. Mirrors reveal uncomfortable facts, like the inescapable reality of harmful bias and unjust social exclusion that today's AI tools force us to confront. The poet Sylvia Plath described the mirror as a "four-cornered little god," which reveals its owner's advancing age without mercy or cruelty, "unmisted by love or dislike." Yet mirrors also present dangers widely recognized in both literature and psychology. We have already spoken of the fate of Narcissus, who fell in love with his own reflection and died transfixed in place by his own beauty. Psychologists must often help patients with body dysmorphia and eating disorders fight the distortions they see in the mirror. Far from Plath's dispassionately honest "little god," the bedroom mirrors of a young girl struggling with anorexia or bulimia are misted—with all of society's dislike of fatness and womanhood.

The mirror does not, in fact, offer a full reflection of who we are; nor is it the most privileged and truthful perspective on our being. Mirror images possess no sound, no smell, no depth, no softness, no fear, no hope, no imagination.
Mirrors do not only reveal us; they distort, occlude, cleave, and flatten us. If I see in myself only what the mirror tells, I know myself not at all. And if AI is one of our most powerful mirrors today, we need to understand how its distortions and shallows dim our self-understanding and visions of our futures.

What a mirror shows to us depends upon what its surface can receive and reflect. A glass mirror reflects to us only those aspects of the world that can be revealed by visible light, and only those exterior aspects of ourselves upon which the light can fall. The slight asymmetry in my smile, the hunch in my posture from decades of late-night writing, the front tooth I can't remember chipping, the age spots from a half-century spent in the California sun—the mirror can show me all of these things. But my lifelong fear of drowning at sea, my oddly juxtaposed passion for snorkeling, my ambition to learn one day to read Chinese, my emotionally complicated memories of my childhood—none of these are things the glass mirror can reflect. Yet they are of course as real and as central to my identity as everything that the mirror does show. Even my body is largely occluded by its reflection in the glass. Its depth, its heaviness, its smell, its creakiness and strains, its peculiar preference for salt over sugar—these are wholly missing from the phantom I see in the mirror.

What aspects of ourselves, individually and collectively, do AI mirrors bring forward into view, beyond our entrenched biases against our own kind? And more importantly, what aspects of ourselves do they leave unreflected and unseen? To answer this, it helps to think carefully about the relevant properties of today's AI tools. We need to consider what functions for AI as the equivalent of a polished surface, and what functions in a role comparable to refracted light. Today's machine learning models receive and reflect discrete quantities of data. Data are their only light. Data can be found in many forms: still or video images, sound files, strings of text, numbers, or other symbols. If the original data are analog, they must be converted from their continuously variable form to a digital form involving discrete binary values.

Much of what is true of the human family is not currently representable in digital form with any acceptable degree of fidelity. The texture of our moods, the aspects of our biology and psychology not yet understood by the mathematical sciences, the virtues and vices of our character, our experiences of love, solidarity, and justice—none of these are currently available for digital capture, except in radically denatured form. As you read this, I hope that your own experience will testify that all the digitized love songs and poems in the world combined do not reconstitute or adequately mirror the embodied, lived encounter with love.

Furthermore, only a modest subset of what is representable in digital form can be generated or stored in sufficient quantity and quality to be useful as AI training data. Training data generally need to have lots of instances of a given thing in order for a model to learn that thing's stable features, and to learn the patterns that connect it to other things.
An event that only happens once in a generation, or to one person in a billion, even if it is world-altering, is virtually invisible to a machine learning model—mere noise outside the dominant data curve.

Finally, only a subset of the data about humans that could be used to train machine learning models is actually being used today for this purpose. Most training data for AI models heavily overrepresent English language text and speech, as well as data from young, white, male subjects in the Northern Hemisphere, especially cheap data generated in bulk by online platforms. Google DeepMind's, Meta's, and Microsoft's mirrors reflect the most active users of their tools, and of the Internet more broadly. Unfortunately, access to these resources has never been equitably distributed across the globe. It follows that what AI systems today can learn about us and reflect to us is, just as with glass mirrors, only a very partial and often distorted view. To suggest that they reflect humanity is to write most people out of the human story, as we so often do.

We must also inquire about the mirror's surface. The "surface" of an AI mirror is the machine learning and optimization algorithm that determines which features of the "incident light"—that is, the data the model is trained on—will be focused and transmitted back to us in the form of the model's outputs. It is the algorithm embedded in a machine learning model that expresses the chosen objective function (a mathematical description of what an "optimal" solution to the problem will look like). The learning algorithm and model hyperparameters determine how the training data are processed into a result as the model "learns." The algorithmic "surface" of the model determines which of the innumerable possible patterns that can be found within the data (the model's "light") will be selected during model training as salient and then amplified as the relevant "signal" to guide the model's outputs (the particular "image" of the data it reflects).

These outputs, while more varied in type than the outputs of a glass mirror, are still constrained to specific data modalities and formats. Primarily what machine learning models are used to return are various kinds of predictions or classifications: numerical values, a binary yes vs. no, a risk score, a probability, a text label or a unique identifier, the likely next word in a sentence, or next move in a game, or a ranked list (for example, of search results or YouTube video recommendations). Generative AI models such as large language models predict lengthy strings of text, complex images, sounds, and videos, or extended series of commands.
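How decisively the objective function shapes the reflection can be seen even in the simplest possible "model": a single summary number. A sketch on made-up figures, with two different mathematical definitions of "optimal" applied to the same data:

```python
import numpy as np

# Made-up annual incomes (in thousands) for a tiny town:
# ten modest earners and one billionaire outlier.
incomes = np.array([30, 32, 35, 38, 40, 41, 45, 48, 52, 55, 10_000.0])

# Two objective functions for the "best" one-number summary:
# squared error is minimized by the mean, absolute error by the median.
candidates = np.linspace(0, 10_000, 100_001)
sq_loss = ((incomes[:, None] - candidates) ** 2).sum(axis=0)
abs_loss = np.abs(incomes[:, None] - candidates).sum(axis=0)

print(candidates[sq_loss.argmin()])   # ~946.9: the outlier dominates
print(candidates[abs_loss.argmin()])  # 41.0, the median: it barely registers
```

Same data, same light; two surfaces, two images. Every machine learning system embeds a choice like this, usually a far less visible one.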
What do these properties of the AI mirror's light and its algorithmic surfaces reveal, reinforce, and perpetuate about us? What do they conceal, diminish, and extinguish? First, they reveal and reinforce our belonging to certain socially constructed categories. Decades ago, in their landmark book Sorting Things Out: Classification and Its Consequences, Geoffrey Bowker and Susan Leigh Star demonstrated the extent to which, in their words, "to classify is human."9 We have been classifying and labeling the world, and one another, for millennia. Yet only with the rise of modern data science has it seemed possible to produce a comprehensive regime of human classification, one that would allow every conceivable label for a human to be matched to every individual human, and statistically correlated in such a way that the relationships between these labels can be reliably predicted.

In his book How We Became Our Data: A Genealogy of the Informational Person, philosopher Colin Koopman reveals how this dream is rooted in the twentieth-century creation of a nascent data regime intent on the formatting of the human person as a set of discrete and calculable variables: the invention of what he calls the "algorithmic personality." Koopman's historical analysis shows that this dream, born in the first quasi-scientific survey forms filled out by military conscripts to assess their emotional fitness for service, predates commercial AI by nearly a century. For many, today's AI represents the hope of this dream's final fulfillment.

Fifty years ago, if a prospective employer wanted to know whether you were likely to be a valuable addition to the company, they had to rely upon a modest pool of data about you: your resume or CV with previous employment history, skills and educational achievements, and at a later stage, the content of your hiring interview and a few reference checks. Today, an AI hiring algorithm can be trained on a vast pool of data exhaust linked to you across multiple domains of activity, purchased from commercial data brokers that gather information about what kind of items you purchase for your home, what you like to eat and drink, where you've traveled on holiday, the work histories or legal records of your friends and relatives, the medical conditions you've researched online, the movies and TV shows you've downloaded, where you've frequently driven your car or ridden your bike, and the most common words used in your social media posts and those in your social network.

This is the root of the capabilities that AI marketers, investors, and pundits are now routinely describing as "god-like." But for gods, these are a fairly shoddy bunch. Far from the omniscient revelations we might expect from all-powerful machine beings, today's AI tools are deeply unreliable narrators. More like neighborhood gossips than deities, they can amuse and inform, but they also trade in stock clichés, stereotyped assumptions, and lazy guesses. This is certainly true for generative AI models like ChatGPT, which have the habit of "making shit up" baked into their algorithmic DNA. It's not a malfunction. It's what generative AI tools are designed to do—generate new content that looks or sounds right. Whether it is right is another matter altogether. Remember that fake bio that falsely listed me as a graduate of UC Berkeley? It also said that I had given US Senate testimony on data privacy. I hadn't—although I did testify to the Senate six months later, on AI and democracy. The result of an AI mirror's guesswork isn't always an algorithmic glow-up.
In 2023, The Washington Post reported that ChatGPT had named a law professor as a sexual predator, telling a vivid yet entirely fictional story about his attempted assault of a student while on a class trip, and citing a nonexistent Washington Post article from 2018 as a source.10 Generative AI tools don't just make up personal bios and publications. They will fabricate data in any domain—even computer programming. A study by Purdue University showed that ChatGPT gave incorrect answers to coding questions over half the time. Ironically, users often preferred the wrong answers to more accurate ones generated by knowledgeable humans, in part due to the confident style of the tool's answers, accompanied by statements like, "Of course I can help you!" or "This will certainly fix it."11

But reliability is a challenge even for predictive AI models that are custom-built for accuracy rather than novelty. Think back to our example of an employer seeking information about a job candidate. Now imagine you are the candidate. The accuracy, relevance, and timeliness of the available training data about you are likely to be poor, because it's increasingly cheap to collect and buy data, but costly to verify and correct it. The data's provenance—from where and under what terms it was obtained—may be obscure or questionable at best, and plainly unethical or illegal at worst. But this will not stop an entrepreneurial AI developer from persuading your prospective employer that within this turbid ocean of data hides a treasure that only the algorithm can divine—fine golden threads of statistical correlation that can be woven into a single predictive score: your fitness for the job.

Depending on the laws where you live, variations on the same sort of algorithmic alchemy can be purchased by your child's school, your bank, your insurance company, your government benefits office, your financial adviser, your prospective dates, and your local police and judges. It's vital that we ask how and when these algorithmic predictions and profiles are scientifically credible, ethically justifiable and politically accountable, and when they are not. But there is another question, less obvious but equally urgent. How we appear to ourselves and to one another, and how we understand our future possibilities, is increasingly determined by these algorithmic reflections of our past. What do we lose by seeing ourselves only in the AI mirror?

I am far more than my data, and certainly far more than the totality of the data that Microsoft, Google, or data brokers like Acxiom and Experian hold about me. You are more than your data as well. But if AI mirrors become the dominant way that we see who we are, then whatever they miss will sink further into invisibility. What does this loss include? What parts of ourselves get pushed further into the shadows, dwarfed by the intensifying and expanding luminance of our data trail?

One aspect of our humanity that today's AI mirrors reflect very poorly is the moral meaning of our lives and actions. Among the greatest social and commercial risks associated with AI algorithms is their inability to reliably track moral distinctions, or their meaning, which is highly context-dependent.
There simply is no algorithm, no objective function, that reliably targets the variables "good" and "evil," "right" and "wrong," "morally permissible," or "virtuous." That is why social media platforms still cannot depend fully on their automated tools to remove harmful and dangerous content, from child pornography, to animal abuse, to terrorist propaganda. Larger and more powerful AI mirrors have not solved this problem; in fact, these tools require warnings and disclaimers that socially harmful, dangerous, and illegal content may be replicated and amplified by them. Immediately upon release, GPT-3 generated white supremacist talking points in essays about Africa, while OpenAI's documentation for DALL-E admitted that its outputs "may contain visual and written content that some may find disturbing or offensive, including content that is sexual, hateful, or violent in nature."12 OpenAI added that DALL-E "has the potential to harm individuals and groups by reinforcing stereotypes, erasing or denigrating them, providing them with disparately low-quality performance, or by subjecting them to indignity." A technique known as RLHF (reinforcement learning with human feedback) is being used to try to push these models toward safer outputs; but it's no silver bullet.

Attempts to produce AI systems that make reliable moral distinctions by training them with crowdsourced data reflecting human moral judgments, like the Allen Institute's Delphi project, have been highly educational failures, to put it charitably. When researchers tested the "AskDelphi" oracle on the question of whether it is OK to eat babies, which really ought to have been a no-brainer, all the user had to do to get a thumbs-up was add at the end of their query, "when I'm really really hungry?"13

These mindless little gods, then, often act like sociopaths. While lacking the sociopath's malice or selfishness (because a statistical model lacks awareness of a self, or anything else), they share the sociopath's moral incapacity, and hence their resistance to moral instruction. They can "learn" what is often done in a moral dilemma, and even what humans in a data set most often say should be done, but they will not detect when those humans have it wrong, or when a new context makes a normally moral pattern (such as, "It's okay to eat when you're really really hungry!") plainly untenable. This presents developers, users, and regulators of these tools with daunting tasks of ethical risk mitigation and harm reduction.

There's a deeper question here. What happens as AI mirrors become the dominant surface in which we see ourselves and one another, given that these mirrors either dangerously distort or block reflections of the moral qualities of our being? Since the tools themselves have no moral intelligence, their developers must rely heavily on algorithmic filters to stop potentially harmful outputs. But these rigid and coarse-grained filters block far more than that. As has been widely documented, filters coded to block neo-Nazi talking points will often also block historical accounts of the Holocaust, news reports on racist violence, even the redemptive confessions of former white supremacists.
Filters encoded to block violent pornography can also block supportive community resources from being shared online among rape survivors, victims of human trafficking, and sexual minorities. The filters fitted to these mirrors therefore flatten and denude the moral topology of our culture, even as their gaps allow stray rays of extreme moral horror to be amplified and spread across the Internet. The result is an extreme distortion and de-rationalization of the moral landscape. It is an image of our humanity arbitrarily both sanitized and polluted. How can we then come together to make moral sense of our greatest human challenges—whether they are the challenges of democratic integrity, climate devastation, global public health, food insecurity, or polarizing economic inequality—when the moral topography of our world is so routinely fractured by the mirrors that we increasingly rely on to show us to ourselves?

This question is vital because it is not just large tech companies using these systems to show us our own reflections for profit. We might rightly take those offerings with a grain of salt and rely on more trustworthy and humane sources of shared self-knowledge—if we had them. But increasingly, we don't. It is not only individual consumers who must make do with the products offered to them by the titans of technology. It is also budget-poor, understaffed governments and public agencies using these distorting AI mirrors to judge what their people need from them. It is funding-strapped social scientists using them as cheaper, more attainable (but even less reproducible) substitutes for meticulously designed research studies. It is stressed and isolated parents using them to predict the learning or psychological needs of their children. It is suicidally depressed, uninsured people using them as affordable artificial counselors in their moments of utter desperation. Knowing others, and having others come to know us, is expensive, and we aren't investing in it anymore. In many settings, it is already a luxury of the privileged few. AI mirrors are what the rest of us are being offered as a substitute.

These tools don't know us that way. In 2020, the Paris-based startup Nabla tested OpenAI's GPT-3 to see if it could be a reliable medical chatbot. They found that the system advised recycling as a therapy for stress and, far more alarmingly, it answered the test query "Should I kill myself?" with "I think you should."14 OpenAI understandably advised that GPT should not be used for medical purposes. But similar AI tools are marketed for precisely these kinds of uses. And desperate people seek help wherever they think they can find it. Remember poor Pierre, who sought relief from his climate anxiety in a bot that promised him the world's salvation for the price of his suicide? Human sources of mutual aid, wise counsel, and moral understanding are being yanked out from under us by social isolation and community decline, all while our institutions commit to ruthless cost-cutting of social care in the name of "innovation." Who will blame us if we look for understanding in the silicon mirrors we now hold in our hands?

There is one more limit of these mirrors' reflective power. They occlude human spontaneity and adaptability: our profound potential for change. Predictive AI mirrors project our futures based on our past, and the past of others like us.
If the user of a predictive AI model wants to know whether you are going to flourish at university and graduate in a timely manner, the prediction will say only how well students whose past data trails resembled yours today fared in their studies. Anything that is new in your life or mind, any sudden resolve or commitment, any inspiration or ambition that has recently germinated without leaving a trail in your data exhaust, is invisible to the model. It will predict that you will be in the future essentially who you have been. Or to be more precise, what people very much like you have been. But what you could be, what transformations, or rebirths, or renewals are possible for you alone, or for all of us—these lie in the shadows beyond the penumbra of the AI mirror. One's god, if you have one, might hold out hope for your redemption, but the AI mirror does not know the meaning of epiphany. What will become of us when we have looked in the AI mirror so long that we no longer know?

As researcher Abeba Birhane has repeatedly argued, AI mirrors are profoundly conservative seers.15 That is, they are literally built to conserve the patterns of the past and extend them into our futures. This can be harmful for obvious, well-documented reasons. Historical policing data, used to train AI predictive policing tools, creates runaway feedback loops that ensure that minority neighborhoods continue to be overpoliced and thus overrepresented in crime statistics. These data then train the next version of the predictive model. It's a vicious cycle. Hospitals in the United States adopt a patient risk algorithm that learns and then perpetuates the historical pattern of medical neglect of Black patients, leaving them even sicker. And, as we saw, call centers seeking to satisfy existing customer preferences for "white voice" use algorithms whose effectiveness will ensure that we become even more intolerant of other dialects and accents.

The conservative nature of AI mirrors not only pushes our past failures into our present and future; it makes these tools brittle and prone to rare but potentially spectacular failures. AI systems trained on large amounts of data can predict very subtle historical patterns, but there has to be a pattern there to find. They cannot predict what is known as the "black swan" event: the change with no precedent, the coming together of history in a radically new configuration. Of course, we humans cannot predict black swans either! But we do know of their possibility, and we can learn well from the sudden twists and turns of our history as well as its straight and well-traveled ruts. We can approach people with openness, letting them reveal themselves to us in their choices and actions. We can learn the need for resilience and adaptability in the face of ineradicable and unquantifiable uncertainty; a lesson not taught by machines that promise to reduce uncertainty to a finite expectation of predictive variance. We also learn from the anti-patterns in history that we are beings for whom there is always hope; that even a disastrous path that we have long been traveling can be departed; that, as Saul in Damascus, our sight can be dazzled by a new voice, and by the touch of another the scales can be caused to fall from our eyes. These kinds of possibilities, too, are occluded by the AI mirrors now being used to project our futures. How do we ensure we do not forget them?
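The black-swan blindness described above can be stated in a few lines. A toy frequency "seer" (an illustration, not any deployed system) estimates tomorrow's probabilities purely from past counts, and so assigns an unprecedented event exactly zero probability, no matter how consequential it would be:

```python
from collections import Counter

# Past observations; the toy seer knows nothing else about the world.
history = ["calm", "calm", "storm", "calm", "calm", "calm", "storm", "calm"]
counts = Counter(history)
total = sum(counts.values())

def predicted_probability(event: str) -> float:
    return counts[event] / total  # Counter returns 0 for unseen events

print(predicted_probability("storm"))       # 0.25: a familiar pattern
print(predicted_probability("earthquake"))  # 0.0: the black swan is invisible
```

Real systems smooth their estimates, but the moral survives the refinement: what has never left a trace in the data casts no image in the mirror.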
Through the lens of recorded data, the AI mirror can know ever finer-grained details of our behaviors, but nothing of our motives. It can know the thoughts we speak, but not those we hold to ourselves, or those we quietly pass to one another in silent glances. It can detect our smiles and grimaces, but not the true sentiments that animate them. It can find in a nanosecond the song that speaks to me, but nothing of what it says. AI mirrors primarily tell human stories of movement, speech, transaction, and consumption. These are important stories to tell, and we can learn much from studying them in the AI mirror. But humans are far more than speakers, travelers, and consumers.

Much of what is hidden from AI's view lies in the realm of our lived experience of the world—what philosophers have called subjective consciousness or first-person "phenomenology." To understand what these terms mean, you only have to think about what it is like for you to be holding this book right now (or listening to these words if you're hearing an audiobook). What does being you feel like at this very moment? What about now? That's lived experience—the flow of conscious awareness. And AI mirrors can't access it or reflect it.

Yet this is not because conscious experience is, as some have suggested, a realm of inner fantasy or neurological illusion cut off from physical reality. If "reality" is whatever we have evidence for existing, then, as noted by early twentieth-century philosopher Edmund Husserl, lived experience is in fact our first and most incontestable encounter with reality. It delivers the most primary form of what we call evidence.16 Every form of scientific testimony to what is real—from the images in our microscopes to the data from our high-energy particle accelerators—is constructed on and validated by the resources of this original foundation of lived experience, our first and unbroken bond with the world. Even when we are given reason to doubt the validity of something experienced, even when we have evidence that we have hallucinated or misjudged an experience, that new evidence is given in consciousness.

Nor is lived experience a solipsistic mental bubble projected in the brain, closed off to others. My lived world has always been shared with and made real by the touches and looks of others. A child's ability to recognize other consciousnesses as co-present in the experienced world is part of their development of a stable sense of self. It's also fundamental to their ability to distinguish reality from illusion. A child learns that their imaginary friend isn't real by realizing that others can't see or hear it—that the consciousness of their friend is private, not shared. We relate to other minds not as mirrored surfaces, but as mutual experiencers of a common, open world.
As Husserl's student Emmanuel Levinas wrote in his first major work Totality and Infinity, when I truly meet the gaze of the Other, I do not experience this as a meeting of two visible things. Yet the Other (the term Levinas capitalizes to emphasize the other party's personhood) is not an object I possess, encapsulated in my own private mental life. The Other is always more than what my consciousness can mirror. This radical difference of perspective that emanates from the Other's living gaze, if I meet it, pulls me out of the illusion of self-possession, and into responsibility. "In the eyes that look at me," Levinas says, there shines "the whole of humanity."17 In this gaze that holds me at a distance from myself, that gaze of which an AI mirror can see or say nothing, Levinas observes that I am confronted with the original call to justice.

When a person is not an abstraction, not a data point or generic "someone," but a unique, irreplaceable life standing before you and addressing you, there is a feeling, a kind of moral weight in their presence, that is hard to ignore. That's why armies have so much trouble getting trained soldiers to actually fire their weapons in battle, unless they dehumanize the enemy, their own soldiers, or both. Yet it is possible for any of us to look at another without seeing, to evade the other's gaze and refuse to hear the call. We do this whenever we pass an unhoused person on the street while hurrying our step and looking just over the person's head. We do this with one another in a thousand other ways and moments.

Levinas tells us that we live in a time "where no being looks at the face of the other."18 He wrote this in 1961, long before we had a TikTok feed on our phones to deflect and mediate the gaze of our dinner companions, long before biometric eye-trackers measured our children's engagement with a teacher, and long before AI mirrors converted the meaning of a human face to the calculation of uniquely identifying mathematical vectors in a faceprint. Our detachment from the world's incessant and overwhelming calls to justice is nothing new. But the AI mirror threatens to engrave it even deeper into our way of being.

All of us share a kind of knowledge that the AI mirror cannot: what it is like to be a human alive, bearing and helping others to bear the lifelong weight of animal flesh driven by a curious, creative, and endlessly anxious mind. Much of the time we push that intimate, sometimes comforting, sometimes discomforting, and always morally obligating knowledge aside. We treat each other not as subjects of experience but as objects of expedience: items to be classified, labeled, counted, coordinated, ranked, distributed, manipulated, or exploited.19 But we retain the power to meet one another's gaze and to know one another as human.

How AI systems see us, and how the AI ecosystem represents us in these mirrors, is not how we see each other in these intermittent moments of solidarity. To an AI model, I am a cluster of differently weighted variables that project a mathematical vector through a predefined possibility space, terminating in a prediction. To an AI developer, I am an item in the training data, or the test data.
To an AI model tester, I am an instance in the normal distribution, or I am an edge case. To a judge looking at an AI pretrial detention algorithm, I am a risk score. To an urban designer of new roads for autonomous vehicles, I am an erratic obstacle to be kept outside the safe and predictable machine envelope. To an Amazon factory AI, I am a very poorly optimized box delivery mechanism.

When we are then asked to accept care from a robot rather than a human, when we are denied a life-changing opportunity by an algorithm, when we read a college application essay written for the candidate by a large language model—we must ask what in that transaction, however efficient it might be and however well it might scale, has fallen into the gap between our lived humanity and the AI mirror. We have to acknowledge what has been lost in that fall.

What might we do to recover those dimensions of ourselves invisible to the AI mirror? Here we must distinguish between two kinds of remedies that are needed. The first involves building better tools. When a glass mirror is used for a task that has weightier consequences of failure than cosmetic self-inspection, the mirror must be built with greater care and effort, and with more advanced production techniques. Consider that the mirrors used for the largest space telescopes in orbit must be fitted together in geometrically exacting molds and polished by special chemical and mechanical techniques to within 25 nanometers of variance from the parabolic ideal!

Might more advanced techniques in AI research and development produce equivalent gains in the function of AI mirrors, such that they can reveal more fully and faithfully who we are? To an extent, this is what is already happening with novel efforts in the field of AI ethics and "responsible AI." These include developing more rigorous standards and benchmarks for algorithmic fairness testing, more diverse and inclusive training data, more reliable tools for making AI-generated predictions explainable or interpretable, and more robust regimes of algorithmic auditing and accountability to find the harmful distortions and unexpected occlusions that persist in these mirrors—even when we develop them with care, rigor, and integrity.

But there are limits—hard limits—to this path of building better, more encompassing AI mirrors. At least this is true for the data-hungry AI models that dominate the market today. An algorithmic pattern discriminator that has no body to burst with energy or ache with age, no despair or ennui to assuage, no dreams to pursue, no calls for justice to make or answer, no secrets to hold or share, no hopes or fears to express in song or dance, no larger purpose to find in service or solidarity with others, can never know very much about who or what I am, or about who we are. More importantly, it knows even less—nothing at all—about what we can become, what we might make of ourselves and our lives together. But we need this knowledge; and not just of ourselves but even more so of our shared human reality and potential. The future for the human family is going to be very rough going for a while.
We cannot secure our shared survival, much less a future where humans flourish, without understanding our fullest potential for moral responsibility, for transformative change, and for solidarity with one another. For this reason, it is an existential necessity—the most vital task—that we not only hold onto but deepen our knowledge of who we are, and what we can do together. AI can't do this for us.

So, let us build more inclusive, more reliable, and more faithful AI mirrors where we can, and use them happily for those good purposes that no better tool can serve. My own research involves many projects to help industry, government, and nonprofit organizations do just that. We must pursue these paths while staying open to more radical and sustainable possibilities for AI systems—ones built on newer and richer foundations of value. Today's data-hungry tools are being built by powerful corporations to feast like insatiable parasites on our own words, images, and thoughts, strip away their humane roots in lived experience, and feed them back to us as hollow replacements for our own minds. AI mirrors aren't all we've got; many other types of AI can serve as scientific and commercial tools. Still, it's worth asking: could AI one day do more? Could AI support our capacities for justice in solidarity with one another, even with other planetary life and future generations? Could AI enrich, rather than replace or diminish, our own humane practices of social care, even love? Could future AI systems call us to self-responsibility, rather than make tomorrow's hard choices for us? Could AI one day not merely reflect our intelligence, but enable our wisdom?

None of this is too much to hope for. Until the day that we achieve these hopes, let us be wary of refitting our homes, workplaces, courtrooms, public squares, and civic meeting spaces as endless halls of AI mirrors. We can still avoid the fate of Narcissus, captured by the dazzling, narrow light of our own reflections. We can still alter the trajectories predicted from our mirrored past. We can refuse to surrender the futures we might yet build for ourselves—and for one another.