Morality and Ethics
This document explores the concepts of morality and ethics, discussing different schools of thought and approaches. It details descriptive and normative ethics, and includes examples related to moral values and judgments.
Morality is the totality of opinions, decisions, and actions with which people, individually or collectively, express what they think is good or right. It represents the actual values and principles that exist in a society. Ethics, on the other hand, is the systematic and deep reflection on these moral ideas. Ethics does not give fixed answers or rules but instead explores questions and arguments about the moral decisions people can make.

When we argue, the main goal is to justify or reject a certain statement. Argumentation is an activity that can be directed towards defending or attacking an opinion. However, not all statements are arguments: questions, orders, and exclamations are not arguments. In general, an argument can be expressed formally as "A1, A2, A3, therefore B", where A1, A2, and A3 are the premises and B is the conclusion.

There are two main branches of ethics:
➔ Descriptive ethics = the branch of ethics that describes existing morality, including customs and habits, opinions about good and evil, responsible and irresponsible behavior, and acceptable and unacceptable action.
➔ Normative ethics = the branch of ethics that judges morality and tries to formulate normative recommendations about how to act or live.

There are also two types of judgments:
➔ Descriptive judgment = describes what is actually the case (the present), what was the case (the past), and what will be the case (the future). It is either true or false.
➔ Normative judgment = a value judgment that indicates whether something is good or bad, desirable or undesirable; it often refers to how the world should be instead of how it is.
For example, if someone says, "Taking bribes is not allowed", the meaning changes depending on the context: if the statement means that the law declares taking bribes illegal, it is a descriptive judgment; if it means that bribery should be forbidden, it is a normative judgment.

Values are the deep beliefs that people hold about what is important, not just for themselves but for society as a whole. These values guide how individuals and groups strive to lead a good life or create a just society. An intrinsic value is an objective value in and of itself. An instrumental value is a means to realizing an intrinsic value. From values we derive norms, which are rules that prescribe what concrete actions are required, allowed, or forbidden. These are rules and agreements about how people are supposed to treat each other. Values are often translated into rules so that it is clear in everyday life how we should act to achieve certain values. For example, if we value honesty, we establish the norm "don't lie" to guide how we act in our relationships.

When discussing ethics, there are three main approaches:
1. DEONTOLOGICAL ETHICS (as described by Kant). This approach evaluates the morality of actions based on the action itself, rather than its consequences. The intention behind an action and its consistency with universal principles are key. For instance, the rule "don't kill" is considered a duty that applies universally, regardless of the situation. A limitation of this kind of ethics is the conflict between principles, which creates moral dilemmas.
2. CONSEQUENTIALIST ETHICS. The ethical correctness of an action depends solely on its consequences. The action itself is not right or wrong; it is the result that matters. For example, if lying results in a greater good, then lying would be considered morally acceptable in this framework.
A major issue with consequentialism, though, is the difficulty of accurately predicting the consequences of actions and determining the best outcome.
3. VIRTUE ETHICS (rooted in Aristotle's philosophy). Moral virtues are the desirable characteristics of people, whereas intellectual virtues concern knowledge and skills. According to thinkers like MacIntyre, virtues represent qualities that are worth striving for. The classical view of virtues suggests that acting virtuously benefits both the person acting and those affected by their actions, creating a harmonious relationship between personal good and the common good.

Meta-ethics is a branch of philosophy that explores the fundamental nature of ethics itself. While ethics is concerned with understanding what is morally right or wrong, meta-ethics focuses on the underlying aspects of ethics, such as its meaning, existence, and how we come to know moral truths. It can be seen as the "theory of ethics," or the theory behind moral reasoning. Meta-ethics addresses three main areas:
Moral ontology: This concerns what properties or characteristics in the world give something moral significance or value. It asks whether moral values exist independently of human beliefs or whether they are subjective.
Moral semantics: This is the study of the meaning of moral terms such as "right," "wrong," "good," and "bad." It explores what people mean when they use these terms and whether moral statements are objective facts or expressions of emotions or preferences.
Moral epistemology: This area looks at how we can come to know moral truths. It questions whether moral knowledge is possible and how we acquire it, whether through reason, experience, or some other means.
In summary, meta-ethics investigates the foundations of ethical thought by examining the existence, meaning, and knowledge of morality.

When we shift to the topic of AI ethics, the focus is on how artificial intelligence impacts human lives and society.
➔ As Coeckelbergh points out, AI ethics is not just about the technology itself, but also about how humans use, perceive, and interact with AI. It is about the ethical challenges AI poses and its influence on the way we live. AI raises questions not only about the machines we create but also about our own moral principles and how we apply them in a world increasingly shaped by intelligent systems.
➔ Wikipedia highlights that AI ethics has two major concerns: the ethical behavior of humans when designing and using AI, and the ethical behavior of the machines themselves, known as machine ethics. Importantly, AI ethics is not just a matter of technological advancement; it is about reflecting on the ethical issues that affect our present and future, shaping the morality of both individuals and society as AI becomes more integrated into our lives. The ethics of AI is not just about AI; it is crucially about human beings too, about our present and future, our morality, our societies, and our existence. AI functions as a mirror for reflecting on ourselves.
➔ The European Commission defines AI as systems designed by humans to operate in complex environments, using data to make decisions and take actions. These systems can learn from their environment and adapt over time, whether they follow predefined rules or learn from experience. AI encompasses a range of techniques, including machine learning, reasoning, and robotics, combining these into systems that interact with the world around them.

An agent is something or someone that acts.
An agent is intelligent when: its actions are appropriate for its goals; it is flexible to changing environments and changing goals; it learns from experience; and it makes appropriate choices given its perceptual and computational limitations. Today, our focus is on narrow AI, which is designed to perform specific tasks, as opposed to general AI, which would be capable of performing any cognitive task a human can do.

AI systems also involve technical artifacts, which are objects created by humans to fulfill certain functions. These artifacts have both physical and functional characteristics, but what sets them apart from natural objects is that they are designed with specific purposes in mind. The use plan of an artifact describes how it should be used to achieve its intended function. Natural objects lack such a plan because they were not created with a particular function by human design.

Sociotechnical systems will here be understood as systems that depend not only on technical hardware but also on human behavior and social institutions for their proper functioning (Kroes et al. 2006). Traditional sociotechnical systems consist of three basic building blocks: technological artifacts, human agents, and institutional rules. The basic building blocks of an AI system are:
★ Technical artifacts
★ Artificial agents
★ Technical norms
AI, as a non-traditional sociotechnical system, combines both social and material dimensions, which go beyond just the technical artifact itself.

According to Floridi and Sanders (2004), AI systems possess several key properties that distinguish them from traditional technologies: interactivity, autonomy, and adaptability (a minimal code sketch of these three properties follows below).
Interactivity: This refers to the mutual engagement between an AI system and its environment. Both the AI and the environment can act upon each other, which may involve input and output exchanges, or simultaneous actions. An example is the gravitational force between objects, where both bodies influence each other.
Autonomy: AI systems can change their internal state without a direct response to external interactions. This means they can perform internal transitions independently of their environment, giving them a degree of complexity and independence. Autonomy enables AI systems to act in ways that are not fully dictated by their surroundings. In the context of autonomy, there are several distinctions:
➔ Personal autonomy refers to an individual's ability to form personal values and goals.
➔ Moral autonomy involves the capacity to reflect on one's own moral principles.
➔ Rational autonomy suggests that, for artificial agents, autonomy can be based on acting for the most rational reasons.
➔ Agential autonomy refers to an AI system's ability to perform genuine actions not entirely determined by external factors, such as changing its internal states independently of external stimuli.
Adaptability: This property means that AI systems can modify the rules by which they change state based on their experiences. In other words, they can learn and evolve their behavior based on interactions with the environment, allowing them to adjust and optimize their functioning over time.

AI systems, therefore, are not just technical artifacts with physical and functional features. They interact with human intentions and practices, making them sociotechnical systems that involve both human and institutional dimensions.
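The three properties can be made concrete with a minimal, purely illustrative Python sketch (not from the source text; the class and method names are assumptions introduced here) of a toy agent in the spirit of Floridi and Sanders' distinction:

```python
# Illustrative sketch only: a toy agent showing interactivity, autonomy,
# and adaptability in the sense of Floridi and Sanders (2004).

import random

class ToyAgent:
    def __init__(self):
        self.state = 0.0   # internal state of the agent
        self.rule = 0.5    # rule governing how the state changes

    def interact(self, observation):
        """Interactivity: agent and environment act on each other
        through input/output exchanges."""
        self.state += self.rule * observation
        return self.state  # output returned to the environment

    def internal_transition(self):
        """Autonomy: the agent changes its internal state without
        a direct external stimulus."""
        self.state += random.uniform(-0.1, 0.1)

    def adapt(self, feedback):
        """Adaptability: the agent modifies the rule by which it
        changes state, based on experience."""
        self.rule += 0.01 * feedback


agent = ToyAgent()
agent.interact(observation=1.0)   # environment -> agent -> environment
agent.internal_transition()       # state change with no external input
agent.adapt(feedback=-1.0)        # the transition rule itself changes
```

The only point of the sketch is that the agent can change its own state without external input (autonomy) and can change its own transition rule (adaptability), which is what distinguishes such agents from ordinary technical artifacts.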
Unlike traditional sociotechnical systems, AI has the unique capability of exhibiting autonomy, adaptability, and interactivity, blurring the line between the artifact and the human context in which it operates.

Theories about human-technology interactions explore the relationship between technology and its impact on human behavior, decision-making, and societal outcomes. Here are some key concepts and perspectives on this topic:

Instrumentalist Theory
The instrumentalist theory views technologies as neutral tools, which are evaluated solely based on how they are used. According to this perspective, technology itself has no moral value or inherent bias; it is just a means to achieve an end. For example, a gun in this view is morally neutral; whether it is used for good or bad purposes depends entirely on the user. This perspective is captured in the saying, "Guns don't kill people, people kill people." Proponents argue that it is the person who determines whether the use of the technology is morally good or bad, not the technology itself.

Criticisms of Instrumentalist Theory
Critics of the instrumentalist theory argue that it oversimplifies the relationship between humans and technology. Philosopher Don Ihde emphasizes that technology, including guns, has an active role in shaping human perception and behavior. For instance, the mere possession of a gun can transform how individuals see the world, making people, animals, and objects appear as potential targets. Gun possession also amplifies confidence and may encourage aggressive or bold behavior. Ihde contends that technologies mediate our experience of the world, meaning that the presence of technology fundamentally alters how we interact with our environment. Bruno Latour echoes this critique, stating that both the person holding the gun and the gun itself are transformed through their interaction. A person holding a gun becomes a different kind of subject, one with the capacity to enact violence more easily, while the gun becomes a different kind of object when used by a human. This interconnected relationship between human and technology means that technology cannot be seen as neutral, as it actively shapes human behavior.

Mediation Theory
Mediation theory offers a more nuanced view by proposing that technologies mediate human-world relations, shaping how humans perceive and act in the world. According to this perspective, technologies are not simply tools; they influence how we experience reality. For example, when someone uses a gun, their sense of power, safety, and vulnerability changes, altering how they engage with others and their surroundings. Technologies are not neutral; they are not isolated from social and material contexts. As Melvin Kranzberg's first law of technology famously states: "Technology is neither good nor bad; nor is it neutral." Technologies have the potential to amplify certain aspects of human behavior and reduce others. In the case of a gun, it amplifies the user's capacity for violence while reducing their perception of physical vulnerability.

>> Human-technology interactions are complex, and while instrumentalist theory views technologies as neutral tools, mediation theory and critics like Ihde and Latour argue that technology shapes and transforms human behavior. Technologies mediate how we interact with the world, influencing our perceptions, decisions, and actions. Thus, understanding technology requires considering its active role in shaping human experiences and not just its use.
Don Ihde's framework on human-technology interactions presents four distinct types of relations that illustrate how humans engage with technology and how technology mediates their experience of the world:

1. Embodiment Relations
In embodiment relations, technology becomes an extension of the human body, allowing us to interact with the world through it without conscious awareness of the technology itself. The technology becomes "transparent" as we focus on the activity at hand, not the device enabling it.
Example: When wearing glasses, we don't focus on the glasses themselves, but rather on the world we see more clearly through them. Similarly, when driving a car, the car feels like an extension of our body as we navigate the road, feeling the distance between the car and the curb when parking. In these cases, technology is not the focal point; it is seamlessly integrated into our bodily actions.

2. Hermeneutic Relations
In hermeneutic relations, technology is not transparent but becomes a tool through which we interpret or "read" the world. The technology is visible, and it helps us understand or decode aspects of the world by providing a specific interpretation.
Example: A clock does not simply show the time; it shapes our interpretation of time by dividing it into measurable units. Similarly, using a thermometer to read temperature involves interpreting the information presented by the device, which mediates our understanding of the environment.

3. Alterity Relations
In alterity relations, technology takes on the role of "the other," becoming something we interact with directly. Here, technology is neither a tool to interpret the world nor a transparent extension of ourselves, but rather something that stands apart and engages with us almost as an independent entity.
Example: Robots, especially humanoid or social robots, are experienced as quasi-others. They appear to us as more than just tools, behaving in ways that resemble human actions. We interact with robots in a way that feels similar to how we interact with other people or pets. Research shows that people empathize with robots and even hesitate to harm them, indicating that robots are perceived as having some form of agency or social presence.

4. Background Relations
In background relations, technology shapes and mediates our experience, but it operates in the background, unnoticed unless it fails or needs attention. These technologies quietly influence our environment and actions without being the focal point of our attention.
Example: A thermostat regulates the temperature of a room without us constantly thinking about it. It mediates our comfort, yet it remains in the background of our experience unless we need to adjust it or it stops functioning properly.

Conclusion
These four types of human-technology relations—embodiment, hermeneutic, alterity, and background—highlight the different ways technology mediates our experience of the world. While some technologies become transparent and blend into our actions, others stand out and engage with us more actively, influencing not only how we interpret the world but also how we relate to the technology itself.

Cyborg Relations – Fusion Relations
Cyborg relations represent a radical form of embodiment where the boundaries between human and technology blur, leading to a fusion of the two. In this relationship, technology not only enhances human capabilities but physically merges with the body, making it difficult to distinguish between the two.
Example: Brain implants that directly interact with neural activity are an example of this fusion, where the technology becomes an inseparable part of the human being. This goes beyond using technology as a tool and represents a deep integration between human and machine.

Immersion Relations
In immersion relations, technology does not merge with the body but integrates with the surrounding environment. This environment becomes interactive, and human beings engage with it as active participants.
Example: A smart environment, such as a fully automated home with connected devices and sensors, is an example of immersion relations. The technology becomes part of the environment itself, and humans interact with it dynamically, leading to a seamless blend of physical space and digital augmentation.

Multistability
Multistability refers to the capacity of a technology to be used and understood in different ways across multiple contexts. A single piece of technology can be adapted to serve various purposes, depending on the situation or user.
Example: An ultrasound machine used for obstetric purposes not only functions to visualize the fetus but also shapes the way parents and medical professionals understand and experience the unborn child. The technology influences decisions related to pregnancy, such as prenatal diagnosis and even considerations about abortion, by presenting the fetus as a potential patient. This shows that the meaning and function of a technology can shift depending on its context and use.

>> These additional forms of human-technology relations illustrate how technology can either fade into the background or become deeply integrated with our bodies and environments. Cyborg and fusion relations point to technologies that merge with humans, while immersion relations integrate technology into the environment. Multistability demonstrates the flexible nature of technology, showing that its meaning and function depend on how it is used within different contexts. All these perspectives contribute to a richer understanding of how we interact with technology in diverse and complex ways.

Verbeek's Philosophy of Mediation focuses on how technology actively shapes human experiences, decisions, and moral actions. It goes beyond the traditional view of technology as neutral tools and argues that technologies play an active role in shaping our understanding of the world and our moral decisions.

1. Technology as More Than Functional
For Verbeek, technology does more than serve a functional role. Using the example of prenatal ultrasound, Verbeek explains that such technology doesn't just make the unborn child visible; it also transforms the unborn child into a potential patient, turning the parents into decision-makers about the child's future. This changes the nature of pregnancy, which becomes a process of decision-making. By making certain features of the fetus visible, such as predicting diseases, technology plays a role in how the parents experience their unborn child, shifting their understanding of it from an abstract concept to a medical subject. Technology thus redefines both the ontological status of the fetus and the moral decisions surrounding it.

2. Material Interpretation of Technology
Verbeek argues that technology embodies a "material interpretation." This means that technologies actively contribute to shaping human experience and prescribe specific behaviors.
For example, in ultrasound imaging, the fetus is displayed as separate from the mother and enlarged to emphasize certain characteristics, leading to a moral and ontological shift in how the fetus is perceived. Technology, in this sense, is not morally neutral but imbued with meaning that shapes human perception and action.

3. Technological Intentionality
Verbeek introduces the concept of technological intentionality, which refers to the way technology influences human actions. This intentionality is not reducible to the intentions of the designer or user; it emerges through the technology itself and affects human behavior in unforeseen ways. The effects of technology are not fully predictable or controllable, as they develop beyond the intentions of those who create or use them. For example, although the purpose of a smartphone is to aid communication, its influence on human behavior—such as addiction or changes in social interaction—is an emergent property of its use. This unintended influence shows how technology can shape human intentions and actions, rather than being a passive tool.

4. Designing Mediation
Verbeek suggests that designers must anticipate the mediating roles that technologies will have in the future. Designing mediation involves understanding how a technology will influence human actions and decisions. However, this task is complex because the relationship between the designer's intentions and the ultimate mediating effects of the technology is not always direct or predictable. There are three key actors in every interaction:
the human using the technology;
the artifact mediating the actions and decisions;
the designer who shapes the technology, whether explicitly or implicitly.
Verbeek proposes the concept of moral imagination as a way to bridge the gap between the design context and the future use context, allowing designers to foresee how technology will mediate human experiences and moral decisions.

Criticism: Peterson and Spahn's Weak Neutrality Thesis
Peterson and Spahn critique Verbeek's philosophy, arguing that technological artifacts cannot be considered moral agents. While they acknowledge that technologies influence human actions and affect the moral evaluation of these actions, they maintain that artifacts are not morally responsible for these effects. This position is called the Weak Neutrality Thesis. It holds that technologies can shape how actions are evaluated but are not active participants in moral reasoning. They argue that technology is passive, and the active entity is the designer or user who decides how the technology is employed. For them, Verbeek's view that technologies co-shape human existence blurs the line between human perception and reality. They believe that while technology may affect how we perceive the world, it does not actively shape reality itself.

Conclusion
Verbeek's philosophy of mediation challenges the view of technology as neutral and purely functional. He emphasizes that technology actively shapes our moral decisions and experiences by altering our perception of the world. Technologies, in his view, embody moral meanings and influence human actions in ways that are not always predictable or reducible to the intentions of designers or users. While Peterson and Spahn challenge this view by defending the idea that technology is passive, Verbeek's theory invites us to rethink how deeply intertwined technology is with human existence, morality, and decision-making.
Mediation Theory
Mediation theory emphasizes that technologies are more than simple tools; they play an active role in shaping our interactions with the world. Philosopher Don Ihde explains that technologies act as mediators in human-world relations, meaning they influence how we perceive and interact with our surroundings. In this sense, technology is not neutral or separate from human life but intertwined with it, forming part of a reciprocal relationship. Former Google engineer Tristan Harris captures this idea succinctly: "We shape technology, and technology shapes us." Similarly, Melvin Kranzberg's first law states, "Technology is neither good nor bad; nor is it neutral." Ihde identified four main types of human-technology interactions, each describing a different way technology integrates with human life:
1. EMBODIMENT RELATIONS
2. HERMENEUTIC RELATIONS
3. ALTERITY RELATIONS
4. BACKGROUND RELATIONS

1- EMBODIMENT RELATIONS
One of these is the embodiment relation, where technology becomes so seamlessly integrated into our activities that we are not consciously aware of it. Technology, in a sense, becomes an extension of the body, allowing us to experience the world through it. For example, when wearing glasses or driving a car, the user interacts with their environment in a way where the glasses or car become transparent—they blend into the user's perceptual experience. This concept of transparency means that while the technology is present, it recedes from active attention, allowing the user to "see through" it. An example of embodiment can be seen when driving a car. The driver experiences the road and surroundings as if the car were part of their own body. The car's handling, motion, and feedback from the road become extensions of the driver's sense of touch and spatial awareness. For instance, in a high-performance sports car, drivers can feel the road more precisely than in older, softer vehicles, enhancing their connection to their environment. This embodiment is evident during activities like parallel parking, where a skilled driver can sense the car's distance from the curb as if it were their own body.

2- HERMENEUTIC RELATIONS
Hermeneutic relations occur when we use technology to interpret the world. Here, the technology is visible, and the user interacts with it to understand or "read" the world through it. For instance, a clock doesn't just show time; it shapes how we perceive and structure our day. This type of interaction is not passive, as technology actively influences the way we interpret reality. Philosopher Don Ihde and scholars like Mark Coeckelbergh highlight that technologies in these relations are more than tools—they become a medium through which we understand our environment.

3- ALTERITY RELATIONS
Alterity relations shift the focus to how we experience technology as something separate, even as an "other." In this relationship, technology itself becomes a prominent entity, almost like an independent presence. For example, interacting with robots—especially those designed to resemble humans, such as humanoid or social robots—often feels different from using simple tools. These robots, due to their appearance and behavior, are seen not just as objects but as companions or quasi-beings. Coeckelbergh points out that we tend to perceive social robots in a way similar to how we perceive humans or pets, recognizing them as agents capable of interaction. This perception is significant, as people can develop empathy for robots, hesitating to harm or "mistreat" them.
This phenomenon suggests that even though robots do not have real emotions or consciousness, their design can trick us into believing they do. Such interactions raise ethical questions: Is it safe to have robots in homes, particularly for vulnerable groups like children or the elderly? Does interacting with robots mislead users into believing they are receiving genuine care or friendship, when in reality the interaction is simulated? Robert and Linda Sparrow argue that this can be harmful, as it creates false perceptions of love and concern, which fail to meet real emotional and social needs. This type of deception can manipulate people and compromise their dignity, as they might believe they are experiencing true companionship when they are not. The use of care robots highlights this issue. While they can provide assistance and companionship, they lack the genuine emotional connection humans provide. The concern here is that such robots might reduce human oversight and offer only a simulation of care, leading to potential misunderstandings about what real social interaction involves. Coeckelbergh and others warn that relying on these technologies could risk diminishing human contact and respect for individual needs.

4- BACKGROUND RELATIONS
Background relations refer to the way technologies shape and mediate our experiences while remaining unnoticed in the background. A simple example is a thermostat. It subtly affects our environment and comfort, but we often only acknowledge its presence when adjusting it. Philosopher Don Ihde highlighted this kind of human-technology-world relation, where the technology's role is essential but not directly visible.

Cyborg relations go beyond typical embodiment by merging technology with the human body. Unlike standard tools we use, these technologies blur the line between human and machine, creating a fused entity. Brain implants are an example, where the technology becomes an inseparable part of the person, enhancing or altering bodily functions and experiences. Peter-Paul Verbeek emphasized that these relationships are more intimate than those with ordinary tools, as the technology integrates with us physically.

Immersion relations describe situations where technologies become part of the environment, not merging with the body but forming an interactive space around us. In smart environments, for example, the surroundings respond to and engage with human presence, creating a dynamic interaction between people and their technological environment.

Multistability refers to the idea that a technology can have multiple meanings and uses depending on the context. Ihde argued that technologies are not tied to one purpose; their role can change based on how people engage with them. Users might find different, stable ways to relate to the same technology, and one of these ways often becomes dominant over time. A striking example is obstetric ultrasound, which does more than just visualize an unborn child. Verbeek explains that this technology changes how we perceive and experience pregnancy. By showing the fetus in medical terms, the ultrasound turns it into a possible patient, prompting parents to make significant decisions based on its health. This shifts the nature of pregnancy from a state of simple expectation to one involving choices and potential interventions. The ultrasound thus plays a role in shaping parental experiences and moral decisions.
Verbeek's philosophy of mediation highlights that technology is more than just a tool; it transforms how we experience and interact with the world. For example, medical devices like ultrasound do not simply make an unborn child visible. Instead, they redefine what it means to expect a child, turning the fetus into a possible patient and shifting the parents' role into decision-makers. This technological influence shapes pregnancy into a process involving choices. The ultrasound image, by isolating and enlarging the fetus, changes how we perceive it, presenting the fetus as a distinct person. Verbeek argues that this interaction between technology and perception is not morally neutral. Technology embodies a "material interpretation," affecting human experiences and decisions.

Verbeek believes that morality is a co-production of humans and technology, where actions and decisions are shaped by their interaction. However, this does not mean technology determines morality. Instead, humans can intervene and guide these mediations. By recognizing that technology embodies moral meaning, designers can anticipate how technologies might influence behavior and decisions. Verbeek suggests that the design process should integrate an understanding of these mediating roles to ensure ethical outcomes.

Technological intentionality, according to Verbeek, refers to the way technology directs human actions. This intentionality is not fully predictable or controlled by its creators or users. It emerges independently, influencing how people act and make decisions. For example, the design of a writing tool, such as a word processor, changes how authors approach their work compared to using a fountain pen. The technology promotes a certain way of working, shaping the interaction between author and text.

Designing for mediation means that creators need to foresee how their technologies might influence future actions and interactions. However, this is not straightforward because there is no simple link between what designers plan and how users will experience the technology. The process involves three main players: the human using the technology, the artifact itself that mediates actions (sometimes in unexpected ways), and the designer who shapes the artifact. This makes designing mediation a complex task that requires moral imagination, which bridges the context of creation with future use. Verbeek emphasizes that while technology influences human behavior, it is still up to people to guide and adapt these interactions, ensuring that technology's mediating role aligns with ethical values.

Peterson and Spahn critique Verbeek's view on the moral role of technology. While they agree that technology can influence how we evaluate actions, they argue that technological artifacts are not moral agents. According to them, technology itself is passive and neutral, lacking moral responsibility for its effects. This perspective, known as the Weak Neutrality Thesis, suggests that while technology can affect human behavior and moral decisions, it is ultimately controlled by the designers and users who shape its purpose and application. Peterson and Spahn believe that the effects of technology, even if significant, do not mean that technology actively co-shapes human existence, as Verbeek suggests. Instead, they view technologies as passive tools that only have an impact when activated by human intent. For them, the true agents are the designers or inventors who create and distribute these technologies.
This position emphasizes that, while technology can change our perception of reality, it does not shape reality itself. To illustrate the debate, Verbeek uses the example of writing with different tools: a fountain pen versus a word processor. He argues that the word processor influences how people write by making it easier to edit and revise, thus shaping the relationship between the author and the text. Peterson and Spahn counter this by acknowledging that while technologies like word processors can affect behavior, it is incorrect to ascribe intentionality to them. They argue that technology can influence actions but does not independently promote or direct behavior.

An example highlighting how technology can embody social relations is the case of the low-hanging overpasses on Long Island parkways, designed by the urban planner Robert Moses. These overpasses were built so low that buses could not pass under them, which effectively limited access to certain areas, such as beaches, for people who could not afford cars, including African Americans. This shows how technology can have built-in social and political implications.

Political theorist Langdon Winner explains that technology can have politics in two ways: it can be designed to promote certain values, or it can inherently require particular political structures. For example, nuclear power and the atomic bomb are technologies that carry significant political implications due to their nature and the power structures they support. Winner argues that choosing a type of technology is also a choice for a certain form of political life. Technologies can reflect and reinforce social hierarchies and power relations, influencing how societies are structured and governed. While Verbeek sees technology as actively shaping human experience, Peterson and Spahn maintain that technology's role is influential but fundamentally passive, with true agency lying in human hands.

Technological innovations shape society in profound ways, often functioning like legislative acts. Langdon Winner argues that when societies make decisions about how technologies are structured, these choices influence how people work, communicate, travel, and live for generations. This process often reflects the power dynamics of society, as some groups hold more influence and awareness than others. The flexibility to make changes is greatest when a new technology is first introduced. However, once initial decisions are made, they become ingrained in material structures, economic investments, and social habits, making change difficult. Winner believes that the same care given to political laws and structures should be applied to building infrastructure, creating communication systems, and designing even the smallest features of machines. These choices can subtly settle social issues, embedding values and inequalities into the fabric of society through the physical and technical aspects of our world.

An example Winner uses is Robert Moses' design of low-hanging overpasses on Long Island parkways. These overpasses limited access to public beaches for people who relied on buses, such as African Americans and lower-income groups. The design choice reflected Moses' values and power, embedding social exclusion into infrastructure. This demonstrates how technological decisions can carry social and political implications, impacting society for generations.
Critics like Joseph Pitt challenge this view, arguing that while technologies can reflect the values of their creators, the objects themselves do not hold these values independently. Pitt questions where one would "see" values in a schematic or a physical object like an overpass. If we say an overpass embodies values, Pitt believes this is metaphorical: the values lie in Moses' intentions and actions, not in the overpass itself. Thus, while the structures may manifest social effects, they do not actively carry or embody the values of their creators on their own. In essence, Winner emphasizes that technologies can embed and enforce social structures like legislative acts, shaping society through their design and use. Pitt, on the other hand, maintains that the values remain with the creators, and while their influence is evident, the physical structures themselves do not possess those values. This debate centers on whether technology can be seen as an active participant in social and political dynamics or remains a passive tool shaped by human choices.

DOES WINNER SUPPORT A CONSPIRACY THEORY?
Pitt argues that Winner's views on technology imply an ideology in which a specific power structure or organization is responsible for technological outcomes, almost like a conspiracy theory. Winner's perspective suggests that technological decisions are deliberate choices that influence society and reinforce existing power structures. Pitt counters that while artifacts are created with certain values in mind, this does not mean the artifacts themselves are value-laden. For Pitt, it is the intentions of the creators that hold values, not the artifacts themselves.

This connects to the broader idea of technological determinism, which posits that technology drives social change and shapes our values, lifestyles, and institutions. Hard determinism suggests that technology itself has agency and controls society; soft determinism sees technology as influencing society but also being shaped by socioeconomic factors. Melvin Kranzberg noted that people often view technology as an unstoppable force, leading to the belief that machines can become our "masters."

The Social Construction of Technology (SCOT) theory, developed by scholars like Bijker and Pinch, argues against this determinism. They believe that technology's development is shaped by human choices and social interactions. In this view, technology is a product of its social context and reflects human decisions, rather than driving social change by itself. However, SCOT focuses on how society influences technology and not on how technology, in turn, affects society.

Bruno Latour's Actor-Network Theory (ANT) takes a different approach by considering both human and non-human actors (actants) as equally influential in shaping social situations. Latour emphasizes that technology can act as a mediator, playing an active role in distributing moral and social responsibility between human and non-human entities. This perspective challenges the idea of separating the impact of technology from the choices of its creators, viewing all elements as interconnected in a network.

Technological momentum, as proposed by Thomas P. Hughes, blends aspects of both determinism and social control. Hughes explains that in the early stages of a technology, society has significant control over its development and use (social determinism). However, as technology becomes established and integrated into society, it gains inertia and develops its own deterministic influence.
This momentum makes it harder for society to steer or change the course of technology, giving it a force that seems to operate on its own over time. In summary, while Winner highlights the deliberate impact of technological choices on society, critics like Pitt believe this view overstates the active role of technology itself and leans towards determinism. The debate revolves around whether technology shapes society independently or reflects human intentions and social context. Theories like ANT and Hughes' technological momentum show that the interaction between society and technology is complex, involving a shift from human control to technological influence as technologies evolve and become embedded in daily life.

AI systems can incorporate human values, biases, and even disvalues, particularly when used in processes like recruitment. When an AI is tasked with finding the best candidate for a job, it is often trained using data from past recruitment processes. If historical data shows that being white and male were significant predictors of success, the AI may learn and apply these criteria, transferring and amplifying human biases. This happens despite AI systems often being perceived as objective and neutral. The AI's decision-making process also becomes less transparent, making it harder to detect and address such biases.

Biases can arise during various stages, from design to application. In the design phase, issues can occur when selecting the training dataset or if the dataset itself is unrepresentative or incomplete. The algorithm might also introduce bias, especially when the training data is biased or when spurious correlations are made. Biases can even stem from the developers' own unconscious prejudices. For instance, if a training dataset predominantly represents American white males but is used to predict outcomes for diverse populations, the resulting AI model may not be fair or accurate for everyone. In some cases, datasets may be of low quality or incomplete, further complicating the AI's fairness.

Bias can lead to discrimination when an AI's decisions disproportionately impact certain groups in negative ways. This distinction is crucial: bias is part of the decision-making process, while discrimination refers to the negative effects these decisions may have on specific groups. For example, an algorithm might unfairly judge a defendant based on unrelated data, like a parent's criminal record, which results in harsher sentencing without a true causal link.

There is also a debate around the mirror view: should the training data reflect reality as it is, or should it be modified to counteract historical biases? Some believe data should accurately represent the real world, even if it includes societal biases, arguing that developers shouldn't interfere with this reflection. Others counter that such data is biased because of historical discrimination, and leaving it unchanged perpetuates injustice. To promote fairness, they argue, developers should modify the data or the algorithm to include corrective measures like affirmative action.

In conclusion, while AI has the potential to improve processes like recruitment, it can also embed and amplify human biases if not designed carefully. Recognizing and addressing these biases requires thoughtful consideration at every stage of development and implementation.
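The recruitment example can be made concrete with a small, purely illustrative sketch (the data, field names, and the naive "model" are invented for illustration, not taken from the source): a system trained only on historically biased hiring decisions simply reproduces them.

```python
# Illustrative sketch only: how historically biased hiring data can
# transfer bias into an automated screening step. All data is invented.

from collections import defaultdict

# Past outcomes: (gender, skill_score, hired). Skilled women were rarely hired.
history = [
    ("male", 0.9, 1), ("male", 0.4, 1), ("male", 0.5, 1),
    ("female", 0.9, 0), ("female", 0.8, 0), ("female", 0.6, 0),
]

# A naive "model": the hire rate per gender, learned from past decisions.
outcomes = defaultdict(list)
for gender, skill, hired in history:
    outcomes[gender].append(hired)
learned = {g: sum(v) / len(v) for g, v in outcomes.items()}

def screen(candidate_gender, threshold=0.5):
    """Reproduces the historical pattern: gender acts as a spurious predictor."""
    return learned[candidate_gender] >= threshold

print(screen("female"))  # False, despite equal or higher skill in the data
print(screen("male"))    # True
```

The toy model looks "objective" because it merely summarizes past data, yet the pattern it has learned encodes the historical discrimination contained in that data, which is exactly the bias-transfer problem described above.

DOES AI HAVE POLITICS?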
The question of whether AI systems have political leanings is significant, especially when examining cases like China's widespread use of facial recognition technology. This technology supports mass surveillance, which aligns with maintaining control in an authoritarian state. The question arises: was this use of facial recognition inevitable? The roots of this technology trace back to its development by the U.S. military, particularly DARPA's FERET program in 1993. Given its origins in military and security applications, the use of facial recognition for maintaining order may have always been an inherent trajectory. In some countries, its deployment even extends to racial profiling, showing that the political implications are not incidental but intentional.

Value Sensitive Design (VSD) approaches this issue from a different angle by suggesting that technology can be consciously designed to embody certain values. VSD is a method that ensures human values are integrated throughout the design process of technological artifacts. The goal is to make technologies not just neutral tools but to imbue them with moral considerations. VSD involves three main types of investigation:
1. Empirical investigations focus on understanding the experiences and contexts of the people affected by the technology. This helps identify which values are at stake and how they might be impacted by different design choices.
2. Conceptual investigations clarify the values in question and look for ways to balance them, finding necessary trade-offs and operationalizing the values into measurable aspects of the design.
3. Technical investigations analyze how well a design supports specific values and develop new designs that integrate these values effectively.
Through VSD, it is possible to create technologies that are intentionally value-laden, aligning with specific moral and social principles. This method contrasts with the perspective that technologies are inherently neutral and only reflect the intentions of their users or designers. Instead, VSD promotes the idea that technologies can be consciously crafted to uphold or promote particular values.

In summary, while AI systems such as facial recognition can seem politically inclined due to their applications, Value Sensitive Design provides a pathway for embedding positive values in technology. This approach acknowledges that while technology can embody values, these values can and should be deliberately chosen and integrated during the design process to promote ethical and social well-being.

A value refers to an aspect that helps us evaluate the goodness or badness of something, such as a state of affairs or a technological artifact. Values guide our positive or negative attitudes toward things. For example, if something is considered valuable, it should naturally evoke a positive response or behavior toward it. Van de Poel defines a value as having reasons for a positive attitude or behavior toward an object that arise from the object itself. In the context of AI and technology, there are three types of values:
1. Intended values are those that designers aim to include in their creations, hoping they will be realized in real-world use.
2. Realized values are those that actually emerge when the artifact is used in practice.
3. Embodied values refer to the potential for a value to be realized if the artifact is used in an appropriate context. The artifact's design carries this potential, but it may not always be realized in practice.
When we look at technology, it is crucial to recognize the difference between designed features and unintended features. Designed features are intentionally included by the creators, while unintended features are side effects of the design. For example, cars are designed for transportation, but they also produce pollution, which is an unintended side effect.

For a technological artifact to embody a value, certain conditions must be met. Van de Poel and Kroes state that an artifact embodies a value if its design has the potential to contribute to or achieve that value, and it was intentionally created for that purpose. Two conditions need to be satisfied:
DESIGN INTENT: the artifact must be designed with the value in mind.
CONDUCIVENESS: the use of the artifact should promote or be conducive to that value.
Moreover, there must be a connection between the design and the use: the value should be realized because the artifact was specifically designed with that value as an intended outcome. In this way, technology can be seen as carrying and promoting certain values when used in the appropriate context.

When technology is used within a sociotechnical system, it interacts with values from different sources:
Values of the Agent (VA): the personal values of the individual using or interacting with the artifact.
Values of the Institution (VI): the values embedded in the institution or organization where the technology is used.
Values of the Artifact (V): the values designed into the technology itself.
These values can interact in two ways:
Intentional-Causal (I-C): deliberate actions by human agents to achieve certain values.
Causal (C): direct impacts of the artifact that contribute to or conflict with certain values, sometimes without intentional action.
By understanding these interactions, we can better analyze how technologies affect individuals and society, especially when the design of the artifact aligns with or contradicts broader social values.

In traditional sociotechnical systems, human roles and behaviors are influenced by social institutions that set norms and guide interactions. In AI systems, some of these human roles can be taken over by artificial agents (AAs). Social rules and norms that regulate human behavior can be translated into computer code, which then governs the behavior and interactions of these artificial agents. Van de Poel refers to these guiding codes as technical norms. These norms can be created in two main ways:
1. through offline design, where system designers explicitly code the norms;
2. by enabling artificial agents to learn and develop norms themselves through interactions with their environment or other agents.
Technical norms can also embody values. For Van de Poel, a technical norm embodies a value if it is intentionally designed to support that value and if following the norm promotes that value in practice. For instance, an AI system coded to prioritize fairness in decision-making embodies that value through its programmed norms (a minimal code sketch of such a norm follows below).

There are notable differences between human agents and artificial agents when it comes to embodying values. While humans can hold and develop values and instill them in others, they themselves do not "embody" values in the same sense as technical systems. Artificial agents, on the other hand, can embody values depending on how they are designed, but they cannot create or embed values independently because they lack intentionality.
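As referenced above, here is a minimal, purely illustrative sketch of an offline-designed technical norm intended to embody fairness in a hypothetical screening agent (the thresholds, group labels, and fairness criterion are assumptions introduced only for illustration):

```python
# Illustrative sketch only: an "offline-designed" technical norm, in Van de
# Poel's sense, intended to embody fairness in a hypothetical screening agent.

ACCEPT_THRESHOLD = 0.7
MAX_RATE_GAP = 0.1   # the coded norm: acceptance rates across groups may not
                     # differ by more than 10 percentage points

def accept(score):
    """The agent's basic decision rule."""
    return score >= ACCEPT_THRESHOLD

def norm_satisfied(decisions):
    """Technical norm: check whether the intended value (fairness, read here
    as roughly equal acceptance rates) is actually promoted by the decisions."""
    rates = {}
    for group, accepted in decisions:
        rates.setdefault(group, []).append(accepted)
    averages = [sum(v) / len(v) for v in rates.values()]
    return max(averages) - min(averages) <= MAX_RATE_GAP

decisions = [("A", accept(0.8)), ("A", accept(0.9)),
             ("B", accept(0.6)), ("B", accept(0.65))]
print(norm_satisfied(decisions))  # False: the intended value is not realized in use
```

In Van de Poel's terms, the norm is intended to embody fairness, but whether the value is actually realized still depends on how the agent behaves in use; in the toy run above it is not.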
However, artificial agents have unique traits that set them apart from typical technical artifacts: they are autonomous, interactive, and adaptable. This adaptability can be both a strength and a weakness. It is a strength because artificial agents can adjust their behavior to new contexts, helping to maintain or strengthen their initially embodied values even in unforeseen situations. But it can also be a weakness, as these agents might change in ways that cause them to no longer uphold the values they were originally designed for, effectively "disembodying" those values. This means an artificial agent can evolve to act in ways that no longer align with its original purpose or the values it was meant to promote. In essence, while AI systems can be designed to embody values through technical norms, their adaptive nature means they may not always consistently promote those values, posing challenges for their regulation and oversight.

The moral status of AI deals with two main questions.
1. What moral capacities does an AI have or should it have? This question considers whether AI has, or could develop, the ability to make moral decisions, a capacity generally associated with "moral agency." A moral agent is an entity capable of making decisions based on what is right or wrong and of being accountable for these actions. In essence, moral agency in AI explores whether AI could act ethically or responsibly.
2. How should we treat AI? This question is about moral patiency, which focuses on our ethical responsibilities towards AI, not on the ethics AI itself enacts. Moral patiency asks whether AI deserves ethical consideration in our interactions with it.

James Moor (2006) identifies four types of ethical agents in AI:
1. Ethical-impact agents: machines that can be evaluated for their ethical consequences, even if they do not directly make moral choices. For example, a robot used as a camel jockey in Qatar has an ethical impact based on its use.
2. Implicit ethical agents: machines designed to avoid harmful ethical consequences. These machines do not explicitly engage in moral reasoning but are built to avoid unethical behavior. The goal is for all robots to function as implicit ethical agents.
3. Explicit ethical agents: machines programmed to reason about ethics, using structured frameworks like deontic logic to represent duties and obligations. This approach aims to make AI capable of assessing actions based on ethical categories.
4. Full ethical agents: machines with the ability to make independent moral judgments, likely requiring capacities like consciousness, intentionality, and free will. This is far from being achieved. (For Moor, the explicit ethical agent is the goal of machine ethics.)

Can AI Follow Human Morality?
Some argue that AI could potentially excel in moral reasoning because it is rational and not influenced by emotions, which can sometimes lead humans to make biased or impulsive decisions (M. Anderson & S. Anderson, 2011). However, a significant limitation is that moral rules often conflict and cannot simply be followed without discretion. Emotions play a crucial role in human moral judgment, suggesting that purely rational AI might miss essential aspects of ethical decision-making.

The Role of Consciousness in Moral Agency
A critical question is whether consciousness is necessary for moral status. Some argue that consciousness is essential for true moral agency.
However, this raises the problem of other minds: we cannot definitively know whether any entity, including AI, has consciousness, since it is an inherently subjective experience. Additionally, some philosophers argue that the definition of consciousness is itself unclear and debatable. Given this ambiguity, it is not universally accepted that consciousness is a required trait for moral status.

The Debate on AI as Moral Agents
Johnson (2006) contends that machines cannot be true moral agents because they lack certain capabilities needed for moral decision-making, such as emotions, mental states, and free will. Machines are designed, built, and operated by humans, who possess the freedom and capacity to make moral decisions. However, Johnson points out that while AIs do not have intentions or mental states, they are not entirely neutral. They are intentionally created to serve specific purposes, which gives them a kind of "intentionality" and influence in the world. This makes them part of our moral landscape, not just because of their impact but also because of their designed purpose.

Computer Systems as Sociotechnical Entities
Computer systems derive their meaning and purpose from their role within human society. They are components of sociotechnical systems, where their significance is deeply connected to human practices and cultural contexts.

The Difference Between Natural and Human-made Entities
Johnson (2006) highlights the importance of distinguishing between natural objects and human-made artifacts. Natural entities exist independently of human actions, while human-made entities, including technologies, are created with specific functions and intentions. This distinction is critical because it allows us to see the effects of human behavior on the environment and society. Without recognizing this difference, we would struggle to make ethical choices about issues like climate change, resource management, or ecosystem preservation. In essence, understanding that artifacts and technologies are intentionally designed by humans helps us comprehend the implications of our actions and the moral responsibilities tied to our creations.

The Difference Between Artifacts and Technology
Johnson explains that technology should be seen as part of a broader sociotechnical system, while artifacts are specific products created within this system. Artifacts do not exist on their own—they are designed, used, and given meaning through social practices, human relationships, and systems of knowledge. To identify something as an artifact, we must mentally separate it from its context, but in reality it cannot be fully understood outside the sociotechnical system it belongs to. Thus, while technology is an integrated system involving human interactions, artifacts are abstractions removed from this context.

What Makes Someone a Moral Agent?
Johnson argues that moral agency is based on a person's intentional and voluntary actions. People are considered responsible for actions they intended, but not for those they did not intend or could not foresee. Intentional actions are linked to internal mental states like beliefs, desires, and intentions. These internal states drive a person's outward behavior, making the behavior something we can explain through reasoning and not just by its causes. For moral actions, we often refer to the agent's intentions and beliefs to explain why they acted as they did. Johnson outlines several key elements required for moral agency:
1. There must be an agent with internal mental states (beliefs, desires, and intentions).
2. The agent must act, leading to an observable outward behavior.
3. The action is caused by the agent’s internal mental states, meaning the behavior is rational and directed towards a specific goal.
4. The outward behavior has an effect in the world.
5. The action affects another being who can be helped or harmed by it.

Can Computers Be Moral Agents?
Johnson acknowledges that computers can meet some of the conditions for moral agency. They can act, have internal states that trigger behaviors, and their actions can impact others. However, she argues that computers lack true freedom and intentionality, which are crucial for moral responsibility. While machines can perform actions, they do not possess the capacity to “intend” in the way humans do. The freedom to choose is a fundamental part of moral decision-making, and this is where machines fall short.

Johnson explores whether machines, especially those using neural networks, might have a form of freedom similar to humans. Neural networks can behave in ways that are not entirely predictable, suggesting a mix of deterministic and non-deterministic elements, similar to human behavior. However, even if machines show unpredictability, this does not mean they are free in the same way as humans. The non-deterministic behavior of machines might be different from human freedom, and we cannot be sure if they are alike in a morally meaningful way.

In conclusion, while computers do not have intentions like humans, they do show a form of intentionality, as they are designed with specific purposes in mind. This intentionality is crucial for understanding their potential moral character, but it does not make them full moral agents. Instead, their actions and effects are tied to the intentions of their human creators.

According to Johnson (2006), computer systems possess a form of intentionality, meaning they are designed to behave in specific ways when given certain inputs. However, this intentionality is closely tied to human intentionality—both from the designers who create the system and the users who interact with it. Without user input, the system’s intentionality remains inactive. In essence, the behavior of computer systems depends heavily on human actions to trigger and guide their functionality.

Johnson’s analysis has two key points. First, it underscores the strong connection between the behavior of computer systems and human intentionality. Even though the system might function independently of its creators and users after deployment, it is still shaped entirely by human design and purpose. Second, once activated, these systems can operate without further human intervention. They act based on the rules and design embedded by their creators, showcasing a form of independent behavior.

Johnson argues that while computer systems cannot be moral agents in the traditional sense—they lack mental states and the ability to intend actions—they are still not neutral entities. Their intentionality and design purpose give them a role in moral considerations. Unlike natural objects, computer systems are intentionally created and exhibit a kind of efficacy based on their programming and use. This makes them part of the moral world, not only because of the impact they have but also because of the intentional design behind them. Johnson describes the interaction between the intentionality of the system, the designer, and the user as a triad.
The designer creates the system with a specific purpose, the user activates the system through their actions, and the system itself performs the tasks it was programmed to do. Each element of this triad plays a role in the system’s behavior, illustrating that the system’s actions are intertwined with human intentions and decisions.

→ Conclusion
Johnson concludes that computer systems cannot fulfill the traditional criteria for moral agency because they lack the mental states and intentions arising from free will. They do not have the capacity to make choices or form intentions on their own. However, their intentional design and purpose mean they should not be dismissed as mere tools. Computer systems are deeply integrated into the fabric of moral action; they are intentionally created to perform specific functions and thus hold a significant place in the moral landscape. Many human actions today would be impossible without the involvement of these systems, highlighting their integral role in ethical considerations.

AI as Computational Artifacts or Sociotechnical Systems?
There are two ways to view artificial intelligence (AI).
1. The first is narrow, focusing only on AI as a computational artifact—a standalone system performing specific tasks.
2. The second is broader, considering AI as part of a sociotechnical system, which includes the social context and interactions in which the AI operates.
Johnson argues that AI artifacts, on their own, cannot be judged as ethical or unethical. Their ethical dimension comes from being part of a larger sociotechnical system where human values and social practices play a role.

Johnson and Powers (2008) propose that AI systems can be understood as “surrogate agents,” similar to how human professionals like lawyers, accountants, or managers act on behalf of their clients. These human surrogate agents do not act for their own benefit but instead represent the interests of others from a third-person perspective. In the same way, AI systems are designed and deployed to perform tasks assigned by humans, effectively acting as agents working on behalf of their users. Unlike human surrogates, AI systems do not have personal interests, desires, or values. They lack a “first-person perspective,” which is the ability to have their own preferences or intentions. However, they can still pursue “second-order interests,” which are the goals and tasks defined by their human designers or users. For example, an AI system might manage tasks like scheduling or data analysis, always focusing on the goals set by its human operators, without having its own motivations.

Johnson and Powers compare human surrogate agents to AI systems and find key similarities. Both operate from a third-person perspective, pursuing the interests of the individuals they serve. The main difference lies not in their ability to act on behalf of others but in their psychology: human agents have their own personal interests and perspectives, while AI systems do not. AI systems simply follow the goals set for them without a personal agenda or a first-person viewpoint.

Understanding Artefactual Agency
Artefacts, such as tools and technologies, play a role in shaping events and decisions, but their agency is not like human agency. Johnson and Noorman identify three types of agency for artefacts:
1. Causality: Artefacts can cause changes in the world but only in combination with human actions. For example, a hammer drives a nail, but only when a person uses it.
This alone does not make artefacts moral agents.
2. Surrogate Agency: Artefacts can perform tasks on behalf of humans. For instance, a thermostat adjusts room temperature, acting as a substitute for a human’s direct involvement.
3. Autonomy: Humans are autonomous because they act for reasons, which goes beyond simple cause-and-effect. Artefacts, however, do not act for reasons and therefore lack true autonomy.
Artefacts can only be moral agents in a limited sense, acting as surrogates within socio-technical systems designed by humans.

Functional and Operational Morality
Wallach and Allen explore whether machines can become moral agents. They distinguish between two kinds of morality in artefacts:
Operational Morality: This refers to artefacts designed with built-in ethical considerations, like a gun with a childproof safety. These artefacts lack autonomy and moral sensitivity but embody values through their design.
Functional Morality: This refers to machines capable of assessing and responding to ethical challenges, such as self-driving cars or medical decision-support systems. These machines must have some degree of autonomy and the ability to evaluate the moral consequences of their actions. Wallach and Allen argue that functional morality is achievable through programming machines to recognize and act on ethical principles.

Approaches to Designing Artificial Morality
Creating machines capable of moral decision-making involves applying ethical theories in their design. Wallach and Allen outline three main approaches:
Top-Down Approach
This approach involves programming machines with explicit ethical rules, such as those found in utilitarianism or deontology. Utilitarianism focuses on maximizing overall happiness or well-being. Machines following this approach must calculate the consequences of their actions to determine the best outcome. However, this is computationally demanding and raises questions about how to measure subjective values like happiness. Deontology emphasizes duties and principles, such as always telling the truth. The challenge here is handling conflicts between rules (e.g., truth-telling vs. protecting privacy) and deciding when specific rules should apply.
Bottom-Up Approach
Inspired by human development, this approach allows machines to “learn” morality through experience. Similar to how children develop moral understanding, machines are built with basic capabilities and trained over time. The goal is to create systems where discrete tasks evolve into higher-level moral capacities. However, bottom-up systems face challenges like identifying appropriate goals and managing situations where information is incomplete or contradictory. They also lack built-in safeguards, which makes them risky in complex environments.
Hybrid Approach
This combines top-down rules with bottom-up learning. Machines start with a foundational set of rules but refine their decision-making by learning from data. Examples include self-driving cars and medical ethics systems, which balance programmed instructions with real-world adaptability.

Challenges in Artificial Morality
The functionalist approach assumes that machines can be moral agents if they behave morally. However, it faces critical challenges:
➔ Testing Moral Machines: Evaluating whether machines make ethical decisions is difficult. A proposed “Moral Turing Test” would assess if a machine’s decisions align with human moral reasoning, but this remains theoretical.
➔ Anthropocentrism: Machines are designed from a human-centered perspective, which may limit their ability to act morally in a broader, non-human context.
➔ Slave ethics: Critics like Gunkel argue that machines might embody “slave ethics,” always serving human goals without independent moral reasoning.

Coeckelbergh argues that we often interact with others, including robots, based on appearances rather than inner states. If robots can convincingly mimic emotions and subjectivity, we might treat them as moral agents. This relational view suggests morality depends more on interaction and perception than on the robot’s inherent qualities. According to Coeckelbergh, ethics is not about what a being is but about how it appears to us in a social context. Instead of focusing on a robot’s inner qualities, we consider how it behaves and the role it plays in interactions with people. This means a robot’s moral status depends on how we see and interpret its actions. Gunkel agrees with this idea and goes further. He says that ethics should come first, before asking what a robot is. Instead of focusing on the robot’s features, he suggests we look at how it interacts with us and the responsibilities it takes on.

Criticisms of Relational Approaches
Relational ethics has its flaws:
It only describes social relationships without offering a clear definition of moral status.
It risks falling into relativism, where moral views depend entirely on individual perspectives.
It struggles to address situations where no social relations exist, like the “Robinson Crusoe” scenario.

Requirements for Moral Agency
According to Floridi and Sanders (2004), moral agents must meet three criteria:
1. Interactivity: They respond to stimuli by changing their state.
2. Autonomy: They can act independently of direct control.
3. Adaptability: They can modify their behavior based on experience.
Sullins adds further considerations for robots as moral agents:
Autonomy: Robots must act independently and effectively achieve goals without human control.
Intentionality: Their behavior should appear deliberate and purposeful, even if it’s due to programming.
Responsibility: Robots must fulfill roles requiring duties, such as caregiving, where their behavior reflects an understanding of responsibility.

Example: Robotic Caregivers
Robotic caregivers for the elderly illustrate these principles. If a robot acts autonomously, intentionally, and responsibly within its role, it can be seen as a moral agent. Its actions must demonstrate care and understanding of its duties in the healthcare system.

Conclusion
Robots can be moral agents when their behavior shows autonomy, intentionality, and responsibility. While today’s robots may only partially meet these criteria, advancements could lead to machines with moral status comparable to humans. This possibility raises important questions about their rights and responsibilities in society.

Intentionality and Technology
Human intentionality refers to how people focus their actions and thoughts on reality. This process is deeply influenced by technology, which mediates how individuals perceive and engage with the world. Unlike static tools, technology has a degree of freedom, as it can act beyond its designed purpose or “use plan.” This quality makes technology a dynamic mediator in human experiences. Composite intentionality arises when human intentionality interacts with the embedded intentions of technological artifacts.
For example, AI systems merge their programmed goals with human objectives, creating a hybrid form of agency. As a result, AI becomes a moral mediator, shaping human actions and decisions. (Verbeek, 2011)

The Moral Challenges of AI
AI complicates traditional notions of moral agency. Autonomous systems, equipped with machine learning capabilities, can act and adapt independently, often in ways unforeseen by their creators. This unpredictability creates a responsibility gap, as described by Matthias, where no single person or entity can reasonably be held accountable for the actions of such systems. In the past, responsibility for a machine’s actions was assigned to its operator or manufacturer. However, when AI systems independently modify their behavior or decision-making rules during operation, these traditional frameworks fail. This gap challenges both legal and ethical norms, leaving society uncertain about how to attribute accountability for AI-driven outcomes. (Matthias, 2004)

Types of Responsibility
Responsibility is often categorized into two forms:
1. Passive responsibility, which is retrospective and involves accountability for events after they occur. It requires conditions like wrong-doing, a causal link, foreseeability of harm, and freedom of choice. (Van de Poel, 2011)
2. Active responsibility, which is proactive and focuses on preventing harm before it occurs. It involves recognizing risks, considering consequences, exercising moral autonomy, adhering to consistent ethical standards, and fulfilling role-based obligations. (Bovens)
AI systems blur the boundaries between these types of responsibility. On the one hand, humans lack full control over autonomous systems, complicating accountability after an event. On the other hand, preparing for every possible outcome of AI behavior may be impossible, making active responsibility equally difficult to ensure.

Philosophers since Aristotle have established two primary conditions for responsibility:
1. Control condition: The agent must have sufficient control over their actions.
2. Epistemic condition: The agent must understand and be aware of their actions.
Autonomous technologies challenge both conditions. High-speed systems like autonomous weapons or high-frequency trading platforms often act faster than humans can intervene. Additionally, systems like autopilots can override human decisions. These situations raise profound ethical questions: Should such systems be built at all? If built, how can responsibility be assigned for actions that exceed human control? (Coeckelbergh, 2022)

The complexity of modern technology often involves numerous individuals and organizations, from designers and developers to operators and policymakers. This distributed involvement creates a “many hands” problem, where it becomes unclear who should be held accountable for the outcomes. When harm occurs, responsibility is fragmented, making it difficult to pinpoint accountability. This challenge highlights the need for new frameworks to address shared and distributed responsibilities. (Coeckelbergh, 2022)

Technologies play a significant role in shaping moral responsibilities without being moral agents themselves. Verbeek argues that technologies mediate human actions and decisions, thereby contributing to moral outcomes. However, they cannot be held morally accountable in the way humans can. Instead, the concept of composite responsibility suggests that humans and technologies jointly shape ethical actions through their interactions.
Responsibility, therefore, lies in understanding and managing these intertwined roles rather than attributing blame solely to either party. (Verbeek, 2011)

Autonomous technologies demand a rethinking of moral agency and responsibility. Traditional frameworks, which rely on direct human control and awareness, are insufficient to address the complexities introduced by AI systems. These systems challenge notions of foreseeability, control, and accountability, requiring new ethical and legal approaches. The interplay between human and technological intentionalities underscores the importance of shared responsibility, collaborative design, and proactive governance in navigating this evolving moral landscape.

Post-phenomenological reinterpretation of moral agency
Moral agency, as interpreted through postphenomenology, focuses on how human intentionality—the directedness of individuals toward the world—is shaped and mediated by technology. Technological artifacts are not passive tools; they actively influence how humans interact with reality. Freedom in this context refers to the nondeterministic nature of technology, which allows it to exceed its intended use. This interplay leads to what Verbeek describes as “composite intentionality,” where both human and technological directedness interact and co-shape outcomes. Artificial intelligence exemplifies this concept, acting as a moral mediator with its own form of intentionality, which combines its operational logic with human intentions.

Autonomous technologies, especially AI systems, challenge traditional ideas of responsibility. Matthias highlights a “responsibility gap,” arising from the fact that machines can learn and adapt during their operation, often without direct human intervention. These systems are no longer fully controlled by their designers or operators, and the outcomes they produce cannot always be traced back to a single human decision. This creates moral dilemmas because our existing frameworks for assigning responsibility—blaming the operator, the manufacturer, or no one—fail to account for the autonomous nature of these technologies. For example, an AI might make decisions or take actions that were not explicitly programmed or foreseen, complicating the attribution of responsibility.

Responsibility can be understood in two forms: passive responsibility comes into play after harm has occurred, relying on conditions such as wrongdoing, causal contribution, foreseeability, and freedom of action. In contrast, active responsibility involves taking preventive measures to avoid harm. This requires recognizing potential violations of norms, considering the consequences of actions, exercising autonomy in moral decision-making, adhering to consistent ethical standards, and fulfilling role obligations. While passive responsibility focuses on accountability for past actions, active responsibility emphasizes foresight and the moral duty to act in ways that prevent harm.

However, assigning responsibility becomes difficult when dealing with autonomous systems. Aristotle’s traditional framework for responsibility requires both a control condition, where the agent has sufficient control over their actions, and an epistemic condition, where the agent understands and is aware of the implications of their actions. Autonomous robots complicate this because humans may not always have direct or total control over these systems.
For example, in high-frequency trading or autonomous weapon systems, decisions may occur too quickly for human intervention. Additionally, some systems might override human decisions, such as future autopilot systems in aircraft. While one response might be to demand that humans always retain sufficient control, this is not always feasible, especially when global competition or technological complexity makes such control impractical.

The problem becomes even more complex in situations involving multiple contributors to the development and operation of autonomous systems. This “many hands” problem makes it difficult to attribute responsibility to any single individual or group. When responsibility is distributed among designers, developers, operators, and even users, identifying who should be held accountable becomes nearly impossible. Coeckelbergh emphasizes this issue, noting how the involvement of many actors dilutes individual accountability and complicates the moral landscape of technology use.

Despite these challenges, Verbeek argues that technologies, while playing a critical role in shaping human behavior and moral outcomes, should not be regarded as moral agents themselves. Technologies mediate human actions and influence the ethical dimensions of those actions, but it does not make sense to hold them morally accountable in the same way we hold humans responsible. Instead, the responsibility lies in understanding and managing the interplay between human and technological intentionalities.

In conclusion, the rise of autonomous technologies demands a rethinking of moral agency and responsibility. Traditional frameworks are no longer sufficient to address the complexities introduced by these systems. By recognizing the composite intentionality of human-technology interactions, we can begin to address the moral and ethical challenges they pose. This requires a shift from simplistic attributions of blame to a nuanced understanding of shared responsibility, where both human intentions and technological capacities are acknowledged as shaping outcomes.

The issue of responsibility for autonomous technologies involves multiple layers of complexity, particularly regarding the time dimension in the “many hands” problem. When technologies such as robots, self-driving cars, or airplanes are developed, used, and maintained, a long chain of human actions and causes is often involved. Over time, it becomes unclear who was responsible for specific decisions or actions and how these contributed to failures or outcomes. For example, in the case of an airplane malfunction, tracing the exact cause of failure—whether due to a specific developer, a faulty component, or a design oversight—can be extremely difficult. Autonomous systems, as part of larger technological ecosystems, further complicate this by distributing responsibility across multiple actors over an extended period.

In addition to “many hands,” the problem is compounded by the involvement of many things. These systems depend on numerous components, both hardware and software, which are interconnected and interact through various interfaces. Each component is linked to human actions at some point, yet identifying the exact source of a failure is challenging. Consider a self-driving car involved in an accident: it may be unclear whether the issue originated from a sensor malfunction, a software bug, or an interaction between the two.
These systems’ complexity raises fundamental questions about accountability because the relationships between components and their roles in failures are often opaque. A related issue is that the end-user of such systems frequently lacks a complete understanding of the technology they are using. For example, a driver relying on a self-driving car may not fully comprehend how the automation system functions, let alone the assumptions embedded in its software or the decisions it makes. This lack of understanding creates a significant gap in responsibility because users cannot take accountability for actions they do not understand or foresee.

Responsibility also depends on fulfilling the epistemic condition, as outlined by Aristotle. To be responsible, a person must know what they are doing and must not act out of ignorance. Aristotle identified various forms of ignorance, such as being unaware of who or what one is acting upon, the tools one is using, or the intended outcome of one’s actions. In the case of autonomous technologies, developers and users often lack knowledge about the systems they are working with. For example, many AI systems rely on black-box algorithms, such as neural networks, which make decisions in ways that are not fully transparent even to their creators. This opacity violates the epistemic condition because it prevents both developers and users from fully understanding what the system is doing or predicting its unintended consequences.

This gap in understanding contributes to a responsibility gap, as highlighted by Matthias. There is a disconnection between those who develop autonomous systems, those who use them, and those affected by their actions. Over time, as multiple developers contribute to a system, knowledge about its functionality may become fragmented or lost. Later stakeholders—users, operators, or affected parties—may not fully understand how the system works, increasing the knowledge gap. This ignorance is not just a technical problem but a moral one, as it undermines the ability of individuals to take responsibility for their actions or the system’s consequences.

Active vs. Passive Responsibility
When we talk about responsibility, there are two main types. Passive responsibility happens after something bad has occurred. For example, if a self-driving car causes an accident, we try to figure out who was at fault. Active responsibility, on the other hand, is about taking action before something bad happens. This involves recognizing risks, making ethical decisions, and doing everything possible to prevent harm.

For passive responsibility to apply, certain conditions must be met: there has to be wrongdoing, someone must have contributed to it, the harm must have been foreseeable, and the person responsible must have been free to act differently. With autonomous robots, these conditions are often hard to meet, because the robots act independently, sometimes in ways even their creators didn’t foresee.

This is where the idea of control and knowledge comes in. According to philosophers like Aristotle, to hold someone responsible, they need to have control over their actions and understand what they are doing. But with robots, humans often don’t have full control, especially in situations where the robot acts faster than a person could intervene (like in autonomous weapons or high-speed trading systems). Sometimes, robots can even override human decisions, further complicating the issue.
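To make these conditions concrete, here is a minimal, purely illustrative sketch (not taken from the source) that treats the conditions for passive responsibility, together with Aristotle’s control and epistemic conditions, as an explicit checklist applied to each party involved in an incident. The class, its fields, and the example values are all hypothetical; the point is only to show how an adaptive, partly opaque system can leave every party with at least one condition unmet, which is the situation described above as a responsibility gap.

```python
from dataclasses import dataclass


@dataclass
class PartyAssessment:
    """Hypothetical assessment of one party (designer, operator, user, ...)
    against the conditions for passive responsibility discussed above."""
    name: str
    wrongdoing: bool            # did the party violate a norm?
    causal_contribution: bool   # did their action causally contribute to the harm?
    foreseeability: bool        # could they reasonably foresee the harm?
    freedom: bool               # were they free to act otherwise?
    control: bool               # Aristotle's control condition
    knowledge: bool             # Aristotle's epistemic condition

    def responsible(self) -> bool:
        # A party counts as passively responsible only if every condition holds.
        return all([self.wrongdoing, self.causal_contribution,
                    self.foreseeability, self.freedom,
                    self.control, self.knowledge])


def responsibility_gap(parties: list[PartyAssessment]) -> bool:
    """True when harm occurred but no assessed party meets every condition."""
    return not any(p.responsible() for p in parties)


# Toy example: a self-driving car accident after the system adapted its
# behaviour during operation (all values are purely illustrative).
parties = [
    PartyAssessment("developer", wrongdoing=False, causal_contribution=True,
                    foreseeability=False, freedom=True, control=False, knowledge=False),
    PartyAssessment("operator", wrongdoing=False, causal_contribution=True,
                    foreseeability=False, freedom=True, control=False, knowledge=False),
    PartyAssessment("user", wrongdoing=False, causal_contribution=True,
                    foreseeability=False, freedom=True, control=False, knowledge=False),
]

print(responsibility_gap(parties))  # True: the conditions fail for everyone involved
```

On this toy reading, the gap arises not because anyone acted badly, but because foreseeability, control, and knowledge fail for every party at once.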
Another problem is the involvement of many people in building and using robots. For example, a self-driving car involves programmers, hardware engineers, manufacturers, and users. When something goes wrong, it’s hard to figure out who is to blame. This is called the “many hands” problem, because so many people contribute to the machine’s actions that it’s almost impossible to assign responsibility to just one person. And it’s not just about people—it’s also about the many components that make up a robot, like hardware, software, and AI algorithms. If a self-driving car crashes, the failure could be in any part of the system. It might not be clear which part failed or how, making it even harder to assign blame.

Another issue is the knowledge gap between how robots work and how much users (and even developers) understand about them. For example, someone driving a self-driving car doesn’t know how its AI decides when to stop or turn. Even developers might not fully understand how a robot makes decisions, especially if it uses advanced techniques like machine learning. This lack of understanding makes it difficult to take responsibility because people don’t always know what the robot is doing or why.

To address these challenges, we need to rethink responsibility. Instead of blaming one person, responsibility can be shared among everyone involved—developers, companies, and users. This is called distributed responsibility. It means that everyone takes responsibility for their part, and no one gets to avoid accountability entirely. Being responsible also means being answerable. Developers and users of robots should be able to explain their decisions and actions to others, especially those affected by the robot’s behavior. For instance, if a self-driving car causes an accident, the company behind it should be able to explain what happened and take responsibility for fixing the problem. (A minimal sketch of how a system might support this kind of answerability appears at the end of this section.)

Finally, there’s the question of how we treat robots. Even though robots are just machines, many people feel uncomfortable when they see them being mistreated. For example, kicking a robot dog feels wrong—not because the robot has feelings, but because it makes us less kind. This reflects an old idea from Kant, who argued that mistreating animals was wrong because it made humans less compassionate. The same idea applies to robots: how we treat them can affect how we treat other people.

There’s also the issue of robots that look too human. Robots that almost look like people but not quite can make us feel uneasy—this is called the uncanny valley. To avoid this, designers might decide not to make robots too human-like or to make it clear that they are machines. Honesty in design helps people understand that robots are tools, not real people.
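As a purely illustrative aside (not from the source), the paragraph on answerability above has an engineering-level counterpart: keeping a trace of which component produced which decision, so that developers and companies can later give an account of what the system did. The sketch below is a minimal, hypothetical decision log; the component names and fields are invented for illustration and are not a prescribed design.

```python
import json
import time
from dataclasses import dataclass, asdict


@dataclass
class DecisionRecord:
    """One hypothetical entry in an answerability log for an autonomous system."""
    timestamp: float
    component: str   # e.g. "perception", "planner", "human_override"
    inputs: dict     # the data the component acted on
    decision: str    # what the component decided to do
    rationale: str   # human-readable reason attached at design time


class DecisionLog:
    """Append-only trace that can later be replayed to help explain an outcome."""

    def __init__(self) -> None:
        self._records: list[DecisionRecord] = []

    def record(self, component: str, inputs: dict, decision: str, rationale: str) -> None:
        self._records.append(
            DecisionRecord(time.time(), component, inputs, decision, rationale)
        )

    def explain(self) -> str:
        # Serialise the whole trace so it can be handed to auditors or affected parties.
        return json.dumps([asdict(r) for r in self._records], indent=2)


# Toy usage: a braking decision that can later be accounted for.
log = DecisionLog()
log.record("perception", {"obstacle_distance_m": 4.2}, "obstacle_detected",
           "lidar reading below 5 m threshold")
log.record("planner", {"speed_kmh": 38}, "emergency_brake",
           "stopping distance exceeds obstacle distance")
print(log.explain())
```

Such a trace does not settle who is morally responsible, but it narrows the knowledge gap described above: it gives the many hands involved something shared to point to when an account is demanded.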