Does Winner Support A Conspiracy Theory?
Summary
This document discusses different theories of technology and their impact on society. It examines technological determinism and the social construction of technology, including the role of human choices and social interactions in shaping technology, while also looking at bias in AI and the moral status of artificial agents.
Full Transcript
DOES WINNER SUPPORT A CONSPIRACY THEORY?

Pitt argues that Winner's views on technology imply an ideology in which a specific power structure or organization is responsible for technological outcomes, almost like a conspiracy theory. Winner's perspective suggests that technological decisions are deliberate choices that influence society and reinforce existing power structures. Pitt counters that while artifacts are created with certain values in mind, this does not mean the artifacts themselves are value-laden. For Pitt, it is the intentions of the creators that hold values, not the artifacts themselves.

This connects to the broader idea of technological determinism, which posits that technology drives social change and shapes our values, lifestyles, and institutions. Hard determinism suggests that technology itself has agency and controls society. Soft determinism sees technology as influencing society but also being shaped by socioeconomic factors. Melvin Kranzberg noted that people often view technology as an unstoppable force, leading to the belief that machines can become our "masters."

The Social Construction of Technology (SCOT) theory, developed by scholars like Bijker and Pinch, argues against this determinism. They believe that technology's development is shaped by human choices and social interactions. In this view, technology is a product of its social context and reflects human decisions rather than driving social change by itself. However, SCOT focuses on how society influences technology and not on how technology, in turn, affects society.

Bruno Latour's Actor-Network Theory (ANT) takes a different approach by considering both human and non-human actors (actants) as equally influential in shaping social situations. Latour emphasizes that technology can act as a mediator, playing an active role in distributing moral and social responsibility between human and non-human entities. This perspective challenges the idea of separating the impact of technology from the choices of its creators, viewing all elements as interconnected in a network.

Technological momentum, as proposed by Thomas P. Hughes, blends aspects of both determinism and social control. Hughes explains that in the early stages of a technology, society has significant control over its development and use (social determinism). However, as the technology becomes established and integrated into society, it gains inertia and develops its own deterministic influence. This momentum makes it harder for society to steer or change the course of the technology, giving it a force that seems to operate on its own over time.

In summary, while Winner highlights the deliberate impact of technological choices on society, critics like Pitt believe this view overstates the active role of technology itself and leans towards determinism. The debate revolves around whether technology shapes society independently or reflects human intentions and social context. Theories like ANT and Hughes' technological momentum show that the interaction between society and technology is complex, involving a shift from human control to technological influence as technologies evolve and become embedded in daily life.

AI systems can incorporate human values, biases, and even disvalues, particularly when used in processes like recruitment. When an AI is tasked with finding the best candidate for a job, it is often trained on data from past recruitment processes. If historical data shows that being white and male were significant predictors of success, the AI may learn and apply these criteria, transferring and amplifying human biases. This happens despite AI systems often being perceived as objective and neutral. The AI's decision-making process also becomes less transparent, making it harder to detect and address such biases.
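To make this bias-transfer mechanism concrete, here is a minimal sketch, not drawn from the text: it builds a hypothetical historical hiring dataset in which a protected attribute influenced past decisions, trains an ordinary scikit-learn classifier on it, and shows that the protected attribute ends up with substantial predictive weight. The feature names, numbers, and model choice are all illustrative assumptions.

    # Hypothetical sketch: a model trained on biased historical hiring data
    # can learn a protected attribute as a predictor of "success".
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)
    n = 1000

    # Assumed features: a skill score and a protected attribute (1 = majority group).
    skill = rng.normal(0, 1, n)
    protected = rng.integers(0, 2, n)

    # Historical labels reflect past human decisions that favoured the majority
    # group, not just skill; this is the bias baked into the training data.
    hired = ((0.8 * skill + 1.5 * protected + rng.normal(0, 1, n)) > 1.0).astype(int)

    X = np.column_stack([skill, protected])
    model = LogisticRegression().fit(X, hired)

    # A large coefficient on the protected attribute means the model has learned
    # the historical bias and will reproduce it, despite looking "objective".
    print("skill coefficient:", round(model.coef_[0][0], 2))
    print("protected-attribute coefficient:", round(model.coef_[0][1], 2))

Nothing in this pipeline is malicious; the bias enters purely through the labels the model is asked to imitate, which is why the resulting system can still be perceived as neutral.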
Biases can arise at various stages, from design to application. In the design phase, issues can occur when selecting the training dataset, or when the dataset itself is unrepresentative or incomplete. The algorithm might also introduce bias, especially when the training data is biased or when spurious correlations are made. Biases can even stem from the developers' own unconscious prejudices. For instance, if a training dataset predominantly represents American white males but is used to predict outcomes for diverse populations, the resulting AI model may not be fair or accurate for everyone. In some cases, datasets may be of low quality or incomplete, further complicating the AI's fairness.

Bias can lead to discrimination when an AI's decisions disproportionately impact certain groups in negative ways. This distinction is crucial: bias is part of the decision-making process, while discrimination refers to the negative effects those decisions may have on specific groups. For example, an algorithm might unfairly judge a defendant based on unrelated data, like a parent's criminal record, which results in harsher sentencing without a true causal link.

There is also a debate around the mirror view: should the training data reflect reality as it is, or should it be modified to counteract historical biases? Some believe data should accurately represent the real world, even if it includes societal biases, arguing that developers should not interfere with this reflection. Others counter that such data is biased precisely because of historical discrimination, and that leaving it unchanged perpetuates injustice. To promote fairness, they argue, developers should modify the data or the algorithm to include corrective measures like affirmative action (a minimal sketch of one such corrective step follows).
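As one concrete example of what such a corrective measure could look like, the sketch below applies a simple reweighing step to the same kind of hypothetical data as above: each group-outcome combination is weighted so that group membership and the historical hiring label look statistically independent before the model is trained. This is an illustrative assumption about how a developer might intervene, not a recommendation from the text.

    # Hypothetical sketch: "reweighing" the training data so the model leans
    # less on the historical link between group membership and being hired.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)
    n = 1000
    skill = rng.normal(0, 1, n)
    protected = rng.integers(0, 2, n)   # assumed protected attribute
    hired = ((0.8 * skill + 1.5 * protected + rng.normal(0, 1, n)) > 1.0).astype(int)

    # Weight each (group, label) cell so that, in the weighted data, the label
    # is distributed independently of the protected attribute.
    weights = np.empty(n)
    for g in (0, 1):
        for y in (0, 1):
            cell = (protected == g) & (hired == y)
            expected = (protected == g).mean() * (hired == y).mean()
            weights[cell] = expected / cell.mean()

    X = np.column_stack([skill, protected])
    fair_model = LogisticRegression().fit(X, hired, sample_weight=weights)
    print("protected-attribute coefficient after reweighing:",
          round(fair_model.coef_[0][1], 2))

Whether such an intervention is appropriate is exactly what the mirror-view debate is about: it deliberately makes the training data diverge from the historical record.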
In conclusion, while AI has the potential to improve processes like recruitment, it can also embed and amplify human biases if not designed carefully. Recognizing and addressing these biases requires thoughtful consideration at every stage of development and implementation.

DOES AI HAVE POLITICS?

The question of whether AI systems have political leanings is significant, especially when examining cases like China's widespread use of facial recognition technology. This technology supports mass surveillance, which aligns with maintaining control in an authoritarian state. The question arises: was this use of facial recognition inevitable? The roots of the technology trace back to U.S. military development, particularly DARPA's FERET program in 1993. Given its origins in military and security applications, the use of facial recognition for maintaining order may have always been an inherent trajectory. In some countries, its deployment even extends to racial profiling, showing that the political implications are not incidental but intentional.

Value Sensitive Design (VSD) approaches this issue from a different angle by suggesting that technology can be consciously designed to embody certain values. VSD is a method that ensures human values are integrated throughout the design process of technological artifacts. The goal is not to treat technologies as neutral tools but to imbue them with moral considerations. VSD involves three main types of investigation:
1. Empirical investigations focus on understanding the experiences and contexts of the people affected by the technology. This helps identify which values are at stake and how they might be impacted by different design choices.
2. Conceptual investigations clarify the values in question and look for ways to balance these values, finding necessary trade-offs and operationalizing the values into measurable aspects of the design.
3. Technical investigations analyze how well a design supports specific values and develop new designs that integrate these values effectively.

Through VSD, it is possible to create technologies that are intentionally value-laden, aligning with specific moral and social principles. This method contrasts with the perspective that technologies are inherently neutral and only reflect the intentions of their users or designers. Instead, VSD promotes the idea that technologies can be consciously crafted to uphold or promote particular values. In summary, while AI systems such as facial recognition can seem politically inclined due to their applications, Value Sensitive Design provides a pathway for embedding positive values in technology. This approach acknowledges that while technology can embody values, these values can and should be deliberately chosen and integrated during the design process to promote ethical and social well-being.

A value refers to an aspect that helps us evaluate the goodness or badness of something, such as a state of affairs or a technological artifact. Values guide our positive or negative attitudes toward things. For example, if something is considered valuable, it should naturally evoke a positive response or behavior toward it. Van de Poel defines a value in terms of there being reasons, arising from the object itself, for a positive attitude or behavior toward that object. In the context of AI and technology, there are three types of values:
1. Intended values are those that designers aim to include in their creations, hoping they will be realized in real-world use.
2. Realized values are those that actually emerge when the artifact is used in practice.
3. Embodied values refer to the potential for a value to be realized if the artifact is used in an appropriate context. The artifact's design carries this potential, but it may not always be realized in practice.

When we look at technology, it is crucial to recognize the difference between designed features and unintended features. Designed features are intentionally included by the creators, while unintended features are side effects of the design. For example, cars are designed for transportation, but they also produce pollution, which is an unintended side effect.

For a technological artifact to embody a value, certain conditions must be met. Van de Poel and Kroes state that an artifact embodies a value if its design has the potential to contribute to or achieve that value, and it was intentionally created for that purpose. Two conditions need to be satisfied:
DESIGN INTENT: The artifact must be designed with the value in mind.
CONDUCIVENESS: The use of the artifact should promote or be conducive to that value.
Moreover, there must be a connection between the design and the use: the value should be realized because the artifact was specifically designed with that value as an intended outcome. This way, technology can be seen as carrying and promoting certain values when used in the appropriate context.
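Read as a definition, the embodiment claim is a conjunction of conditions. The toy sketch below is my own illustrative formalization of that structure, not anything proposed by Van de Poel and Kroes; the artifact and value names are borrowed loosely from examples elsewhere in these notes.

    # Illustrative only: the embodiment claim read as three joint conditions
    # on an artifact and a value.
    from dataclasses import dataclass, field

    @dataclass
    class Artifact:
        name: str
        designed_for: set = field(default_factory=set)                # design intent
        conducive_to: set = field(default_factory=set)                # what its use promotes
        promoted_because_designed: set = field(default_factory=set)   # design-use link

    def embodies(artifact: Artifact, value: str) -> bool:
        return (value in artifact.designed_for
                and value in artifact.conducive_to
                and value in artifact.promoted_because_designed)

    gun_with_safety = Artifact("handgun with childproof safety",
                               designed_for={"child safety"},
                               conducive_to={"child safety"},
                               promoted_because_designed={"child safety"})
    print(embodies(gun_with_safety, "child safety"))  # True
    print(embodies(gun_with_safety, "privacy"))       # False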
When technology is used within a socio-technical system, it interacts with values from different sources:
Values of the Agent (VA): the personal values of the individual using or interacting with the artifact.
Values of the Institution (VI): the values embedded in the institution or organization where the technology is used.
Values of the Artifact (V): the values designed into the technology itself.
These values can interact in two ways:
Intentional-Causal (I-C): deliberate actions by human agents to achieve certain values.
Causal (C): direct impacts of the artifact that contribute to or conflict with certain values, sometimes without intentional action.
By understanding these interactions, we can better analyze how technologies affect individuals and society, especially when the design of the artifact aligns with or contradicts broader social values.

In traditional sociotechnical systems, human roles and behaviors are influenced by social institutions that set norms and guide interactions. In AI systems, some of these human roles can be taken over by artificial agents (AAs). Social rules and norms that regulate human behavior can be translated into computer code, which then governs the behavior and interactions of these artificial agents. Van de Poel refers to these guiding codes as technical norms. These norms can be created in two main ways:
1. through offline design, where system designers explicitly code the norms;
2. by enabling artificial agents to learn and develop norms themselves through interactions with their environment or other agents.
Technical norms can also embody values. For Van de Poel, a technical norm embodies a value if it is intentionally designed to support that value and if following the norm promotes that value in practice. For instance, an AI system coded to prioritize fairness in decision-making embodies that value through its programmed norms (a sketch of such an explicitly coded norm follows below).

There are notable differences between human agents and artificial agents when it comes to embodying values. While humans can hold and develop values and instill them in others, they do not "embody" values in the same sense as technical systems. Artificial agents, on the other hand, can embody values based on how they are designed, but they cannot create or embed values independently because they lack intentionality. However, artificial agents have unique traits that set them apart from typical technical artifacts: they are autonomous, interactive, and adaptable. This adaptability can be both a strength and a weakness. It is a strength because artificial agents can adjust their behavior to new contexts, helping to maintain or strengthen their initially embodied values even in unforeseen situations. But it can also be a weakness, as these agents might change in ways that cause them to no longer uphold the values they were originally designed for, effectively "disembodying" those values. This means an artificial agent can evolve to act in ways that no longer align with its original purpose or the values it was meant to promote.

In essence, while AI systems can be designed to embody values through technical norms, their adaptive nature means they may not always consistently promote those values, posing challenges to their regulation and oversight.
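To illustrate the "offline design" route, here is a hedged, hypothetical sketch (the rule, field names, and policy are mine, not Van de Poel's): a fairness norm is written directly into code as a wrapper that every decision of a screening agent has to pass through, so the protected attribute never reaches the underlying policy.

    # Hypothetical sketch of an explicitly coded technical norm ("offline design"):
    # the norm strips the protected field before any decision rule can see it.
    from typing import Callable

    Candidate = dict  # e.g. {"skill": 0.9, "group": "B"}

    def fairness_norm(decide: Callable[[Candidate], bool]) -> Callable[[Candidate], bool]:
        """Technical norm: decisions may not consult the protected 'group' field."""
        def normed_decision(candidate: Candidate) -> bool:
            screened = {k: v for k, v in candidate.items() if k != "group"}
            return decide(screened)
        return normed_decision

    @fairness_norm
    def shortlist(candidate: Candidate) -> bool:
        # Hypothetical base policy: shortlist on skill alone.
        return candidate.get("skill", 0.0) >= 0.7

    print(shortlist({"skill": 0.9, "group": "B"}))  # True: group is ignored by design
    print(shortlist({"skill": 0.5, "group": "A"}))  # False

A learned norm, by contrast, would emerge from the agent's training or its interactions rather than from a rule a designer wrote down, which is precisely what makes the "disembodiment" worry above possible.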
The moral status of AI deals with two main questions.
1. What moral capacities does an AI have, or should it have? This question considers whether AI has, or could develop, the ability to make moral decisions, a capacity generally associated with "moral agency." A moral agent is an entity capable of making decisions based on what is right or wrong and being accountable for those actions. In essence, moral agency in AI explores whether AI could act ethically or responsibly.
2. How should we treat AI? This question is about moral patiency, which focuses on our ethical responsibilities towards AI, not on the ethics AI itself enacts. Moral patiency asks whether AI deserves ethical consideration in our interactions with it.

James Moor (2006) identifies four types of ethical agents in AI:
1. Ethical-impact agents: machines that can be evaluated for their ethical consequences, even if they do not directly make moral choices. For example, a robot used as a camel jockey in Qatar has an ethical impact based on its use.
2. Implicit ethical agents: machines designed to avoid harmful ethical consequences. These machines do not explicitly engage in moral reasoning but are built to avoid unethical behavior. The goal is for all robots to function as implicit ethical agents.
3. Explicit ethical agents: machines programmed to reason about ethics, using structured frameworks like deontic logic to represent duties and obligations. This approach aims to make AI capable of assessing actions based on ethical categories.
4. Full ethical agents: machines with the ability to make independent moral judgments, likely requiring capacities like consciousness, intentionality, and free will. This is the ultimate form of ethical agency but is far from being achieved.
(For Moor, the explicit ethical agent, rather than the full ethical agent, is the goal of machine ethics.)
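As a gesture at what the machinery of an explicit ethical agent might look like, here is a toy, deontic-logic-flavoured sketch; the rule base and action names are invented for illustration and are not taken from Moor.

    # Toy illustration: duties represented as deontic statuses that an agent
    # consults before acting. The rules and actions are hypothetical.
    OBLIGATORY, PERMISSIBLE, FORBIDDEN = "obligatory", "permissible", "forbidden"

    rules = {
        "tell_the_truth": OBLIGATORY,
        "share_private_data": FORBIDDEN,
        "schedule_meeting": PERMISSIBLE,
    }

    def deontic_status(action: str) -> str:
        """Unknown actions get no status and should be referred to a human."""
        return rules.get(action, "unknown - defer to a human")

    def may_perform(action: str) -> bool:
        return deontic_status(action) in (OBLIGATORY, PERMISSIBLE)

    print(deontic_status("share_private_data"))  # forbidden
    print(may_perform("schedule_meeting"))       # True

The hard part, as the next paragraphs note, is that real duties conflict, and a lookup table like this has no discretion for resolving such conflicts.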
Can AI Follow Human Morality?
Some argue that AI could potentially excel in moral reasoning because it is rational and not influenced by emotions, which can sometimes lead humans to make biased or impulsive decisions (Anderson & Anderson, 2011). However, a significant limitation is that moral rules often conflict and cannot simply be followed without discretion. Emotions play a crucial role in human moral judgment, suggesting that purely rational AI might miss essential aspects of ethical decision-making.

The Role of Consciousness in Moral Agency
A critical question is whether consciousness is necessary for moral status. Some argue that consciousness is essential for true moral agency. However, this raises the problem of other minds: we cannot definitively know whether any entity, including AI, has consciousness, since it is an inherently subjective experience. Additionally, some philosophers argue that the definition of consciousness is itself unclear and debatable. Given this ambiguity, it is not universally accepted that consciousness is a required trait for moral status.

The Debate on AI as Moral Agents
Johnson (2006) contends that machines cannot be true moral agents because they lack certain capabilities needed for moral decision-making, such as emotions, mental states, and free will. Machines are designed, built, and operated by humans, who possess the freedom and capacity to make moral decisions. However, Johnson points out that while AIs do not have intentions or mental states, they are not entirely neutral. They are intentionally created to serve specific purposes, which gives them a kind of "intentionality" and influence in the world. This makes them part of our moral landscape, not just because of their impact but also because of their designed purpose.

Computer Systems as Sociotechnical Entities
Computer systems derive their meaning and purpose from their role within human society. They are components of socio-technical systems, where their significance is deeply connected to human practices and cultural contexts.

The Difference Between Natural and Human-made Entities
Johnson (2006) highlights the importance of distinguishing between natural objects and human-made artifacts. Natural entities exist independently of human actions, while human-made entities, including technologies, are created with specific functions and intentions. This distinction is critical because it allows us to see the effects of human behavior on the environment and society. Without recognizing this difference, we would struggle to make ethical choices about issues like climate change, resource management, or ecosystem preservation. In essence, understanding that artifacts and technologies are intentionally designed by humans helps us comprehend the implications of our actions and the moral responsibilities tied to our creations.

The Difference Between Artifacts and Technology
Johnson explains that technology should be seen as part of a broader socio-technical system, while artifacts are specific products created within this system. Artifacts do not exist on their own; they are designed, used, and given meaning through social practices, human relationships, and systems of knowledge. To identify something as an artifact, we must mentally separate it from its context, but in reality it cannot be fully understood outside the socio-technical system it belongs to. Thus, while technology is an integrated system involving human interactions, artifacts are abstractions removed from this context.

What Makes Someone a Moral Agent?
Johnson argues that moral agency is based on a person's intentional and voluntary actions. People are considered responsible for actions they intended, but not for those they did not intend or could not foresee. Intentional actions are linked to internal mental states like beliefs, desires, and intentions. These internal states drive a person's outward behavior, making the behavior something we can explain through reasons and not just by its causes. For moral actions, we often refer to the agent's intentions and beliefs to explain why they acted as they did. Johnson outlines several key elements required for moral agency:
1. There must be an agent with internal mental states (beliefs, desires, and intentions).
2. The agent must act, leading to an observable outward behavior.
3. The action is caused by the agent's internal mental states, meaning the behavior is rational and directed towards a specific goal.
4. The outward behavior has an effect in the world.
5. The action affects another being who can be helped or harmed by it.

Can Computers Be Moral Agents?
Johnson acknowledges that computers can meet some of the conditions for moral agency. They can act, they have internal states that trigger behaviors, and their actions can impact others. However, she argues that computers lack true freedom and intentionality, which are crucial for moral responsibility. While machines can perform actions, they do not possess the capacity to "intend" in the way humans do. The freedom to choose is a fundamental part of moral decision-making, and this is where machines fall short. Johnson explores whether machines, especially those using neural networks, might have a form of freedom similar to humans.
Neural networks can behave in ways that are not entirely predictable, suggesting a mix of deterministic and non-deterministic elements, similar to human behavior. However, even if machines show unpredictability, this does not mean they are free in the same way humans are. The non-deterministic behavior of machines might be different from human freedom, and we cannot be sure whether the two are alike in a morally meaningful way. In conclusion, while computers do not have intentions like humans, they do show a form of intentionality, since they are designed with specific purposes in mind. This intentionality is crucial for understanding their potential moral character, but it does not make them full moral agents. Instead, their actions and effects are tied to the intentions of their human creators.

According to Johnson (2006), computer systems possess a form of intentionality, meaning they are designed to behave in specific ways when given certain inputs. However, this intentionality is closely tied to human intentionality, both that of the designers who create the system and that of the users who interact with it. Without user input, the system's intentionality remains inactive. In essence, the behavior of computer systems depends heavily on human actions to trigger and guide their functionality.

Johnson's analysis has two key points. First, it underscores the strong connection between the behavior of computer systems and human intentionality. Even though a system might function independently of its creators and users after deployment, it is still shaped entirely by human design and purpose. Second, once activated, these systems can operate without further human intervention. They act based on the rules and design embedded by their creators, showcasing a form of independent behavior.

Johnson argues that while computer systems cannot be moral agents in the traditional sense, because they lack mental states and the ability to intend actions, they are still not neutral entities. Their intentionality and design purpose give them a role in moral considerations. Unlike natural objects, computer systems are intentionally created and exhibit a kind of efficacy based on their programming and use. This makes them part of the moral world, not only because of the impact they have but also because of the intentional design behind them.

Johnson describes the interaction between the intentionality of the system, the designer, and the user as a triad. The designer creates the system with a specific purpose, the user activates the system through their actions, and the system itself performs the tasks it was programmed to do. Each element of this triad plays a role in the system's behavior, illustrating that the system's actions are intertwined with human intentions and decisions.

Conclusion
Johnson concludes that computer systems cannot fulfill the traditional criteria for moral agency because they lack the mental states and intentions arising from free will. They do not have the capacity to make choices or form intentions on their own. However, their intentional design and purpose mean they should not be dismissed as mere tools. Computer systems are deeply integrated into the fabric of moral action; they are intentionally created to perform specific functions and thus hold a significant place in the moral landscape. Many human actions today would be impossible without the involvement of these systems, highlighting their integral role in ethical considerations.

AI as Computational Artifacts or Sociotechnical Systems?
There are two ways to view artificial intelligence (AI).
1. The first is narrow, focusing only on AI as a computational artifact, a standalone system performing specific tasks.
2. The second is broader, considering AI as part of a sociotechnical system, which includes the social context and interactions in which the AI operates.
Johnson argues that AI artifacts, on their own, cannot be judged as ethical or unethical. Their ethical dimension comes from being part of a larger sociotechnical system in which human values and social practices play a role.

Johnson and Powers (2008) propose that AI systems can be understood as "surrogate agents," similar to how human professionals like lawyers, accountants, or managers act on behalf of their clients. These human surrogate agents do not act for their own benefit but instead represent the interests of others from a third-person perspective. In the same way, AI systems are designed and deployed to perform tasks assigned by humans, effectively acting as agents working on behalf of their users. Unlike human surrogates, AI systems do not have personal interests, desires, or values. They lack a "first-person perspective," the ability to have their own preferences or intentions. However, they can still pursue "second-order interests," the goals and tasks defined by their human designers or users. For example, an AI system might manage tasks like scheduling or data analysis, always focusing on the goals set by its human operators, without having its own motivations.

Johnson and Powers compare human surrogate agents to AI systems and find key similarities. Both operate from a third-person perspective, pursuing the interests of the individuals they serve. The main difference lies not in their ability to act on behalf of others but in their psychology: human agents have their own personal interests and perspectives, while AI systems do not. AI systems simply follow the goals set for them, without a personal agenda or a first-person viewpoint.

Understanding Artefactual Agency
Artefacts, such as tools and technologies, play a role in shaping events and decisions, but their agency is not like human agency. Johnson and Noorman identify three types of agency for artefacts:
1. Causality: artefacts can cause changes in the world, but only in combination with human actions. For example, a hammer drives a nail, but only when a person uses it. This alone does not make artefacts moral agents.
2. Surrogate agency: artefacts can perform tasks on behalf of humans. For instance, a thermostat adjusts room temperature, acting as a substitute for a human's direct involvement.
3. Autonomy: humans are autonomous because they act for reasons, which goes beyond simple cause and effect. Artefacts do not act for reasons and therefore lack true autonomy.
Artefacts can only be moral agents in a limited sense, acting as surrogates within socio-technical systems designed by humans.

Functional and Operational Morality
Wallach and Allen explore whether machines can become moral agents. They distinguish between two kinds of morality in artefacts:
Operational morality: artefacts designed with built-in ethical considerations, like a gun with a childproof safety. These artefacts lack autonomy and moral sensitivity but embody values through their design.
Functional morality: machines capable of assessing and responding to ethical challenges, such as self-driving cars or medical decision-support systems. These machines must have some degree of autonomy and the ability to evaluate the moral consequences of their actions. Wallach and Allen argue that functional morality is achievable by programming machines to recognize and act on ethical principles.
Approaches to Designing Artificial Morality
Creating machines capable of moral decision-making involves applying ethical theories in their design. Wallach and Allen outline three main approaches.

Top-Down Approach
This approach involves programming machines with explicit ethical rules, such as those found in utilitarianism or deontology. Utilitarianism focuses on maximizing overall happiness or well-being; machines following this approach must calculate the consequences of their actions to determine the best outcome. However, this is computationally demanding and raises questions about how to measure subjective values like happiness. Deontology emphasizes duties and principles, such as always telling the truth. The challenge here is handling conflicts between rules (e.g., truth-telling vs. protecting privacy) and deciding when specific rules should apply.

Bottom-Up Approach
Inspired by human development, this approach allows machines to "learn" morality through experience. Similar to how children develop moral understanding, machines are built with basic capabilities and trained over time. The goal is to create systems in which discrete tasks evolve into higher-level moral capacities. However, bottom-up systems face challenges such as identifying appropriate goals and managing situations where information is incomplete or contradictory. They also lack built-in safeguards, which makes them risky in complex environments.

Hybrid Approach
This combines top-down rules with bottom-up learning. Machines start with a foundational set of rules but refine their decision-making by learning from data. Examples include self-driving cars and medical ethics systems, which balance programmed instructions with real-world adaptability.
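To make the top-down and hybrid ideas more concrete, here is a hedged toy sketch, not taken from Wallach and Allen: candidate actions are first filtered by hard deontological constraints and then ranked by a crude utilitarian welfare score; in a hybrid system, that score could additionally be refined by a learned component. All action names and numbers are invented.

    # Toy sketch (hypothetical actions and values): a hybrid-style chooser that
    # applies deontological constraints first, then a utilitarian score.
    from dataclasses import dataclass

    @dataclass
    class Action:
        name: str
        expected_welfare: float   # crude utilitarian estimate of overall well-being
        violates_duty: bool       # e.g. lying, breaching privacy

    def choose(actions: list[Action]) -> Action | None:
        # Deontological layer: rule out actions that violate a duty.
        permitted = [a for a in actions if not a.violates_duty]
        if not permitted:
            return None  # defer to a human when every option breaks a rule
        # Utilitarian layer: pick the permitted action with the best expected outcome.
        # A hybrid system might learn to adjust expected_welfare from experience.
        return max(permitted, key=lambda a: a.expected_welfare)

    options = [
        Action("reveal patient data to speed up triage", 0.9, violates_duty=True),
        Action("ask consent, then share the data", 0.7, violates_duty=False),
        Action("do nothing", 0.1, violates_duty=False),
    ]
    print(choose(options).name)  # "ask consent, then share the data"

The sketch also makes the textbook difficulties visible: someone has to supply the welfare numbers, and the duty flags presuppose that rule conflicts have already been resolved.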
Challenges in Artificial Morality
The functionalist approach assumes that machines can be moral agents if they behave morally. However, it faces critical challenges:
➔ Testing moral machines: evaluating whether machines make ethical decisions is difficult. A proposed "Moral Turing Test" would assess whether a machine's decisions align with human moral reasoning, but this remains theoretical.
➔ Anthropocentrism: machines are designed from a human-centered perspective, which may limit their ability to act morally in a broader, non-human context.
➔ Slave ethics: critics like Gunkel argue that machines might embody "slave ethics," always serving human goals without independent moral reasoning.

Coeckelbergh argues that we often interact with others, including robots, based on appearances rather than inner states. If robots can convincingly mimic emotions and subjectivity, we might treat them as moral agents. This relational view suggests that morality depends more on interaction and perception than on the robot's inherent qualities. According to Coeckelbergh, ethics is not about what a being is but about how it appears to us in a social context. Instead of focusing on a robot's inner qualities, we consider how it behaves and the role it plays in interactions with people. This means a robot's moral status depends on how we see and interpret its actions. Gunkel agrees with this idea and goes further: he says that ethics should come first, before asking what a robot is. Instead of focusing on the robot's features, he suggests we look at how it interacts with us and the responsibilities it takes on.

Criticisms of Relational Approaches
Relational ethics has its flaws:
It only describes social relationships without offering a clear definition of moral status.
It risks falling into relativism, where moral views depend entirely on individual perspectives.
It struggles to address situations where no social relations exist, like the "Robinson Crusoe" scenario.

Requirements for Moral Agency
According to Floridi and Sanders (2004), moral agents must meet three criteria:
1. Interactivity: they respond to stimuli by changing their state.
2. Autonomy: they can act independently of direct control.
3. Adaptability: they can modify their behavior based on experience.
Sullins adds further considerations for robots as moral agents:
Autonomy: robots must act independently and effectively achieve goals without human control.
Intentionality: their behavior should appear deliberate and purposeful, even if it is due to programming.
Responsibility: robots must fulfill roles that require duties, such as caregiving, where their behavior reflects an understanding of responsibility.

Example: Robotic Caregivers
Robotic caregivers for the elderly illustrate these principles. If a robot acts autonomously, intentionally, and responsibly within its role, it can be seen as a moral agent. Its actions must demonstrate care and an understanding of its duties in the healthcare system.

Conclusion
Robots can be moral agents when their behavior shows autonomy, intentionality, and responsibility. While today's robots may only partially meet these criteria, advancements could lead to machines with moral status comparable to humans. This possibility raises important questions about their rights and responsibilities in society.