Artificial Intelligence: Are Robots People, Too?

Summary

This document is a lecture from Centennial College's GNED 228 course. It discusses the moral status of AI, including the distinction between moral agents and moral subjects and the ethical questions raised by humanoid robots such as Sophia the Robot, drawing on the work of John Sullins and Deborah Johnson on machine moral agency.

Full Transcript

GNED 228 - Week 9
Artificial Intelligence: Are Robots People, Too?
Centennial College, M. Reza Tahmasbi
(Photo by Needpix.com, Artificial Intelligence Robot free picture)

Agenda
The Moral Status of AI; Moral Agent; Moral Subject; Is a Humanoid Robot a Moral Agent?; AI as a Moral Agent; AI as a Moral Entity; AI as a Fellow Moral Agent.
(Photo by Pixabay.com)

What Is the Moral Status of a Humanoid Robot?
(Photo by TheDigitalArtist from Pixabay.com)

Moral Relationship
Can humanoid robots be ethically responsible toward human beings and other machines? Is it possible for machines to conform to the kind of moral norms that we require of any moral agent? Should human beings be ethically responsible toward humanoid robots?

The Moral Status of AI
Would Sophia the Robot be owed rights and moral protection, and would it owe moral obligations?
(Photo by Sophieja23 from Pixabay.com)

The Moral Status of AI
Is Sophia the Robot a moral agent? Is Sophia the Robot a moral subject?
(Photo by Sophieja23 from Pixabay.com)

Moral Agent vs. Moral Subject
A moral agent is a conscious being that is responsible for its actions. A moral subject is an entity that is entitled to be treated in a certain way, that is, it is entitled to moral protection.
(Photos by Coffeebeanworks and PIRO4D from Pixabay.com)

Examples: a moral agent and a moral subject; neither a moral agent nor a moral subject; a moral subject, but not a moral agent.
(Photo by Dimhou from Pixabay.com)

The Moral Status of AI
Is Sophia the Robot a moral agent? If it appears to be a moral agent, is it a "genuine moral agent," or does it merely behave "as if"?
(Photo by TheDigitalArtist from Pixabay.com)

AI and Moral Agency
We can discuss the relationship between moral agency and AI based on two sets of criteria:
1) Personhood. Personhood is the necessary and sufficient condition for being a moral agent.
2) Other criteria. There are different requirements for being a moral agent.

What Is It to Be a Moral Agent?
A dominant view is: persons are moral agents. To be a moral agent, the agent needs to have personhood. An entity is a person if and only if it has self-consciousness. So Sophia the Robot is a moral agent if and only if it is a person, and Sophia the Robot is a moral agent if and only if it has self-consciousness.

Moral Agency and Self-consciousness
A moral agent is capable of reflecting on itself: having a concept of I, an internal understanding of I, or "the consciousness of myself." For example, a moral agent is an agent who is capable of thinking "I should not have done that."

AI and Self-consciousness
Does Sophia the Robot have self-consciousness?
(Photo by TheDigitalArtist from Pixabay.com)

AI and Moral Agency
According to John Searle, machines merely use syntactic rules to manipulate symbol strings but have no understanding and no consciousness. On this view, Sophia the Robot does not have consciousness; it does not have self-consciousness; it is not a person; and therefore it is not a moral agent.

AI and Moral Agency
Some scholars argue that, in order to be a moral agent, the agent does not necessarily need to be a person. John P. Sullins: it is not necessary for a robot to have personhood in order to be a moral agent.
Sullins, J. P. (2011). When is a robot a moral agent? In M. Anderson & S. L. Anderson (Eds.), Machine ethics (pp. 151-162). Cambridge: Cambridge University Press.

AI and Moral Agency
For a robot to be a moral agent, the robot needs to meet three requirements:
- Is the robot significantly autonomous? The robot needs to have free will.
- Is the robot's behavior intentional? The robot needs to have intentionality.
- Is the robot in a position of responsibility?
If our answer is "yes" to all three questions, then the robot is a moral agent. Sullins, (2011).

Three Requirements
Autonomy is achieved when the robot is significantly autonomous from any programmers or operators of the machine. Intentionality is achieved when one can explain the robot's behavior only by ascribing to it an intention to do good or harm. Robot moral agency requires the robot to behave in a way that shows an understanding of responsibility to others.

Possible Views on the Moral Agency of Robots
1) Robots are not moral agents now but might become moral agents in the future.
2) Robots are incapable of becoming moral agents now or in the future.
3) Human beings are not moral agents, but robots are.
4) A robot is not a full moral agent like a human being; however, it could have a kind of moral agency.
We can discuss the moral agency of robots on the basis of a different understanding of the three requirements. Sullins, (2011).

First Position
Robots are not moral agents now but might become moral agents in the future. Daniel Dennett, in his essay "When HAL Kills, Who's to Blame?" (1998), argues that we have no machine now that has the three characteristics, but we might have machines with those characteristics in the future. Sullins, (2011), p. 155.

Second Position
Robots are incapable of becoming moral agents now or in the future. Selmer Bringsjord: robots will never have autonomous free will, since they can never do anything that they are not programmed to do. In order to be morally responsible, a robot needs to be able to choose between two options (morally bad and morally good). If a robot has been fully programmed, then it is the programmer who ultimately makes the decision and chooses one of the two options. Sullins, (2011), pp. 156-7.

Second Position
"The only way that [a robot] can do anything surprising to the programmers requires that a random factor be added to the program, but then its actions are merely determined by some random factors, not freely chosen by the machine, therefore the robot is no moral agent." Sullins, (2011), p. 156.

Critique of the Second Position
All human beings have been programmed by nature or nurture (genes, culture, education, etc.). So, if we take Bringsjord's argument seriously, then we are not moral agents either, because we are not fully autonomous in that sense.

Third Position
We are not moral agents, but robots are. Joseph Emile Nadeau: only those agents whose actions are free are moral agents, and an action is free if and only if it is based on reason (logical theorems). Robots are programmed fully on the basis of logic, while human beings' actions are not. Sullins, (2011), p. 156.

Fourth Position
Luciano Floridi and J. W. Sanders argue that we do not need to base our discussion of moral agency on the concepts of free will and intentionality, since these are debatable concepts in philosophy that are inappropriately applied to artificial intelligence. We should instead consider robots as agents. Sullins, (2011), p. 157.

Fourth Position
Consider a robot as an agent. If an agent interacts with its surrounding environment and its programming is independent of the environment and its programmers, then we can maintain that the agent has its own agency. If such an agent causes any harm, we can logically ascribe a negative moral value to it. Sullins, (2011), p. 157.
Fourth Position
Based on the fourth position, we can offer arguments for the existence of (a kind of) moral agency in robots. Sullins' argument: we need to revise our interpretation of the three requirements for being a moral agent (autonomy, intentionality, and responsibility).

Autonomy
The first question asks whether the robot could be seen as significantly autonomous from any programmers, operators, and users of the machine. What do we mean by the term 'autonomy'? There are two different meanings: philosophical autonomy and engineering autonomy.

Philosophical Autonomy
So far, we have interpreted 'autonomy' based on the philosophical conception of autonomy. Kant's concept of autonomy: an agent is an autonomous agent if its actions are truly its own actions and are not (even partly) caused by any factor outside of its control. This requires absolute free will.

Question
Are human beings fully autonomous in that sense? If not, are we real moral agents?
(Photo by Pixabay.com)

Engineering Autonomy
The machine is not under the direct control of any other agent or user; the robot must not be a telerobot. If the robot has this level of autonomy, then the robot has a practical, independent agency. Is engineering autonomy sufficient to be a moral agent?

Intentionality
To be morally responsible for its acts, the agent needs to "intend to act," that is, the agent needs to have intentionality. Do robots have intentionality? Do robots have the property of 'aboutness' in their actions?

Intentionality
What if we consider a weak sense of intentionality? "If the complex interaction of the robot's programming and environment causes the machine to act in a way that is morally harmful or beneficial, and the actions are seemingly deliberate and calculated, then the machine is a moral agent." Machines seemingly have intentionality. Sullins, (2011), p. 158.

Question
Do you find this interpretation of intentionality (seemingly) sufficient for ascribing intentionality to a robot?
(Photo by Pixabay.com)

Responsibility
If a robot behaves in such a way that we can only make sense of its behavior by assuming it has a responsibility to others, and we can ascribe to it the 'belief' that it has a responsibility to others, then the robot meets the third criterion. Can we ascribe "beliefs" to robots? Sullins, (2011), p. 159.

Responsibility
The beliefs do not have to be real beliefs. We do not know whether machines have consciousness, so we cannot ascribe real beliefs to machines. However, we might be able to ascribe a kind of belief.

Robots as Moral Agents
To sum up: if a caregiver robot is not under the control of any other agent (engineering autonomy), seemingly has intentionality, and behaves in such a way that we can only make sense of its behavior by assuming it has a responsibility to others, then we can maintain that the robot is a moral agent, though not in the sense of agency we ascribe to human beings.

AI and the Realm of Morality
Deborah G. Johnson: AI is not a moral agent; however, it is not the case that AI has no moral responsibility. AI is in the realm of morality. AI is not a moral agent, but it is a moral entity.
Johnson, D. G. (2011). Moral entities but not moral agents. In M. Anderson & S. L. Anderson (Eds.), Machine ethics (pp. 168-183). Cambridge: Cambridge University Press.

AI as a Moral Entity
AI is a moral entity, but not a moral agent. Consider intentionality as the requirement for being a moral agent. It requires freedom of choice, or free will. Johnson: "While some computer systems may be non-deterministic and therefore 'free' in some sense, they are not free in the same way humans are free." Johnson, (2011), pp. 199-200.

AI as a Moral Entity
Robots have intentionality in a sense. Computers and robots are created by human beings as a result of their intentionality. Built-in intentionality: "Computers have intentionality, but the intentionality put into them by the intentional acts of their designers. The intentionality of artifacts is related to their functionality." Johnson, (2011), p. 201.

AI as a Moral Entity
"Computer systems are not moral agents, but they are a part of the moral world." AI does not have mental states and intentionality by itself, but it has built-in intentionality, as it is poised to behave in certain ways in response to certain situations. Johnson, (2011), p. 202.

AI as a Moral Entity
Robots have intentionality because they have been poised to behave in certain ways by their producers, human beings. "The intentionality of computers is related to the intentionality of the designer and the intentionality of the user." Johnson, (2011), p. 201.

AI as a Moral Entity
Johnson argues for the idea that robots have built-in intentionality. Johnson's argument is based on two distinctions: the distinction between natural entities and human-made entities, and the distinction between mere artifacts and technology.

AI as a Moral Entity
Artifact: a man-made material object. Technology: a socio-technical system. Technology is "a combination of artifacts, social practices, social relationships, and systems of knowledge. These combinations are sometimes referred to as socio-technical ensembles or socio-technical systems or networks." (p. 197)
Johnson, D. G. (2006). Moral entities but not moral agents. Ethics and Information Technology, 8, 195-204.

AI as a Moral Entity
Technological tools have meaning only in the particular contexts (human social institutions) in which they are produced, recognized, and used. Robots and computers are not mere artifacts separated from their contexts. To have a correct understanding of robots, we need to consider them in their social context. Johnson, (2011), pp. 197-8.

AI as a Moral Entity
Johnson: "Computer systems have meaning and significance only in relation to human beings; they are components in socio-technical systems. What computer systems are and what they do is intertwined with the social practices and systems of meanings of human beings." Johnson, (2011), p. 195.

AI as a Moral Entity
Robots and computers have built-in intentionality once they have been produced; they are able to act independently and without human intervention. "The intentionality of computer systems means that they are closer to moral agents than is generally recognized. This does not make them moral agents because they do not have mental states and intending to act, but it means that they are far from neutral." "Computers are closer to being moral agents than are natural objects." Johnson, (2011), p. 202.

AI as a Moral Entity
Do you agree with the idea that AI is a moral entity, but not a moral agent?
(Photo by Pixabay.com)

AI as Fellow Moral Agent
Sullins: AI is a fellow moral agent, like a trained dog. Sullins discusses an example from another human technology in the history of human civilization, i.e., the domestication of wild animals, e.g., dogs. By breeding dogs for human uses, we have manipulated nature to human ends.

AI as Fellow Moral Agent
Consider the example of guide dogs for visually impaired people, and the relationship between the trainer, the guide dog, and the visually impaired person. The trainer and the dog share moral agency. Sullins: "Certainly, providing guide dogs for the visually impaired is morally praiseworthy, but is a good guide dog morally praiseworthy in itself? I think so." Sullins, (2011), p. 153.

Fourth Position: AI as Fellow Moral Agent
Do you think a trained dog is a moral agent? If yes, to what extent?
(Photo by Pixabay.com)

AI as Fellow Moral Agent
Is a robot (a nurse robot) similar to the guide dog, or is it just a tool like a hammer? Sullins: "No robot in the real world or that of the near future is, or will be, as cognitively robust as a guide dog. Yet even at the modest capabilities robots have today some have more in common with the guide dog than a simple tool like a hammer." Sullins, (2011), p. 153.

AI as Fellow Moral Agent
When it comes to the behaviors of a robot, the robot is not the only locus of moral agency; however, it can be seen as a fellow moral agent in a community of moral agents. Moral agency is found in a web of relations within a community. So, the robot itself is a fellow moral agent, while its programmers, builders, users, and even its marketers are all morally responsible. All of them form a community of interaction within which the robot itself can be considered a fellow moral agent. So, the programmers of the robot are somewhat responsible, but not entirely. Sullins, (2011), p. 155.

AI as Fellow Moral Agent
Do you agree with the idea that AI is a fellow moral agent? Why? Why not?
(Photo by Pixabay.com)

Ethical Case
What should the self-driving car do? Watch the video: "The ethical dilemma of self-driving cars" by Patrick Lin, https://www.youtube.com/watch?v=ixIoDYVfKA0