Class 15 - Ethics and Technology PDF

Summary

This document is a presentation on ethics and technology, focusing on class discussions, readings, and case studies. It examines ethical issues around surveillance, bias in technology, dual-use technology, corporate ethics, and government regulations for AI development. The presentation also evaluates the effectiveness of different approaches to ethical AI.

Full Transcript


Ethics and Technology
Class 15, December 2, 2024

Agenda
- Recap surveillance studies
- Ethics and regulation in technology
- Discuss the readings: Wong, “Things Get Strange When AI Starts Training Itself” (2024); Metz, “Is Ethical AI Even Possible?” (2019)

Recap: Surveillance
Governments and police often argue that surveillance is necessary to prevent crime or increase safety, but studies have shown that more surveillance does not prevent most crimes or make us safer. Constant surveillance impacts our thoughts and behaviors; it is a form of social control. We may start to change how we act or what we feel comfortable saying. And surveillance capitalism, through mass data collection and advertising, is trying to predict and change our behavior.

Recap: Surveillance
Watch from 4:45 to the end: “Is Facebook listening to your conversations?” The Verge, 2018. https://www.youtube.com/watch?v=G1q5cQY4M34&ab_channel=TheVerge
Instagram ads

Privacy/security resources: Electronic Frontier Foundation
https://www.eff.org/pages/tools
- Use a password manager. Don’t use the same password for everything.
- Use two-factor authentication whenever available.
- Use a VPN.
- Turn off GPS on your phone (or at least when you aren’t using it).
- Use privacy-focused search engines and browsers like DuckDuckGo.
- Use encrypted platforms for texting and/or for email.
- Use secure platforms for documents you store in the cloud, like Sync.
- Don’t allow apps to access your contacts or location, especially if they do not need it to function.
- Be very cautious about what you do online when using public wifi.
- Use a numeric passcode instead of biometric data like facial recognition or fingerprints to unlock computers or phones. Police may be able to force you to unlock your device using biometrics.

Ethical code
1. How do we decide what is ethical for technology?
2. Whose ethical code will we use? Will this be the same across the world?
Once we agree on a set of ethical principles, how do we ensure that everyone will follow these rules?

Regulation
1. Should companies regulate themselves (internally)?
2. Should governments regulate companies and/or technologies?
3. What about international organizations?

Bias in technology
We have looked at many examples of technology this semester:
- Beauty AI (an algorithm that judged beauty standards)
- Soap dispensers and face-tracking that did not work for people with dark skin
- Gender and voice assistants
- Technoableism
- Surveillance
These are all different types of bias. What do we do about bias or discrimination in technology?

Bias in technology
“Building ethical artificial intelligence is an enormously complex task.” “It gets even harder when stakeholders realize that ethics are in the eye of the beholder.” (Metz)
Who should decide what is ethical for technology?

Dual-use technology
“Clarifai specializes in technology that instantly recognizes objects in photos and video.” “Policymakers call this a ‘dual-use technology.’” “It has everyday commercial applications, like identifying designer handbags on a retail website, as well as military applications, like identifying targets for drones.”

Debating ethics
“This and other rapidly advancing forms of artificial intelligence can improve transportation, health care and scientific research.” “Or they can feed mass surveillance, online phishing attacks and the spread of false news.” (Metz)
How do we balance the benefits of technologies with the risks?

Corporate ethics
“Employees at Clarifai worry that the same technological tools that drive facial recognition will ultimately lead to autonomous weapons.” “Thousands of A.I. researchers from across the industry have signed a separate open letter saying they will oppose autonomous weapons.” (Metz)
At Clarifai, “some employees grew increasingly concerned their work would end up feeding automated warfare or mass surveillance.”

Debating ethics
“A few days later, Mr. Zeiler held a companywide meeting.” “He explained that internal ethics officers did not suit a small company like Clarifai.” “And he told the employees that Clarifai technology would one day contribute to autonomous weapons.” (Metz)

Debating ethics
“Though some Clarifai employees draw an ethical line at autonomous weapons, others do not.” “Mr. Zeiler argued that autonomous weapons will ultimately save lives because they would be more accurate than weapons controlled by human operators.” The argument is that autonomous weapons will save lives: “A.I. is an essential tool in helping weapons become more accurate, reducing collateral damage, minimizing civilian casualties and friendly fire incidents.” (Metz)
Do you agree? Whose lives are saved? Whose are at risk?

Corporate ethics
Many companies like Google and Microsoft “are creating corporate principles meant to ensure their systems are designed and deployed in an ethical way.” “Some set up ethics officers or review boards to oversee these principles.” But: “Companies can change course. Idealism can bow to financial pressure.” (Metz)

Corporate ethics
Many companies have decided to form their own ethical guidelines. They decide what is ethical and what is not, what kinds of technology they will build, and how they will build it.
What are the advantages to this approach? What potential problems does it present?
Google’s ethics board
Google “agreed to set up an external review board that would ensure the lab’s research would not be used in military applications or otherwise unethical projects.” “But five years later, it is still unclear whether this board even exists.” “Google, Microsoft, Facebook and other companies have created organizations…that aim to guide the practices of the entire industry.” “But these operations are largely toothless.”

Google’s ethics board (2021)
Google formed an internal ethics board, but in 2021 they fired two of their top AI ethics researchers: “Gebru was fired after arguments with managers over a research paper she co-authored with Mitchell.” “The machine learning community has [not] been very open about conflicts of interest due to industry participation in research.” (Vincent)

Google’s ethics board
One of the lead ethics researchers published a paper that found “problems in large-scale AI language models — technology that now underpins Google’s lucrative search business.” “The firings have led to protest as well as accusations that the company is suppressing research.”
James Vincent, “Google is poisoning its reputation with AI researchers.” The Verge, April 13, 2021. https://www.theverge.com/2021/4/13/22370158/google-ai-ethics-timnit-gebru-margaret-mitchell-firing-reputation

Facebook Supreme Court
In 2020, Facebook established an Oversight Board, or ‘Supreme Court,’ which will make decisions about speech and content policy for the platform. “The board will consist of 40 members from around the world.” “The board's decisions to uphold or reverse Facebook's content decisions will be binding, meaning that Facebook will have to implement them.”
https://oversightboard.com/

Facebook Supreme Court
Some believe this will help regulate Facebook: the Oversight Board can make decisions independently from Facebook, without financial pressure from the company.
Others think it is not effective: it can only review certain kinds of cases, and it could allow Facebook to avoid stricter government regulation, since Facebook can say that it is regulating itself.
https://www.newyorker.com/tech/annals-of-technology/inside-the-making-of-facebooks-supreme-court

Government regulation
“Some activists — and even some companies — are beginning to argue that the only way to ensure ethical practices is through government regulation.”
Do you agree?

Government regulation and ethics
The Pentagon is “now building its own set of ethical principles.” “The Pentagon has said that artificial intelligence…has not been used for offensive purposes.” But “the Pentagon is motivated to keep pace with China, Russia and other international rivals as they develop similar technology.” “For that reason, some are calling for international treaties that would bar the use of autonomous weapons.” (Metz)

Government regulation and ethics
“A three-star Air Force general said the U.S. military’s approach to artificial intelligence is more ethical than adversaries’ because [the US] is a ‘Judeo-Christian society.’” “Regardless of what your beliefs are, our society is a Judeo-Christian society, and we have a moral compass. Not everybody does,” Moore said. “And there are those that are willing to go for the ends regardless of what means have to be employed.” The future of AI in war depends on “who plays by the rules of warfare and who doesn’t. There are societies that have a very different foundation than ours,” he said, without naming any specific countries. However, “people from a wide range of religious and ethical…”

Government regulation
Government regulation is also complicated. Governments are pressured by companies and lobbyists ($). Sometimes they do not understand the technology they are supposed to regulate… (this was a real conversation from a Congressional hearing in 2023)

Public discussion?
What about the public? We use and are affected by these technologies. Should the public be able to influence the ethics and rules around technology?

Discussion: Regulating ethics
We’ve seen a few ways to try to regulate ethics in technology:
- Employees at tech companies
- Internal guidelines at tech companies / the military
- Government regulation of companies
- International regulation of countries/technologies
- What about the public?
Which do you think is most effective? Which has the best chance of success? Why?

UN recommendations
The United Nations published a report proposing a set of universal ethics and making ethical recommendations for technology. As an international organization, the UN has only a limited ability to enforce any rules. The first question the report asks is “which ethical code should they be programmed with?”

UN recommendations
“Asimov’s Three Laws of Robotics are frequently considered as the standard answer to this question.” Here we see the relationship between real-world technology and science fiction. “However, it is generally accepted…that such laws are too general, potentially contradictory and non-applicable in the real world.” (COMEST p. 45)

Isaac Asimov’s Three Laws of Robotics
First Law: A robot may not injure a human being or, through inaction, allow a human being to come to harm.
Second Law: A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.
Third Law: A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.

What ethics?
But if we don’t use science fiction as a guide, what ethical principles should we use? Cultural values? Legal traditions? Religious ideas? The UN tried to create a set of universal human values which could guide ethical decision-making.
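The COMEST point that the Three Laws are “too general, potentially contradictory and non-applicable in the real world” can be made concrete by trying to write them as code. The sketch below is a toy illustration (not from the readings): every input to it stands in for a judgment, such as what counts as “harm,” that no one actually knows how to compute, and a trolley-style dilemma makes the First Law forbid every available choice.

```python
# Toy sketch: Asimov's Three Laws as predicates. The point is not that
# this works, but that the hard part is hidden in the inputs: deciding
# what counts as "harm" or a "conflict" is exactly the open problem.

def violates_first_law(humans_harmed: int) -> bool:
    # First Law: a robot may not injure a human being or, through
    # inaction, allow a human being to come to harm.
    # But what is "harm"? Physical injury? Economic loss? Distress?
    return humans_harmed > 0

def permitted(humans_harmed: int, ordered_by_human: bool,
              endangers_robot: bool) -> bool:
    if violates_first_law(humans_harmed):
        return False
    # Second Law: obey human orders unless they conflict with the First Law.
    if ordered_by_human:
        return True
    # Third Law: self-preservation, unless it conflicts with the
    # First or Second Law.
    return not endangers_robot

# Trolley-style dilemma: acting harms one person, not acting harms five.
# Both options violate the First Law, so it gives no guidance at all.
print(permitted(humans_harmed=1, ordered_by_human=False, endangers_robot=False))  # False
print(permitted(humans_harmed=5, ordered_by_human=False, endangers_robot=False))  # False

# A harmless ordered task is permitted, but only because we simply
# *asserted* humans_harmed=0, which is the contested judgment.
print(permitted(humans_harmed=0, ordered_by_human=True, endangers_robot=True))   # True
```

Even this caricature shows the two failure modes the report names: the laws are contradictory in dilemmas (every action is forbidden) and non-applicable in practice (the predicates cannot actually be evaluated).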
UN ethics: Human dignity
“Dignity is inherent to human beings, not to machines or robots.” “Robots and humans are not to be confused even if an android robot has the seductive appearance of a human.” Even if they look like humans, and even if they are smarter and more powerful than humans, the UN says they are not human. Where do we draw the line?

UN ethics
Autonomy: people can “refuse to be under the charge of a robot.”
Right to privacy.
Do-no-harm principle: “Robots are usually designed for good and useful purposes…to help human beings, not to harm or kill them.”

UN ethics: Doing good
Beneficence (doing good): if a robot can help humans but can also be used to conduct surveillance or policing, how do we decide whether it is doing good or not? “Is the particular type of robot used imposed on people or has it been designed for the people and eventually with the people?”

UN ethics: Diversity
“Robots – especially social robots – may be accepted in certain settings and not in others.” “Greater sensitivity to cultural and gender issues should drive research and innovation in robotics.”

UN ethics: Justice and jobs
“The extensive use of industrial robots and service robots will generate higher unemployment for certain segments of the work force.” “This raises fears concerning rising inequality within society if there are no ways to compensate, to provide work to people, or to organize the workplace differently.”

UN ethics: Public discussion
“Robots will have profound effects on society and on people’s everyday lives.” “Citizens need to be equipped with adequate frameworks, concepts, and knowledge.” “Public discussions need to be organized about the implications of new robotic technologies for the various dimensions of society and everyday life.” How do we do this?
Moral Machine: a morality game
“A platform for public participation in and discussion of the human perspective on machine-made moral decisions.”
https://www.moralmachine.net/

UN Recommendations: Autonomous cars
“In all cases, the human is accountable.” The UN says that the human driver must always be paying attention and ready to intervene. This means these vehicles would be semi-autonomous, rather than fully autonomous. Any accident would be the driver’s responsibility (not the car’s programmer or manufacturer).

UN Recommendations: Drones
“Armed drones have given humanity the ability to wage war remotely.” “This may be attractive in reducing the need for ‘boots on the ground’, but it threatens to change fundamentally the nature of conflict.” “One-sided remote warfare is massively asymmetric, with an attacker in a position to kill an adversary without any threat to him/herself.” “An ethical justification for such a situation is difficult to find.”

UN Recommendations: Drones
“The ability to go to war without exposing one’s own soldiers to direct threat lowers the perceived cost, and hence the activation barrier, of declaring war.” “This raises the worrying prospect of low cost continuous warfare.” The report recommends that countries limit or ban autonomous weapons “as they have done for other weapons that have been limited or made illegal such as anti-personnel mines and chemical and biological weapons.” “Our strong, single recommendation is therefore that, for legal, ethical and military-operational reasons, human control over weapon systems and the use of force must be retained.” (p. 54)

UN Recommendations: Surveillance
Drone surveillance technology in police departments “should be decided by the public’s representatives, not by police departments.” “Drones in police use should not be equipped with either lethal or non-lethal weapons.” “Autonomous weapons should not be used in police or security use.”

UN Recommendations: Gender
“Roboticists should be sensitized to the reproduction of gender bias and sexual stereotype in robots.” “Particular attention should be paid to gender issues and stereotyping, and in particular, toy robots, sex companions, and job replacements.”

UN Recommendations: Environment
“Environmental impact should be considered as part of a lifecycle analysis” of “whether a specific use of robotics will provide more good than harm for society.” “This should address the possible negative impacts of production, use and waste (e.g., rare earth mining, e-waste, energy consumption), as well as potential environmental…”

Next class: Ethics and technology, part 2
Read: “Report of COMEST on Robotics Ethics.” Paris: UNESCO and COMEST, 2017. Read pages 48-55, starting with the Recommendations. We will do a class activity and begin the ethics assignment.

UN Recommendations: Robots and AI
Cognitive robots: these robots can learn from experience and “can make decisions in complex situations, decisions that cannot be predicted by a programmer.” “The responsibility for the robot’s actions is unclear,” and “its behavior in environments that are outside those it experienced during learning…can be potentially catastrophic.” (p. 48) “The question of who is responsible for the consequences of such (unpredictable) decisions has deep ethical aspects.”

UN Recommendations: Robots and AI
Deterministic robots: these are “programmed to do clearly defined tasks,” and “their actions are controlled by a set of algorithms whose actions can be predicted.” “The behavior of the robot is determined by the program that controls its actions.” “Responsibility for its actions is therefore clear, and regulation can largely be dealt with by legal means.”
