Technology, Artificial Intelligence, and Ethics PDF

Summary

This document discusses the concepts of artifacts, technology, and ethics, exploring the nature of 'artifacts' as effects of human intervention. It also delves into theoretical ethics, arguing that there is no single essence of ethics that applies to all of its branches. Keywords: philosophy, ethics, critical thinking.

Full Transcript

Matyas Hana (00:00.888) Yeah. First, describe what the object is. We already did that: technology and artificial intelligence. Then I'll move on in a minute to the perspective we will take in order to think about technology in general and artificial intelligence specifically. Questions from last week: is there anything you didn't understand well? Maybe you need some other examples. Matyas Hana (01:39.213) Wow. Okay, you want that? Yeah, maybe regarding what an artifact is, if you could develop that a bit more; I didn't really get it. An artifact? Yes. Okay, I can do that. So, I was talking about technology. A first attempt to conceptualize an artifact: it is an effect of human intervention. But that, of course, is too broad. It would also include, for example, the hole in the ozone layer, because that is also an effect of human intervention. So our definition of an artifact should be more precise: it is an intended effect of human intervention. But that is still too broad. If I have some seeds, for example, and I want to have a tree at the end of the season, I put the seed into the ground and give it water every day or every week. At the end of the summer we have a tree. Then, following the definition, the tree would be an artifact, because it is an intended effect of my intervention: this is exactly what I wanted to have, and that is why I put the seed into the ground. That is the problem, because we would not normally call a tree an artifact, even though it is an intended effect of human intervention. So we need a third, more precise characteristic property. And I would suggest, and I think this covers most of our conception of what an artifact is, that unlike an artifact, a tree can at some point grow autonomously.
Matyas Hana (03:45.004) Spontaneously, naturally. Okay, spontaneously and naturally, because it uses water and air and sunlight and so on. A tree, for example. But that is totally different from all the things that we call artifacts, like this computer, for example: it cannot grow spontaneously or naturally, and that is what distinguishes it from things that are not artifacts. So that is what I would call my rough working definition of what an artifact is. Of course, you can think of examples in gray zones. It is very difficult to find very precise borders between what an artifact is and what it is not, what really distinguishes it from a non-artifact, but this should suffice, at least for my part, as a rough definition. Other questions, perhaps? Yeah, thank you. Other questions? Yeah. Matyas Hana (04:58.188) So these two are... Matyas Hana (05:09.302) So being an intended effect holds for an artifact, but it could also hold for a non-artifact. The fact that it does not grow automatically, spontaneously, is what characterizes an artifact. That is the difference. Matyas Hana (05:45.324) Well, I tried to define it, and I started very broadly: the effect of human intervention was the first step. I made three steps. The first step is human intervention; the second, an intended effect, because the first is still too broad; and the third, an intended effect that cannot grow on its own. Matyas Hana (06:09.311) Yeah. All right, let's move to the ethics of AI and technology. So what does it mean to take a course in the ethics of technology? I completely understand that all of you, or some of you, or the majority of you ask: why should I take this kind of course, ethics of AI and technology? Why? I am going to answer that question at the end of this chapter.
First, I am going to try to conceptualize and define what ethics is. Is it really just giving opinions, or giving preferences and giving arguments for your preferences? That is not true. I am going to start very briefly: ethics is just a subdomain within philosophy, in the sense that there are several other subdomains that are also philosophy but not ethics. We have logic, political philosophy, metaphysics, philosophy of science, philosophy of sport, philosophy of culture, and ethics. So all ethics is philosophy, but not all philosophy is ethics. And thus ethics is also not the essence of philosophy. 'Essence' here means something very specific, and we are going to define it: the essence is a property. This is a technical explanation of what an essence is. Matyas Hana (08:20.075) Take, for example, the set 'ethics': all ethical theories, or all ethicists, people doing research in ethics and teaching it. These are the members of the set ethics: the theories and the ethicists. So if we are looking for the essence of ethics, what we are asking for, what we want to have at the end of the day, is some kind of property that belongs to all the members of the set, and only to the members of the set. An essence, in other words, is a universal property, a property that belongs to all the members of the set, and also a unique property: a property that distinguishes ethics from things or domains that are not ethics. Matyas Hana (09:17.003) So at the exam, as I said last week, I am going to have, say, eight points out of twenty based on argumentation questions. I could ask: give an argument against or in favor of an essentialist conception of what ethics is. An essentialist conception. Some people ask every year: sir, what does this word refer to? Essentialist. We have it on the slide.
An essentialist conception is a conception that says that ethics does have a universal and unique property. And we have to explain that. A unique property is a property that belongs only to the members of the set, in this case, ethics. A universal property is a property that belongs to all the members of the set, in this case, ethics. So if you hold an essentialist conception, you believe that ethics has something that is shared by all its members and only by its members: something unique about ethics. Going back to the previous slide: within philosophy, ethics is just a subdomain. So we cannot say that ethics is the essence of philosophy. Why? Because the essence of philosophy would be a property that is shared by all members of the set 'philosophy'. Ethics is just a proper subset of philosophy, and thus it cannot be the essence of philosophy. Matyas Hana (11:02.73) So in order to answer the question whether we have an essence of ethics, we should try to find something that belongs to all ethicists and only to ethicists: the universal and unique property. And I am going to give the answer already: if you look at the set 'ethics' in general, there is nothing unique and nothing universal about ethics. That is also one of the reasons why you, as students outside the domain of ethics, are taking this kind of class; I will explain that in a bit. So why don't we have, in general, an essence of ethics? Let's look at three levels: first the level of attitude, then the kind of method that ethicists use, and then the kind of questions that those people are asking. First, the attitude. Look at the word 'philosophy', which comes from Greek: philos and sophia, love of wisdom. When you are trying to...
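The definition of an essence as a property that is universal over a set and unique to it can be stated in set-theoretic terms and sketched in code. This is an illustrative sketch only; the function name and the toy sets are my own, not from the lecture:

```python
def is_essence(members, non_members, has_property):
    """An essence is a property that is universal (held by every
    member of the set) and unique (held by no non-member)."""
    universal = all(has_property(m) for m in members)
    unique = not any(has_property(x) for x in non_members)
    return universal and unique

# Toy illustration: critical thinking is universal among ethicists
# but not unique to them, so it fails the essence test.
ethicists = {"ethicist_a", "ethicist_b"}
others = {"engineer", "baker", "psychiatrist"}
thinks_critically = lambda person: True  # everyone here is critical

print(is_essence(ethicists, others, thinks_critically))  # False
```

The lecture's argument against essentialism is exactly a failure of this test: every candidate property (wisdom, critical thinking, thought experiments) is either not universal over ethics or not unique to it.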
Matyas Hana (12:56.617) ...your friends, your husband, your wife, your guests. Family is important. Studying is important, yes, but so is physical exercise. And the other way around: go out to bars and restaurants, but also study. So try to find some kind of middle ground between the extremes. These are examples of wisdom, things that we try to pass on. When we believe something is important, we also try to transfer it to another mind. My grandmother or my mother, for example, is always proclaiming and giving you wisdom: 'deep', between brackets, insights that would help you navigate the world. But to be honest, as an ethicist, and as a philosopher in general, I don't care about wisdom; I am not interested in it professionally. First, wisdom is not the privilege of philosophy or ethics; a lot of people who are not ethicists are also interested in wisdom. My mother, for example, every time I call her: be careful, do this, do that. Matyas Hana (14:11.003) So searching for wisdom cannot be the answer to the question of whether ethics has an essence. It is not unique to philosophy or ethics. But it is also not something shared by all the members of the set 'ethics', because as a philosopher I am not interested in handing you wisdom; I am interested in different things. Okay, second possibility. I guess that some of you, if you are out at a party and someone asks what you do in a course on the ethics of technology and philosophy: well, what is it? It's critical thinking. And in a sense, yes, that is correct, of course. What we try to do here is be critical about a couple of things, and I assume that all ethicists and philosophers are critical beings. But of course, being critical is not unique to philosophy or ethics. I guess that you as engineers are as critical as a philosopher is, and probably more critical than a philosopher could be.
But it's the same not only for engineers; also for people studying social sciences. Matyas Hana (15:31.41) Psychiatrists, psychologists, all those other disciplines, and also people outside academia are, of course, critical. The people making my bread, for example: I guess, or at least I hope, that they are critical as well, that they can make a distinction between substances that are good and not good, dangerous or not dangerous, safe and not safe at all. Same for the engineer, same for the psychiatrist. So being critical cannot be the essence of philosophy or ethics. At the same time, however, I can see why people think that ethics has to do with being critical. Why? Because in the history of philosophy there are several people who are very important and very well known. One of them is René Descartes, a French philosopher of the 17th century. He is known for his famous expression 'cogito, ergo sum': I am thinking, and therefore I am. And that is the kind of sentence that expresses a very critical attitude. That is also the reason, I guess, because he is so well known, why a lot of people link philosophy and ethics to critical thinking: he is very famous and also a very good example of a human being who is very, very critical. And in what sense is he very critical? Matyas Hana (17:19.112) Where does this come from? Well, he was a philosopher who was also working in mathematics, and his purpose was to find theoretical points he could be sure of, points that could not be criticized. And therefore he started to be critical about everything: not only about everything given to him, but also about all the sensations and perceptions that he has. You see me here standing, talking.
That's an impression that you have. That's a perception, a kind of sensation. But how can you be sure that you are really sitting here in Leuven at 4:30 on a Friday, attending a lecture in philosophy? How can you be sure about that? You have the impression; that's a sensation, a perception. But how can you be sure that this is really the case? Same for the yellow here, the black there, the T-shirt, whatever: how can you be sure about the color? That is the kind of question he was asking. He was critical of everything: knowledge, perceptions, et cetera. But if you are unsure about everything, then you are, of course, sure about at least one thing: namely, that you are unsure about everything. And being unsure about everything means that there is something or someone who is unsure about everything, and thus you can be sure that you exist. Right? That is the idea. So 'cogito, ergo sum' means, more precisely: I am doubting, I am skeptical about everything, and therefore, precisely because of that, I exist. Matyas Hana (19:21.703) I am not sure about anything; but there is one thing I am sure about, namely that I am not sure about anything, and there must be something that is not sure about anything, and therefore I can say: I exist, at least as a thinking thing. Matyas Hana (20:39.76) So if we look at the first level, attitude, there is no reason to believe ethics has an essence. There is nothing that is unique to ethics: engineers and others are critical as well, and as for striving for wisdom, ethicists don't care about wisdom, at least on a professional level. Then let's look at the more methodological level.
Experimental work. Experiments have been seen as important since, let's say, the 16th or 17th century. Francis Bacon, a very famous figure in intellectual history in Europe, was one of the first people to say: if you want good, reliable, objective scientific insights, facts, evidence, data, then you have to do this kind of work, experimental work. And you do something rather peculiar during an experiment. It could be a thought experiment, yes, but also a non-thought experiment, the kind of experiment that you do in a lab. An experiment means that you are setting up some kind of artificial situation, and at the end of the day you want to get new information, or you want to test a hypothesis based on that experiment. That is more or less the idea: it is an unusual, unnatural, artificial situation that you set up because you want to test something, you want to gain extra information. Is the hypothesis correct or incorrect? That is new information. We have a lot of examples of that, but we also have them within Matyas Hana (22:42.022) domains that are not about science and engineering. In philosophy, for example, we also have experiments, namely thought experiments, which are also non-natural, non-spontaneous, artificial situations. But we locate them not in a room, but in the mind; it is a purely cognitive setting. We will do a thought experiment in a minute, about the runaway trolley. The point I want to make is that thought experiments are often, most of the time, linked to philosophy, but that is not justified. Take Schrödinger. You know about Schrödinger's cat; I am not going to explain it, you probably know it better than I do. That is also a thought experiment, and Schrödinger was not a
philosopher; he did his work in physics. So also within domains that are not about philosophy and ethics, people set up thought experiments to gain extra information. So again: thought experiments can also not be the essence of ethics or philosophy. So if I ask at the exam: give an argument against the essentialist conception of what ethics is (and I explained this last week, in the first lecture), you need to do three things. First, you explain what the thesis is: just a very descriptive definition of the thesis you need to attack or justify. Then you give the argument, very briefly: this is the argument. And then you explain why this argument counts against (or in favor of) that thesis. Right? So: thought experiments are an argument against an essentialist conception of ethics. Because thought experiments, even if they were used by all ethicists, are not used only by ethicists. And thus they are an argument against the essentialist conception, because an essence refers to something that is unique to the members of the set. And this is not unique, because we also have, for example, Schrödinger's cat, which is also a thought experiment. Matyas Hana (25:07.589) Got that? Okay, let's work through an example. We will learn a couple of things about thought experiments in an ethical context, and it is also going to be relevant for the second chapter. Okay, you know this picture, of course. Didn't we do this last week? You did? We saw it. You know it? So: you are standing here, next to the track, and here is the trolley, and you cannot stop the trolley. You know that; it is not going to stop. If you don't do anything, the trolley is going to kill five people. If you do something, if you pull the lever here, the trolley will not kill these five people.
Instead it goes the other way, where it kills only one person. The thought experiment is not about what the right choice is; that is not the question. The question is: what would you spontaneously answer if I asked you, standing here, what would you do? My point is not that you are wrong or correct. My point is: what is in an engineer's mind, for example? What would he or she do? Okay, let's do it. I am going to learn something about you. So who would do something? Who would say: I am not going to let the trolley kill five people; it is only one person that should die? And who would do nothing? Just a few hands: one person, three people, five. That is a minority. So keep in mind that just a minority of people would say: don't do anything. The majority would say: do something. Of course, I should have mentioned that you don't know anything about the identity of those six persons. It would make a difference, of course, if one of them could be your girlfriend or mother or daughter. At least I hope so. Matyas Hana (27:26.779) Okay, keep your own answer in mind. In general, the majority would do something. So the majority would say: I am going to go for a world in which I did something, and the end result is that only one person is killed. The minority would say: I am going to go for a world in which I do nothing, but the end result is that five people are dead. Okay, keep that in mind. Keep your own answer in mind, and also the most frequently given answer, which is that you do something. Okay, what would you do now? You are standing here again, on a bridge this time. There is someone else; again, you don't know that person, he is a stranger, you don't know anything about him. And here too, you don't know anything about these five people. What would you do? Matyas Hana (28:31.155) What's the scenario? You push someone. Like, why? Because he's heavy and he's going to stop it, or what?
If you push the guy off the bridge, the trolley stops, because it hits the guy. Sounds fun, bro. Matyas Hana (28:57.676) So, okay, well, yeah. Matyas Hana (29:11.437) To be clear, your answers are of course not surprising. I do this every semester with, let's say, a thousand or fifteen hundred people, and it is more or less always the same. Most people say, in the first case, I am going to do something. Most people say, in the second case, I am not going to do anything. But this is bizarre. I am not saying that it is incorrect; it is bizarre, because it is inconsistent. Take me, for example: in the first case I said, I am going to do something. That is my answer. So I chose a world in which, at the end of the day, only one person is dead. That is the best result, I am saying: one person dead is better than five. But if I followed the same principle, then in the second case I should say: yes, push him, because then you save five people. That would be the same principle, going for the best result, fewer people dead than in the other option. It would mean deciding to push someone with my own hands. So I am being inconsistent. If people take ethical decisions, and this is obviously an ethical decision, then, as I learned at least from you, they are not consistent. Sometimes you go for what we call a consequentialist perspective. That is the link to the second chapter. It is not on the slide, but it is important. A consequentialist perspective means that the majority of you, in the first case, went for the best consequences, the best effects of your intervention: a world with only one person dead is preferable to a world with five people dead. That is a better outcome.
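The inconsistency described here can be made concrete: a strictly consequentialist rule minimizes the number of deaths and therefore picks the "intervene" option in both scenarios, whereas most respondents intervene only in the first. A minimal sketch, with hypothetical encodings of the two cases:

```python
def consequentialist_choice(options):
    """Pick the option whose outcome has the fewest deaths."""
    return min(options, key=lambda option: option[1])

# (action, resulting deaths) pairs for the two trolley scenarios.
lever_case = [("do nothing", 5), ("pull the lever", 1)]
bridge_case = [("do nothing", 5), ("push the man", 1)]

# A consistent consequentialist intervenes in both scenarios:
print(consequentialist_choice(lever_case))   # ('pull the lever', 1)
print(consequentialist_choice(bridge_case))  # ('push the man', 1)
```

Most respondents match the first output but refuse the second, which is exactly the inconsistency the lecture points out: they apply a consequentialist principle in one case and abandon it in the other.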
Matyas Hana (31:16.899) Whereas the majority of you in the second case said: I am not following a principle that is consequentialist in nature, because if I did, I would push the person; but I am not going to do that. So I am following a non-consequentialist perspective. Which one exactly does not matter here, but it is a non-consequentialist perspective. That is something that I learned from this exercise. I didn't know what your beliefs about ethics were, but now I have learned something: you are not consistent. Same for me; I am not consistent. Okay, let's move on. Matyas Hana (31:58.242) That was not an option. Yeah, that's a different question. So the question is: what if there is a third option, not to kill anyone? But that was not given; you only had two options. And that is not because I want a realistic example that covers reality; in reality there might be something else you could do. But again, that is not the point. The choice is built into the situation: it is certain that one of those two outcomes will happen. Matyas Hana (34:00.928) If we look now at the third level, in order to find out what the essence of ethics could be, some people would say: look at the kind of questions ethicists are asking. Very often the definition given is that they deal with abstract, unanswerable, unverifiable questions. That is probably correct. Do we have a free will? What is time; does it exist independently of your consciousness? Another example: imagine a forest. You are standing there, a tree is falling, and you hear something; you hear something because the falling tree makes a sound. But now imagine that no one is there to hear the tree fall: is there a sound? That is the kind of difficult philosophical question.
I am saying: probably all kinds of philosophical and ethical questions are not very easy to answer, yes. But in other domains there are, I guess, far more difficult questions than the philosophical and ethical ones: physics, evolutionary theory, Big Bang research, whatever. Those are way, way more difficult than these difficult kinds of philosophical questions. This slide is very important for the exam. Think about the multiple-choice questions: this slide is key in order to understand this first chapter and also to prepare for the exam. Matyas Hana (35:50.78) If you look at ethics in general, then there is no reason to say that ethics has an essence: not at the level of questions, not at the level of methodology, and not at the level of attitude. So then the question becomes: what if we look at the subdomains within ethics? Matyas Hana (36:23.05) What if we divide ethics and look at several subsets within the domain? Is there reason to believe that these subdomains do have an essence? There are three of them: metaethics, descriptive ethics, and normative ethics. And it is especially the last one that is going to be very important in the second and also the third and fourth lectures. So the question will be: does metaethics, does normative ethics, does descriptive ethics have an essence? Is there something unique and universal about them? And there, you will see, there is. Look first at metaethics. Metaethics is again a very abstract domain within ethics; the questions it asks are very theoretical. They are mostly about things that people believe are desirable or undesirable in an ethical sense.
One kind of metaethical question: if you look at science, there we can talk about scientific facts, about things that are facts. But do we have reason to believe that we can talk about ethical facts, or, which means the same, moral facts? Do moral facts exist? When we utter sentences saying that some course of action is desirable or undesirable, are we expressing the belief that something is a fact, or are we just expressing our own preferences? 'Killing is bad': is that stating a fact, or is it just expressing the belief that killing is undesirable? Matyas Hana (38:29.428) That is the kind of question that people in metaethics are asking. Another kind of question they are asking: do we have a clear idea of... Matyas Hana (38:40.833) Maybe you weren't there. Yeah, you. Matyas Hana (38:55.308) I know this is a Friday and I completely understand that, but you are studying at university; you don't have to be here. For more than twenty minutes now you have been laughing all the time, and your neighbor too can't stop laughing and talking, et cetera. I don't know why you would be here if you don't have to be, laughing all the time. You are wasting your own time, but it is also unpleasant, at least for your neighbors. So can you give me a good reason why you are here while at the same time not listening at all? Matyas Hana (40:02.728) It will never happen again. Matyas Hana (40:06.943) Okay.
Matyas Hana (40:36.67) Okay, so we have a clear idea of some things being morally neutral: walking, for example, watching television, visiting a friend, things that are morally neutral in a way. At the same time, helping someone else is conceived by many, many people as something that is morally not neutral but desirable. Or punching someone in the face: generally we say that is undesirable, you should not do it. But where precisely does a morally neutral action stop being morally neutral? Where precisely does it become something that is morally relevant? Or the other way around: we have the idea of something that is morally desirable, but is there a criterion, some kind of threshold, at which we would say: here something morally desirable stops being morally desirable and becomes morally neutral? Maybe it is the same as with natural phenomena. In nature we have a lot of examples of gradual transitions, day and night for example. We have a clear example of day, 5 p.m., and also of night, midnight for example. But there is a gray zone: where precisely does the night stop and the day start, and the other way around? It is not clear. So maybe it is the same for ethics: maybe at some point some acts stop being morally neutral, and where exactly that happens is unclear. That is a metaethical question. And here, for the first time, we have an essence: this subdomain of ethics has an essence, in the sense that only those people, and all those people, are asking this kind of question. So if at the exam I ask for an argument in favor of an essentialist conception of ethics, you could answer with this.
Matyas Hana (42:43.134) Namely: metaethics is an argument, because people working in metaethics ask metaethical questions, abstract questions, and they are normally the only persons who ask these kinds of very abstract questions and try to investigate and answer them in a systematic way. The same goes for the second subdomain, descriptive ethics: it is also an argument in favor of an essentialist conception. So at the exam you could answer with that subdomain too. Ethics can indeed have an essence here, in the sense that this is a subdomain in which only those people ask these descriptive questions. One of the things descriptive ethicists try to track is what the moral beliefs of a group of human beings are, in a specific country or whatever place. Suppose I am a descriptive ethicist: I could be interested in finding out what first-year engineering students in Leuven think about euthanasia, for example, or abortion, climate change, whatever.
What I am going to do is organize interviews. I am going to talk to you, and this will give me some kind of indirect insight (not direct, of course, because I am not 100% sure what you are really thinking about ethical questions), some kind of indirect insight into the beliefs of first-year engineering students in Leuven. It is descriptive in the sense that it just tries to describe, without giving any kind of moral evaluation, what is in those people's minds. I am just trying to capture how you think about moral issues. And that is different from the third and most important subdomain, normative ethics. Matyas Hana (44:59.793) Normative ethics is the third subdomain, and it does not describe how people in fact think about ethical issues and questions, but how they should think about ethical issues. It is normative, meaning it is not descriptive but prescriptive: it tries to prescribe how we should think about ethical issues. So maybe as an engineer you think that privacy is unimportant, that privacy is bullshit. But normative ethics will give you reasons or arguments: maybe you believe that privacy is unimportant, but here are a couple of reasons to think that it is important. So normative ethicists try to argue for normative positions, positions in the sense that they try to say: this is a good way of behaving, this is a good way of thinking, versus this is a worse way of thinking and behaving. Generally, we can formulate the question of the normative ethicist as: what should I do in order to behave ethically? What is desirable? What is undesirable? What is correct? What is incorrect? What is fair? What is unfair? Et cetera. So they try to make a distinction between what is morally right and what is morally wrong. And that is different from the descriptive part.
To emphasize: in the second subdomain I'm just tracking how you in fact think; as a normative ethicist, I'm trying to prescribe how you should think about ethical issues. So they are different. Matyas Hana (47:01.148) That's not on the slide, but it's important to see that both could be connected. That's the first thing I want to say. Connected in what way? Think about the ethical guidelines of the European Commission, the ethical guidelines about AI; think about the GDPR. The GDPR is legislation made by EU people, mostly academics and policy makers. They have, as non-engineers, strong opinions about the importance of privacy, fairness, and so on. Now assume that we do descriptive research trying to capture how you as engineers, the people who actually make and develop those AI systems, think about privacy and fairness. Maybe there's a clash. The AI Act and the GDPR require that you take privacy, fairness, and those other ethical topics into account, but at the same time it could be that my descriptive research finds out that you as engineers don't care. You don't care about fairness, sustainability, rights; assume that. Then of course that's an issue, that's a problem. The Commission asks you to take privacy and fairness into account, but those who should take it into account, the engineers, couldn't care less. And that is where descriptive research becomes relevant, because it opens the way for workshops, teaching, handbooks, whatever, in order to make you more familiar with the questions, to create awareness about those issues, et cetera. Matyas Hana (49:01.883) That's the first thing I want to say about the relationship.
The second thing I want to say about the relationship is that the normative domain doesn't simply follow from the descriptive domain. It's not because you track how an engineer is in fact thinking that you then know how he or she should think about ethical issues. The fact that you, as a philosopher, an engineer, a psychiatrist, whatever, believe that privacy is bullshit doesn't of course give you a reason to conclude that it is bullshit. The normative conclusion that it is, or should be treated as, unimportant doesn't follow from a descriptive point. That a lot of people believe privacy is bullshit is no reason at all to believe that it is. If you do draw that conclusion, if you base a normative conclusion on a descriptive account, then you commit a fallacy. This is what we call the is-ought fallacy. Matyas Hana (50:35.087) This is-ought fallacy is a very famous fallacy, also called the naturalistic fallacy. How we should behave or think cannot be based upon how we in fact behave or think. It cannot, may not, be based upon that. If you do this, you commit a fallacy. An example from daily life: why do we go to France on holiday? Well, we did that, we used to go every summer, and thus we should go again this summer. Of course that's nonsense. You cannot conclude that we ought to go to France just because you always used to go to France. Likewise, it's not because people actually believe that privacy is unimportant that it ought to be unimportant. So a prescriptive claim cannot follow from a descriptive claim. That's important. Okay, there was a question. Matyas Hana (52:02.222) Yeah. Matyas Hana (52:07.482) So, if I understand the question: in normative ethics we try to make a distinction between what is desirable and what is undesirable. That's correct, yes.
And then the question is: does that imply that the distinction also continuously changes, because our norms and sets of rules change? Well, no. It's correct indeed that our beliefs and appraisals and values and norms do change, but they don't change continuously. I mean, the ideas you and I have today about what is fair are probably not going to be different tomorrow, or in a year, or in five years, I guess. So there is change, but it's change over time, probably over decades, in people's minds, and also at a more cultural level. If you look at privacy or animal welfare, for example, that has of course been changing over the years, but not over one month or one year; it's over decades. In the 70s people became more and more aware of animal welfare, et cetera. That took a while. So yes, beliefs and appraisals and norms do change; there is some kind of cultural change there, yes. But not continuous change. For example: if you asked me today why I am a vegetarian, I would say animal well-being is very important; killing another animal just because we like it is cruel. That's my opinion. But if you had asked me the same question living in the 50s, I would have said: what? What are you saying? That's the point. Matyas Hana (54:14.714) That's temporal variation. There's also spatial variation. My parents live in a small village, they are farmers; ask them what they think about the well-being of a chicken, for example. So there is cultural variation too, yes? Temporal variation and spatial variation. Good question. So my answer would be: yes, these things change, but not every day; it takes a while. Matyas Hana (55:19.385) So, good question. First, it's a fantastic question. But let me answer with a question; I don't know if that's fair of me. But it's a fantastic question.
My first question would be: who are those persons? It's not some persons sitting somewhere in an ivory tower or whatever. It's actually you. It's actually me. You are doing normative ethics, most of you, every day. People ask a lot of normative questions: they talk about climate change, migration, euthanasia, animal welfare, all those things. Those are very important questions. It's not someone sitting somewhere in a tower and deciding: okay, this is correct, try to follow it. Matyas Hana (56:21.175) Yeah, if I'm suggesting that kind of image, of people sitting in an ivory tower in academia, thinking about what should be done and what should be thought, and then transferring this to society, that's not the way I want to describe it. Normative ethicists are not people issuing edicts out of the blue from an ivory tower. They are not saying: you must do this, you must not do that. They do research, write it up in papers, and maybe at some point it gets picked up. And it has been picked up. Take an example from animal ethics. In the 70s, one of the most influential philosophers, Peter Singer, published a very famous book on animal ethics, Animal Liberation. It's probably one of the best-known books in the field, and it has had a very wide influence. And it was not that he was commanding: you must do this. He was writing down a couple of ideas, trying to show: this is indeed a better way of living, namely not to kill and eat animals, for example. You can engage with the argument or not. So there's more to it than issuing commands. If ethics is indeed about people doing research, then it's not a matter of decreeing: do this, don't do that. That's not the real point.
Matyas Hana (58:23.6) First thing: someone was asking, do we have to learn everything that is said in class? Because I'm not physically able to write everything down. Well, first, if there's something that is unclear, raise your hand and ask; I will re-explain it. Second, studying requires being able to make a distinction between what is really important and what is unimportant. If I give an example, you don't have to study the example by heart; whatever example I take, it's there to make a concept clear. At the same time, yes, the answer is yes: anything that is part of the material, you have to study, apart from the examples. If something is unclear, or I'm talking too fast, and I know that sometimes my sentences come too fast, raise your hand. There's nothing wrong with that. Let's keep going. Another question that came up is about normative ethics. Normative ethics, I would say, is a theoretical, cognitive activity. Matyas Hana (01:00:01.075) Let me be careful with that, because it's about ethics. It's a cognitive activity in the sense that you are discussing something. Normative ethics is not acting: I'm doing this good thing, I'm doing this bad thing, I'm supporting child labour because I'm buying this sweater, whatever. That is not normative ethics. Normative ethics is about thinking, arguing, discussing arguments in favour of or against something. For example, about clothes being made by children. For example, about eating animals. Not eating animals is not normative ethics; discussing the question whether it is allowed or justified to eat animals is normative ethics. And the point, as we'll see in a minute, is that this is not a privilege of the philosopher, of course.
I mean, you are doing normative ethics all the time. Probably. Probably tonight at a bar. With parents, with friends, with your wife or husband, I don't know. You're answering questions: why are you lying to me? Why? You're lying to me; that's something we believe is undesirable. Why are you doing this? So the question is: is it justified to lie? You give an answer, you discuss. The moment you give an answer, you are doing normative ethics: you try to give reasons why lying is not allowed, not justified, or why lying is justified in this case. Lying as such is not normative ethics; the discussion about whether lying is justified is normative ethics. Okay. Matyas Hana (01:01:57.143) Normative ethics tries to find out what the distinction is between something that is undesirable and something that is desirable, whether it's a belief or an action. It's about making the distinction. And of course the distinction as such is not the most important thing in normative ethics. We all have strong opinions, about climate change and migration and abortion and euthanasia and whatever, very strong opinions about almost anything. I don't care about strong opinions; I care about the reasons given in order to justify an opinion, a statement, a thesis. That's what matters. So what we're going to do is try to find reasons in order to ground the distinction between what is desirable and undesirable, what is good and bad, correct and incorrect. Reasons; other words for this are grounds, justification, arguments. An argument is, in other words, some piece of evidence or information that you give to someone else in order to convince the other that your position is the right one, that your thesis, your technology, is correct, desirable, trustworthy, whatever. And that's what we call arguing: reasoning, grounds, justification, arguments, in favour of or against something.
Matyas Hana (01:03:38.73) To be clear: your distinction is based upon reasons. It's not based upon, say, religious texts. We don't care about them; I mean, we do care about them, but not in this context. It's not sufficient to say something is good or bad, desirable or not, because a religious text says so. We're not going to accept that anymore. We want other reasons than religious texts. That's the first thing. Second thing: also authority. We don't care about authority; we do care about authority, but not in an ethical, normative sense. We're not saying: this is good, this is bad. Why? Because my parents, my professor, the priest says this or that, and therefore it's good or bad. No, we are not accepting that anymore. Also intuitions or feelings: we don't care about them. Well, we do care about them, but not in a normative sense. This is good, this is bad, because I have the feeling that it is, because I have the intuition that it is, because I don't feel good about it, and thus it is so: in a normative sense, that is not acceptable. We want to have reasons. I'm putting this here, and I want to stress it, because at the exam people often make the mistake of confusing giving reasons on the one hand and to reason on the other hand. To reason, using your reason, is a cognitive activity; reason as such is a cognitive capacity. But here we mean reasons in the sense of grounds, justification for a thesis. In our case, it's that. Matyas Hana (01:05:49.654) Justification of the distinction between what is good and bad. So we're interested in this: something is good, something is bad, we made the distinction, it's clear. But of course what is interesting to us is: what is the ground for this distinction? What is it based upon? Maybe there is no good ground, and thus the distinction is unjustified.
That's going to be interesting for us. So what we want to do is not so much use our cognitive capacities (we will, but that's not the main point here); it's about finding arguments in favour of or against something. And that's what we call to argue. What is arguing? Arguing is a social activity, in the sense that you try to convince someone else, yeah? Based upon reasons, based upon justification, that something, a statement, a thesis, a theory, a belief, or an action, is good or bad, correct or incorrect, whatever. Now, I could argue in a non-ethical sense: Manchester City is the best football team in the world. That's my statement, my thesis, and I give an argument in its favour: because they won the Champions League. That's my argument; I really believe it. And second, I could also argue in an ethical sense: I believe that it's wrong, assume, to eat animals. That's a thesis, and I give arguments to convince you that it is indeed wrong. And that's arguing. It's a social activity; you're always trying to convince someone else. Okay. To make clear what arguing, or giving reasons, exactly is, it's important to distinguish it from explaining something, from giving causes of something. Matyas Hana (01:08:03.966) When I'm giving causes, when I'm explaining something, what I'm doing is referring to facts that are needed in order to understand why some event or state of affairs is the case. Take the fact that my shirt is wet, for example. This is a state of affairs in the world: I have a wet t-shirt. If I have to explain this, I need to give some kind of causal chain of events. Here, for example: it's raining outside, and I was walking outside. Walking outside while it's raining leads to the effect that indeed my t-shirt is wet. Here I'm not engaged in some social activity of trying to convince you of something; I'm just explaining why this is the case. And this is different from...
giving reasons. So giving causes is different from giving reasons; explaining is different from arguing, because in arguing I'm trying to convince you that something is correct or incorrect. To make the distinction clear, a simple example: you were driving too fast. That is what happened. And you say: I was drinking too much, I had too much alcohol in my blood. If you say this, I would not say: now you're arguing, now you're justifying your driving too fast. Matyas Hana (01:09:46.59) What you're now doing is something else: you're explaining something, you're giving a causal explanation. You're saying: this happened because of that. But you wouldn't say this is an example of an argument, an example of justification. You're not trying to justify that driving too fast is okay and acceptable. You're not saying: driving too fast is acceptable. Why? Well, because I had too much alcohol in my blood. However, you could try to justify it. You could try to argue, try to convince someone else that it was justified, by giving, for example, the argument that your wife is pregnant and she needs to go to the hospital, or whatever. That's an argument that could justify, provisionally probably, that it was indeed justified to drive too fast. That's the distinction I'm trying to draw. Normative ethics is not about explaining; it's about arguing. So the sentence here on the slide says that not all causes are reasons, not all explanations are arguments.
It could of course be that an argument is also a cause, that a reason is also a cause; but not the other way around. It's not necessarily the case that something that is an explanation is also an argument, and this is an example: here I'm not arguing, I'm just explaining. Now, what normative ethics is about, the first thing: we're going to see three kinds of arguments in the second chapter. What I'm going to do in the last five minutes is convince you that ethics should be studied by engineers. Not because I believe it or have the intuition that it should, but because I have very good reasons for that. And in the second chapter I'm going to offer you a kind of tool to help you answer normative questions. And you are actually constantly asking normative questions. Matyas Hana (01:12:01.021) So the purpose of this course, at least of the second chapter, is to give you some tools, a kind of framework. Once you have a normative question, this framework can help you develop arguments in favour of an answer or against it. There are three kinds of theories, three kinds of arguments, for when you're facing a normative question, and these are the three kinds of arguments we'll try to develop. Duty ethics was developed by Immanuel Kant; consequentialism by Jeremy Bentham and John Stuart Mill, for example; virtue ethics by Aristotle, by the way, so the ancient Greeks were already doing virtue ethics. Very briefly, a very brief sketch; we'll continue with this next week, for two weeks. Duty ethics is another word for deontology; or, in other words, it's rights-based ethics. So what is the purpose of introducing you to this? Well, it's going to provide you with reasons to answer a normative question. You could say: well, something is desirable, something is undesirable, that's the distinction. Why? Because it's my duty to behave this way. Because you have the right not to be harmed.
Because we have to follow this rule. Because it's unjustified to impose a risk, the risk of harm for example, on someone else. That's one kind of reasoning we have; the reason serves as an argument in favour of the thesis that some policy, technology, or action is good or bad. Matyas Hana (01:13:56.978) The second is going to be the consequentialist way of reasoning. It says: this is good, this is bad, this technology is bad, this technology is good, this policy is good, this policy is bad. Why? What is the ground, what is the justification? It has everything to do with consequences, with effects. That's consequentialism. It says: something is good because it has so many good effects; or something is bad because, although there are some good effects, they are overruled by the many big, negative, undesirable effects of the action. And the third refers to virtue: something is good, something is bad, because it expresses virtues. Courage, or loyalty, for example; a virtue that you express in an action. And people do in fact do all of this: we refer to duties, we refer to consequences, but also to virtues when we are arguing for or against something. Something is good because it expresses loyalty, as an employee for example. Why do you do this? Is it good? Yes, it's good. Why? Because in your action you're expressing empathy, for example, or sympathy, or loyalty, or courage. Matyas Hana (01:15:34.002) Those are the three kinds of arguments, just to make you a little bit familiar with them already. In many, many cases we argue by referring to the consequences; that is often the easiest. An example related to AI technology: there's a lot of debate and discussion today about AI and its environmental impact, the huge amount of energy that is needed in order to train those models, large language models for example. So what one would then say is: something is good, something is bad. Why?
Because of the effects on the environment. Matyas Hana (01:16:33.905) Okay, what you're saying then is: something is good, something is bad, because of the effects. The effects are the things that make it correct or incorrect. This is consequentialism. The action as such is morally neutral; what makes it good or bad is the impact of the action, of the technology. Think about AI: the impact of training a large language model is so big that some people, and I'm not saying that I follow this kind of reasoning, but some people say: well, some models are bad because of the effects they have on the environment. Also think about abusing people, young people; think about harm done to human beings in general, and also, let's say, to non-human animals. Some people say: child labour, for example, is bad. Why? This is a thesis again, and the justification that is given refers to the effects. What are the effects of child labour? Traumatic experiences, not being able to go to school, physical disabilities, maybe emotional damage, I don't know; those kinds of effects are given as reasons to say that this is bad. But you should be aware of the fact that if you reason in this way, at the end of the day, Matyas Hana (01:18:22.799) you could end up in very, very bizarre situations. It could be that the negative effects you refer to in order to say that child labour, for example, is bad, aren't there; that there are no traumas, no mental or physical disabilities resulting from the child labour. So if there are no negative effects, then at the end of the day, and you can at least think about it, you would have to say: well, maybe this is not a problem after all. And that itself is a problem. So be careful.
I personally find consequentialist arguments very, very attractive: on that view most things are not good or bad as such; it's about the effects. But it leads to very bizarre, very strange results, and therefore a lot of people also refer, or at least should refer, to what is called a rights-based approach, a deontological approach. Maybe child labour doesn't have any negative effects, but it's bad as such, because it goes against values, moral values, that are violated in the practice of child labour; the value of autonomy, for example. Abusing someone: the deontologist says this is bad not only because of the negative effects. If it has negative effects, of course, those count as well, but not only because of them; because even in case it doesn't have any negative effects, we still believe it's wrong, because the act as such is unacceptable, not justified. So yeah, that's possible. Lying is another example, another example to make clear that consequentialism cannot cover our whole moral practice. We condemn lying in a lot of cases, above and beyond its negative consequences. Take cheating on your partner; that's the kind of case we have in mind. It's bad, it's really bad. Why? Matyas Hana (01:20:34.256) Because of this: she will be disappointed, hurt, will lose self-confidence, whatever. But assume that nobody knows about it. You did it, and only you and the other person know about it; your girlfriend or boyfriend doesn't know about it. So, what then? The consequentialist would have to say: it will have no negative effect, and thus cheating on him or her would be morally fine. But that's not the way we actually think about cheating. We refer to something like the values, fidelity for example, that make it bad as such, even though it doesn't have any negative consequences. So, consequentialism can be interesting, yes, in many cases, but it's not sufficient to cover our moral intuitions and practices. We should also have rights-based approaches; we call that deontological...
Matyas Hana (01:21:45.815) So the question here is: is consequentialism the same as utilitarianism? Difficult, always difficult, but they are different. We'll come back to this in the second chapter. Utilitarianism is just a subset of consequentialism. So every utilitarian is a consequentialist, but not the other way around: there are consequentialists who are not utilitarians. Don't worry about that, we'll see it in the next lecture. So, my point: normative ethics cannot be the essence of ethics, and of course not of philosophy, because a lot of other people also do normative ethics, maybe in a less rigorous way than academics do. As an engineer, too, you ask normative questions and you answer normative questions. Think about the famous example of the Maven project at Google, right? At that point Google had signed a contract to develop AI technology for the US Department of Defense. A lot of employees at Google became aware of it. They weren't happy with it, and they signed a petition; many, many people signed it. The founders, Sergey Brin and Larry Page, agreed with them: okay, this is bad, stop the collaboration with the Department of Defense. So the petition actually had some effect. The engineers, the AI developers at Google, were at that point taking a normative position: we're developing something, an AI system, which is an action, and we take the position that this is bad, we shouldn't do it. It's not morally neutral anymore; you're taking a stance: this is bad. Matyas Hana (01:23:50.383) So that's a very famous example. When you start working, within a week, a month, a year, five years, whatever, as an engineer, as an AI developer, from the first day, week, month, year on, you will have to answer normative questions. And if you don't do it, that's an issue; but you're probably going to do it spontaneously.
And so that's probably a good reason to say that normative ethics cannot have an essence, because other people than philosophers are also asking normative questions: engineers, psychiatrists. For example, assume that you're a doctor and you have a patient saying: you should not help me, not cure me, for religious reasons. Well, then you're asking the question: what should I do to behave well? Should I comply or not? Should I try to convince the patient or not? Is this desirable or not desirable? Or assume that as a manager I think that equity is important, that treating everybody equally is important. Should I follow this rule, this value, in all cases, in every context? Meaning that if someone put a lot of energy into a project, more than someone else, at the end of the day they should still have the same salary? That's treating them equally. Why? Why not? That's a normative question. So this slide is very important. You can be more or less sure about at least one question at the exam, multiple choice, and the four open or argumentation questions, being based on this slide. Matyas Hana (01:25:46.542) So it's clear that ethics is not the essence of philosophy. Why not? Because an essence would have to be something unique and universal, and philosophy has several subdomains. Maybe someone else can recap this, because it was a lot at once. Yeah. Could you do that? Matyas Hana (01:26:17.176) So: in the domain of philosophy there are a lot of subdomains, and therefore ethics cannot be the essence of philosophy. Yes. Any help needed? That's okay. Yeah. Okay, perfect. So: why is ethics not the essence of philosophy? Because there are other subdomains: political and social philosophy, philosophy of science, et cetera. Meta-ethics and descriptive ethics do have an essence, because they ask very specific questions that only they ask. But they are of course not the essence of ethics or of philosophy. Why? Because we also have normative ethics.
We also have logic; therefore they are not the essence of ethics or philosophy. And normative ethics is not the essence of ethics either. Why? Because we also have descriptive ethics and meta-ethics. Nor does it have an essence. Why? Because engineers, too, ask normative questions. So, a very easy one for me; take a minute to think about it. We'll have an exam question, maybe two or three, based on this slide. First option: normative ethics is not the essence of philosophy, and it has an essence. Second option: meta-ethics is the essence of philosophy, and it has an essence. Matyas Hana (01:28:05.687) Third option: descriptive ethics doesn't have an essence, and it is not the essence of ethics. Option number four: normative ethics doesn't have an essence, and it is not the essence of ethics. Option four is the correct one. Keep in mind: for the multiple-choice questions there is only one correct answer. You gain one point if you circle the correct answer. If you circle the correct answer together with a wrong one, you get zero, nothing. Okay, so what this course will be about is mostly normative ethics: chapters two and three, and also chapter four. So almost no meta-ethics, no descriptive ethics, almost no logic. We'll talk about the three positions in the second chapter, we'll talk about enhancement, and we'll talk about AI and ethics; that's also important. That is what this ethical view on technology and AI will be about. What I'm not going to do is give my opinions about... Matyas Hana (01:29:26.413) Manchester United, or animal welfare, or climate change and migration, et cetera. I care about those things, but not in this course. Matyas Hana (01:29:45.224) What I'm also not going to do is in-depth study, close reading of the great philosophers. We still have about six minutes, so please, the last slides are going to be important.
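As an aside on the multiple-choice scoring rule mentioned a moment ago: it can be pinned down as a tiny sketch. This is only my reading of the rule as stated in class, not official exam policy, and the function name is mine:

```python
# Sketch of the announced scoring rule (my reading, not official policy):
# exactly one option is correct, and a question scores a point only if
# you circle the correct option and nothing else.

def score_question(correct: str, circled: set) -> int:
    """Return 1 only when exactly the correct option is circled."""
    return 1 if circled == {correct} else 0

print(score_question("D", {"D"}))       # only the correct option: 1 point
print(score_question("D", {"B", "D"}))  # correct plus a wrong one: 0
print(score_question("D", {"A"}))       # wrong option: 0
```

On this reading, circling extra options can never help: anything other than exactly the correct option scores zero.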
And people also often tend to believe that ethics is about using a lot of jargon in a very technical sense. We're not going to use that many technical words; I'm aiming at insight. Okay. So what is the purpose? And this is important; try to follow for these last five minutes and then I'm done. What is the purpose of this course? There are two questions. First: why should we have ethical AI and technology? And if that's clear, and I guess you agree with it, it's still not clear why we should study ethics and philosophy, because maybe we can have ethical AI without studying ethics. That's the second question. So, the first. We all have very strong opinions about what is desirable and what is not. Harming someone without any good reason, for example: most people would say this is undesirable, this is bad. A lot of actions, a lot of policies, we believe to be incorrect; we have normative opinions about them everywhere. So there's no reason to believe that technology and AI should be an exception. Given that all those other things are evaluated normatively, there's no reason to say that technology shouldn't be, that we can do whatever we want with technology and develop it the way we want, without taking into account any kind of guidelines or normative questions. Matyas Hana (01:31:53.164) I assume that this is convincing for most of you. But suppose you don't buy this position, you're not interested in AI being ethical. Well, even if you don't agree with my position and most of your colleagues' position, it doesn't matter what you believe: we have the ethical guidelines, we have the AI Act. So if you believe that artificial intelligence, general or not, shouldn't be ethical, that doesn't matter at all. You will have to follow ethical rules.
If you're developing AI — within a year, or five years, or ten years — in a way that is not in line with the ethical guidelines of the European Commission, and especially not in line with the AI Act that is being rolled out, you can be punished; there will be reason to punish you. Matyas Hana (01:32:52.651) Okay, assume that you're convinced of this slide. Why should you study ethics? Matyas Hana (01:33:03.219) So the buzzwords are trustworthy AI, ethical AI, good AI, AI for good — all these things. But some of you may say: I don't need to study ethics, I don't have to take a course in ethics, this is bullshit. Why? The argument could be: this is bullshit because we have intuitions. There's an American psychologist, Jonathan Haidt, who said: people have a lot of moral intuitions, about many, many things. And these intuitions are either biological — given by nature, part of our DNA — or cultural: you learn them between the ages of one and ten, living in a family; your parents give you a lot of ethical beliefs. The claim would be that these intuitions, cultural and biological, are sufficient to develop morally acceptable technologies — to transfer human values into whatever kind of technology we develop. This seems very unconvincing, at least to me. Why? Well, I agree that biological and cultural beliefs are probably sufficient in many, many daily cases. Biologically, you could say people are naturally averse to violence and blood: not wanting to look at violence, blood, and dead bodies is a biological disposition that we have. Also culturally: the environment in which I grew up taught me that lying, cheating, and so on is bad. And this is sufficient to guide our lives normally. But is this sufficient to know how to develop a neural network? Think about the brain.
Matyas Hana (01:35:03.979) We have about 85 billion neurons and billions of connections between them. It's more or less the same for an artificial neural network. It's too complex. It's inscrutable — in the sense that it's too complex: we don't have any good, entire, sufficient insight into the workings of the system that would let us understand why the system is making this decision, why the system is producing this output. Nobody is ever able to understand entirely why the system gives this or that output. Nobody. Of course I'm not able to, but neither are the engineers. So another good question is: is it desirable, justifiable, acceptable to have systems we cannot understand from the outside? You could say: if we talk about Spotify, we don't care — if I don't understand the workings of Spotify, it's just Spotify. But it's different if you're using AI in a legal context — for example, to assess whether a criminal will relapse into criminal activities — or to assess whether you are the right fit for a vacancy in a hiring process. Organizations use AI in order to find the right fit. If nobody is able to understand why the AI system is saying you're not the right fit for this position — is this justifiable? Is this acceptable? Matyas Hana (01:36:45.13) The question about transparency can probably not be answered with the help of intuitions, biological or cultural. And thus — for those who wonder whether this is part of the exam: there's a very, very high chance this is going to be part of the exam, so try to understand it. My conclusion, the thesis, is: ethical AI, desirable technology, requires study that is based upon academic research. And that rests upon three premises. First: in many, many things we make the distinction between what is desirable and what is not. And that's not different for technology.
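The opacity point made above — that nobody, including the engineers, can fully explain why a neural network produces a given output — can be made concrete with a toy sketch. This is a minimal, purely illustrative example: the "hiring" framing, the feature names, and the weights are all made up (the network is not trained on anything). The point is only that even a tiny network blends every weight with every input at once, so no single parameter corresponds to a human-readable reason.

```python
# Minimal, purely illustrative sketch of why neural-network outputs are hard
# to interpret. The "hiring" framing, feature names, and weights below are
# invented for illustration; the network is not trained on anything.
import math

def sigmoid(x):
    """Squash any real number into the range (0, 1)."""
    return 1.0 / (1.0 + math.exp(-x))

def forward(inputs, w_hidden, w_out):
    # Each hidden unit mixes ALL inputs through its own weights...
    hidden = [sigmoid(sum(w * x for w, x in zip(ws, inputs))) for ws in w_hidden]
    # ...and the output unit mixes ALL hidden activations again.
    return sigmoid(sum(w * h for w, h in zip(w_out, hidden)))

# Hypothetical candidate features: [years_of_experience, test_score]
candidate = [3.0, 0.7]
w_hidden = [[0.9, -1.2], [-0.4, 2.1], [1.5, 0.3]]  # arbitrary weights
w_out = [0.8, -1.1, 0.6]                           # arbitrary weights

score = forward(candidate, w_hidden, w_out)
print(f"model's 'fit' score: {score:.3f}")
# Even with only 9 weights, no single parameter is "the reason" for the
# score; real systems repeat this mixing across billions of parameters.
```

Scale this up to billions of weights, and the lecture's point follows: inspecting individual parameters will not tell you why a candidate was scored as "not the right fit."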
We also say that technology can be good or bad — that's the second premise. And then the third premise: even if you're convinced that ethical AI is desirable, intuitions are not going to be sufficient to develop technology that is desirable. Therefore we need a course that is not based on my personal intuitions — because I don't have very deep intuitions about many things — but I do have some knowledge about the research that is going on. And that research can guide this course, and this course can guide your work in developing desirable technology. That's my reasoning behind, my justification for, this course. So at the end of the day, if some of you, maybe within two weeks of this lecture, can convince me that this is incorrect — that this argument is not convincing — then we have reason to stop. You have two lectures to stop this course. Because — Matyas Hana (01:38:43.786) — yeah, then I'm not needed anymore. This course and this material are not needed. If you have a good reason to say that your intuitions are sufficient, that you don't need help in finding answers to questions such as whether autonomous artificial systems bear responsibility — well, then... Matyas Hana (00:00.174) Hello everybody. First, another check: can you understand me well? Maybe someone could — yeah, could you raise your hand? Yeah, okay, thank you. Thank you. Yeah, that's good.
Okay, good afternoon. Welcome to our third lecture within this course on the ethics of technology and AI. After this lecture, we are already at the half of the semester and half of this course. As I mentioned, unfortunately today we have to organize this lecture online, not on campus. I know I announced at the first lecture that this lecture was going to be online, but some people may have misunderstood. So I'm going to make a recording, just to be sure that everybody is, at the end of the day, able to listen to what I said during this lecture. I'll post it on Toledo — I'll do that anyway. All right. Before I start, let me say this: I know that an online lecture is not the most ideal situation. So what I suggest is that I'll talk for about 40 minutes, then we have a break, then another 40 minutes, then another break. So not just one break after one hour and then another big part — I'm going to talk in smaller blocks. That's nicer for you, I hope at least, but it's also better for me not to talk to a computer for an hour straight. I hope you understand that. So I'll start in a minute, talk for about 40 minutes, and then we have a break. Okay. If I'm not mistaken — but please correct me — last time we ended with what is called the ethics argument. I'm going to finish that now, Matyas Hana (02:20.942) and then move to the second, let's say, chapter of this course. I'm going to repeat one thing and then I'm going to move to the second chapter. Okay, here is where we ended last week, or two weeks ago. Before I start discussing the things I need to discuss: if you have any questions during this online lecture, please raise your hand, and you can always use the microphone if you want. Or — maybe you don't feel very comfortable using the microphone — you can also ask your question in the chat box.
If you have questions — questions for discussion, questions for clarification — please put them in the chat box, or raise your hand and use the microphone. All right. Okay, I'm going to start here. This is the last slide from the first chapter, a slide on what is called the ethics argument. What is the ethics argument? It's the name of an argument in favor of the thesis that if you want to develop ethical AI, desirable technology, then it's required to take a course in ethics — and this ethics course should be based upon scientific research. That's the C under the bar, the C standing for conclusion. The conclusion is based upon three Ps, three premises: three lines that are given, and if these lines are correct, then the conclusion follows. So, to rephrase: this conclusion rests upon three premises, three arguments in favor of the thesis that — Matyas Hana (04:44.633) — ethical AI, technology that is desirable, requires study, courses based upon research. What is the first premise, which I mentioned last week or two weeks ago, in the last lecture? The first premise is: well, it's clear that we want a couple of things, and we also don't want a couple of things. We find it important that people are honest, that they are respectful, that they do things in a sustainable way, that they respect each other's privacy, that they are fair, et cetera. We don't want people to be unsustainable, disrespectful, unfair, et cetera. So we make that distinction in normal life outside the context of AI, but also in the context of AI. We do that in general. We make distinctions — and here I'm coming back to the notions I introduced in the first two lectures: we have this broader domain of philosophy.
And one of the subdomains of philosophy is called ethics, and one of the subdomains of ethics is called normative ethics. Normative ethics is about making the distinction between what is desirable on the one hand and undesirable on the other. Right. And this is what we do in normal life outside the context of AI: we make a distinction between what is desirable and undesirable. We do that. Second premise, second P: there's no reason why we shouldn't apply this distinction — this attitude of making this distinction — within the domain of technology and artificial intelligence. That's the second premise. And third: some people could say, well, maybe we can rely on our intuitions in order to justify our design practices and AI practices. The intuitions that I have, the ideas that I have, could be acquired in a cultural way — via education, teaching, or whatever — or they could be something inherited biologically, via DNA, et cetera. So it could be through cultural processes or natural processes that you acquire ideas, intuitions, beliefs about — Matyas Hana (07:09.305) — about what is desirable and undesirable. However, we have seen that while this can of course be helpful as some kind of moral compass in daily, usual contexts, it's pretty clear that it's not going to be sufficient in contexts where we use hyper-advanced artificial systems like AI systems. So if you talk about artificial intelligence and you want to know whether it's really required that a system is transparent — fully transparent — this is an ethical question, but it's not going to be answered with the help of your cultural or biological intuitions. They could of course be a little bit helpful, but not sufficiently.
And therefore we need extra material in order to answer the question. We need classes, lectures, some kind of study of the ethics of technology and artificial intelligence. That's the conclusion. So — that's the third premise — intuitions can maybe help a little bit in designing ethical technology, but they're not going to be sufficient anyway. For example, let's say we agree that some kind of transparency is needed for ethical AI: transparency in the sense that we should be able to understand what the mechanisms are behind the output of a chatbot, for example, of an AI system. So some kind of insight into the mechanism is needed — but perhaps we don't need a total, entire insight into the system and its mechanisms. But then the question becomes: what is the threshold, the line, between "this amount of opacity is undesirable" and "okay, this is allowable"? Matyas Hana (09:28.819) That's a really difficult question, right? And my DNA, my biology, won't help — and neither will the cultural intuitions that I acquired during my upbringing, living in the same house as my parents. They're not going to help. And therefore we need study. We need to study the ethics of AI; we need to take courses that are based upon research on the ethics of technology and artificial intelligence. Okay, so this is what is called the ethics argument. This is the reason why you're now taking this course on a Friday evening, right? To be honest, as I mentioned already, you could say at the end of the day: well, I'm not convinced; there's no good reason to study the ethics of technology. And if you have that kind of reason, an argumentation against my thesis here, I'm open to it, of course.
I mean, I could think: maybe this is not correct — and then let's close the books, and we don't have to teach this anymore. If you have a good reason, put it in the group; we can discuss it. But I don't think it's going to be convincing. All right, this is actually the last slide. Let's now move to the second chapter, unless you have questions about this first chapter. Matyas Hana (11:06.138) Anyone with questions about the first chapter? Matyas Hana (11:16.774) No one, no questions. Then I'm moving to the second chapter. Please keep in mind that we have four chapters; this is already the second. This is a very important chapter because it's crucial: it's one of the biggest chapters that you have to study for the exam, but it's also the kind of material that is needed in order to prepare your interview with the engineer. So if you really have a good idea of what is happening here in this chapter — normative ethics and the three theories that we will discuss — then you have a very good ground to prepare your interview. Okay. After this chapter, we'll talk about enhancement and then about the ethics of AI: what are the ethical issues and problems with artificial intelligence? Okay. This slide is very important because it summarizes a little bit what the point of this chapter is. Keep in mind: ethics is a subdomain of philosophy, and normative ethics is a subdomain of ethics. It's not the essence of philosophy, and it's also not the essence of ethics in general, because we also have meta-ethics and descriptive ethics. So normative ethics is about a normative question: what should I do in order to behave well? In other words, to rephrase the same question: how can I make the distinction between something that is morally good and morally bad?
How can I make the distinction between a technology that is desirable and one that is undesirable? Are there any criteria that could help us to distinguish between technologies, policies, decisions, actions, or whatever? So what we need is some kind of criterion — in other words, some kind of argument, some ground, justification, reason to support the distinction, to support my moral choice for this technology, which is a good one, versus another technology, which is a bad one. Matyas Hana (13:34.067) And so what I'm going to present in this lecture is three arguments, reasons, justifications for making that moral distinction — arguments that enable you to ground your decision for this or that technology and against another kind of technology, the undesirable one. The first kind of argument is what is called the deontological argument, or the duty-ethics-based argument. The second one is the consequentialist argument — consequentialism is about the consequences of our decisions and actions. And the last one, the oldest one, is what is called the virtue-ethics-based argument in favor of moral decisions. Okay, so the point is: let's talk about normative ethics. Let's try to tackle the question: how can I ground my answer to the question of what is good and what is bad? That's the point of this lecture. So we have to find reasons; we have to find arguments that are convincing. As you may remember — hopefully you do — arguing is a social practice. Arguing is a social practice, meaning you try to convince someone. You try to convince them that A, B, C is correct or incorrect. And you're not going to convince someone else just by saying "this is correct."
No — normally you're going to convince someone by saying: this is correct because of this or that reason, because of this or that argument, justification, or ground. So we're going to argue, and we're going to try to find arguments in favor of or against something. To rephrase again: the purpose of this lecture is to offer you a threefold framework in order to prepare your reasoning, your arguments, your answers to the question: what should I do in order to behave well? It's all about arguments. And I'm going to give you a couple of reasons, here on this slide, why this is really important — in the sense of: Matyas Hana (16:00.192) why it's important that I'm teaching you to argue in favor of or against something. And the reason is this. This is not ethics, this is not philosophy: these are three examples of cases, of studies, of experiments, that show that in daily life people do not argue as they should. They often have to make moral decisions, and in a lot of cases these moral decisions are not based upon arguments, but upon factors that are completely irrelevant. This is not ethics, because it just describes how people de facto argue and form their convictions, beliefs, and decisions, et cetera. What is presented on this slide is scientific work — more precisely, it's situated in the domain of psychology, and more precisely in moral psychology. Moral psychology is not ethics; it's not philosophy. It's not about normative questions. It describes, in a descriptive way, how people actually think — not how they should think, but how they actually think, how they actually make their decisions.
So the point of these three papers, these three experiments, that I'm going to present is to show that in many cases, when people have to make decisions, they don't rely on arguments, justifications, reasons, et cetera. They rely on irrelevant factors — irrelevant in the sense that they are properties and factors that are morally neutral, morally irrelevant. Here are the three cases, the three experiments. The first one is from Alex Jordan, and it's very interesting. It's about a spray — a spray that produces a very bad smell. It's an experiment that Jordan did some fifteen, sixteen years ago. He was standing somewhere in the street — Matyas Hana (18:27.156) — and there were a lot of people passing by. It was a crossing; people needed to cross the street at the point where he was standing with his spray. And there were two situations. In the first, the people passing him — say, first-year bachelor engineering students in Leuven, for example — were asked one question. So, let's say 300 people in the first case. They were asked one question: you have a case of someone who did something very bad — he or she killed someone. The judge, of course, should at some point make a decision: what is going to be the punishment for that person? So you're crossing the street, the researcher presents you the case, and you have to make a decision. You're placed in the position of the judge. What should you do? What would you do? Well, a lot of people tend to say, for example: he or she killed his or her wife, for example.
There should be some kind of punishment of about ten years in prison, let's say. Okay — that's the kind of average answer within the first group of students in Leuven. But then the second case — and this is important now. In the second case, another group of people — take, for example, the second-year bachelor engineering students from Leuven — were crossing the same street. The researcher, Alex Jordan, was still standing there, and he presented the same case to these students. However, there was one difference. The story was the same, but at some point he used the spray. So these people were hearing and answering the same case about the judgment and the person who killed his or her wife — but while Alex Jordan was using the spray, so they were smelling something very, very bad. Matyas Hana (20:51.256) And it became clear that there was a significant difference between the answers from the second group and the answers from the first group. The second-year students were more strict, more severe, in their judgments, in their punishments. They didn't say ten years is enough — no, they talked about twenty or thirty years that the murderer should be in prison. So everything was the same except the smell that was spread in the air. Yet that made a difference in the kind of answer, the kind of decision, they would make when playing the judge. Okay, so they had to make a very important decision, but that decision was based upon something that is irrelevant. Okay, let's move to another situation — this one is not on the slide, by the way.
But we know from empirical research that when judges have to make a decision about a very serious crime, their decisions differ as a function of the time of day at which they have to make them. Say around 11, just before noon, they have to make a decision about a murder; if they had to make the same decision about the same case after lunch, around 2 p.m., their decisions would be different. And it has to do with the time — more precisely, with the fact that at 11 they were already a little bit tired and hungry, whereas at 2 p.m. they had already eaten. And this eating process has, or had, an influence on the way they decided about the crime. So here again, as in the first case: the smell is irrelevant, but the time of day and being hungry also had an influence on — Matyas Hana (23:16.313) — a very important decision. I would say that the smell, but also the time of day, is irrelevant: in an ideal world it wouldn't, or shouldn't, have any influence on my decisions. Another example — I'm going to skip the second line on the slide, the Jonathan Haidt example; you can leave it out, throw it away. Our last example is from Geoffrey Cohen, more than twenty years ago. It's actually a very interesting and relevant piece of research. It's about political preferences. And there are two cases. Republicans are here taken as right-wing conservatives, and Democrats as liberal, left-wing, progressive people. Okay, let's make that distinction. In the first case, both groups, Republicans and Democrats, were presented with some kind of liberal, progressive migration standpoint, right?
And as you can expect, most right-wing people would say: this left-wing perspective is too liberal; we're not going to accept that. Whereas if you asked the Democrats what they believed about this left-wing migration program, they were very open to it, very positive about it. Nothing surprising about that. However, in the second case, the same kind of Republicans and Democrats were presented with the same program — an open, left-wing, progressive migration program — and now the Republicans were more positive about this liberal, democratic, left-wing progressive migration program. Not everything was the same, though: this time they had been told that their fellows, their colleagues, also Republicans, were very open and positive about the program. So the fact that they heard that their colleagues were positive about this kind of left-wing program influenced their evaluation of the program. Matyas Hana (25:43.035) The same holds for the liberals: in the second case they were less positive about the left-wing program, and it had to do with the fact that they had been told that their left-wing fellows, their left-wing colleagues, were less open, less positive about it. So everything was the same except the fact that they were told that their fellows held positive or negative standpoints, ideas, evaluations about the same migration program. So the conclusion of this slide: there are several reasons to say that in normal, daily life, outside the context of AI and technology, people make evaluations, but these evaluations are often not based upon what we believe is important — the smell, the time of day, but also the social context.
People make different evaluations depending not only on the smell, but also on the fact that they belong to a specific group of people. And this is, I guess, generally seen as really undesirable, at least. So this slide pushes me in the direction of: focus on reasons. You have to teach these engineering students — you — about ethics, about arguments, reasons, reasoning, argumentation. So that in the future, when you as an engineer are developing a technology and have to make a decision about it, your decision is really based upon ethics and ethical reasons and arguments — and not: my decision was made somewhere around noon, and at that point I was hungry, and thus my decision turned out different from a decision made around 4 p.m., for example. Matyas Hana (27:53.546) Or: my colleagues were saying this and that, and therefore I followed them. Maybe that is undesirable; if I looked at the arguments, at my reasons and justifications, I would come to another answer. And that's what I'm going to teach you: I'll offer you the means to make proper, reason-based statements — not just statements and theses and ideas that are based upon the temperature, the smell, or the opinions of others. Okay, the first framework that I'm going to introduce is what is called the deontological framework. Duty ethics and deontology both mean the same thing. Duty ethics as a theory was first fully developed by the German philosopher Immanuel Kant, who lived most of his life in the eighteenth century in Germany. Like most philosophers, he wrote about many, many things: epistemological questions, questions about knowledge, about the philosophy of art, aesthetics, anthropology, religion — but also about ethics.
And the kind of theory about ethics that he developed was published in his book, the Kritik der praktischen Vernunft — German for the Critique of Practical Reason — published at the end of the eighteenth century. Very briefly, there is some background that is helpful for understanding Kant's ideas. First: at the background of his ethical theory, the deontological theory, there is this notion of freedom — in the sense that people are shaped by freedom. This freedom is needed to understand the deontological framework. However, freedom — as with philosophy, but also technology, sustainability, fairness, et cetera — can be understood in many ways, or at least in two ways: it can be understood in a negative way, Matyas Hana (30:21.734) but also in a positive way. And "negative" here does not refer to something that is undesirable; no, it refers to something that is not there, that is absent. So when you say that freedom is negative, it refers not to an evaluation that something is bad, but to the fact that something is not there which could be there, and maybe should be there. Something is missing, something is absent, and therefore it is negative. "Negative" here means: there is an absence of external but also internal obstacles. Internal: there is no mental issue, no mental disorder, pushing me to do this or that. If I don't have any kind of mental disorder — no internal obstacle — then we can talk about freedom, but only in a negative sense. But there can also be external pressures, obstacles, influences, et cetera.
So these are the factors that could hinder or obstruct my choices from an external perspective. It could be peer pressure, for example, or pressure from family members, or medical issues — health problems, financial shortfalls. All of these external issues could have an influence on my decision-making process. And if nothing is having that kind of influence, then we say that you're free, both internally and externally. You're free, so you have negative freedom: you're free from obstacles, internal or external. Matyas Hana (32:27.726) And Kant's normative ethics assumes this kind of freedom in a negative sense — in the sense that, for being ethical, it is required that you are not hindered, pushed, obstructed, or driven by biological needs, desires, wishes: all those things that could push you in a very specific direction. So being ethical — being morally responsible, being fair, for example — assumes that you're free in a negative sense: free from internal and external pressures; that you're not driven by hunger, for example, as in the example of the judge that I gave a minute ago; that you're not driven by internal needs — being hungry, being tired — or by desires. You should be freed from all that, detached from all these biological desire processes. So if you have a person who has to make a decision, but you're pretty sure that he or she is not free in a negative sense — that he or she is driven by a biological need or a desire — then you have reason to say: that person cannot act or think in an entirely ethical way, because there is still this kind of pressure coming from inside or outside. So there should be an absence of this pressure.
Or there should be an absence, right? There should be no external or internal pressure. [Student question:] Does that also apply to the desire to follow an ethical law — the desire to be ethical in itself? No — yeah, good point, George. No, it's not about that. Matyas Hana (34:39.301) It's about the desire to drink, to do some sports. Ethics, you could say, assumes the desire to do ethical things, but that's different from the desire to do sports, to drink, to go out, to watch a movie, or whatever. So any desire should be excluded — except, you're right, of course, the desire to make ethical decisions. Ethics in the Kantian, deontological sense assumes some kind of negative freedom: being freed from desire. To recall the famous song, "freed from desire" is the necessary condition for being ethical in a Kantian way — but it is not a sufficient condition. There should also be some kind of what we call positive freedom. The distinction between positive and negative freedom, by the way, was introduced by Isaiah Berlin, a British philosopher who once wrote a very famous text called "Two Concepts of Liberty." So what is positive freedom? Positive freedom means that there are several possibilities, several options, and that you have the freedom to choose one of these options autonomously. So this is not negative freedom — not being freed from desires, nature, biological needs, et cetera — it's about being able to choose, being free to do something. And that of course assumes some variety of options that you can go for. Matyas Hana (36:41.746) Why do I say that? Well, you could say: I'm free in a negative sense — nobody is pushing me, nothing is pushing me —
